The ability to lock pages in memory is used on 64-bit systems to help prevent the operating system from paging out SQL Server’s working set.  A DBA typically enables it after seeing errors like the following:

 

"A significant part of sql server process memory has been paged out. This may result in a performance degradation."

 

If you’re running SQL Server 2005/2008 Enterprise, you take the steps to lock pages in memory and you’re done with it.  If you’re on SQL Server 2005/2008 Standard Edition, you still have a ways to go.  The ability to lock pages in memory for Standard Edition is handled through a trace flag.  For SQL Server 2005 SP3, you need to apply CU4.  For SQL Server 2008 SP1, you need to apply CU2.  Once those CUs have been applied, set trace flag 845 as a startup parameter.  Here’s a good ServerFault question that explains how to set a trace flag as a startup parameter.
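
Once the instance has been restarted with -T845, a quick sanity check (my own suggestion, not part of the original steps) is to confirm the flag is actually active:

DBCC TRACESTATUS (845, -1);
-- Status = 1 with Global = 1 means the flag is enabled server-wide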

 

Once the trace flag was enabled, the memory issues were solved.   Day saved, once again. :)  

 

As with anything, this has the potential to degrade system performance.   In this article, scroll to the section entitled “Important considerations before you assign “Lock Pages in memory” user right for an instance of a 64-bit edition of SQL Server”.  Read it thoroughly prior to making any changes to your production systems.
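
On SQL Server 2008, one way to verify that locked pages are actually in use (an assumption on my part; this DMV and column exist in 2008 and later, so on 2005 you’d check the error log for the “Using locked pages for buffer pool” message instead) is:

SELECT locked_page_allocations_kb
FROM sys.dm_os_process_memory;
-- A non-zero value indicates the buffer pool is using locked pages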

I ran across an installation issue with SQL Server 2008 on a Windows Server 2008 server the other day that baffled me a little bit.  I was installing an additional instance of SQL Server 2008 on a server that already had a SQL Server 2008 instance and right before the installation completed, it died with the error:  “A MOF Syntax error occurred.”  Further investigation into the Setup Bootstrap logs gave this detail:

An error occurred while processing item 1 defined on lines 14 – 16 in file D:\Program Files\Microsoft SQL Server\MSSQL10.TMS_MODELING\MSSQL\Binn\etwcls.mof.transformed:

 

2010-05-18 13:41:02 Slp: Compiler returned error 0x800700a4.  Error Number: 0x800700a4, Facility: Win32

 

2010-05-18 13:41:02 Slp: Description: No more threads can be created in the system.

 

2010-05-18 13:41:02 Slp:

 

2010-05-18 13:41:02 Slp: Sco: Compile operation for mof file D:\Program Files\Microsoft SQL Server\MSSQL10.TMS_MODELING\MSSQL\Binn\etwcls.mof.transformed failed. Exit code 3

 

2010-05-18 13:41:02 Slp: Configuration action failed for feature SQL_Engine_Core_Inst during timing ConfigNonRC and scenario ConfigNonRC.

 

Much investigation on the internet turned up a lot of people that have been having this issue, but very few answers.  After many installs and uninstalls, I finally tried the following, which seemed to work:

 

  • I ran the setup.exe as an administrator (right click on setup.exe and click “Run as administrator”) even though I am a local administrator on the box.
  • I installed SQL Server using the Network Service account instead of the normal domain service account.
  • The installation succeeded and I just went into Configuration Manager and changed the service account to the domain account after the installation.

As DBAs, we are increasingly being asked to manage more and more technology.  Some of that is the result of internal pressures (i.e. taking on additional roles) and some of that is the result of Microsoft bundling an ever increasing array of different technologies within SQL Server.  Dealing with these various technologies has become a weekly issue for me, but it really came to light today when I had a SQL Server Reporting Services 2008 server that was consuming 100% of the CPU.  Doing some poking around, I realized not only did I not really know anything about this beast called “SQL Server Reporting Services”, the tools to manage it are extremely lacking (now, that is my opinion coming from a position of complete ignorance about this technology).  I connected to the SSRS 2008 service with SSMS and, from there, I could only view three things:  SSRS jobs, security, and shared schedules.  I determined that none of the shared schedules were responsible for the utilization since nothing was scheduled to run anywhere near the time that the problem started, so that was a dead end.

 

Next, I connected to the Report Service by hitting http://[servername]/reports.  From here, I could look at all of the various reports that had been deployed to the instance, general site settings, security, both instance-wide and at a report level, and I could look at my own subscriptions.  The one thing that seemed to elude me was visibility into what, if anything, users were running on the SSRS server.

 

Frustrated, I connected through SSMS to the database instance that hosts the ReportServer database.  I figured there had to be something in the database I could query to give me some visibility into what my SSRS instance does all day.  Thinking like a DBA, the first thing I did was look under “System Views” in the ReportServer database.  I saw two views, ExecutionLog and ExecutionLog2, so I decided to do a simple SELECT TOP 100 * from each to see what they would give me.  This is where I stumbled upon my gold nugget for the day.  Right there in the ExecutionLog2 system view was all of the information that I had been looking for.  Running the following query, you can get a wealth of valuable information on what reports users are running, when they are running them, what parameters they used, and how long the report took to generate (broken down into data retrieval time, processing time, and rendering time) – all key information for trending the load that your server is under and <gasp> justifying new hardware, if needed.

 

SELECT InstanceName
       , ReportPath
       , UserName
       , RequestType
       , Format
       , Parameters
       , ReportAction
       , TimeStart
       , TimeEnd
       , TimeDataRetrieval
       , TimeProcessing
       , TimeRendering
       , Source
       , Status
       , ByteCount
       , RowCount
       , AdditionalInfo
FROM ExecutionLog2
WHERE CONVERT(VARCHAR(10),timeend,101) >= -- Some start date that you supply
AND CONVERT(VARCHAR(10),timeend,101) <= -- Some end date that you supply
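
Building on that, here’s a quick aggregation sketch (my own addition, assuming the standard ExecutionLog2 columns, which store the Time* values in milliseconds) for trending which reports run most often and where the time goes:

SELECT ReportPath
       , COUNT(*) AS Executions
       , AVG(TimeDataRetrieval) AS AvgDataRetrievalMs
       , AVG(TimeProcessing) AS AvgProcessingMs
       , AVG(TimeRendering) AS AvgRenderingMs
FROM ExecutionLog2
WHERE TimeStart >= DATEADD(DAY, -7, GETDATE())  -- last seven days
GROUP BY ReportPath
ORDER BY Executions DESC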

 

To many of you who regularly use SSRS this may be very remedial, but I figured I would throw this out there for those DBAs who, like me, have to learn this stuff on the fly in a crisis.

 

By the way, as a side note, for those who are curious about why the ExecutionLog2 system view was used instead of the ExecutionLog system view, it appears that the ExecutionLog system view exists for backward compatibility for SQL Server 2005 Reporting Services upgrades.  The ExecutionLog2 system view provides much more information than the ExecutionLog system view.

/*Note – I have yet to see if this breaks anything – this post is for observational use only.  If it does break replication, I’ll add some notes to the bottom of this post */

 

Late last week, we had the fabulous opportunity to see just what happens when the dual power supplies for your SAN are plugged into the same PDU and that PDU gets powered off.  As you can imagine, this caused all sorts of neat things to occur.  Fortunately, the situation was remedied fairly quickly, but not before breaking replication for one of my databases.

 

The error that I received was: “The concurrent snapshot for publication ‘xx’ is not available because it has not been fully generated…”.  There were probably different routes to go to fix this issue, but I decided to reinitialize the subscription from a snapshot and went ahead and created the snapshot.  This particular database is a little under 500GB, so I knew that this would take a while.
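
For reference, reinitializing a subscription can be done through Replication Monitor or with something along these lines (a sketch with placeholder server/database names, run in the publication database at the publisher):

EXEC sp_reinitsubscription
     @publication = 'xx',               -- the publication from the error message
     @subscriber = 'SubscriberServer',  -- placeholder subscriber name
     @destination_db = 'SubscriberDB';  -- placeholder subscription database
-- The next snapshot generated will then be applied to the subscriber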

 

The next morning when I came to work, I could see that the snapshot had completed and the subscription had been reinitialized.   I knew that it would take some time for all of the commands that had queued up in the distribution database to be written to the subscription and so I went about my day.

 

Later, I noticed that the number of undistributed commands didn’t seem to be going down at all and was in fact growing.  This is a fairly heavily utilized database with writes going on very frequently, but it still seemed odd.   I did some querying to try to figure out just how behind we were, but, to my surprise, the subscription was completely up to date.  This is where it started to get strange.
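
For anyone wanting to do the same kind of check, one way to see how far behind a subscription is (again with placeholder names; run on the distributor) is the pending-commands proc in the distribution database:

EXEC distribution.dbo.sp_replmonitorsubscriptionpendingcmds
     @publisher = 'PublisherServer',    -- placeholder
     @publisher_db = 'PublishedDB',     -- placeholder
     @publication = 'xx',
     @subscriber = 'SubscriberServer',  -- placeholder
     @subscriber_db = 'SubscriberDB',   -- placeholder
     @subscription_type = 0;            -- 0 = push, 1 = pull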

 

I queried the MSdistribution_status view to get an idea of what was still outstanding and saw the following:

Article_id   Agent_id   UndelivCmdsInDistDB   DelivCmdsInDistDB
1            3          0                     2130
2            3          0                     3295
3            3          0                     8275
4            3          0                     2555
5            3          0                     8310
6            3          0                     2521
1            -2         2130                  0
2            -2         3295                  0
3            -2         8275                  0
4            -2         2555                  0
5            -2         8310                  0
6            -2         2521                  0
1            -1         2130                  0
2            -1         3295                  0
3            -1         8275                  0
4            -1         2555                  0
5            -1         8310                  0
6            -1         2521                  0

 

You’ll notice that, for agents -1 and -2, the number of undelivered commands for each article was the same as the number of delivered commands for agent 3.

 

When I queried the MSDistribution_Agents table, I saw three distribution agents.  One was named identically to the distribution agent job.  The other two had subscriber_ids with negative numbers and the subscriber_db was ‘virtual’. 
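
If you want to see the same thing on your own distributor, the agents table can be queried directly (a simple sketch; the ‘virtual’ subscriptions show up with negative subscriber_ids):

SELECT id
       , name
       , subscriber_id   -- negative values are the virtual subscriptions
       , subscriber_db   -- shows as 'virtual' for those agents
FROM distribution.dbo.MSdistribution_agents;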

 

I did some digging around on some unnamed search engine to try to figure out what virtual subscriptions were.  To be honest, there wasn’t  a lot of information to be found.  I did find a question on Eggheadcafe.com from 2007 where it seemed like the person was having a similar issue.  Reading through, it looked like it had something to do with the settings for allowing anonymous subscriptions and immediately synchronizing the snapshot.  I looked at my publication and both those were set to true.

 

Of course, I posted a question on Twitter at the same time and Hilary Cotter [Blog/Twitter] stated that the virtual subscriptions “are to keep your snapshot up to date, ignore them”.  That seemed to be consistent with what I was reading, but it still seemed odd that they appeared to be holding onto undistributed commands.  This, in turn, was causing my distribution database to grow.

 

Since, at this point, I figured that I was going to have to tear the whole thing down and rebuild it, I thought that I’d try to modify the publication first.  I knew that I didn’t want to allow anonymous subscriptions and I didn’t need to immediately synch from a snapshot, so I ran the following commands:

 

exec sp_changepublication @publication = 'xxxxx', @property = N'allow_anonymous', @value = 'false'

exec sp_changepublication @publication = 'xxxxx', @property = N'immediate_sync', @value = 'false'
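
To confirm that the two property changes actually took, you can dump the publication’s current settings afterwards (run in the publication database; ‘xxxxx’ is the same placeholder as above):

EXEC sp_helppublication @publication = 'xxxxx';
-- Check the allow_anonymous and immediate_sync columns in the result set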

 

I reran my query of the MSdistribution_status and voila, the undistributed commands were gone.  When I queried MSDistribution_agents, there was still a virtual subscriber, but it was no longer filling up my distribution database with undelivered commands.

 

I still don’t know what caused this – it seems like something occurred when I reinitialized the snapshot.  So far, replication is continuing to run without errors.  I’d be interested to hear if anyone has run into a similar issue.

It would be so nice if something made sense for a change.

A couple of weeks ago, we had a database that had become corrupted due to a disk failure.  After running a DBCC CHECKDB, it appeared that there was one table with a corrupted clustered index.  Paul Randal (Twitter/Blog) verified that this was the case and that it was, unfortunately, non-recoverable without data loss.  Fortunately, we had backups and this database was a replication subscription, so between the backup and publication, we were able to rebuild the database without losing any data.  We then had to rebuild a kind of goofy replication set up, but that’s another story.

 

As I was doing some validation about the data that was contained in the backups and what needed to be pulled down from the publication, I realized that there was at least one other table that was corrupted.  This is where it became confusing.  I went back through the output from the CHECKDB and noticed that the table in question was not mentioned.  At this point, I went through and matched up all of the tables that were listed in the CHECKDB output against the actual tables in the database and found that there were three tables in the database that were not listed in the DBCC output.  I ran DBCC CHECKTABLE against the missing tables and while two of them came back with no errors, one was definitely corrupted.  The CHECKTABLE command actually “terminated abnormally” on the corrupted table.  I went back through the error logs and dump files that had been generated at that time.  Aside from the references to the CHECKTABLE that was run, the only object referenced was the table that had shown up in the CHECKDB output.
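
For anyone who needs to do the same spot check, running CHECKTABLE against an individual table looks something like this (hypothetical table name; the WITH options just make the output complete and quiet):

DBCC CHECKTABLE (N'dbo.SuspectTable')   -- hypothetical table name
WITH NO_INFOMSGS, ALL_ERRORMSGS;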

 

As a note, this is a 2008 SP1 database.  I know that, when running CHECKDB from SSMS, it will only display the first 1000 errors, but there were only 274 errors mentioned in the output, so I didn’t think that was the issue.  I asked Paul Randal (mentioned above) and his thought was that perhaps the metadata had also been corrupted and because of that CHECKDB may not have seen those tables as existing. 
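
If the metadata-corruption theory is right, one additional check worth trying (my suggestion, not something we ran at the time) is the catalog consistency check on its own:

DBCC CHECKCATALOG ('YourDatabase')   -- placeholder database name
WITH NO_INFOMSGS;
-- Reports consistency errors in the database's system catalogs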

 

We’ve recovered from the incident, but it was a curious experience.  Up until now, I’ve gone off of the CHECKDB output, but this gives me reason to think that there might be tables that aren’t included. I am interested to know whether anyone else has ever run into a similar incident.  Hopefully it was just an (odd) learning experience.

SQL Search – a review

2 February 2010

SQL Search, a new (free!) SSMS plugin from Red Gate, was released yesterday, February 1st.  This tool allows you to search for text within SQL Server objects.  I’ve had occasions where I needed to search legacy stored procedures for deprecated code (backup log … with no_log – egad!).  There have also been times where database servers were renamed and any references to the previous server name had to be changed.  Prior to the beta version of this tool, I’d used the syscomments system table as a way to find text within stored procedures, but the results definitely weren’t pretty.
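
For context, the old syscomments approach looked something like this (a sketch; note that syscomments splits long definitions into 4000-character rows, so a match that spans a row boundary can be missed – part of why the results weren’t pretty):

SELECT DISTINCT OBJECT_NAME(id) AS ObjectName
FROM syscomments
WHERE [text] LIKE '%no_log%';   -- the search term you're hunting for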

 

SQL Search allows you to enter a piece of text, specify either all objects or specific objects and whether you want to search all databases or a single database.

 

In the example below, I searched for the term ‘Reject’ occurring within all objects on the Reporting database.  The list of objects, their types and how the match was made is displayed in the upper half of the window.  When the object that is highlighted in the top of the screen is a stored procedure or function, the text is displayed in the bottom half of the screen, with the text highlighted.


 

One of the most helpful features is the link in the middle of the screen to ‘Select object in Object Explorer’.  Just as it sounds, a click will take you directly to that object.

 

In working with it, I haven’t noticed many downsides.  It takes a bit of time the first time a database is used as SQL Search indexes it, but after that, searches seem to go pretty quickly.  The first time that you search a database on a server, SQL Search needs to be opened within the context of that server, but after that point, it is added to the dropdown box of servers that are selectable.

 

SQL Search seems to be a helpful tool and, as I mentioned before, it’s free.  To download, just click here to go to the Red Gate site.


Several weeks ago on Twitter, Colin Stasiuk (Blog/Twitter) asked if anyone had a script to pull back job statuses from the SQL Server Agent.  I had been doing some work on a SQL Server DBA Console and had written some PowerShell scripts to give me various pieces of information, so I put together a PowerShell script that could be run as a SQL Agent job to periodically report the statuses of SQL Agent jobs.  Eventually, I think Colin went with a different solution, but I figured I would go ahead and post the PowerShell script that I came up with.  This solution has been tested against SQL Server 2000/2005/2008.

 

This first script is just a SQL script to create the SQL Server table that the PowerShell script writes the status information to and can be downloaded here, LastJobStatus_Table.sql.

CREATE TABLE [dbo].[LastJobStatus](
	[ServerName] [nvarchar](128) NULL,
	[Job_Name] [nvarchar](128) NULL,
	[Run_Date] [datetime] NULL,
	[Job_Duration] [time](7) NULL,
	[Run_Status] [varchar](50) NULL,
	[Sample_Date] [datetime] NULL
) ON [PRIMARY]

 

The next is the PowerShell script that does all of the work bringing back the SQL Agent job statuses.  It takes a parameter of Server, which is the name of the SQL Server that you want job statuses from.  Make sure that you change the name “MgtServer” to whatever the name is of the SQL Server where you intend to store the results from this script.   You’ll also need to change the root directory for where your scripts are loaded to match your environment. This script can be downloaded here, Get-LastJobStatusServer.ps1.

param([string]$Server=$(Throw "Parameter missing: -Server ServerName"))
[void][reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")

##Point to library files
$scriptRoot = "D:\DBAScripts"

# Change the script root directory above to match your environment
#$scriptRoot = Split-Path (Resolve-Path $myInvocation.MyCommand.Path)
. $scriptRoot\LibrarySmo.ps1	#Part of Chad Miller's SQLPSX project on CodePlex
. $scriptRoot\DataTable.ps1	    #Included with the scripts for this blog, also from a CodePlex project (http://www.codeplex.com/PSObject)

Set-Alias -Name Test-SqlConn -Value D:\DBAScripts\Test-SqlConn.ps1

##Define variables

## open database connection
$conn = New-Object System.Data.SqlClient.SqlConnection("Data Source=$server;
Initial Catalog=master; Integrated Security=SSPI")
$conn.Open()
$cmd = $conn.CreateCommand()
$cmd.CommandText= "	CREATE TABLE #JobsRun (	ServerName nvarchar(128),
											Job_Name nvarchar(128),
											Run_Date datetime,
											Job_Duration time(7),
											Run_Status varchar(50),
											Sample_Date datetime
										  );
					insert into #JobsRun
					select	@@SERVERNAME AS ServerName
							,j.name Job_Name
							,(msdb.dbo.agent_datetime(jh.run_date,jh.run_time)) As Run_Date
							,substring(cast(run_duration + 1000000 as varchar(7)),2,2) + ':' +
									substring(cast(run_duration + 1000000 as varchar(7)),4,2) + ':' +
									substring(cast(run_duration + 1000000 as varchar(7)),6,2) Job_Duration
							,case when run_status = 0
										then 'Failed'
								when run_status = 1
										then 'Succeed'
								when run_status = 2
										then 'Retry'
								when run_status = 3
										then 'Cancel'
								when run_status = 4
										then 'In Progress'
							end as Run_Status
							,GETDATE() As Sample_Date
					FROM msdb.dbo.sysjobhistory jh
						join msdb.dbo.sysjobs j
							on jh.job_id = j.job_id
					where	step_id = 0
					and		enabled = 1
					order by cast(cast(run_date as char) + ' ' +
								substring(cast(run_time + 1000000 as varchar(7)),2,2) + ':' +
								substring(cast(run_time + 1000000 as varchar(7)),4,2) + ':' +
								substring(cast(run_time + 1000000 as varchar(7)),6,2)  as datetime) desc

					delete from MgtServer.DBA_Console.dbo.LastJobStatus where ServerName = '$server'
					-- Change 'MgtServer' to the name of whatever SQL Server in your environment
					-- houses the LastJobStatus table that stores the results of this script
					insert into MgtServer.DBA_Console.dbo.LastJobStatus (ServerName, Job_Name, Run_Date, Job_Duration, Run_Status, Sample_Date)
					select	jr.ServerName,
							jr.Job_Name,
							jr.Run_Date,
							jr.Job_Duration,
							jr.Run_Status,
							jr.Sample_Date
					from	#JobsRun jr
					where	Run_Date = (	select	max(jr1.Run_Date)
											from	#JobsRun jr1
											where	jr1.Job_Name = jr.Job_Name)
					drop table #JobsRun; "
$cmd.ExecuteNonQuery()
$conn.Close()

 

There are references in the above script to a LibrarySMO.ps1 script that can be obtained from CodePlex (see the comments in the script for URL) and a DataTable.ps1 script (also from CodePlex, but included in the download file for this blog, for your convenience.)

# Taken from out-dataTable script from the PowerShell Scripts Project
# http://www.codeplex.com/PsObject/WorkItem/View.aspx?WorkItemId=7915

Function out-DataTable {

  $dt = new-object Data.datatable
  $First = $true  

  foreach ($item in $input){
    $DR = $DT.NewRow()
    $Item.PsObject.get_properties() | foreach {
      if ($first) {
        $Col =  new-object Data.DataColumn
        $Col.ColumnName = $_.Name.ToString()
        $DT.Columns.Add($Col)       }
      if ($_.value -eq $null) {
        $DR.Item($_.Name) = "[empty]"
      }
      elseif ($_.IsArray) {
        $DR.Item($_.Name) = [string]::Join(";", $_.value)
      }
      else {
        $DR.Item($_.Name) = $_.value
      }
    }
    $DT.Rows.Add($DR)
    $First = $false
  } 

  return @(,($dt))

}

Going with your gut

11 January 2010

This post is a response to Tim Ford’s Whose Blog Is It Anyway challenge.  The opportunity to use the words pony, nude and bong in a blog post about an actual experience was too much to pass up.

 

I’m a DBA and I’m logical – coldly logical, if you listen to K. Brian Kelley – but I’m here to tell you that sometimes you just have to go with your gut.  Most of us have an inner voice that clues us in on the things we know but can’t rationalize or aren’t ready to deal with yet.  That’s the voice that lets you know that the URL in your son’s internet history with the word ‘PussyCat’ probably isn’t a site featuring live, nude cats.  Not everyone trusts that voice – just ask George “Let’s Have Padme Die Of A Broken Heart Instead Of Anakin Crushing Her To Death” Lucas – that would have been a far more awesome scene.

 

I’m here to talk about such a time, early in my career.   I had a great DBA to learn from, but he had moved on to another position.  I felt pretty firm in my knowledge and knew that, whatever came up I could fix or handle by simply using some magical tool, library or bong.

 

That’s when I ran into it – the problem that I couldn’t fix, but was going to cause me pain.  On a Friday evening I started seeing error messages in the SQL Server error logs that indicated that we were having disk issues.  Of course the error message didn’t read ‘Your disks are failing’, but everything that I was reading seemed to indicate that.  One thing that bears noting – this was the central order entry/documentation application for the hospital that I worked at – there was no acceptable downtime.  I contacted our system administrator who did some research and let me know that all of the disks looked fine.  Now, this was an SA whose knowledge I respected – he’d been in the field forever (not like ENIAC forever, but pretty close).   He alluded to the fact that I was still a newbie and probably didn’t diagnose the problem correctly.  At this point it was around noon on Saturday and even though I was getting tired of looking around, I figured that I’ve started, so I’ll finish.  I wasn’t able to find any information that was clearer than I had provided before, but I knew, in my gut, that we were about to have problems – big problems.  I called the SA again and tried to encourage him to look a little more closely.  I asked him to pretend that the RAID array was a horse stable.  From the outside, it might sound pretty happy.  On the inside it might look good initially, but as soon as he looks down he’s going to be very sad about the pony.  For some reason, that analogy didn’t work…

 

As the afternoon progressed, I kept with my gut feeling and bugged the heck out of that SA.  The disks actually gave up the ghost and 40 hours later (non-stop), with our application vendor and Microsoft on the line, we finally got the disks replaced and the application back online.  At the post-mortem, once everyone had gotten a few hours’ sleep, that same SA wanted to know how I knew what the problem was since none of the errors were absolutely clear about the cause.  I just told him that I knew in my gut that something bad was about to happen.  He said, “Well, if you’d told me that in the beginning, I would have done more research!”.

 

While the story above is true, some of the particulars have been changed to protect the innocent.


I was tagged by Jorge Segarra (Blog/Twitter) who had been tagged by Thomas LaRock (Blog/Twitter) in his post about his goals and theme word for 2010.  I was going to try to remain blissfully ignorant about being tagged, but then Tim went and posted his goals.  So I guess I’m on the line now.  My theme word for this year is:

 

Recharge

While there are many things that I want to accomplish this year, I don’t know that (m)any of them will occur until I can figure out a way to recharge.  I’m typically a self motivated type of person, but it seems like, during the previous year, I’ve hit the wall.

 

I’m not entirely sure what has caused this, but I’m guessing that it is some combination of the cyclical nature of job satisfaction, having a boatload of things going on at home and the disconnect between the amount of things that I would like to learn and the amount of free time that I have.

 


Is there a human connector on that thing?

I realize that there is no magic button that will instantly recreate the hunger for knowledge that I had when I began learning to be a DBA.  What I can do, though, is set some goals, work hard to follow through on them and be patient.   My hope is that in the process of achieving these goals, I’ll rejuvenate my love of this career path.

 

Goals

Pick one or two topics to focus on

I have at least three SQL Server books sitting on my desk and more at home that I haven’t done much more than flip through.  Rather than setting a goal to read all 3000 pages (doable, but daunting), I’m going to pick a couple of subjects to focus on and learn them as thoroughly as possible.  This is ongoing – if it’s March and I know everything there is to know about database corruption (or whatever it is I end up focusing on), I’ll move on to the next subjects.

 

Blog more

My first love is teaching.  It invigorates me and gives me purpose.  Blogging provides me an arena to hopefully teach people that are learning to be DBAs and the chance to share what I’ve learned.

 

Become more involved with PASS

As I’ve mentioned in previous posts and as Tim mentioned in his goals for 2010, we’ve talked often about starting a PASS chapter in Tucson.  This ties into my love of teaching and will help us to connect with folks locally who have similar interests.  I would also like to take part in other committees within PASS as needed.    This will definitely require a balancing act with work and family, so I’ll be taking baby steps to ensure that I don’t shortchange other areas in my life.

 

What does this all mean?

None of these individual goals are earth-shattering and that’s intentional.  I have a tendency to swing for the bleachers, but end up hitting to the pitcher and it makes me grumpy.  My hope here is that I make some good, solid line drives and then I’ll be set up to hit it out of the park.

 

I’m tagging a couple of people that have unknowingly helped me to recharge (some thank you, eh?) 

TJay Belt (Blog/Twitter)

Wendy Pastrick (Blog/Twitter)

Kendal Van Dyke (Blog/Twitter)

A few days ago, in his blog post Goals and Themeword for 2010, Jorge Segarra (Blog/Twitter) tagged Lori and me to write a blog about our goals and theme word for 2010.  While the title of my blog is somewhat sarcastic, it really reflects facing a year that will be full of immense opportunity and challenges as the result of a successful 2009.

 

In 2010, I face opportunities on all fronts in my life, professional and personal, so here I outline some of those and end with what I feel will have to be my theme word for the year.

 

Professional

On the professional front, 2010 is going to be a year full of many opportunities.  The biggest challenge will be to take full advantage of these opportunities without letting everything else slip.  Here are the major opportunities, as I see them:

  1. The year is starting off with me shifting my focus in my position with my employer from more of a support role (production DBA) to more of a strategic role by leading a team of DBAs and System Architects and striving to make them stronger as a team as they take on what seems to be an impossible list of projects.  As anyone who has made this shift knows, the key to doing it is learning to delegate effectively; however, as most that have made this transition also know, you never get completely out of the support role.  As part of this, I have the unique opportunity to mentor someone who has recently shown a great deal of aptitude on our help desk and help mold him into a junior DBA, something I know will be extremely exciting and rewarding to participate in.
  2. As I move out of my DBA comfort zone, I will be required to learn a ton of technologies that my team will be responsible for that I haven’t really had to worry about up to this point.  Some of these technologies (SharePoint, SCOM, PowerShell, SSRS) I find to be extremely exciting opportunities and will be a great chance for me to learn and grow.  Some of the technologies I will be required to learn (i.e., SSIS) are technologies that I have tried to avoid and would rather not deal with, but circumstances dictate that I must and I know they will end up being growth opportunities for me, nonetheless. 
  3. Getting to attend the 2010 SQL PASS Summit.  Unfortunately, I missed the 2009 PASS Summit and, consequently, missed many great opportunities to learn and network.  I have promised myself that this year I am going if I have to beg, borrow or steal (okay, that might be a bit of an overstatement, but you get my point :) ).
  4. Starting a PASS chapter in Tucson, AZ.  This is something that Lori and I talked about for a while and that we are going to be very passionate about in 2010.

 

Of course, this new focus and need to learn these new technologies means a huge investment of time which leads me to my next challenge/opportunity – work/life balance. 

 

Personal

Throughout 2009, one of the biggest things that I never thought I could get right was balancing the needs of my job with the needs of my family.  In 2010, this challenge will become exponentially more difficult.  If 2009 taught me anything it is that I need to go into 2010 with some sort of system or plan to try to make sure that I give my family the time that they deserve while still living up to my work commitments.  This is a challenge that I am still working on cracking.  Some of the personal opportunities and challenges I face in 2010, other than spending more time with my family include:

  1. Dealing with some personal issues of one of our kids as he strives to find out who he is on the way to adulthood.  We have had some challenges with this over the last year and are seeking some supplemental help, but the challenge will be to define and stick to a plan that will help our son become a happy, productive, well-adjusted adult.
  2. Getting more involved in our church.  This has actually been on the list for a while now, but it needs to become a priority.  This is where we lead by example, not only for our church, but for our family as well, and is something that I see as essential for us to get to where we need to be spiritually.  I know that we have been blessed with many great gifts and talents in our family and it is time that we use those to give back.
  3. Continuing to grow the relationships that we have cultivated with our many friends in the SQL Server community.  I have to say that getting to network with the SQL Server community around the world in 2009 via virtual conferences and social media was one of the most unexpected and rewarding professional experiences of 2009 and, probably, my career.  Most people would put this as a professional goal, but as I have interacted with many of you, I see the friendships that are developing as much more than professional connections and feel blessed to have been able to have these friendships.
  4. Blogging more.  Again, this could go either way, professional or personal, but I consider it a personal goal as it isn’t something that is really required by my employer (or something that probably a lot of my coworkers even know I do) and our blogs aren’t always technical in nature.  If you had told Lori and me at this point last year that we would start a blog in 2009 and be syndicated by SQLServerPedia, we would have laughed and said that would be ridiculous because we couldn’t come up with enough to write about that anyone would want to read.  Fortunately, we did get the blog off the ground in the last few months of 2009 and wrote some articles that people had some interest in, so the next challenge for us is to keep putting out content from our professional and personal experiences that, hopefully, people will want to continue to read.  In this way, we can feel like we are contributing something back to the great SQL Server community that has helped us out so much.

 

Theme word

So, all of this leads to my theme word for 2010, management!  In order to have a shot at accomplishing all of these goals, it is going to require management; management of time, priorities, expectations, and resources.  This is going to be probably the biggest challenge that I have faced so far, but if I am successful, the rewards will be great and have an impact on my life and my family that will pay dividends for many years to come.

Here is hoping that all of you have a successful and happy 2010!