I was tagged by Jorge Segarra (Blog | Twitter), who had been tagged by Thomas LaRock (Blog | Twitter), in his post about his goals and theme word for 2010.  I was going to try to remain blissfully ignorant about being tagged, but then Tim went and posted his goals.  So I guess I’m on the line now.  My theme word for this year is:

 

Recharge

While there are many things that I want to accomplish this year, I don’t know that (m)any of them will occur until I can figure out a way to recharge.  I’m typically a self-motivated person, but it seems that, during the previous year, I hit the wall.

 

I’m not entirely sure what has caused this, but I’m guessing that it is some combination of the cyclical nature of job satisfaction, having a boatload of things going on at home and the disconnect between the amount of things that I would like to learn and the amount of free time that I have.

 

[Image: recharger]

Is there a human connector on that thing?

I realize that there is no magic button that will instantly recreate the hunger for knowledge that I had when I began learning to be a DBA.  What I can do, though, is set some goals, work hard to follow through on them and be patient.   My hope is that in the process of achieving these goals, I’ll rejuvenate my love of this career path.

 

Goals

Pick one or two topics to focus on

I have at least three SQL Server books sitting on my desk and more at home that I haven’t done much more than flip through.  Rather than setting a goal to read all 3000 pages (doable, but daunting), I’m going to pick a couple of subjects to focus on and learn them as thoroughly as possible.  This is ongoing – if it’s March and I know everything there is to know about database corruption (or whatever it is I end up focusing on), I’ll move on to the next subjects.

 

Blog more

My first love is teaching.  It invigorates me and gives me purpose.  Blogging gives me an arena in which to teach people who are learning to be DBAs and a chance to share what I’ve learned.

 

Become more involved with PASS

As I’ve mentioned in previous posts and as Tim mentioned in his goals for 2010, we’ve talked often about starting a PASS chapter in Tucson.  This ties into my love of teaching and will help us to connect with folks locally who have similar interests.  I would also like to take part in other committees within PASS as needed.    This will definitely require a balancing act with work and family, so I’ll be taking baby steps to ensure that I don’t shortchange other areas in my life.

 

What does this all mean?

None of these individual goals is earth-shattering, and that’s intentional.  I have a tendency to swing for the bleachers but end up hitting to the pitcher, and it makes me grumpy.  My hope here is that I hit some good, solid line drives and set myself up to knock one out of the park.

 

I’m tagging a couple of people who have unknowingly helped me to recharge (some thank you, eh?):

TJay Belt (Blog | Twitter)

Wendy Pastrick (Blog | Twitter)

Kendal Van Dyke (Blog | Twitter)

A few days ago, in his blog post Goals and Themeword for 2010, Jorge Segarra (Blog | Twitter) tagged Lori and me to write about our goals and theme word for 2010.  While the title of my post is somewhat sarcastic, it really reflects facing a year that will be full of immense opportunity and challenge as the result of a successful 2009.

 

In 2010, I face opportunities on all fronts in my life, professional and personal, so here I outline some of those and end with what I feel will have to be my theme word for the year.

 

Professional

On the professional front, 2010 is going to be a year full of many opportunities.  The biggest challenge will be to take full advantage of these opportunities without letting everything else slip.  Here are the major opportunities, as I see them:

  1. The year starts off with me shifting my focus in my position with my employer from more of a support role (production DBA) to more of a strategic role: leading a team of DBAs and System Architects and striving to make them stronger as a team as they take on what seems to be an impossible list of projects.  As anyone who has made this shift knows, the key is learning to delegate effectively; however, as most who have made this transition also know, you never get completely out of the support role.  As part of this, I have the unique opportunity to mentor someone who has recently shown a great deal of aptitude on our help desk and help mold him into a junior DBA, something I know will be extremely exciting and rewarding to participate in.
  2. As I move out of my DBA comfort zone, I will be required to learn a ton of technologies that my team will be responsible for and that I haven’t really had to worry about up to this point.  Some of these technologies (SharePoint, SCOM, PowerShell, SSRS) I find extremely exciting, and they will be a great chance for me to learn and grow.  Others (e.g., SSIS) are technologies I have tried to avoid and would rather not deal with, but circumstances dictate that I must, and I know they will end up being growth opportunities for me nonetheless.
  3. Getting to attend the 2010 SQL PASS Summit.  Unfortunately, I missed the 2009 PASS Summit and, consequently, missed many great opportunities to learn and network.  I have promised myself that this year I am going if I have to beg, borrow or steal (okay, that might be a bit of an overstatement, but you get my point :) ).
  4. Starting a PASS chapter in Tucson, AZ.  This is something that Lori and I talked about for a while and that we are going to be very passionate about in 2010.

 

Of course, this new focus and need to learn these new technologies means a huge investment of time which leads me to my next challenge/opportunity – work/life balance. 

 

Personal

Throughout 2009, one of the biggest things I never felt I got right was balancing the needs of my job with the needs of my family.  In 2010, this challenge will become exponentially more difficult.  If 2009 taught me anything, it is that I need to go into 2010 with some sort of system or plan to make sure that I give my family the time they deserve while still living up to my work commitments.  This is a challenge I am still working on cracking.  Some of the personal opportunities and challenges I face in 2010, other than spending more time with my family, include:

  1. Dealing with some personal issues of one of our kids as he strives to find out who he is on the way to adulthood.  We have had some challenges with this over the last year and are seeking some supplemental help, but the challenge will be to define and stick to a plan that will help our son become a happy, productive, well-adjusted adult.
  2. Getting more involved in our church.  This has actually been on the list for a while now, but it needs to become a priority.  This is where we lead by example, not only for our church, but for our family as well, and is something that I see as essential for us to get to where we need to be spiritually.  I know that we have been blessed with many great gifts and talents in our family and it is time that we use those to give back.
  3. Continuing to grow the relationships that we have cultivated with our many friends in the SQL Server community.  I have to say that getting to network with the SQL Server community around the world via virtual conferences and social media was one of the most unexpected and rewarding professional experiences of 2009 and, probably, my career.  Most people would put this as a professional goal, but as I have interacted with many of you, I see the friendships that are developing as much more than professional connections and feel blessed to have them.
  4. Blogging more.  Again, this could go either way, professional or personal, but I consider it a personal goal: it isn’t something that is required by my employer (or something that many of my coworkers even know I do), and our blogs aren’t always technical in nature.  If you had told Lori and me at this point last year that we would start a blog in 2009 and be syndicated by SQLServerPedia, we would have laughed and said that was ridiculous because we couldn’t come up with enough to write about that anyone would want to read.  Fortunately, we did get the blog off the ground in the last few months of 2009 and wrote some articles that people had some interest in, so the next challenge for us is to keep putting out content from our professional and personal experiences that, hopefully, people will want to continue to read.  In this way, we can feel like we are contributing something back to the great SQL Server community that has helped us out so much.

 

Theme word

So, all of this leads to my theme word for 2010: management!  In order to have a shot at accomplishing all of these goals, it is going to require management: management of time, priorities, expectations, and resources.  This is probably going to be the biggest challenge I have faced so far, but if I am successful, the rewards will be great and will have an impact on my life and my family that pays dividends for many years to come.

Here is hoping that all of you have a successful and happy 2010!

I went to install SQL Server 2008 on a Windows Server 2008 R2 box today for the first time and was greeted with the error “You must use the Role Management Tool to install or configure Microsoft .NET Framework 3.5.”  The text of this error was about as decipherable as the voice of the adults in the old Peanuts cartoons, so I fired up my old friend Google to find out what to actually do.  It turns out that Windows Server 2008 R2 ships with .NET Framework 3.5.1 as a feature and, in order to proceed, you need to go to Server Manager (that window that pops up when you log in and stays unless you close it) and, under Features, enable (really, install) the .NET Framework 3.5.1 feature and any prerequisites.

[Screenshot: ss2008_error_1]

[Screenshot: ss2008_error_2]

Last week Microsoft released CTP3 of SQL Server 2008 R2.  At first blush, it doesn’t sound like it should be a huge release.  It is really a minor version release (10.5), as the next major release is scheduled to be SQL Server 2011, so one would expect some additional or enhanced functionality but not any huge changes.

 

As it turns out, there are some additional features with this release, but the big change is coming in the editions and pricing of SQL Server.  Microsoft is adding two additional “premium” editions of SQL Server, Datacenter Edition and Parallel Data Warehouse Edition, which are not only going to be more expensive than the Enterprise Edition, but will be required to take advantage of some of the key features of SQL Server 2008 R2, such as multi-server management of more than 25 instances, StreamInsight (complex event processing), Star Join Queries and parallel database copy (Parallel Data Warehouse Edition). 

 

The retail pricing for the two new “premium” editions is set at $57,498 per processor, with no option for per-server-plus-CAL pricing.  Enterprise Edition pricing is set at $28,749 per processor or $13,969 per server with 25 CALs, while Standard Edition pricing is set at $7,499 per processor or $1,849 per server with 25 CALs.

 

The Enterprise Edition will be required to take advantage of new features such as PowerPivot, multi-server management of up to 25 instances and Master Data Services.

 

As part of SQL Server 2008 R2, Microsoft has announced that features such as backup compression (which can reduce the size of backups by up to 60%) will be available in Standard Edition, where previously they were only available in Enterprise Edition.
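For reference, taking advantage of backup compression is a one-line change; the sketch below uses a hypothetical database name and path:

```sql
-- Compress a single backup explicitly (database name and path are placeholders).
BACKUP DATABASE [SalesDB]
TO DISK = N'D:\Backups\SalesDB_Full.bak'
WITH COMPRESSION, INIT;

-- Or make compression the server-wide default for all backups:
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;
```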

The company I work for is currently in the midst of solidifying its SQL Server high availability and disaster recovery scheme.  As part of that effort, I compared all the options available within SQL Server for HA/DR as a recommendation to my management.  We eventually went with a third-party tool, and this post isn’t an endorsement of that tool; rather, it is intended to give insight into how one company (the one I work for) approached its disaster recovery options.  Since I had to put this together anyway, I figured some of it might be helpful to someone else out there, so I thought I would write it up as a blog post.

 

What Do The Terms High Availability And Disaster Recovery Mean?

Before we get too deep into this, I should define the terms high availability and disaster recovery, since they are quite often used together and sometimes, mistakenly, interchangeably.

 

High Availability – Protects against hardware/software issues that may cause downtime.  An example could be a bad processor or a memory issue.

Disaster Recovery – The ability to recover from a catastrophic event, such as a natural disaster or, on a more local level, a data center disaster (fire, flood, cooling outage, etc.), and continue business operations with minimal loss and minimal downtime.

 

These two concepts really go hand in hand in ensuring business continuity and both should be built into a business continuity solution.

 

High Availability/Disaster Recovery Options In SQL Server

SQL Server has a number of native features built in to provide some basic measure of high availability/disaster recovery.  These include:

 

  • Database backups – this is probably the most basic form of disaster recovery for SQL Server and one that should be practiced in every situation, regardless of what other HA/DR solutions are in place.
  • Clustering – this provides a way of binding two or more like Windows servers together in what is known as a cluster.  Each of the servers in the cluster is considered a node and, typically, one node is “active” (processing transactions) and the other nodes are “passive”.  There is a private network that runs between the nodes so that if the active node fails to deliver a “heartbeat” that the other node(s) can detect, an automatic failover is invoked and one of the passive nodes is promoted to active.
    • Pros of Clustering:  Clustering provides redundancy in the case of server hardware failure and provides a fairly quick (within 5 minutes), automatic solution to move processing to another server.
    • Cons of Clustering
      • Does not protect against disk issues since all nodes share the database on the same disk.
      • Only protects against issues with that specific server, not data center-wide since all nodes are located in same data center.
      • Only addresses availability, not disaster recovery
  • Database Mirroring – new in SQL Server 2005, database mirroring offers a way to mirror a database to another server (and disk).  All transactions are sent to the mirror server as they are committed on the production server.  Depending on how it is implemented, it can automate failover, similar to clustering.
    • Pros of Database Mirroring
      • Provides some form of both HA and DR since mirror can be located in another data center, thus protecting you from hardware failure and disaster.
      • Fast – the mirror is updated virtually instantly.
    • Cons of Database Mirroring
      • Only done at the database level, not the instance level and only user databases can be mirrored, not system databases.  This means that some other form of synchronizing logins and other system database objects has to be devised.
      • To get all features of database mirroring, Enterprise Edition has to be used.
      • Any SQL Agent jobs must be manually enabled on the mirror if a failover takes place.
  • Log Shipping – one of the oldest forms of DR available in SQL Server.  It involves setting up a warm standby server with a copy of the protected user database on it; backups of the transaction log from the production database are periodically shipped to the standby server and applied.
    • Pros of Log Shipping:
      • Tried and true technology that has been around for a long time.
      • At the database level, can provide both HA and DR protection because warm standby can be located in another data center.
    • Cons of Log Shipping:
      • Amount of potential data loss is higher than with the other options because logs are usually shipped no more frequently than every 5 minutes and typically, more like every 30 minutes to an hour.
      • Failover is fairly manual and time intensive.  Takes longer than other options to bring warm standby online.
      • Like database mirroring, this only protects a database, not the entire instance.
      • For SQL Server 2000, this feature is only available in Enterprise Edition.  Available in Standard Edition from SQL Server 2005 forward.
      • Does not transfer non-logged operations or schema changes (security changes, addition of database objects, etc.).
  • Replication – while not necessarily intended as an HA/DR solution, replication can be used in both scenarios.
    • Pros of Replication:
      • Real-time data updates with minimal, if any, data loss.
      • Can be used for both HA/DR as publisher and subscriber can be in different data centers.
    • Cons of Replication:
      • Complicated to set up and maintain.
      • No provided failover mechanism.  This has to be created as part of solution.
      • Again, a database-specific solution, not instance-specific.

Given that these native solutions are really database based rather than instance based, we chose to look at third-party options.  The product we settled on was Double-Take.  While certainly not an easy solution to implement, Double-Take was attractive to us because it allowed us to set up a standby server in our hot site for each of our mission-critical SQL Servers and then continuously replicate the entire instance to the standby server.  It also provides for either automated (if the server stops responding) or manual failover (we have opted for manual) through a control applet that automatically swaps DNS entries between the production and standby servers when a failover is initiated.

 

Double-Take:  How It Works

Both the production server and the standby server have to have the exact same SQL Server configuration (edition, build, directory structure/location, instances, etc.).  The Double-Take software is then installed on both servers and, through the Double-Take software, the production server is configured as the source and the standby server as the target.

 

During the configuration process, you can configure Double-Take to compress the data before replicating it over the WAN to the standby server.  This can save a ton of bandwidth and ensures that transactions are queued on the target server as quickly as possible, minimizing data loss in the event of a failure.

 

Additionally, Double-Take will generate scripts for failover, failback, and restoring the databases back to the production server once it is back in commission.  These scripts and/or the replication can be customized by overriding the automatically generated rules that Double-Take creates.

More tidbits

23 October 2009

Work has been hectic for both of us, so please pardon the absence of full-bodied blog posts.  Until we get our feet under us, here are a couple of (hopefully) helpful tidbits.

 

Since you've been good and patient, you can have a treat


sys.dm_exec_requests

I’m not the most patient person in the world and have a tendency to think that a process has stopped when it’s actually processing away.  A few months ago, I needed to run a DBCC on a database with corruption errors.  This wasn’t a large database and the DBCC had already been running for about 2 1/2 hours.  I put something out on Twitter about whether I should continue the DBCC (which I knew that I should) or worry about whether something had gone horribly wrong.  Paul Randal quickly tweeted back to not stop the DBCC and to check to see how far it had gone.  That’s when he gave me the information that has helped me through a number of waiting games:

 

sys.dm_exec_requests is a DMV that returns a row for every request that is currently executing.  Since I was certain that this was the only DBCC CHECKDB running on this server, the following query eased my mind:

 

select percent_complete
 from sys.dm_exec_requests
 where command = 'DBCC CHECKDB'

 

Keeping an eye on that let me know that the process was progressing and kept me from making a stupid mistake.  I’ve since used it on long-running processes like backups and restores.  It also provides a way to tell users that, yes, this is running and it’s x% done.
One caveat – the percent_complete column is only populated for specific operations like backups, restores, rollbacks, DBCC checks, etc.  It is not populated for ordinary queries.
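sys.dm_exec_requests also exposes an estimated_completion_time column (in milliseconds) for those same operations, so a slightly expanded sketch of the query above can answer “how much longer?” as well:

```sql
-- percent_complete and estimated_completion_time are only populated for
-- operations like backups, restores, rollbacks, and DBCC checks.
select session_id,
       command,
       percent_complete,
       estimated_completion_time / 60000.0 as est_minutes_remaining
from sys.dm_exec_requests
where percent_complete > 0;
```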

 

Altering a table in SSMS 2008 (without T-SQL)

While I know that we’d all rather use a sqlcmd query through a DAC connection to do anything SQL Server related, there may be times when it’s more expeditious to make the change within SSMS using the Design option.

 

If you’re in SQL Server 2008 and the table is populated, making any changes within the Design option may cause the following error:

[Screenshot: alter_table]

While we were able to do this in SQL Server 2005, it’s unavailable by default in 2008.  Fortunately, you can still do this in SSMS; you just need to set the option to make it available.

 

In SSMS, go to Tools->Options and expand Designers.  Then, under Table and Database Designers, uncheck the box next to ‘Prevent saving changes that require table re-creation’.  Voila!  You can now make all the changes that you want.

 

As a last minute bit of news, Tim just found out that he’s now a Friend of RedGate.  Congratulations, Tim!

This is a quickie blog post, but I thought I’d post it before I forgot about it.

 

We have transactional replication turned on for one of our production databases.  This database includes XML fields and we recently saw the following error:

 

           Length of LOB data (78862) to be replicated exceeds configured maximum 65536

 

Looking into it, this is a server setting, and the default value for Max Text Replication Size is 65536 (bytes).  Because our XML data is of varying sizes, I made the decision to set it to the maximum allowable value, 2147483647 (bytes).

 

To change it through SSMS:

  • Right click on the server and choose Properties
  • On the Advanced page, change the Max Text Replication Size to 2147483647

In T-SQL:

EXEC sp_configure 'max text repl size', 2147483647
GO

RECONFIGURE
GO
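To confirm the change took effect, you can check the running value in sys.configurations (note that the option’s display name there includes the unit):

```sql
SELECT name, value, value_in_use
FROM sys.configurations
WHERE name = N'max text repl size (B)';
```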
 
Hope this helps someone out!
A couple of weeks ago, I tweeted a question to the community about whether there was any way to run PowerShell scripts from within a SQL Server Reporting Services 2008 DataSet query.  The answers I received were discouraging.  Even though Microsoft is touting PowerShell as the best thing since sliced bread (I have to admit, it is pretty darned cool!) and integrating it into all of its current products, the integration with SQL Server 2008 seems to have stopped at a mini PowerShell console (SQLPS) and a SQL Server provider that gives you easier access to SQL Server commands from within PowerShell.  The integration hasn’t gone beyond the database engine, so if you want to run PowerShell scripts from within Reporting Services, you have to get creative.  That posed a big problem for me because the report I was writing depended on some PowerShell scripts I had written.

 

After walking away from the problem for an hour or two, it finally hit me: since a Reporting Services 2008 DataSet query runs T-SQL code, including stored procedures, why not write a stored procedure that runs a PowerShell script?  Below is the stored procedure that I wrote.  It is really pretty simple.  It takes, as a parameter, the command-line text that you would normally type at a PowerShell prompt to run your script.  That includes the script path\name and any parameters the PowerShell script requires.

 

USE [MyDatabase]
GO

IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[Utility].[RunPowerShellScript]') AND type IN (N'P', N'PC'))
    DROP PROCEDURE [Utility].[RunPowerShellScript]
GO

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

CREATE PROCEDURE [Utility].[RunPowerShellScript]
      @Script varchar(2000)
AS

set nocount on;

declare @cmd varchar(2000);
set @cmd = 'sqlps -c "' + @Script + '"';
exec master..xp_cmdshell @cmd, no_output;
GO

 

In the above code, “[MyDatabase]” of course refers to the database that you want this stored procedure stored in.  Walking through the code, all this script really does is create a stored procedure called Utility.RunPowerShellScript that runs the xp_cmdshell extended stored procedure with a command string that calls the SQL Server PowerShell mini console (SQLPS) in command-line mode and passes to that command line whatever you passed into the stored procedure as a parameter.  For my own purposes, I have created a schema called “Utility” so that I can easily identify stored procedures, such as this one, as helper stored procedures.  Feel free to omit this if you like.

So here is an example of how you could use this stored procedure after you have created it.  Say you wanted to run a PowerShell script called “Get-DriveSpace” that resides in the D:\Scripts folder and returns total size and free space information for the drives on a server that you pass in.  All you would need to do is type:

 

exec MyDatabase.Utility.RunPowerShellScript '"D:\Scripts\Get-DriveSpace.ps1 -Server MyServer"'

  

Where “MyServer” is the name of the server that you are passing into the PowerShell script as a parameter.

 

That’s it.  Short and simple, and now I have a mechanism to call all of the PowerShell scripts that I want from within a Reporting Services DataSet query.

I have been managing DBAs for over ten years now and one question that always seems to come up, usually from management, is what do DBAs do anyway?  Hopefully, you have read my prior post, “Yes, Production DBAs Are Necessary!” and you know where I stand on this issue. 

 

I fully believe in the role of a production DBA and, as someone who has been in this role for well over a decade, would like to define, based on my experience, what I think that role should be.  The most effective way of doing this, I believe, is to define what a production DBA should and shouldn’t be expected to do.  Granted, this won’t work perfectly for every situation and this is not intended to be an exhaustive list, but the following should be a good guideline for defining what a production DBA does and why they are critical to ensuring that your data is safe and available.

 

User data

A production DBA should not have anything to do with data. There, I said it. This statement alone is going to generate tons of controversy, but let me explain and specify that I’m referring to user data in user databases. This confuses many people because, after all, DBA does stand for database administrator. Let me clear up that confusion right now by saying that a database is a manageable entity just like a server. Data lives inside a database, much like an application lives on a server. Typically, there is a lot of data in a database, just like there can be many applications on a server, each of which may have its own application administrator. Just as we don’t expect the system/server administrator to manage all of the applications on the server, the production DBA should not be expected to manage the data in the database. The production DBA is responsible for the uptime of the database, the performance of the SQL Server instance, and the safety (backups/DR) of the data in the database, not for the data itself. There should be other roles in the enterprise responsible for determining how the data gets into the database and how it is used from there, namely data architects, database developers, etc.

 

Optimizing T-SQL

A production DBA should work with database developers to help them optimize T-SQL code. I emphasize work because production DBAs should not be writing code, outside of administration automation. This goes hand in hand with the previous point and, when you think about it in practical terms, makes sense. A production DBA may be responsible for managing hundreds or thousands of user databases, all containing data for different uses. A production DBA can’t practically understand all of this data and be able to write code against it effectively. What the production DBA can do, though, is help the database developers optimize code by looking at available indexes and discussing with them how best to arrange joins and where clauses to make the most effective use of indexes and/or create additional indexes. Additionally, if your organization falls under the Sarbanes-Oxley or PCI regulations, this segregation may be mandatory. Your developers should not have access to production databases. Their code should be promoted by the production DBA, who does have access to the production databases. This also means that you should have a development environment that reasonably approximates your production environment.
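As one concrete example of that collaboration, the missing-index DMVs give the production DBA a starting point for those index discussions. The following is only a sketch; the weighting in the ORDER BY is just one common heuristic:

```sql
-- Candidate missing indexes, roughly ordered by estimated benefit.
-- Treat these as suggestions to evaluate with developers,
-- not commands to act on blindly.
SELECT TOP (10)
       mid.statement AS table_name,
       mid.equality_columns,
       mid.inequality_columns,
       mid.included_columns,
       migs.user_seeks,
       migs.avg_user_impact
FROM sys.dm_db_missing_index_details AS mid
JOIN sys.dm_db_missing_index_groups AS mig
    ON mig.index_handle = mid.index_handle
JOIN sys.dm_db_missing_index_group_stats AS migs
    ON migs.group_handle = mig.index_group_handle
ORDER BY migs.user_seeks * migs.avg_user_impact DESC;
```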

 

Managing Instances

The production DBA should manage the SQL Server instance(s). This is a big job and, if your data is important and you have more than a handful of instances, it is a full-time job. Let me break down what this actually includes to illustrate just how large a job it is:

  1. Install/patch SQL Server instances – The production DBA is responsible for installing any new instances of SQL Server and making sure that existing instances stay current on Microsoft patches. This isn’t just a matter of running through an install wizard. A number of things have to be considered by the production DBA when a new instance of SQL Server is installed. These include:
    • Collation settings – Questions have to be asked about how the application that uses this data expects the database to handle the case of words, accents on words, and the code pages the application expects to be used (this gets into localization and language for shops in other countries or those responsible for servers accessed by users in other countries).
    • Drive/File Layout – To make the database instance run optimally, the production DBA has to consider what drive resources are going to be available and how the database files should be laid out across those resources. During this phase of the installation the production DBA has to consider whether only local storage is available or whether storage will be housed on a SAN. If it is going to be housed on a SAN, the production DBA needs to work with the SAN administrator to ensure that the LUNs are set up appropriately for optimal SQL Server access, which in and of itself requires a lot of knowledge and experience.
    • Scalability – The production DBA should be involved in developing the specifications for the hardware. Questions that the production DBA will be asking of the various parties include, how many concurrent users will be accessing the data, how many databases will there be, is the data static or changing and how does it change (i.e., batch load, transactional updates), etc. This will give the production DBA a better idea of what kind of resource utilization to expect and drive the hardware specification process. It will also help determine recovery model and backup strategies.
  2. Create, and periodically test, a backup and recovery scheme for your enterprise’s data. Things the DBA has to consider as part of this include:
    • Is the data development or production data? Yes, development data gets backed up and is considered in an effective backup/recovery scheme because, after all, you don’t want to lose any of your development effort; it just isn’t prioritized as highly as production data.
    • How often is data updated in each database, and how is it updated (online transactions, batch loads, etc.)? This determines the recovery model that should be used and leads the DBA to ask another question: what is the maximum acceptable data loss in terms of time? With the answers to these questions, the DBA can more effectively determine the type and frequency of backups (full, differential, transaction log, etc.).
  3. While somewhat tied with backup/recovery, the DBA is also tasked with helping to come up with a disaster recovery strategy for the SQL Server environment, within the constraints of the enterprise’s available resources.
  4. Uptime/performance – The DBA is tasked with managing and monitoring the things that impact the availability of the databases. This includes CPU utilization, I/O performance, disk space, reviewing event and SQL Server logs, etc.
  5. Help design the security model for SQL Server instances. Within SQL Server, there are a number of built-in security roles, at both the instance level and the database level, that users can be made members of, in addition to the ability to create custom roles. Additionally, SQL Server can operate in Windows Authentication mode, which integrates with your domain security and uses the users’ Windows domain accounts, or in Mixed Mode (SQL Server and Windows Authentication), which allows accounts to be either Windows domain accounts or native SQL Server accounts. There are a number of factors to consider here to make sure your users can use the system while you still protect and safeguard sensitive data and the SQL Server configuration.
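To make item 1 a bit more concrete, the collation and patch-level questions can be answered on an existing instance with a few standard catalog queries. This is just a quick sketch using documented `SERVERPROPERTY` values and `sys.databases`:

```sql
-- Check the instance's version, patch level, edition, and default collation.
SELECT
    SERVERPROPERTY('ProductVersion') AS ProductVersion,
    SERVERPROPERTY('ProductLevel')   AS PatchLevel,
    SERVERPROPERTY('Edition')        AS Edition,
    SERVERPROPERTY('Collation')      AS ServerCollation;

-- Each database can override the server default, so compare per-database collations too.
SELECT name, collation_name
FROM sys.databases;
```

Running these before and after an install or patch gives the production DBA a quick sanity check that the instance ended up configured the way the application expects.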
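The backup scheme in item 2 might look something like the sketch below. The database name, file paths, and implied schedule are illustrative assumptions only; the right frequencies fall out of the maximum-acceptable-data-loss question above.

```sql
-- Weekly full backup (database name and paths are hypothetical).
BACKUP DATABASE SalesDB
TO DISK = N'D:\Backups\SalesDB_Full.bak'
WITH INIT, CHECKSUM;

-- Nightly differential backup, capturing changes since the last full.
BACKUP DATABASE SalesDB
TO DISK = N'D:\Backups\SalesDB_Diff.bak'
WITH DIFFERENTIAL, INIT, CHECKSUM;

-- Frequent transaction log backups (requires the FULL recovery model);
-- the log backup interval is effectively your maximum data loss window.
BACKUP LOG SalesDB
TO DISK = N'D:\Backups\SalesDB_Log.trn'
WITH INIT, CHECKSUM;

-- "Periodically test" means verifying backups are actually restorable.
RESTORE VERIFYONLY FROM DISK = N'D:\Backups\SalesDB_Full.bak';
```

Note that `RESTORE VERIFYONLY` is only a first-pass check; a real test of the scheme means restoring to a scratch server and confirming the data is usable.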
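For the monitoring work in item 4, SQL Server 2005 and later expose their performance counters through a dynamic management view, so a DBA can poll a few availability-related numbers without leaving Management Studio. A minimal sketch:

```sql
-- Sample a couple of memory-pressure counters from the Buffer Manager object.
SELECT counter_name, cntr_value
FROM sys.dm_os_performance_counters
WHERE object_name LIKE '%Buffer Manager%'
  AND counter_name IN ('Page life expectancy', 'Buffer cache hit ratio');
```

In practice these queries are usually scheduled (via SQL Server Agent or an external monitoring tool) and trended over time, since a single snapshot says little about whether performance is degrading.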
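And for the security model in item 5, two of the first things a production DBA will audit are the authentication mode and who holds `sysadmin`. A sketch using the standard catalog views:

```sql
-- 1 = Windows Authentication only, 0 = Mixed Mode.
SELECT SERVERPROPERTY('IsIntegratedSecurityOnly') AS WindowsAuthOnly;

-- List members of the sysadmin fixed server role; this list should be short.
SELECT member.name AS SysadminMember
FROM sys.server_role_members AS rm
JOIN sys.server_principals AS role_p
    ON rm.role_principal_id = role_p.principal_id
JOIN sys.server_principals AS member
    ON rm.member_principal_id = member.principal_id
WHERE role_p.name = 'sysadmin';
```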

So as you can see, the production DBA has plenty of things to contend with in order to effectively and efficiently maintain a SQL Server environment.  The bottom line comes down to how valuable your data is to you and how much you are willing to risk by not allowing your DBA to dedicate themselves to maintaining and safeguarding it.

I know that, for many, this is a controversial topic.  There are those who believe that there really is no such thing as a SQL Server production DBA and that DBAs should be jacks of all trades, doing anything from database development to OLAP/BI development to .NET programming to being a webmaster or network/server/system administrator.  It seems that everywhere I turn, whether in job postings, horror stories shared with colleagues, or even blogs from members of the SQL Server community, I see references to production DBAs being more than just a DBA.  It is as if, somewhere, managers are thinking that they need to have someone manage their SQL Server databases, but they don’t know what that means, so the first thought is “let’s just make it part of someone’s duties that we already have on staff.”  The question is constantly asked, “Why can’t the database developer handle that, he/she already has to use SQL Server?” or the common mistake, “let’s just let the server administrator handle it.”  There has even been a term coined for this: “the reluctant DBA.”  I have actually heard SQL Server compared to Access since it has wizards and, because Access doesn’t require a production DBA, why the heck should SQL Server?  In my experience, this perception is especially prevalent in the small and medium-size business (SMB) market, but there are large companies, and managers who have grown up in medium-size companies, that seem to reinforce this misconception.


Microsoft hasn’t really done anything to correct this misconception.  From a marketing perspective, I guess it is in their best interest for prospective SQL Server customers to think that it doesn’t take much to manage a production SQL Server environment.  When companies purchase Oracle or move their applications to Oracle databases, it is a foregone conclusion that dedicated production DBAs are necessary and so, these companies typically build that into their cost calculations.  I guess Microsoft feels that if customers don’t know a DBA is required, it makes their product look that much less expensive.


Now don’t get me wrong, I am not saying that database developers, server administrators, network administrators, etc. can’t become DBAs.  They just can’t do it without training and experience, any more than they fell into their current jobs without training and experience, and the job of production DBA certainly can’t be done (at least not effectively) in someone’s “spare time.”  As SQL Server has matured, it has become extremely flexible and full featured, but these features come with a cost: complexity.  SQL Server can be used and configured in a myriad of different ways and for many different scenarios, but who is going to manage all of that, and recommend how it should be configured and used for a particular purpose, if you don’t have a production DBA?


In my next post, I will discuss what a production DBA does and what they shouldn’t do.