SQL Search – a review

2 February 2010

SQL Search, a new (free!) SSMS plugin from Red Gate, was released yesterday, February 1st.  This tool allows you to search for text within SQL Server objects.  I’ve had occasions where I needed to search legacy stored procedures for deprecated code (backup log … with no_log – egad!).  There have also been times when database servers were renamed and any references to the previous server name had to be changed.  Prior to the beta version of this tool, I’d used the syscomments system table to find text within stored procedures, but the results definitely weren’t pretty – something like the query below was as good as it got.
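
For reference, here is a minimal sketch of that old syscomments approach (the search string is just an example):

-- Old-school text search via syscomments (SQL Server 2000/2005).
-- Note: long definitions are split across 4000-character rows, so a match
-- that straddles a row boundary can be missed.
SELECT DISTINCT OBJECT_NAME(c.id) AS object_name
FROM syscomments c
WHERE c.[text] LIKE '%no_log%'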

 

SQL Search allows you to enter a piece of text, specify either all objects or specific object types, and choose whether you want to search all databases or a single database.

 

In the example below, I searched for the term ‘Reject’ occurring within all objects in the Reporting database.  The list of objects, their types and how each match was made is displayed in the upper half of the window.  When the object highlighted in the top of the screen is a stored procedure or function, its text is displayed in the bottom half of the screen with the matching term highlighted.

[Screenshot: SQL Search results window]

 

One of the most helpful features is the link in the middle of the screen to ‘Select object in Object Explorer’.  Just as it sounds, a click will take you directly to that object.

 

In working with it, I haven’t noticed many downsides.  The first search against a database takes a bit of time while SQL Search indexes it, but after that, searches go pretty quickly.  The first time that you search a database on a server, SQL Search needs to be opened within the context of that server, but after that point, the server is added to the dropdown box of selectable servers.

 

SQL Search seems to be a helpful tool and, as I mentioned before, it’s free.  To download, just click here to go to the Red Gate site.


Several weeks ago on Twitter, Colin Stasiuk asked if anyone had a script to pull back job statuses from the SQL Server Agent.  I had been doing some work on a SQL Server DBA console and had already written PowerShell scripts to gather various pieces of information, so I put together a PowerShell script that could be run as a SQL Agent job to periodically report the statuses of SQL Agent jobs.  Eventually, I think Colin went with a different solution, but I figured I would go ahead and post the PowerShell script that I came up with.  This solution has been tested against SQL Server 2000/2005/2008.

 

This first script simply creates the SQL Server table that the PowerShell script writes the status information to; it can be downloaded here: LastJobStatus_Table.sql.

CREATE TABLE [dbo].[LastJobStatus](
	[ServerName] [nvarchar](128) NULL,
	[Job_Name] [nvarchar](128) NULL,
	[Run_Date] [datetime] NULL,
	[Job_Duration] [time](7) NULL,
	[Run_Status] [varchar](50) NULL,
	[Sample_Date] [datetime] NULL
) ON [PRIMARY]

 

Next is the PowerShell script that does all of the work of bringing back the SQL Agent job statuses.  It takes one parameter, Server, which is the name of the SQL Server that you want job statuses from.  Make sure that you change the name “MgtServer” to the name of the SQL Server that will store the results from this script.  You’ll also need to change the script root directory to match your environment.  This script can be downloaded here: Get-LastJobStatusServer.ps1.

param([string]$Server=$(Throw "Parameter missing: -Server ServerName"))
[void][reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")

##Point to library files
$scriptRoot = "D:\DBAScripts"

# Change the script root directory above to match your environment
#$scriptRoot = Split-Path (Resolve-Path $myInvocation.MyCommand.Path)
. $scriptRoot\LibrarySmo.ps1	#Part of Chad Miller's SQLPSX project on CodePlex
. $scriptRoot\DataTable.ps1	    #Included with the scripts for this blog, also from a CodePlex project (http://www.codeplex.com/PSObject)

Set-Alias -Name Test-SqlConn -Value D:\DBAScripts\Test-SqlConn.ps1

##Define variables

## open database connection
$conn = New-Object System.Data.SqlClient.SqlConnection("Data Source=$server;
Initial Catalog=master; Integrated Security=SSPI")
$conn.Open()
$cmd = $conn.CreateCommand()
$cmd.CommandText= "	CREATE TABLE #JobsRun (	ServerName nvarchar(128),
											Job_Name nvarchar(128),
											Run_Date datetime,
											Job_Duration time(7),
											Run_Status varchar(50),
											Sample_Date datetime
										  );
					insert into #JobsRun
					select	@@SERVERNAME AS ServerName
							,j.name Job_Name
							,(msdb.dbo.agent_datetime(jh.run_date,jh.run_time)) As Run_Date
							,substring(cast(run_duration + 1000000 as varchar(7)),2,2) + ':' +
									substring(cast(run_duration + 1000000 as varchar(7)),4,2) + ':' +
									substring(cast(run_duration + 1000000 as varchar(7)),6,2) Job_Duration
							,case when run_status = 0
										then 'Failed'
								when run_status = 1
										then 'Succeeded'
								when run_status = 2
										then 'Retry'
								when run_status = 3
										then 'Canceled'
								when run_status = 4
										then 'In Progress'
							end as Run_Status
							,GETDATE() As Sample_Date
					FROM msdb.dbo.sysjobhistory jh
						join msdb.dbo.sysjobs j
							on jh.job_id = j.job_id
					where	step_id = 0
					and		enabled = 1
					order by cast(cast(run_date as char) + ' ' +
								substring(cast(run_time + 1000000 as varchar(7)),2,2) + ':' +
								substring(cast(run_time + 1000000 as varchar(7)),4,2) + ':' +
								substring(cast(run_time + 1000000 as varchar(7)),6,2)  as datetime) desc

					-- Change 'MgtServer' below to the name of the SQL Server in your
					-- environment that houses the LastJobStatus table which stores the
					-- results of this script
					delete from MgtServer.DBA_Console.dbo.LastJobStatus where ServerName = '$server'
					insert into MgtServer.DBA_Console.dbo.LastJobStatus (ServerName, Job_Name, Run_Date, Job_Duration, Run_Status, Sample_Date)
					select	jr.ServerName,
							jr.Job_Name,
							jr.Run_Date,
							jr.Job_Duration,
							jr.Run_Status,
							jr.Sample_Date
					from	#JobsRun jr
					where	Run_Date = (	select	max(jr1.Run_Date)
											from	#JobsRun jr1
											where	jr1.Job_Name = jr.Job_Name)
					drop table #JobsRun; "
$cmd.ExecuteNonQuery()
$conn.Close()
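
Once the collection job has run, a quick query on the management server (DBA_Console is the database used in the script above) surfaces anything that needs attention; for example:

select ServerName, Job_Name, Run_Date, Job_Duration, Run_Status
from DBA_Console.dbo.LastJobStatus
where Run_Status = 'Failed'
order by ServerName, Job_Name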

 

There are references in the above script to a LibrarySMO.ps1 script that can be obtained from CodePlex (see the comments in the script for the URL) and a DataTable.ps1 script (also from CodePlex, but included in the download file for this blog for your convenience).

# Taken from out-dataTable script from the PowerShell Scripts Project
# http://www.codeplex.com/PsObject/WorkItem/View.aspx?WorkItemId=7915

Function out-DataTable {

  $dt = new-object Data.datatable
  $First = $true  

  foreach ($item in $input){
    $DR = $DT.NewRow()
    $Item.PsObject.get_properties() | foreach {
      if ($first) {
        $Col =  new-object Data.DataColumn
        $Col.ColumnName = $_.Name.ToString()
        $DT.Columns.Add($Col)       }
      if ($_.value -eq $null) {
        $DR.Item($_.Name) = "[empty]"
      }
      elseif ($_.Value -is [array]) {
        # Join array values into a single semicolon-delimited string
        # (note: [string]::Join takes the separator first)
        $DR.Item($_.Name) = [string]::Join(";", $_.Value)
      }
      else {
        $DR.Item($_.Name) = $_.value
      }
    }
    $DT.Rows.Add($DR)
    $First = $false
  } 

  return @(,($dt))

}

Going with your gut

11 January 2010

This post is a response to Tim Ford’s Whose Blog Is It Anyway challenge.  The opportunity to use the words pony, nude and bong in a blog post about an actual experience was too much to pass up.

 

I’m a DBA and I’m logical – coldly logical, if you listen to K. Brian Kelley – but I’m here to tell you that sometimes you just have to go with your gut.  Most of us have an inner voice that clues us in on the things that we know but can’t rationalize or aren’t ready to deal with yet.  That’s the voice that lets you know that the URL in your son’s internet history with the word ‘PussyCat’ probably isn’t a site featuring live, nude cats.  Not everyone trusts that voice – just ask George “Let’s Have Padme Die Of A Broken Heart Instead Of Anakin Crushing Her To Death” Lucas – that would have been a far more awesome scene.

 

I’m here to talk about one such time, early in my career.  I had a great DBA to learn from, but he had moved on to another position.  I felt pretty firm in my knowledge and knew that, whatever came up, I could fix or handle it by simply using some magical tool, library or bong.

 

That’s when I ran into it – the problem that I couldn’t fix, but that was going to cause me pain.  On a Friday evening I started seeing error messages in the SQL Server error logs indicating that we were having disk issues.  Of course the error messages didn’t read ‘Your disks are failing’, but everything that I was reading seemed to point that way.  One thing bears noting – this was the central order entry/documentation application for the hospital that I worked at – there was no acceptable downtime.  I contacted our system administrator, who did some research and let me know that all of the disks looked fine.  Now, this was an SA whose knowledge I respected – he’d been in the field forever (not like ENIAC forever, but pretty close).  He alluded to the fact that I was still a newbie and probably hadn’t diagnosed the problem correctly.  By this point it was around noon on Saturday and, even though I was getting tired of looking around, I figured that I’d started, so I’d finish.  I wasn’t able to find any information clearer than what I’d provided before, but I knew, in my gut, that we were about to have problems – big problems.  I called the SA again and tried to encourage him to look a little more closely.  I asked him to pretend that the RAID array was a horse stable.  From the outside, it might sound pretty happy.  On the inside it might look good initially, but as soon as he looks down he’s going to be very sad about the pony.  For some reason, that analogy didn’t work…

 

As the afternoon progressed, I stuck with my gut feeling and bugged the heck out of that SA.  The disks eventually gave up the ghost and, 40 hours (non-stop) later, with our application vendor and Microsoft on the line, we finally got the disks replaced and the application back online.  At the post-mortem, once everyone had gotten a few hours of sleep, that same SA wanted to know how I knew what the problem was, since none of the errors were absolutely clear about the cause.  I just told him that I knew in my gut that something bad was about to happen.  He said, “Well, if you’d told me that in the beginning, I would have done more research!”

 

While the story above is true, some of the particulars have been changed to protect the innocent.


We’ve been in the process of setting up reporting servers for some of our larger clients.  We’re also in the process of setting up DEV and QA environments (yes, Colin, I know, I know).  This involves restoring databases onto the new servers.

 

Transferring logins

There is a fairly detailed Knowledge Base article from Microsoft outlining how to transfer logins between servers.  This post outlines the gist of that article.  The first step is creating the sp_hexadecimal stored procedure in the master database.  This stored procedure is used by sp_help_revlogin to create a hashed value for the login password.  The second step is to create the sp_help_revlogin procedure – there is a script for SQL Server 2000 and one for SQL Server 2005/2008.  The output from running sp_help_revlogin on the source server is a series of CREATE LOGIN statements for all of the existing logins.
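
The generated statements carry each login’s password hash and SID along with it.  As a hedged illustration of the shape of those statements (the login name, password and SID here are made up, and the real output supplies the password as a hash rather than plain text), a hand-written equivalent for SQL Server 2005/2008 looks like this:

CREATE LOGIN [AppLogin]
	WITH PASSWORD = 'Str0ng!Passw0rd',
	SID = 0x2F5B769F543973419BCEF78DE9FC1A64,
	DEFAULT_DATABASE = [master]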

 

There are three main benefits for using this method to transfer logins:

  1. It’s fast.  No scripting out individual logins.
  2. If the logins and users are from the same server, the user and login SIDs will match, avoiding any ‘orphaned users’.
  3. The passwords come across with the logins.  Definitely preferable to keeping an Excel spreadsheet tucked away with all of the logins and associated passwords.

 

As with any process that is automatically generated, it’s best to review the scripts carefully before running them on the destination server.  A couple of issues that you need to watch out for are:

  • If a login has a defined default database that doesn’t exist on the destination server, obviously the CREATE LOGIN statement will fail.
  • If there are existing logins on the destination server, there is the possibility that a SID may already be in use.  In that case, you will need to create a new login and map the existing users.

Orphaned Users


One issue that comes up when transferring logins is that the database users may not be mapped to the server logins.  This occurs when there isn’t a corresponding login SID for a user SID in the database.  If the database(s) being restored are from a single server and the logins were transferred from that same server, the SIDs will match.  If the SIDs do not match, the database user will not be attached to a server login and the user is said to be ‘orphaned’.

 

 There are some cases where orphaned users can’t be avoided.  In development environments, the databases on the destination server may be consolidated from more than one production server, logins may overlap and, as a result, the login and user SIDs don’t match.  Another instance would be when a production database is restored to a development server and the rights for the users in the development environment are different from their rights in the production environment.  For both of these cases, steps will need to be taken to match database users with their respective logins on the server.

 

 To determine whether there are any orphaned users, run the following stored procedure:

EXEC sp_change_users_login 'Report'

 

There are a few ways to handle these orphaned users.  In SQL Server 2000, the sp_change_users_login stored procedure can be used to update the mapping between logins and users.  While sp_change_users_login has an Auto_Fix option, that action assumes that the user name and login name match; if they don’t, it will fail.  Using the Update_One option is a safer and preferable way to handle it.  For SQL Server 2005/2008, the ALTER USER statement is the preferred method for mapping users to logins.  Greg Low’s article ‘Much ado about logins and SIDs’ provides a good explanation of these methods.
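
For a single user, the two approaches look like this (the user and login names are hypothetical):

-- SQL Server 2000: map one orphaned user to a specific login
EXEC sp_change_users_login 'Update_One', 'AppUser', 'AppLogin'

-- SQL Server 2005/2008: the preferred syntax
ALTER USER [AppUser] WITH LOGIN = [AppLogin]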

 

Quick script to map users to logins

 

If you’re sure that the usernames match the login names that they map to, here are a couple of quick scripts that you can run in each database that you restore from your production server to your development server to map the orphaned users – yes, they’re cursors, but they run quickly.

 

SQL Server 2000

DECLARE @Username varchar(128), @cmd varchar(300)
DECLARE userLogin_cursor CURSOR FAST_FORWARD
FOR
SELECT UserName = name FROM sysusers
WHERE issqluser = 1 and (sid IS NOT NULL AND sid <> 0x0)
    AND suser_sname(sid) IS NULL
ORDER BY name
FOR READ ONLY
OPEN userLogin_cursor

FETCH NEXT FROM userLogin_cursor INTO @Username
WHILE @@fetch_status = 0
  BEGIN
    SET @cmd = 'EXEC sp_change_users_login ' + char(39) + 'AUTO_FIX' + char(39) + ', ' + char(39) + @Username + char(39)
    EXECUTE(@cmd)
    FETCH NEXT FROM userLogin_cursor INTO @Username
  END
CLOSE userLogin_cursor
DEALLOCATE userLogin_cursor

 

SQL Server 2005/2008

DECLARE @Username varchar(128), @cmd varchar(300)
DECLARE userLogin_cursor CURSOR FAST_FORWARD
FOR
SELECT UserName = name FROM sysusers
WHERE issqluser = 1 and (sid IS NOT NULL AND sid <> 0x0)
    AND suser_sname(sid) IS NULL
ORDER BY name
FOR READ ONLY
OPEN userLogin_cursor
 
FETCH NEXT FROM userLogin_cursor INTO @Username
WHILE @@fetch_status = 0
  BEGIN
    SET @cmd = 'ALTER USER ' + QUOTENAME(@Username) + ' WITH LOGIN = ' + QUOTENAME(@Username)
    EXECUTE(@cmd)
    FETCH NEXT FROM userLogin_cursor INTO @Username
  END
CLOSE userLogin_cursor
DEALLOCATE userLogin_cursor

I went to install SQL Server 2008 on a Windows Server 2008 R2 box today for the first time and was greeted with the error “You must use the Role Management Tool to install or configure Microsoft .NET Framework 3.5.”  The text of this error was about as decipherable as the voice of the adults in the old Peanuts cartoons, so I fired up my old friend Google to find out what to really do.  It turns out that Windows Server 2008 R2 ships with .NET Framework 3.5.1 as a feature and, in order to proceed, you need to go to Server Manager (that window that pops up when you log in and stays unless you close it) and enable (really, install) the .NET Framework 3.5.1 feature and any prerequisites.

[Screenshot: the Role Management Tool error from SQL Server 2008 setup]

 

[Screenshot: enabling the .NET Framework 3.5.1 feature in Server Manager]

Last week Microsoft released CTP3 for SQL Server 2008 R2.  At first blush, it doesn’t sound like it should be a huge release.  It is really a minor version release (10.5), as the next major release is scheduled to be SQL Server 2011, so one would expect to see some additional or increased functionality, but not any huge changes.

 

As it turns out, there are some additional features with this release, but the big change is coming in the editions and pricing of SQL Server.  Microsoft is adding two additional “premium” editions of SQL Server, Datacenter Edition and Parallel Data Warehouse Edition, which are not only going to be more expensive than the Enterprise Edition, but will be required to take advantage of some of the key features of SQL Server 2008 R2, such as multi-server management of more than 25 instances, StreamInsight (complex event processing), Star Join Queries and parallel database copy (Parallel Data Warehouse Edition). 

 

The retail price for each of these two new “premium” editions is set at $57,498 per processor, with no option for per-server-plus-CAL pricing.  Enterprise Edition pricing is set at $28,749 per processor or $13,969 per server with 25 CALs, while Standard Edition pricing is set at $7,499 per processor or $1,849 per server with 25 CALs.

 

The Enterprise Edition will be required to take advantage of new features such as PowerPivot, multi-server management of up to 25 instances and Master Data Services.

 

As part of SQL Server 2008 R2, Microsoft has announced that features such as backup data compression (which can reduce the size of backups by up to 60%) will be available in the Standard Edition where it was previously only available in Enterprise Edition.

The company I work for is currently in the midst of solidifying its SQL Server high availability and disaster recovery scheme.  While doing this, I compared all of the options available within SQL Server for HA/DR as a recommendation to my management.  We eventually went with a third-party tool, and this blog isn’t an endorsement of that tool; rather, it is intended to give insight into how one company (the one I work for) approached evaluating its disaster recovery options.  Since I had to put this together, I figured some of it might be helpful to someone else out there, so I thought I would write it up as a blog post.

 

What Do The Terms High Availability And Disaster Recovery Mean?

Before we get too deep into this, I figured I would define the terms High Availability and Disaster Recovery since they are quite often used together and sometimes, mistakenly, interchangeably.

 

High Availability – Protects against hardware/software issues that may cause downtime.  An example could be a bad processor or memory issue.

Disaster Recovery – The ability to recover from a catastrophic event, such as a natural disaster or, on a more local level, a data center disaster (fire, flood, cooling outage, etc) and be able to continue business operations with minimal loss and minimal downtime.

 

These two concepts really go hand in hand in ensuring business continuity and both should be built into a business continuity solution.

 

High Availability/Disaster Recovery Options In SQL Server

SQL Server has a number of native features built in to provide some basic measure of high availability/disaster recovery.  These include:

 

  • Database backups – this is probably the most basic form of disaster recovery for SQL Server and one that should be practiced in every situation, regardless of what other HA/DR solutions are in place (a minimal sketch follows this list).
  • Clustering – this provides a way of binding two or more like Windows servers together in what is known as a cluster.  Each of the servers in the cluster is considered a node and, typically, one node is “active” (processing transactions) and the other nodes are “passive”.  A private network runs between the nodes; if the active node fails to deliver a “heartbeat” that the other node(s) can detect, an automatic failover is invoked and one of the passive nodes is promoted to active.
    • Pros of Clustering:  Clustering provides redundancy in the case of server hardware failure and provides a fairly quick (within 5 minutes), automatic solution to move processing to another server.
    • Cons of Clustering
      • Does not protect against disk issues since all nodes share the database on the same disk.
      • Only protects against issues with that specific server, not data center-wide since all nodes are located in same data center.
      • Only addresses availability, not disaster recovery
  • Database Mirroring – new in SQL Server 2005, database mirroring offers a way to mirror a database to another server (and disk).  All transactions are sent to the mirror server as they are committed on the production server.  Depending on how it is implemented, failover can be automated, similar to clustering.
    • Pros of Database Mirroring
      • Provides some form of both HA and DR since mirror can be located in another data center, thus protecting you from hardware failure and disaster.
      • Fast.  Mirror is updated virtually instantly
    • Cons of Database Mirroring
      • Only done at the database level, not the instance level and only user databases can be mirrored, not system databases.  This means that some other form of synchronizing logins and other system database objects has to be devised.
      • To get all features of database mirroring, Enterprise Edition has to be used.
      • Any SQL Agent jobs must be manually enabled on the mirror if a failover takes place.
  • Log Shipping – this is one of the oldest forms of DR available in SQL Server.  It involves setting up a warm standby server with a copy of the protected user database on it; backups of the transaction log from the production database are periodically shipped to the standby server and applied.
    • Pros of Log Shipping:
      • Tried and true technology that has been around for a long time.
      • At the database level, can provide both HA and DR protection because warm standby can be located in another data center.
    • Cons of Log Shipping:
      • Amount of potential data loss is higher than with the other options because logs are usually shipped no more frequently than every 5 minutes and typically, more like every 30 minutes to an hour.
      • Failover is fairly manual and time intensive.  Takes longer than other options to bring warm standby online.
      • Like database mirroring, this only protects a database, not the entire instance.
      • For SQL Server 2000, this feature is only available in Enterprise Edition.  Available in Standard Edition from SQL Server 2005 forward.
      • Does not transfer non-logged transactions or schema changes (security changes, addition of database objects, etc.)
  • Replication – while not necessarily intended as an HA/DR solution, replication can be used in both scenarios.
    • Pros of Replication:
      • Real-time data updates.  Minimal data loss, if any.
      • Can be used for both HA/DR as publisher and subscriber can be in different data centers.
    • Cons of Replication:
      • Complicated to setup and maintain.
      • No provided failover mechanism.  This has to be created as part of solution.
      • Again, a database-specific solution, not instance specific.
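
As a minimal sketch of the first option above, a full backup plus periodic log backups might look like the following (the database name and paths are placeholders; WITH CHECKSUM requires SQL Server 2005 or later):

-- Full backup, taken e.g. nightly
BACKUP DATABASE [Orders] TO DISK = N'D:\Backups\Orders_full.bak' WITH INIT, CHECKSUM

-- Transaction log backup, taken e.g. every 15 minutes
BACKUP LOG [Orders] TO DISK = N'D:\Backups\Orders_log.trn' WITH CHECKSUM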

Given that these native solutions were really only database and not instance based, we chose to look at third party options.  The product that we settled on was Double-Take.  While certainly not an easy solution to implement, Double-Take was attractive to us because it allowed us to set up a stand-by server in our hot site for each of our mission critical SQL Servers and then continuously replicate the entire instance to the stand-by server.  It also provides for either automated (if the server stops responding) or manual failover (we have opted for manual) through a control applet that automatically swaps DNS entries between the production and the standby server when a failover is initiated.

 

Double-Take:  How It Works

Both the production and the standby server have to have the exact same SQL Server configuration (Edition, build, directory structure/location, instances, etc) installed.  The Double-Take software is then installed on both the production and standby server and then, through the Double-Take software, the production server is configured as the source and the standby server is configured as the target.

 

During the configuration process, you can configure Double-Take to compress the data before it replicates it over the WAN to the stand-by server.  This can save a ton of bandwidth and makes sure that the transactions are queued on the target server as quickly as possible ensuring minimal data loss in the event of a failure.

 

Additionally, Double-Take will also generate scripts for the failover, failback, and restore of the databases back to the production server when it is back in commission.  These scripts and/or the replication can be customized by overriding the automatically generated rules that Double-Take creates.

More tidbits

23 October 2009

Work has been hectic for both of us, so please pardon the absence of full-bodied blog posts.  Until we get our feet under us, here are a couple of (hopefully) helpful tidbits.

 

[Photo: Since you've been good and patient, you can have a treat]

sys.dm_exec_requests

I’m not the most patient person in the world and have a tendency to think that a process has stopped when it’s actually processing away.  A few months ago, I needed to run a DBCC CHECKDB on a database with corruption errors.  This wasn’t a large database, and the DBCC had already been running for about 2 1/2 hours.  I put something out on Twitter about whether I should continue the DBCC (which I knew that I should) or worry that something had gone horribly wrong.  Paul Randal quickly tweeted back to not stop the DBCC and to check how far it had gone.  That’s when he gave me the information that has helped me through a number of waiting games:

 

sys.dm_exec_requests is a DMV that returns a row for every request that is currently executing.  Since I was certain that this was the only DBCC CHECKDB running on this server, the following query eased my mind:

 

select percent_complete
 from sys.dm_exec_requests
 where command = 'DBCC CHECKDB'

 

Keeping an eye on that let me know that the process was progressing and kept me from making a stupid mistake.  I’ve since used it on long-running processes like backups and restores.  It also provides a way to tell users that, yes, this is going and it’s x% done.
One caveat – the percent_complete column is only populated for specific operations like backups, restores, rollbacks, DBCCs, etc.  It is not populated for ordinary queries.
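
A slightly fuller version of the same query that I keep handy adds the DMV’s estimated time remaining (a sketch; filter to taste):

select session_id, command, percent_complete,
	estimated_completion_time / 60000.0 as est_minutes_remaining
from sys.dm_exec_requests
where percent_complete > 0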

 

Altering a table in SSMS 2008 (without T-SQL)

While I know that we’d all rather use a sqlcmd query through a DAC connection to do anything SQL Server related, there may be times when it’s more expeditious to make the change within SSMS using the Design option.

 

If you’re in SQL Server 2008 and the table is populated, making any changes within the Design option may cause the following error:

[Screenshot: ‘Saving changes is not permitted’ dialog in SSMS]

While we were able to do this in SQL Server 2005, it’s blocked by default in 2008.  Fortunately, you can still do this in SSMS; you just need to set the option to make it available.

 

In SSMS, go to Tools->Options->expand Designers.  Then under Table and Database Designers, uncheck the box next to ‘Prevent saving changes that require table re-creation’.  Voila!  You can now make all the changes that you want. 

 

As a last-minute bit of news, Tim just found out that he’s now a Friend of Red Gate.  Congratulations, Tim!

This is a quickie blog post, but I thought I’d post it before I forgot about it.

 

We have transactional replication turned on for one of our production databases.  This database includes XML fields and we recently saw the following error:

 

           Length of LOB data (78862) to be replicated exceeds configured maximum 65536

 

Looking into it, this is a server setting, and the default value for Max Text Replication Size is 65536 (bytes).  Because our XML data is of varying sizes, I made the decision to set this to the maximum allowable value – 2147483647 (bytes).

 

To change it through SSMS:

  • Right-click on the server and choose Properties
  • On the Advanced page, change the Max Text Replication Size to 2147483647

In T-SQL:

EXEC sp_configure 'max text repl size', 2147483647
GO
RECONFIGURE
GO
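
To confirm the change took, sp_configure with just the option name reports the configured and running values:

EXEC sp_configure 'max text repl size'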
 
Hope this helps someone out!

I have been managing DBAs for over ten years now and one question that always seems to come up, usually from management, is what do DBAs do anyway?  Hopefully, you have read my prior post, “Yes, Production DBAs Are Necessary!” and you know where I stand on this issue. 

 

I fully believe in the role of a production DBA and, as someone who has been in this role for well over a decade, would like to define, based on my experience, what I think that role should be.  The most effective way of doing this, I believe, is to define what a production DBA should and shouldn’t be expected to do.  Granted, this won’t work perfectly for every situation and this is not intended to be an exhaustive list, but the following should be a good guideline for defining what a production DBA does and why they are critical to ensuring that your data is safe and available.

 

User data

A production DBA should not have anything to do with data. There, I said it. This statement alone is going to generate tons of controversy, but let me explain and specify that I’m referring to user data in user databases. This confuses many people because, after all, DBA does stand for database administrator. Let me clear up that confusion right now by saying that database is one word and is a manageable entity, just like a server. Data lives inside a database, much like an application lives on a server. Typically, there is a lot of data in a database, just like there can be many applications on a server, each of which may have its own application administrator. To continue the analogy: just as we don’t expect the system/server administrator to manage all of the applications on the server, the production DBA should not be expected to manage the data in the database. The production DBA is responsible for the uptime of the database, the performance of that SQL Server instance and the safety (backups/DR) of the data in the database, not for the data itself. There should be other roles in the enterprise responsible for determining how the data gets into the database and how it is used from there, namely data architects, database developers, etc.

 

Optimizing T-SQL

A production DBA should work with database developers to help them optimize T-SQL code. I emphasize work with because production DBAs should not be writing code, outside of administration automation. This goes hand in hand with the previous point and, when you think about it in practical terms, makes sense. A production DBA may be responsible for managing hundreds or thousands of user databases, all containing data for different uses. A production DBA can’t practically understand all of this data and be able to write code against it effectively. What the production DBA can do, though, is help the database developers optimize code by looking at available indexes and discussing with them how best to arrange joins and where clauses to make the most effective use of indexes and/or create additional indexes. Additionally, if your organization falls under the Sarbanes-Oxley or PCI regulations, this segregation may be mandatory. Your developers should not have access to production databases. Their code should be promoted by the production DBA, who does have access to the production databases. This also means that you should have a development environment that reasonably approximates your production environment.
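
To make that concrete, the outcome of such a review is often nothing more exotic than a well-placed index. A hedged illustration (the table, columns and index name are hypothetical):

-- Support a frequent "orders for a customer, most recent first" pattern
CREATE NONCLUSTERED INDEX IX_Orders_CustomerID_OrderDate
	ON dbo.Orders (CustomerID, OrderDate DESC)
	INCLUDE (TotalDue)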

 

Managing Instances

The production DBA should manage the SQL Server instance(s). This is a big job and if your data is important and you have more than a handful of instances, it is a full time job. Let me breakdown what this actually includes to illustrate just how large a job it is:

  1. Install/patch SQL Server instances – The production DBA is responsible for installing any new instances of SQL Server and making sure that existing instances stay current on Microsoft patches. This isn’t just a matter of running through an install wizard. A number of things have to be considered by the production DBA when a new instance of SQL Server is installed. These include:
    • Collation settings. Questions have to be asked about how the application that uses this data expects the database to handle the case of words, accents on words, code pages that the application expects to be used (this gets into localization and language for those shops in other countries or that are responsible for servers that reside or are going to be accessed by users in other countries).
    • Drive/File Layout – To make the database instance run optimally, the production DBA has to consider what drive resources are going to be available and how the database files should be laid out across those resources. During this phase of the installation the production DBA has to consider whether only local storage is available or whether storage will be housed on a SAN. If it is going to be housed on a SAN, the production DBA needs to work with the SAN administrator to ensure that the LUNs are set up appropriately for optimal SQL Server access, which in and of itself requires a lot of knowledge and experience.
    • Scalability – The production DBA should be involved in developing the specifications for the hardware. Questions that the production DBA will be asking of the various parties include, how many concurrent users will be accessing the data, how many databases will there be, is the data static or changing and how does it change (i.e., batch load, transactional updates), etc. This will give the production DBA a better idea of what kind of resource utilization to expect and drive the hardware specification process. It will also help determine recovery model and backup strategies.
  2. Create, and periodically test, a backup and recovery scheme for your enterprise’s data. Things the DBA has to consider as part of this include:
    • Is the data development or production data? Yes, development data gets backed up and is considered in an effective backup/recovery scheme because, after all, you don’t want to lose any of your development effort; it just isn’t prioritized as highly as production data.
    • How often data is updated in databases and how is it updated (online transactions, batch, etc)? This determines the recovery model that should be used and leads the DBA to ask another question, what is the maximum acceptable data loss in terms of time? With the answers to these questions, the DBA can more effectively determine the type and the frequency of backups (full, differential, transaction log, etc).
  3. While somewhat tied with backup/recovery, the DBA is also tasked with helping to come up with a disaster recovery strategy for the SQL Server environment, within the constraints of the enterprise’s available resources.
  4. Uptime/performance – The DBA is tasked with managing/monitoring those things that would impact the availability of the databases. This includes CPU utilization, I/O performance, disk space, reviewing event and SQL Server logs, etc.
  5. Help design the security model for SQL Server instances. Within SQL Server, there are a number of built-in security roles, both at the instance level and the database level, that users can be made members of, in addition to the ability to create custom roles. Additionally, SQL Server can operate in Windows Authentication mode, which integrates your domain security and uses the users’ Windows domain accounts, or in Mixed Mode (SQL Server and Windows Authentication), which allows accounts to be either Windows domain or native SQL Server accounts. There are a number of factors to be considered here to make sure that your users can use your system while still protecting and safeguarding sensitive data and the SQL Server configuration. A small sketch of this item follows the list.
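
As a small, hedged example of that security work (the Windows group name is hypothetical), granting a reporting team read-only access looks like this:

-- Instance level: allow the Windows group to connect
CREATE LOGIN [CONTOSO\ReportingUsers] FROM WINDOWS

-- Database level (run in the target database): map a user and grant read-only access
CREATE USER [CONTOSO\ReportingUsers] FOR LOGIN [CONTOSO\ReportingUsers]
EXEC sp_addrolemember 'db_datareader', 'CONTOSO\ReportingUsers'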

 
So, as you can see, the production DBA has plenty of things to contend with in order to effectively and efficiently maintain a SQL Server environment.  The bottom line comes down to how valuable your data is to you and how much you are willing to risk by not allowing your DBA to dedicate themselves to maintaining and safeguarding it.