If you have used the Central Management Server (CMS) feature in SQL Server 2008 and later, by now you know that you can’t add the SQL Server instance that hosts the CMS to the CMS itself as a registered server. That is, you can’t add it directly. There is, however, a workaround, as long as you connect to your servers via TCP/IP. To work around this limitation in the CMS functionality:

  1. Right click on the CMS and click “New Server Registration”

  2. In the “Server name:” text box, type the IP address of the server, a comma, and the port number that the instance is listening on. In this example, the test server’s IP address is 192.168.1.1 and the instance that I want to add, which also houses the CMS, is listening on port 1433.

  3. In the “Registered server name:” text box, type the name of the instance (in this case, Test) and voilà, the CMS instance is now registered on the CMS.
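If you aren’t sure which port the instance is listening on, one way to find it (a quick sketch; note that local_net_address and local_tcp_port are only populated for TCP/IP connections) is to query sys.dm_exec_connections from the CMS instance itself:

-- Run on the CMS instance over a TCP/IP connection;
-- returns the address and port to enter in the "Server name:" box
SELECT local_net_address, local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;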

Over the last couple of years, I have met a number of you who, like me, struggle daily to make CommVault work as the enterprise backup solution for your SQL Server databases. Given that, I thought I would share one of the issues that we have run into, which could affect any third-party product that uses Microsoft’s Volume Shadow Copy Service (VSS). To be fair, I have to give the credit for the research and most of the write-up of this issue and solution to one of the DBAs who works for me (whom I am sure many of you know), Samson Loo (twitter: @ayesamson).

Problem:

I/O operations are frozen on one or more databases when CommVault issues a “BACKUP DATABASE WITH SNAPSHOT” command, and they remain frozen until the operation completes successfully or is cancelled. (This appears to be known behavior of VSS. If you wish to dig further into how VSS works, I would suggest reading these articles: http://msdn.microsoft.com/en-us/library/windows/desktop/aa384615(v=vs.85).aspx and http://msdn.microsoft.com/en-us/library/aa384589(v=VS.85).aspx.)

This particular message gets logged in the SQL Server error log whenever any backup service uses the SQL Server Virtual Device Interface to back up the database with a snapshot. Microsoft Backup (ntbackup.exe), Volume Shadow Copy Services, Data Protection Manager, Symantec Backup Exec and other third-party tools, in addition to CommVault, can cause this message to be logged.

If ntbackup.exe is configured to back up a drive that happens to house SQL Server data files, the “BACKUP DATABASE WITH SNAPSHOT” command is issued to ensure the backup is consistent, since the data files are in use. During this time, I/O for the database that is being backed up is frozen until the backup operation completes.

The message that you will typically see logged is:

    Error:

I/O is frozen on database master. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup.


    Note: In the example error message above, the database master was referenced, but this could be any database on your instance that is being backed up.
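If you want to see how often snapshot backups have been freezing I/O on an instance, one option (a sketch using the undocumented xp_readerrorlog parameters, so treat it as such) is to search the error log directly:

-- Search the current SQL Server error log (0) for freeze messages
-- (second parameter: 1 = SQL Server error log, 2 = SQL Agent log)
EXEC master.dbo.xp_readerrorlog 0, 1, N'I/O is frozen';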

Solution:

Disable VSS in the subclient by following the steps below:

  1. Open the CommVault client (often called the “Simpana CommCell Console”)
  2. Navigate to the target subclient
  3. Right click anywhere in the white area
  4. Select Properties
  5. Uncheck “Use VSS” and click OK

 

Again, extreme thanks go out to Samson Loo (twitter: @ayesamson) for providing most of this content!

I ran across an installation issue with SQL Server 2008 on a Windows Server 2008 server the other day that baffled me a little bit.  I was installing an additional instance of SQL Server 2008 on a server that already had a SQL Server 2008 instance and right before the installation completed, it died with the error:  “A MOF Syntax error occurred.”  Further investigation into the Setup Bootstrap logs gave this detail:

An error occurred while processing item 1 defined on lines 14 – 16 in file D:\Program Files\Microsoft SQL Server\MSSQL10.TMS_MODELING\MSSQL\Binn\etwcls.mof.transformed:

2010-05-18 13:41:02 Slp: Compiler returned error 0x800700a4 Error Number: 0x800700a4, Facility: Win32
2010-05-18 13:41:02 Slp: Description: No more threads can be created in the system.
2010-05-18 13:41:02 Slp: Sco: Compile operation for mof file D:\Program Files\Microsoft SQL Server\MSSQL10.TMS_MODELING\MSSQL\Binn\etwcls.mof.transformed failed. Exit code 3
2010-05-18 13:41:02 Slp: Configuration action failed for feature SQL_Engine_Core_Inst during timing ConfigNonRC and scenario ConfigNonRC.

Much investigation on the internet turned up a lot of people who have been having this issue, but very few answers. After many installs and uninstalls, I finally tried the following, which seemed to work:

 

  • I ran the setup.exe as an administrator (right click on setup.exe and click “Run as administrator”) even though I am a local administrator on the box.
  • I installed SQL Server using the Network Service account instead of the normal domain service account.
  • Once the installation succeeded, I went into Configuration Manager and changed the service account to the domain account.

It would be so nice if something made sense for a change.

A couple of weeks ago, we had a database that had become corrupted due to a disk failure.  After running DBCC CHECKDB, it appeared that there was one table with a corrupted clustered index.  Paul Randal (Twitter/Blog) verified that this was the case and that it was, unfortunately, non-recoverable without data loss.  Fortunately, we had backups and this database was a replication subscription, so between the backup and the publication, we were able to rebuild the database without losing any data.  We then had to rebuild a kind of goofy replication setup, but that’s another story.

 

As I was doing some validation of the data that was contained in the backups and what needed to be pulled down from the publication, I realized that there was at least one other table that was corrupted.  This is where it became confusing.  I went back through the output from the CHECKDB and noticed that the table in question was not mentioned.  At that point, I matched up all of the tables that were listed in the CHECKDB output against the actual tables in the database and found that there were three tables in the database that were not listed in the DBCC output.  I ran DBCC CHECKTABLE against the missing tables, and while two of them came back with no errors, one was definitely corrupted.  The CHECKTABLE command actually “terminated abnormally” on the corrupted table.  I went back through the error logs and dump files that had been generated at that time.  Aside from the references to the CHECKTABLE that was run, the only object referenced was the table that had shown up in the CHECKDB output.
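For reference, this is roughly what I ran against each table that was missing from the CHECKDB output (the table name below is just a placeholder):

-- Check a single table; ALL_ERRORMSGS and NO_INFOMSGS keep the output readable
DBCC CHECKTABLE (N'dbo.SuspectTable') WITH ALL_ERRORMSGS, NO_INFOMSGS;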

 

As a note, this is a SQL Server 2008 SP1 database.  I know that, when running CHECKDB from SSMS, only the first 1000 errors are displayed, but there were only 274 errors in the output, so I didn’t think that was the issue.  I asked Paul Randal (mentioned above), and his thought was that perhaps the metadata had also been corrupted, and because of that, CHECKDB may not have seen those tables as existing.

 

We’ve recovered from the incident, but it was a curious experience.  Up until now, I’ve relied on the CHECKDB output, but this gives me reason to think that there might be tables that aren’t included.  I am interested to know whether anyone else has ever run into a similar incident.  Hopefully it was just an (odd) learning experience.

I was tagged by TJay Belt (Twitter/Blog) in this latest series of blog stories.  I believe that it was started by Paul Randal (Twitter/Blog), carried on by Tom LaRock (Twitter/Blog) and then went viral.  Since ‘New Year’ seems to be synonymous with ‘everything going to heck in a handbasket’, it has taken me a while to respond, but here goes.

 

You may ask yourself, well, how did I get here?

I’ll start by saying that if anyone had told me when I was in college that I’d be a DBA (or anything computer related, for that matter), I would have fallen down laughing.  My step-father was a biomechanical engineer and one of my main goals in life was not to be a geek like I thought he was.  I majored in Communications with a minor in English.  At the time of my graduation I had never touched a computer or even wanted to.  So, how did I get to be a DBA?  Sheer coincidence.

 

IBM

Back in my kid-free days, I worked for IBM.  I actually had to use a computer (odd for me), but my responsibilities were working with IBM’s resellers and the maintenance plans they resold.  It was all about soft skills, and I spent a ton of time on the phone with resellers.  All of the information that we gathered was stored in a (wait for it…) DB2 database.  After a while, I took on the responsibility for putting together reports.  While there was definitely no administration going on, it was kind of fascinating to play with all of that data.  That all stopped, though, with my next life-changing event.

 

And they looked so sweet...

Kids

I left my job at IBM just before I gave birth to my first child and became a stay-at-home mom.  Around the time my second child was born, I started to feel the desire to go back to school.  The odd thing is that the field that I was drawn to was computer science.  I’m not sure if it was due to some strange chemical imbalance or the need to spend time with something that actually had logic behind it, but I began my computer science degree shortly after my youngest son turned one.

Going back to school with two little ones running around was definitely a challenge.  Getting to the end of an 800-line assembly language project and having my son smack his hand on the keyboard, deleting it, taught me the value of saving and saving often.  I’m sure that trying to learn recursion while dealing with a cranky toddler helped my ability to persevere.  Eventually, though, I completed the program and became a computer science instructor.  Teaching was, and still is, the field that provides me with the greatest amount of satisfaction.  I enjoyed it immensely and felt that I was good at it.  Unfortunately, by that time I was a single mother of two boys, and job satisfaction doesn’t exactly pay the bills.

 

My first *real* job

After leaving my teaching position at the college, I was able to get a job teaching the medical staff at our local hospital the new order entry/documentation application.  I knew that this had to be temporary and that I needed to become part of a more technical division.  While keeping our training environment up to date, I ended up interfacing with our DBA group on a regular basis.  One of the DBAs left, and that gave me the opportunity to join the team.  Our lead DBA was pure awesomeness and provided me with a good, solid platform of knowledge.  That was back in 2003; I completed my MCDBA in 2005, and the rest is, well, the rest is now.  Still working, still learning.

 

It was a crazy, twisted road to get here and I’m looking forward to the road ahead.  I’m not tagging anyone with this, but I’m thankful to TJay for giving me the chance to share my story.

I went to install SQL Server 2008 on a Windows Server 2008 R2 box today for the first time and was greeted with the error “You must use the Role Management Tool to install or configure Microsoft .NET Framework 3.5.”  The text of this error was basically about as decipherable as the voice of the adults in the old Peanuts cartoons, so I fired up my old friend Google to find out what to really do.  It seems that Windows Server 2008 R2 ships with .NET Framework 3.5.1 and in order to proceed, you need to go to Server Manager (that window that pops up when you login and stays unless you close it) and enable (really, install) the .NET Framework 3.5.1 and any prerequisites.


Last week Microsoft released CTP3 for SQL Server 2008 R2.  At first blush, it doesn’t sound like it should be a huge release.  It is really a minor version release (10.5) as the next major release is scheduled to be SQL Server 2011, so one would expect to see maybe some additional or increased functionality, but not really any huge changes. 

 

As it turns out, there are some additional features with this release, but the big change is coming in the editions and pricing of SQL Server.  Microsoft is adding two additional “premium” editions of SQL Server, Datacenter Edition and Parallel Data Warehouse Edition, which are not only going to be more expensive than the Enterprise Edition, but will be required to take advantage of some of the key features of SQL Server 2008 R2, such as multi-server management of more than 25 instances, StreamInsight (complex event processing), Star Join Queries and parallel database copy (Parallel Data Warehouse Edition). 

 

Retail pricing for these two new “premium” editions is set at $57,498 per processor, with no option for per-server-plus-CAL pricing.  Enterprise Edition pricing is set at $28,749 per processor or $13,969 per server with 25 CALs, while Standard Edition pricing is set at $7,499 per processor or $1,849 per server with 25 CALs.

 

The Enterprise Edition will be required to take advantage of new features such as PowerPivot, multi-server management of up to 25 instances and Master Data Services.

 

As part of SQL Server 2008 R2, Microsoft has announced that features such as backup data compression (which can reduce the size of backups by up to 60%) will be available in the Standard Edition where it was previously only available in Enterprise Edition.

More tidbits

23 October 2009

Work has been hectic for both of us, so please pardon the absence of full-bodied blog posts.  Until we get our feet under us, here are a couple of (hopefully) helpful tidbits.

 

Since you've been good and patient, you can have a treat

sys.dm_exec_requests

I’m not the most patient person in the world and have a tendency to think that a process has stopped when it’s actually processing away.  A few months ago, I needed to run a DBCC on a database with corruption errors.  This wasn’t a large database and the DBCC had already been running for about 2 1/2 hours.  I put something out on Twitter about whether I should continue the DBCC (which I knew that I should) or worry about whether something had gone horribly wrong.  Paul Randal quickly tweeted back to not stop the DBCC and to check to see how far it had gone.  That’s when he gave me the information that has helped me through a number of waiting games:

 

sys.dm_exec_requests is a DMV that returns a row for every request that is currently executing.  Since I was certain that this was the only DBCC CHECKDB running on this server, the following query eased my mind:

 

select percent_complete
from sys.dm_exec_requests
where command = 'dbcc checkdb'

 

Keeping an eye on that let me know that the process was progressing and kept me from making a stupid mistake.  I’ve since used it on long-running processes like backups and restores.  It also provides a way to tell users that, yes, this is running and it’s x% done.

One caveat – the percent_complete column is only populated for specific operations like backups, restores, rollbacks, DBCCs, etc.  It is not populated for ordinary queries.
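If you want a little more than the bare percentage, a slightly expanded version (just a sketch; estimated_completion_time is reported in milliseconds) looks like this:

-- Show progress and a rough ETA for any request that reports one
SELECT session_id,
       command,
       percent_complete,
       DATEADD(ms, estimated_completion_time, GETDATE()) AS estimated_finish
FROM sys.dm_exec_requests
WHERE percent_complete > 0;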

 

Altering a table in SSMS 2008 (without t-sql)

While I know that we’d all rather use a sqlcmd query through a DAC connection to do anything SQL Server related, there may be times when it is more expeditious to make the change within SSMS using the Design option.

 

If you’re in SQL Server 2008 and the table is populated, making any changes within the Design option may cause the following error:

[Screenshot: the SSMS “Saving changes is not permitted” dialog]

While we were able to do this in SQL Server 2005, it is unavailable by default in 2008.  Fortunately, you can still do this in SSMS; you just need to set the option to make it available.

 

In SSMS, go to Tools -> Options -> expand Designers.  Then, under Table and Database Designers, uncheck the box next to ‘Prevent saving changes that require table re-creation’.  Voilà!  You can now make all the changes that you want.
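Of course, many changes never needed the designer in the first place. Adding a nullable column, for example, can be done with a straight ALTER TABLE and doesn’t require the table to be dropped and re-created (the table and column names below are made up):

-- Adds a nullable column in place; no drop/re-create required
ALTER TABLE dbo.MyTable
    ADD NewColumn int NULL;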

 

As a last minute bit of news, Tim just found out that he’s now a Friend of RedGate.  Congratulations, Tim!

This is a quickie blog post, but I thought I’d post it before I forgot about it.

 

We have transactional replication turned on for one of our production databases.  This database includes XML fields and we recently saw the following error:

 

           Length of LOB data (78862) to be replicated exceeds configured maximum 65536

 

Looking into it, this is a server setting and the default value for Max Text Replication Size is 65536 (bytes).   Because our XML data is of varying sizes, I made the decision to set this to the maximum allowable value – 2147483647 (bytes). 

 

To change it through SSMS:

  • Right click on the server and choose Properties
  • On the Advanced page, change the Max Text Replication Size to 2147483647

In T-SQL:

EXEC sp_configure 'max text repl size', 2147483647;
GO
RECONFIGURE;
GO
 
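If you want to double-check that the change stuck, running sp_configure with just the option name shows the configured and running values (run_value should now read 2147483647):

-- Display config_value and run_value for the setting
EXEC sp_configure 'max text repl size';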
Hope this helps someone out! 

A couple of weeks ago, I tweeted a question out to the community about whether there was any way to run PowerShell scripts from within a SQL Server Reporting Services 2008 DataSet query.  The answers that I received back were discouraging.  Even though Microsoft is touting PowerShell as the best thing since sliced bread (I have to admit, it is pretty darned cool!) and integrating it into all of its current products, the integration with SQL Server 2008 seems to have stopped at giving you a mini PowerShell console (SQLPS) and a SQL Server provider that makes it easier to run SQL Server commands from within PowerShell.  This integration hasn’t gone beyond the database engine, so if you want to run PowerShell scripts from within Reporting Services, you have to get creative.  That posed a big problem for me because the report I was writing depended on some PowerShell scripts that I had written.

 

After walking away from the problem for an hour or two, it finally hit me: since a Reporting Services 2008 DataSet query runs T-SQL code, including stored procedures, why not just write a stored procedure that runs a PowerShell script?  Below is the stored procedure that I wrote.  It is really pretty simple.  It takes as a parameter the command line that you would normally type at a PowerShell prompt to run your script, including the script path\name and any parameters that the PowerShell script requires.

 

USE [MyDatabase]
GO

IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[Utility].[RunPowerShellScript]') AND type IN (N'P', N'PC'))
    DROP PROCEDURE [Utility].[RunPowerShellScript]
GO

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

-- Runs the supplied PowerShell command line via the SQLPS console through xp_cmdshell
CREATE PROCEDURE [Utility].[RunPowerShellScript]
    @Script varchar(2000)
AS
SET NOCOUNT ON;

DECLARE @cmd varchar(2000);

-- Wrap the script call in double quotes and hand it to sqlps in command mode
SET @cmd = 'sqlps -c ' + '"' + @Script + '"';
EXEC master..xp_cmdshell @cmd, no_output;
GO

 

In the above code, “[MyDatabase]” of course refers to the database that you want this stored procedure to be stored in.  Walking through the code, all this script really does is create a stored procedure called Utility.RunPowerShellScript that runs the xp_cmdshell extended stored procedure with a command string that calls the SQL Server PowerShell mini console (SQLPS) in command-line mode, passing along whatever you passed into the stored procedure as a parameter.  For my own purposes, I have created a schema called “Utility” so that I can easily identify helper stored procedures such as this one.  Feel free to omit this if you like.  Here is an example of how you could use this stored procedure once you have created it.  Say you wanted to run a PowerShell script called “Get-DriveSpace.ps1” that resides in the D:\Scripts folder and returns total size and free space information for the drives on a server that you pass in.  All you would need to do is type:

 

exec [MyDatabase].Utility.RunPowerShellScript 'D:\Scripts\Get-DriveSpace.ps1 -Server MyServer'

  

Where “MyServer” is the name of the server that you are passing into the PowerShell script as a parameter.
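One caveat before you rely on this: xp_cmdshell is disabled by default, so the stored procedure above will fail until it is enabled.  If you accept the security implications, enabling it looks like this (a sketch; clear it with your security folks first):

-- xp_cmdshell is off by default; 'show advanced options' must be on to change it
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;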

 

That’s it.  Short and simple, and now I have a mechanism to call all of the PowerShell scripts that I want from within a Reporting Services DataSet query.