A couple of weeks ago, I was saddened to learn that someone I had worked with many years ago, when I was a sales associate at Foley’s Department Store (one of my first jobs while going to college), had passed away of bladder cancer. When I started at Foley’s, Ralph was an older gentleman who had taken early retirement from a middle management position back East and, after moving to Tucson, had decided he was not ready to be completely retired. So he embarked on a second career at Foley’s, and I was fortunate to have him be the one to show me the ropes when I started there. Ralph was a pretty amazing fellow in that he always saw the bigger picture. When he trained me to be a sales associate, he took the time not only to teach me what I needed to know to do the job at hand, but also to impart wisdom he had learned over the years that he thought would be useful to me well beyond this sales position, which he knew I wasn’t planning to make my career.
As I reflected on my friendship with Ralph after learning of his passing, one piece of advice he had given me came to mind. It was advice I have used constantly throughout the 25 years since I worked at Foley’s, and it was Ralph’s mantra: “Plan your work and work your plan.” Oh, how simple that advice seemed at the time it was given to me, but that simplicity is very deceiving. Almost every time I have screwed something up over the course of my career, it can be traced back to a failure to follow this simple piece of advice. It is not enough just to have a plan; you have to make sure that you actually follow it. How many times do we have processes or checklists that we fail to follow because we think we can do the work faster, or think we know the process by heart, only to have that come back and bite us because we left something out? This advice has become golden to me over the years and constantly keeps me from doing stupid things. I still find it amazing today that sometimes the simplest advice we are given can be the most profound.
In honor of Ralph, I wanted to share this story and bit of advice in the hope that someone might reap the benefits from it that I have.
Rest in peace old friend, you deserve it.
Ralph Pennacchio 1930-2013
If you have used the Central Management Server (CMS) feature in SQL Server 2008+, by now you know that you can’t add the SQL Server instance that you are using to host the CMS as a registered server to the CMS. That is, you can’t directly add it. There is, however, a workaround as long as you connect to your servers via TCP/IP. To work around this limitation in the CMS functionality:
Right click on the CMS and click “New Server Registration”
In the “Server name:” text box type in the IP address of the server, a comma and the port number that the instance is listening on. In this example, the test server’s IP address is 192.168.1.1 and the instance that I want to add, which also houses the CMS, is listening on port 1433.
In the “Registered server name:” text box, type the name of the instance (in this case, Test) and voilà, we have now been able to register the CMS instance on the CMS.
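If you prefer a script to the GUI, the same registration can be done in T-SQL. As I understand it, CMS shared registered servers are stored in msdb and can be added with the sp_sysmanagement_add_shared_registered_server procedure; the group lookup and names below are a sketch based on that understanding, so verify the procedure and its parameters against your instance before relying on it:

```sql
-- Sketch only: registering the CMS host instance by IP,port via msdb.
-- Verify sp_sysmanagement_add_shared_registered_server and its parameters
-- on your build before using this in production.
DECLARE @group_id int, @server_id int;

-- Find the target server group (the default Database Engine root group here).
SELECT @group_id = server_group_id
FROM msdb.dbo.sysmanagement_shared_server_groups
WHERE name = N'DatabaseEngineServerGroup';

-- Register by IP address and port so the self-registration check is bypassed.
EXEC msdb.dbo.sp_sysmanagement_add_shared_registered_server
     @name            = N'Test',              -- display name shown in SSMS
     @server_group_id = @group_id,
     @server_name     = N'192.168.1.1,1433',  -- IP,port of the CMS instance
     @description     = N'',
     @server_type     = 0,                    -- 0 = Database Engine
     @server_id       = @server_id OUTPUT;
```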
Over the last couple of years, I have met a number of you who, like me, are required to struggle daily to make CommVault work as the enterprise backup solution for your SQL Server databases. Given that, I thought I would share one of the issues that we have run into, one that any third-party product that uses Microsoft’s Volume Shadow Copy Services (VSS) could run into. To be fair, I have to give the credit for the research and most of the write-up of this issue and solution to one of the DBAs who works for me (whom I am sure many of you know), Samson Loo (twitter: @ayesamson).
I/O operations are frozen on one or more databases when CommVault issues a “BACKUP DATABASE WITH SNAPSHOT” command, and they remain frozen until the operation completes successfully or is cancelled. (This appears to be known behavior of VSS. If you wish to dig further into how VSS works, I would suggest reading these articles: http://msdn.microsoft.com/en-us/library/windows/desktop/aa384615(v=vs.85).aspx and http://msdn.microsoft.com/en-us/library/aa384589(v=VS.85).aspx).
This particular message gets logged in the SQL Server error log whenever any backup service makes use of the SQL Server Virtual Device Interface to back up the database with a snapshot. Microsoft Backup (ntbackup.exe), Volume Shadow Copy Services, Data Protection Manager, Symantec Backup Exec and other third-party tools, in addition to CommVault, can cause this message to be logged.
If ntbackup.exe is configured to take a backup of a drive that happens to house SQL Server data files, then the command “BACKUP DATABASE WITH SNAPSHOT” is issued to ensure the backup is consistent since the data files are in use. During this time, the I/O for the database that is currently being backed up is frozen until the backup operation is complete.
The message that you will typically see logged is:
I/O is frozen on database master. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup.
Note: In the example error message above, the database master was referenced, but this could be any database on your instance that is being backed up.
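If you want to see how often these freezes are happening without scrolling through the log by hand, the (undocumented, but widely used) xp_readerrorlog procedure can filter the current error log for the message:

```sql
-- Search the current SQL Server error log for the I/O-frozen messages
-- written during snapshot backups.
EXEC master.dbo.xp_readerrorlog
     0,                 -- log file: 0 = current log
     1,                 -- log type: 1 = SQL Server error log
     N'I/O is frozen';  -- search string
```

Each matching row includes a timestamp, so you can line the freezes up against your backup schedule.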
Disable VSS in the subclient following the steps below:
- Open the CommVault client (often called the “Simpana CommCell Console”)
- Navigate to the target subclient
- Right click anywhere in the white area
- Select Properties
- Uncheck “Use VSS” and click OK
Again, extreme thanks go out to Samson Loo (twitter: @ayesamson) for providing most of this content!
These days, I live and die by OneNote. I read a ton of technical blogs, and when I come across a great script, I save it to OneNote. I take meeting notes in OneNote, and I even save videos and webcasts that are especially pertinent to what I do in OneNote. I have a ton of notebooks, each with a bunch of sections and pages (in fact, my OneNote notebooks are about 15GB in size!). But the problem I always had was that, unless I wanted to search through my notebooks (and I have to say, Microsoft has included very capable search functionality in this product), it was hard to find specific things, because there didn’t seem to be a way to sort OneNote sections and pages; basically, they just showed up in the order you created them unless you sorted them manually (but who has the time for that!).
This was a problem until I came across this little lifesaver tool that makes keeping my OneNote notebooks tidy and in order. It is a little program called the “OneNote 2010 Sort Utility.” You can read more about this little golden nugget here.
If you decide that this little free utility might make your life easier, you can download it here.
And by the way, if you are using Microsoft Office 2010 Professional and you haven’t tried OneNote 2010 to organize your life (or at least your personal life), I strongly recommend giving it a spin. At first, it may seem a little daunting, just like being stared at by that blank screen when writing your first SSIS package. But rest assured, there is help out there and a fairly active community of users. Once you understand its metaphor of a physical binder (if you are my age, you might even insert “Trapper Keeper” here), with notebooks for different subjects, sections within each notebook, and pages within the sections, and the fact that you can print documents to OneNote 2010 as well as attach any kind of file, it becomes one of those tools that is hard to live without. In fact, it integrates so well with Outlook that if you have OneNote installed, your Outlook meetings will have a OneNote button on them; clicking it creates a page that contains all of the information from the Outlook invitation and then lets you take meeting notes. I could go on and on and, in fact, have, because I intended this blog post to really only be about the OneNote 2010 Sort Utility, but OneNote is one of those things that I am quite passionate about because it has saved my bacon a number of times. At any rate, if you don’t use OneNote or want to know how to use it, here are some links to get you started (some of these apply to OneNote 2007, which some of you may still be on, but the concepts generally also apply to OneNote 2010):
As many of you have probably noticed, I haven’t blogged in quite a while due to work commitments, health issues and various family commitments (don’t want to go on too long here with excuses), but I decided a perfect first blog post back might be built around the stored procedure that a friend and his consulting group have so graciously shared with the community. I am, of course, speaking of Brent Ozar’s sp_BLITZ stored procedure, intended to help a DBA see what they are inheriting when someone dumps a new server on them. I took a bit of a different twist on this and figured that it might be a great tool to run periodically against all of the SQL Servers in an environment by creating a report around it.
I work as the lead DBA in an environment with over 160 SQL Server instances (a number that seems to grow by at least five or so every quarter) and somewhere in excess of 2,000 databases, ranging in size from what some of you might consider laughably small to some rather large data warehouse databases, many of which are mission critical to our business. To manage this environment, I lead a team of two other DBAs. One started with the company the week after I did, and the other, a junior DBA, has been with us just over a year. We have a great team, but even with three DBAs, it is hard to be proactive without some tools to let you know what is going on. Unfortunately, we don’t have the budget for some of the major monitoring tools, as the cost for our environment would be rather substantial. Needless to say, it is left to me and my team to get creative and build our own tools and instrumentation. That is where Brent’s sp_BLITZ script comes in. With a report written around it that my junior DBA can go through on a weekly or monthly basis, we can be much more proactive about some of the more basic or fundamental settings that someone who shouldn’t have access to change, but inevitably does, changes without our knowledge.
So, the report itself is pretty simple. It does require that you have a server with linked servers to all of your servers (we have a centralized DBA server that we use for this), and the sp_BLITZ script, which can be downloaded from here, has to be installed on each of those servers. This is a perfect use for the SQL Server 2008 Central Management Server feature that we have set up on our DBA monitoring server. What I have done in the report is create two datasets: one that queries a table we maintain with an inventory of all of our SQL Servers, which feeds the “Server Name” report parameter, and a second that actually runs the sp_BLITZ stored procedure on the server chosen from the dropdown. Brent has a great video on exactly what his script does at http://www.brentozar.com/blitz/. This report just gives you a format that you can run off of your Reporting Services site, or even schedule in a Reporting Services subscription and have automatically emailed to you or posted to a document library on a SharePoint site if you are running Reporting Services in SharePoint integrated mode. The report does require that your Reporting Services service is at least 2008 R2 in order to work. One of the nice things about this report is that the URLs Brent provides in the stored procedure’s output are active links, so if you click a URL cell, you will be taken to the page on Brent’s site that explains the Finding. Below are some screenshots of the report in collapsed and expanded form (all private information has been blacked out to protect the innocent, or at least those who sign my paycheck):
Figure 1 Collapsed Version of Report
Figure 2 Expanded View of Report
Setting Up The Report
So, to use the report that is freely downloadable at the end of this blog post, all you need to do is go into the Data Source for the report and change it to the name of your monitoring SQL Server (or at least a server that has linked servers to all of the servers that you want to manage with this report), replacing the text <Type Your Monitoring Server Here> with the name of your monitoring server:
The next step is to make sure that you have a table on your monitoring server that has an inventory list of all of the servers from your environment and replace the <ServerName.database.schema.tablename> text in the query in the Servers Dataset with the pertinent information for your environment. See below:
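If it helps to see the shape of the two datasets in text form, here is roughly what they look like. Every object name below is a placeholder you would swap for your own inventory table and linked server names:

```sql
-- Dataset 1 ("Servers"): feeds the "Server Name" report parameter.
SELECT ServerName
FROM DBAMonitor.dbo.ServerInventory   -- placeholder inventory table
ORDER BY ServerName;

-- Dataset 2: runs sp_Blitz on the chosen server over its linked server.
-- In the report, @ServerName comes from the "Server Name" parameter;
-- it is declared here only so the sketch stands alone. A four-part name
-- cannot be parameterized directly, so dynamic SQL is one way around that.
DECLARE @ServerName sysname = N'Test';
DECLARE @sql nvarchar(400);
SET @sql = N'EXEC [' + @ServerName + N'].master.dbo.sp_Blitz;';
EXEC (@sql);
```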
From here, it is just a matter of deploying the report to your Reporting Services server and making sure that Brent’s stored procedure has been created on all of the servers that you wish to monitor.
The report can be downloaded here (you will need to go to Brent’s site mentioned earlier in this blog post to get the latest version of his sp_BLITZ script). I hope that you find this to be one of the many helpful tools in your tool chest to keep your environment in check.
July is definitely a painful time to be in Tucson. It’s hotter than all get out and monsoon season has usually started, so for a while we have heat AND humidity. Oh joy. Fortunately, we have some SQL Server-based events coming up to take our minds off of the disagreeable weather.
At least this calendar has green on it...
Tim’s heading up the new incarnation of the PASS Performance VC. On July 6, Jason Strate (Twitter/Blog) is going to be presenting a webcast for them entitled: ‘Performance Impacts Related to Different Function Types’. It should be a great session.
On July 17, Phoenix is having its first SQLSaturday. That in and of itself is pretty exciting, but Tim and I are going to be presenting two sessions there. This is our first time presenting, so it’ll be a great learning opportunity for us, and a potential opportunity for up-and-coming hecklers. If you’re somewhere around Phoenix, you should take advantage of the opportunity. If you’re not around Phoenix but want to see what it would feel like to step into an oven, come on out. (see note below)
Then on July 21st, Quest is holding another Virtual Training Event on Performance Monitor and Wait Events. Brent Ozar (Twitter/Blog), Kevin Kline (Twitter/Blog), Buck Woody (Twitter/Blog) and Ari Weil (Twitter) will be presenting. It should make for an interesting and potentially hilarious training event. Aside from it being a great training event, it’s relevant here because they’ll be presenting live from beautiful Tucson. Hopefully we’ll be able to meet them for dinner and take them to another top-notch Old Pueblo eatery.
One final note – the final session lineup for the PASS Summit 2010 will be finalized in July. This is due to a huge amount of great work by the volunteers from the Program Committee. If it’s June and you’re reading this, send some good thoughts their way – they’re busy.
Update: The SQLSaturday in Phoenix has been postponed until Jan/Feb 2011. Hopefully many more people will want to come to Phoenix when it’s not 110 degrees out.
The ability to lock pages in memory is used on 64-bit systems to help prevent the operating system from paging out SQL Server’s working set. This may be enabled when the DBA starts seeing errors like the following:
"A significant part of sql server process memory has been paged out. This may result in a performance degradation."
If you’re running SQL Server 2005/2008 Enterprise, you would take the steps to lock pages in memory and you’re done with it. If you’re on SQL Server 2005/2008 Standard Edition, you still have a ways to go. The ability to lock pages in memory for Standard Edition is handled through a trace flag. For SQL Server 2005 SP3, you need to apply CU4. For SQL Server 2008 SP1, you need to apply CU2. Once those CUs have been applied, set trace flag 845 as a startup parameter. Here’s a good ServerFault question that explains how to set a trace flag as a startup parameter.
This flag will not help in this situation
In our case, once the trace flag was enabled, the memory issues were resolved. Day saved, once again.
As with anything, this has the potential to degrade system performance. In this article, scroll to the section entitled “Important considerations before you assign “Lock Pages in memory” user right for an instance of a 64-bit edition of SQL Server”. Read it thoroughly prior to making any changes to your production systems.
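After the restart, you can sanity-check both the flag and its effect. The DMV below exists in SQL Server 2008 and later; on 2005 you would rely on DBCC TRACESTATUS and the startup messages in the error log instead:

```sql
-- Confirm trace flag 845 is active globally (-1 = all sessions).
DBCC TRACESTATUS (845, -1);

-- SQL Server 2008+: a nonzero value here indicates the process is
-- actually allocating memory through locked pages.
SELECT locked_page_allocations_kb
FROM sys.dm_os_process_memory;
```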
I always thought that Mars was a planet, but apparently it also has to do with multiple pending requests within a single SQL Server connection. MARS (Multiple Active Result Sets) was introduced in SQL Server 2005 and provided the ability to handle these multiple requests. Like anything else, though, it has to be used correctly.
Apparently this isn't the only Mars out there
About a week ago, I started seeing the following error on one of my servers:
DESCRIPTION: The server will drop the connection, because the client driver has sent multiple requests while the session is in single-user mode. This error occurs when a client sends a request to reset the connection while there are batches still running in the session, or when the client sends a request while the session is resetting a connection. Please contact the client driver vendor.
After digging around and talking to some of the developers in house, I found that they were making use of MARS, but not always correctly. To avoid the error above, “MultipleActiveResultSets=True” needs to be added to the connection string. Adding that seems to have fixed the issues.
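For reference, a minimal SqlClient-style connection string with MARS enabled looks something like the following; the server and database names are placeholders for your own:

```
Server=MyServer;Database=MyDatabase;Integrated Security=True;MultipleActiveResultSets=True
```

Note that MARS only makes the multiple-active-requests pattern legal on the connection; the application still has to consume or dispose of its readers properly.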
I ran across an installation issue with SQL Server 2008 on a Windows Server 2008 server the other day that baffled me a little bit. I was installing an additional instance of SQL Server 2008 on a server that already had a SQL Server 2008 instance and right before the installation completed, it died with the error: “A MOF Syntax error occurred.” Further investigation into the Setup Bootstrap logs gave this detail:
An error occurred while processing item 1 defined on lines 14 – 16 in file D:\Program Files\Microsoft SQL Server\MSSQL10.TMS_MODELING\MSSQL\Binn\etwcls.mof.transformed:
2010-05-18 13:41:02 Slp: Compiler returned error 0x800700a4. Error Number: 0x800700a4, Facility: Win32
2010-05-18 13:41:02 Slp: Description: No more threads can be created in the system.
2010-05-18 13:41:02 Slp:
2010-05-18 13:41:02 Slp: Sco: Compile operation for mof file D:\Program Files\Microsoft SQL Server\MSSQL10.TMS_MODELING\MSSQL\Binn\etwcls.mof.transformed failed. Exit code 3
2010-05-18 13:41:02 Slp: Configuration action failed for feature SQL_Engine_Core_Inst during timing ConfigNonRC and scenario ConfigNonRC.
Much investigation on the internet turned up a lot of people that have been having this issue, but very few answers. After many installs and uninstalls, I finally tried the following, which seemed to work:
- I ran setup.exe as an administrator (right click on setup.exe and click “Run as administrator”), even though I am a local administrator on the box.
- I installed SQL Server using the Network Service account instead of the normal domain service account.
- The installation succeeded and I just went into Configuration Manager and changed the service account to the domain account after the installation.
Unlike the similar Reese’s dilemmas, these results were far less favorable…
Reese's are yummy
As many of these stories begin, a developer put together a procedure. When he ran it locally, it ran in a matter of seconds. For some reason, though, he wanted the procedure run from a remote server. When he attempted to run that same procedure from the remote server, it took a number of minutes. It was apparent that the result set being returned wasn’t the problem, as there were OLEDB waits while the query was processing.
In looking at the query, it wasn’t anything spectacular. It dumped some data into a temp table and then joined that table to an existing table and displayed the result. The problem ended up being the temp tables. While the query was being executed against the remote server, the temp tables were being created on the server the query was running on. So, pulling the data into the temp table and joining the two tables all had to occur with data being pulled across the network.
The fix was pretty simple – change the procedure to create a table on the remote server and drop it once the procedure was finished. The procedure ran in the expected time period.
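A sketch of that fix, assuming a linked server named [RemoteServer] with the RPC Out option enabled (every object name here is hypothetical, standing in for the real procedure's tables):

```sql
-- Create the scratch table on the remote server instead of a local temp table,
-- so the population and the join both happen on the remote side.
EXEC ('IF OBJECT_ID(''dbo.WorkTable'') IS NOT NULL DROP TABLE dbo.WorkTable;
       CREATE TABLE dbo.WorkTable (Id int, Amount money);') AT [RemoteServer];

-- Populate it, join it to the existing table, return the result,
-- and clean up -- all in one remote batch, with no data crossing the wire
-- except the final result set.
EXEC ('INSERT dbo.WorkTable (Id, Amount)
       SELECT Id, Amount FROM dbo.SourceTable WHERE Amount > 0;

       SELECT e.Id, e.Name, w.Amount
       FROM dbo.ExistingTable AS e
       JOIN dbo.WorkTable AS w ON w.Id = e.Id;

       DROP TABLE dbo.WorkTable;') AT [RemoteServer];
```

EXECUTE ... AT ships the whole batch to the linked server for execution there, which is exactly the behavior the local temp tables were defeating.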