Over the last couple of years, I have met a number of you who, like me, struggle daily to make CommVault work as the enterprise backup solution for your SQL Server databases. Given that, I thought I would share one of the issues that we have run into, one that could affect any third party product that uses Microsoft’s Volume Shadow Copy Service (VSS). To be fair, I have to give the credit for the research and most of the write-up of this issue and solution to one of the DBAs who works for me (whom I am sure many of you know :)), Samson Loo (twitter: @ayesamson).

Problem:

I/O operations are frozen on one or more databases when CommVault issues a “BACKUP DATABASE WITH SNAPSHOT” command, and they remain frozen until the operation completes successfully or is cancelled. (This appears to be known behavior of VSS. If you wish to dig further into how VSS works, I would suggest reading these articles: http://msdn.microsoft.com/en-us/library/windows/desktop/aa384615(v=vs.85).aspx and http://msdn.microsoft.com/en-us/library/aa384589(v=VS.85).aspx.)

This particular message gets logged in the SQL Server error log whenever any backup service uses the SQL Server Virtual Device Interface (VDI) to back up the database with a snapshot. Microsoft Backup (ntbackup.exe), Volume Shadow Copy Services, Data Protection Manager, Symantec Backup Exec and other third party tools, in addition to CommVault, can cause this message to be logged.

If ntbackup.exe is configured to take a backup of a drive that happens to house SQL Server data files, then the command “BACKUP DATABASE WITH SNAPSHOT” is issued to ensure the backup is consistent since the data files are in use. During this time, the I/O for the database that is currently being backed up is frozen until the backup operation is complete.

The message that you will typically see logged is:

    Error:

I/O is frozen on database master. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup.

 

 

    Note: In the example error message above, the database master was referenced, but this could be any database on your instance that is being backed up.
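If you want to see how often these freeze/resume events are occurring on a given instance, a quick option is to search the current SQL Server error log for them. Below is a minimal sketch using xp_readerrorlog (an undocumented but widely used procedure, so treat the parameter list as a convenience rather than a contract); the search strings match the messages shown above.

    -- Sketch: search the current error log for VSS snapshot freeze/resume messages.
    -- Parameters: log number (0 = current), log type (1 = SQL error log),
    -- search string 1, search string 2.
    EXEC master.dbo.xp_readerrorlog 0, 1, N'I/O is frozen', NULL;
    EXEC master.dbo.xp_readerrorlog 0, 1, N'I/O was resumed', NULL;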

Solution:

Disable VSS in the subclient by following the steps below (a quick way to verify the change afterwards is sketched after the list):

  1. Open the CommVault client (often called the “Simpana CommCell Console”)
  2. Navigate to the target subclient
  3. Right click anywhere in the white area
  4. Select Properties
  5. Uncheck “Use VSS” and click OK
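Once “Use VSS” is unchecked, you can verify that subsequent backups of that subclient come through as normal streaming (VDI) backups rather than snapshots by checking the backup history in msdb. The query below is only a sketch; the is_snapshot column is the one to watch.

    -- Sketch: confirm that recent backups are no longer snapshot-based.
    -- is_snapshot = 1 indicates a BACKUP ... WITH SNAPSHOT (VSS) backup.
    SELECT TOP (20)
           bs.database_name,
           bs.backup_start_date,
           bs.type,          -- D = full, I = differential, L = log
           bs.is_snapshot
    FROM   msdb.dbo.backupset AS bs
    ORDER BY bs.backup_start_date DESC;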

 

Again, extreme thanks go out to Samson Loo (twitter: @ayesamson) for providing most of this content!

These days, I live and die by OneNote.  I read a ton of technical blogs, and when I come across a great script, I save it to OneNote.  I take notes from meetings in OneNote, and I even save videos and webcasts that are especially pertinent to what I do in OneNote.  I have a ton of notebooks, each with a bunch of sections and pages (in fact, my OneNote notebooks are about 15GB in size!).  The problem I have always had, though, was that it was hard to find specific things unless I wanted to search through my notebooks (and, I have to say, Microsoft has included very capable search functionality in this product).  There didn’t seem to be a way to sort your OneNote sections and pages; basically, they just showed up in the order you created them unless you wanted to sort them manually (but who has the time for that!).

This was a problem until I came across this little lifesaver tool that makes keeping my OneNote notebooks tidy and in order.  It is a little program called the “OneNote 2010 Sort Utility.”  You can read more about this little golden nugget here.

If you decide that this little free utility might make your life easier, you can download it here.

And by the way, if you are using Microsoft Office 2010 Professional and you haven’t tried OneNote 2010 to organize your life (or at least your personal life), I strongly recommend giving it a spin. At first, it may seem a little daunting, just like writing your first SSIS package and being stared at by that blank screen. But rest assured, there is help out there and a fairly active community of users. Once you understand its metaphor of a physical binder (if you are my age, you might even insert “Trapper Keeper” here :)), with notebooks for different subjects, sections within each notebook, and pages within the sections, and realize that you can print documents to OneNote 2010 as well as attach any kind of file, it becomes one of those tools that is hard to live without. In fact, it integrates so well with Outlook that if you have OneNote installed, your Outlook meetings will have a OneNote button on them, and clicking that creates a page containing all of the information from the Outlook invitation and then lets you take meeting notes. I could go on and on and, in fact, have, because I intended this blog post to really only be about the OneNote 2010 Sort Utility, but OneNote is, unfortunately, one of those things that I am quite passionate about because it has saved my bacon a number of times. At any rate, if you don’t use OneNote or want to know how to use it better, here are some links to get you started (some of these might apply to OneNote 2007, which some of you may still be on, but the concepts generally also apply to OneNote 2010):

http://blogs.msdn.com/b/chris_pratley/archive/2009/03/10/i-heart-onenote.aspx

http://blogs.office.com/b/microsoft-onenote/

http://office.microsoft.com/en-us/onenote-help/getting-started-with-onenote-2010-HA010370233.aspx

http://office.microsoft.com/en-us/onenote-help/basic-tasks-in-onenote-2010-HA101829998.aspx

http://www.onenotepowertoys.com/

 


 

A couple of years ago, I wrote a blog post comparing a couple of different methods and products for SQL Server disaster recovery. Over the last couple of weeks, my company has had the opportunity to test the pre-release version of the latest version of Double-Take. I want to make it clear: this blog post is not an endorsement or criticism of the product, just some first impressions.

Some History

About four years ago, my company chose Double-Take as a disaster recovery solution because we were in the middle of relocating our data center across the country and we now owned a hotsite. This product, mind you, was chosen by a group of people that completely excluded any DBAs. We stumbled through what seemed to be a very kludgey installation process and finally got it to work on a couple of servers. We then proceeded to install this on all of our servers and were able to successfully use this to transfer all of our SQL Server instances to new hardware in our new data center. Compared to many of the options available four or so years ago, this was considered a big win.

 

Once the new data center was set up, we proceeded to attempt to get Double-Take installed between the new servers in our new data center and the servers in our hotsite. For many of our servers, the installation went as expected (at least based on our experiences from the data center move) and we quickly got Double-Take mirroring up and running on several servers. The problem came when we tried to use Double-Take to mirror several mission critical servers that happened to sit in our DMZ. Because we were mirroring a server from our production data center that sat in the DMZ, we also had to mirror to a hotsite server that sat in a different DMZ. This exposed a huge weakness in the Double-Take product. Try as we might, we could not get the two servers talking across the two DMZs because Double-Take depended on WMI calls, which meant that the ports it used were dynamic and you could not predict which of the almost 65,000 ports it would choose. That is not a good thing for a DMZ, as our network group was not going to open up all 65,000 ports between these two servers (and rightfully so) just to get Double-Take to work.

 

Today

Fast forward four years, and our DR strategy for our DMZ servers hadn’t really progressed much. That is, until we pressed our account management team at Vision Solutions (the company that now owns Double-Take), as we were very tempted to just drop all of the licenses because of the limitations of the software. After meeting with a couple of their engineers, we received a pre-release version of Double-Take 6, which has thankfully removed all dependence on WMI. With Double-Take 6, we only have to open up a maximum of four ports to mirror an instance across the two DMZs. The test installation, after a couple of hiccups (this is pre-release software, after all), went fairly well and it is looking promising. We still need to compare it against servers running SQL Server 2012, to test the AlwaysOn capabilities against those of Double-Take and weigh the costs, to see which works best for us in the long run; but for now, I think we finally have a DR solution for our DMZ in Double-Take. And even if the AlwaysOn technology in SQL Server 2012 proves to be just as powerful or more so, there is no way that I will be moving 160+ SQL Server instances to SQL Server 2012 any time soon. So here is hoping for continued success with Double-Take as a DR solution in our environment.

 | Posted by tledwards | Categories: Administration, DBAs, HA/DR, Uncategorized |

As many of you have probably noticed, I haven’t blogged in quite a while due to work commitments, health issues and various family commitments (I don’t want to go on too long here with excuses :)), but I decided a perfect first blog post back might be built around the stored procedure that a friend and his consulting group have so graciously shared with the community. I am, of course, speaking of Brent Ozar’s sp_Blitz stored procedure, intended to help a DBA see what they are inheriting when someone dumps a new server on them. I took a slightly different twist on this and figured that it might be a great tool to run periodically against all of the SQL Servers in an environment by creating a report around it.

Some Background

I work as the lead DBA in an environment with over 160 SQL Server instances (a number that seems to grow by at least five or so every quarter) and somewhere in excess of 2,000 databases, ranging in size from what some of you might consider laughably small to some rather large data warehouse databases, many of which are mission critical to our business. To manage this environment, I lead a team of two other DBAs. One started with the company the week after I did and the other, a junior DBA, has been with us just over a year. We have a great team, but even with three DBAs, it is hard to be proactive without some tools to let you know what is going on. Unfortunately, we don’t have the budget for some of the major monitoring tools, as the cost for our environment would be rather substantial. Needless to say, it is left to me and my team to be creative and build our own tools and instrumentation. That is where Brent’s sp_Blitz script comes in. With a report written around it that my junior DBA can go through on a weekly or monthly basis, we can be much more proactive about the basic or fundamental settings that someone who shouldn’t have access always inevitably changes without our knowledge.

 

The Report

So, the report itself is pretty simple. It does require that you have a server with linked servers to all of your SQL Servers (we have a centralized DBA server that we use for this), and the sp_Blitz script, which can be downloaded from here, has to be installed on each of the servers you want to monitor. This is a perfect use for the SQL Server 2008 Central Management Server feature that we have set up on our DBA monitoring server. What I have done in the report is create two datasets: one that queries a table we maintain with an inventory of all of our SQL Servers, which feeds the “Server Name” report parameter, and a second that actually runs the sp_Blitz stored procedure on the server chosen from the dropdown. Brent has a great video on exactly what his script does at http://www.brentozar.com/blitz/. This report just gives you a format that you can run from your Reporting Services site, or even schedule in a Reporting Services subscription and have it automatically emailed to you or posted to a document library on a SharePoint site if you are running Reporting Services in SharePoint integrated mode. The report does require that your Reporting Services instance is at least 2008 R2 in order to work. One of the nice things about this report is that the URLs that Brent provides in the output of his stored procedure are active links in the report, so if you click a URL cell, you will be taken to the page on Brent’s site that explains that finding. Below are some screenshots of the report in collapsed and expanded form (all private information has been blacked out to protect the innocent, or at least those who sign my paycheck :)):

 


Figure 1 Collapsed Version of Report

 


Figure 2 Expanded View of Report

 

Setting Up The Report

So, to use the report that is freely downloadable at the end of this blog post, all you need to do is go into the data source for the report and change it to the name of your monitoring SQL Server, or at least a server that has linked servers to all of the servers that you want to manage with this report, like so, replacing the text <Type Your Monitoring Server Here> with the name of your monitoring server:

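For reference, an embedded data source for a report like this typically just needs the server name (and, optionally, a starting database). It ends up looking something along these lines, where the server name is the only piece you should have to change (the Initial Catalog value below is just an assumption):

    Data Source=<Type Your Monitoring Server Here>;Initial Catalog=master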

 

The next step is to make sure that you have a table on your monitoring server that has an inventory list of all of the servers in your environment. Then replace the <ServerName.database.schema.tablename> text in the query in the Servers dataset with the pertinent information for your environment. See below:
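If it helps, here is a rough sketch of what that Servers dataset query might look like; the table and column names below are only placeholders for whatever your own inventory table uses.

    -- Sketch: the Servers dataset that feeds the "Server Name" parameter.
    -- Swap in your own inventory table for the placeholder name below.
    SELECT ServerName
    FROM   DBAMonitor.dbo.ServerInventory   -- hypothetical inventory table
    ORDER BY ServerName;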

 


 

 

From here, it is just a matter of deploying the report to your Reporting Services server and making sure that Brent’s stored procedure has been created on all of the servers that you wish to monitor.
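If you are curious what the second dataset is doing under the covers, here is a minimal sketch of running sp_Blitz through a linked server. In the actual report, @ServerName is supplied by the “Server Name” parameter; it is declared below only so the script can be run on its own, and the linked server needs RPC Out enabled for the remote EXEC to work.

    -- Sketch: run sp_Blitz on the selected server over its linked server.
    DECLARE @ServerName sysname = N'YOURSERVER';   -- placeholder; the report parameter supplies this
    DECLARE @sql nvarchar(max);

    SET @sql = N'EXEC ' + QUOTENAME(@ServerName) + N'.master.dbo.sp_Blitz;';

    -- Requires sp_Blitz to exist in master on the target server.
    EXEC (@sql);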

 

The report can be downloaded here (you will need to go to Brent’s site mentioned earlier in this blog post to get the latest version of his sp_Blitz script). I hope that you find this to be one of the many helpful tools in your tool chest to keep your environment in check.


Some of you who are my age will recognize the reference in the title as a line from the movie “Top Gun.” Most of you will probably look at the title and think that this blog post is going to be about project management. Unfortunately, you may be disappointed to learn that it is really more of a personal blog post – one about life management.

Not to be confused with the impact of spending a summer in Tucson :) (www.flickr.com/photos/nickdouglas/58786813)

 

For much of this year, I have pretty much felt like the title of this post. This last week and weekend, I finally came to realize the impact that moving at this pace for as long as I have has had on me and, more importantly, my family. This weekend was the first time in a long time that I have truly taken a weekend off from work. Initially, I did it more out of exhaustion and being burned out, but I got a lot more than rest out of it. It was the first time in a long while that I took the time to laugh with and thoroughly enjoy my family without having things like work, studying for MCITP exams, the PASS Virtual Chapter that I lead, etc. nagging at me in the back of my mind. I discovered that you need to be very careful not to let outside responsibilities and activities take over your life and cause you to take your family for granted. Luckily, I have an extremely wonderful and supportive wife and great kids who have been very understanding throughout this hectic year. Such a support structure is a gift that we have to be very careful not to overuse.

 

In this time where job security is probably at its lowest level in several generations, we have to be careful to leave time for our families and loved ones while also trying to hold on to our jobs. It is easy to lose focus and not give the proper amount of time and attention to those we love because they are not the proverbial squeaky wheel when we have things like projects, training, work travel, conferences, etc. tugging on us. It is a difficult balancing act to be sure, but one that I believe will pay dividends over and over the better we become at it. The thing we have to realize is that our loved ones will probably be the last ones to call us on this, so we have to make sure to be vigilant in keeping things balanced. Because of this, I have decided that even though resolutions are made at the beginning of the year, I am going to start mine early and resolve to try to cut down on the outside activities that have kept me from fully enjoying my family and managing the balance in my life. I think that not only will everyone in my family be better for it, but that I will be more productive and happy in the activities that I do decide to continue engaging in.

 
 | Posted by tledwards | Categories: Personal, Uncategorized | Tagged: |

PASS has a great slate this year – all of the candidates have strengths that will bring value to the PASS organization. I’ve had the opportunity to work closely with Allen Kinsel since March of this year and thought I’d take the opportunity to share why I think that he would be an excellent Board Member.

 

Professionalism – Now this isn’t to say that Allen can’t fully appreciate a ‘colorful’ joke or that there aren’t times that he needs to rant. He’s human like everyone else. In the dealings (that I’ve been a part of) with vendors, Microsoft, volunteers, etc., he’s listened and been respectful. He is able to ask the tough questions and make the comments that need to be heard without coming off as aggressive. While that should be a quality that we should expect of professionals, unfortunately it isn’t always the case. I feel that it’s important for leadership to know the difference between reactionary venting about a perceived wrong and providing the community with comprehensive, balanced information.

 

Transparency – I know that this is a big issue for most of the PASS community. Allen’s blog posts show a consistent effort to keep the community aware of what the Program Committee is up to, the decisions that have been made and the reasons for those decisions.

 

The status quo – Throughout the process of putting together the Summit, we’ve been asking ourselves questions: Does this work? Is it efficient/effective? Is it necessary? Do we need to change it? Allen’s definitely not going through the motions here and I doubt that he would on the Board of Directors either.

 

Involving the community – It should be apparent from Allen’s latest posts (here and here) that he has been striving to increase community involvement in the PASS Summit. The latest experiment, with the community choice sessions, seems to have been extremely well received. Without putting words in Allen’s mouth, I think that he feels that it’s the PASS Summit, so the PASS community should have the opportunity to make some choices about the content delivered there.

 

Working with volunteers – We can start with this – I was and still am a noob as far as the Program Committee is concerned. I had attended one PASS Summit (last year) and my volunteer experience with PASS was negligible. Why was I given the opportunity to work on the Program Committee in the capacity that I am now? I asked to help. He recognizes the need for volunteers and the value that they provide. After the abstract selection teams were finished and the selected abstracts had been announced, Allen went back and had conference calls with all of the teams to get their input on what worked, what needed to be changed and what would make this process better.

 

Allen doesn’t walk on water. He doesn’t travel to the Summit by way of a winged unicorn. His supply of pixie dust is paltry or maybe non-existent. I may have seen him consume bacon, but it was way too late in the night (or early in the morning) for me to be sure. I do know that I have respect for Allen. I’m constantly impressed by his continued enthusiasm about the possibilities of PASS to make a difference for data professionals. Don’t take my word for it – read his blog; read his answers on the election forums. Without question, I’ll be voting for him in the upcoming elections. I think you should, too.

 | Posted by tledwards | Categories: PASS | Tagged: , |

Who is PASS, really?

20 August 2010

After attending the PASS Summit last year, I made a decision to become more active in the PASS Community. During the Summit, I had the opportunity to meet so many incredible people from the community – chapter leaders, regional mentors, speakers, board members and just normal folks like me.

As many of you know, I’ve been a part of the Program Committee for the last six months. Originally, I was tapped to head up a task team – a group that would work on projects that had been on the radar, but hadn’t had the manpower to get them completed. Along the way, I became more involved with other aspects of the Program Committee – the things that need to happen so that the PASS Summit can occur.

I think I was sucked in by the vision of the weekly meetings being wonderful opportunities in which we were magically transported to a beautiful meadow with full-on double rainbows, prancing unicorns and woodland nymphs presenting us with bacon-wrapped treats. It was that, sometimes, but it also was long hours, endless emails, looming deadlines and seemingly insurmountable roadblocks. Even with those, the thought of being involved in pulling together a valuable, enjoyable event for the community pushed us forward. I had the opportunity to work with a huge number of volunteers (many of whom I’ve never met face to face) that put in extraordinary effort and working with members from PASS HQ that were very helpful and hardworking.

Reading the tweets and blogs over the last few days makes me wonder if I’ve been duped. I’m continually seeing that PASS has failed and that PASS doesn’t want to get it right and how people are frustrated with PASS. Apparently PASS is some evil, faceless organization that has committing mayhem and creating obstacles as its sole agenda. I’ve listened while people close to me have become disenchanted with PASS as a community, not because of decisions that have been made, but by the reactions of the community members in these last few days.

I’ve been accused of being a Pollyanna before and probably will be now, but I thought PASS was more than the BOD and committees. I thought PASS was the people that lead and contribute to user groups and virtual chapters, speakers, volunteers, Summit and SQLSaturday attendees and all of the rest of the people that participate with PASS in some manner. Things have happened that I disagree with and missteps have been made. I’ve voiced my opinion when I thought it was necessary and tried to address issues with the people involved – I strive to listen and understand the reasons behind decisions just as I hope that they listen and try to understand my points. In any large group of passionate, intelligent professionals, there will always be disagreement. The only difference is how that dissent is expressed and handled.

If I were a data professional that was just beginning to read blogs and get involved with Twitter, I seriously doubt that I would join PASS. I definitely wouldn’t volunteer for anything PASS related. So if you really believe that PASS is irrevocably broken, walk away – people will stop joining and stop volunteering and PASS will eventually fade away.

For me, PASS is the community of its members. That community is valuable to me, so for now, I will continue to volunteer and continue to suggest changes. I will continue to believe that the vast majority of PASS is committed to making this community a valuable organization.

 | Posted by tledwards | Categories: Discussion, PASS | Tagged: , |

I’ve never had the opportunity to be on the abstract selection committee, so it was interesting to see the process in action. To be clear, I was not on one of the selection committees, but I am on the Program Committee so I was still involved in the process.

The abstract selection committees are chosen out of the group of people that apply to volunteer for the Program Committee. We work to ensure that each team includes at least one person that has been on an abstract selection team in the past. Our hope is that they can provide some additional guidance. We also provide at least one training session to go over the tools and answer any initial questions.

Prior to the call for speakers, the number of allocated sessions is set. Sessions are allocated in total to fit the number of rooms that we have available. That total number is then split between the tracks (Application and Database Development; BI Architecture, Development and Administration; BI Client Reporting and Delivery Topics; Enterprise Database Administration and Deployment; and Professional Development) to help make certain that we provide a balanced Summit selection.

Once the call for speakers closed, we knew that the abstract review committees were going to be in for a lot of work. Here are the numbers that we were looking at:

Total # of regular session abstracts submitted: 513
# of regular session community slots allocated: 72

Doing the math, that means that only 14% of the abstracts submitted were going to be selected. Within the tracks, that percentage ranged from 11% to 18%.

During the review process, the individuals on each team go through the abstracts in their track and rate them in four different areas – Abstract, Topic, Speaker and Subjective. Each of these areas is rated on a 1-10 scale, and there is room for comments. The Abstract rating has to do with, among other things, whether the abstract was complete (were session goals identified?), clear (was it easy to understand what the session would be about?) and interesting. The Topic rating covered the interest in and relevancy of the chosen topic. For the Speaker rating, the abstract review teams had access to a report that provided evaluation data for previous Summit speakers; they could also draw on personal knowledge or other information that they had access to. All of the individual scores added up to a total rating per abstract for the team.

Once the individual team members were finished with the evaluations, they came together as a team to rank the sessions. Along with looking at the total rating, they also looked at the different topics that were covered to ensure that the sessions covered a broad range of topics. Once the abstracts were ranked, the teams updated the session status to Approved, Alternate or Considered (Not accepted). If the status was Considered, the teams provided a reason as to why the abstract was not selected.

At that point the list of sessions came back to the Program Committee managers. We made certain the correct number of sessions per track were chosen and that no speakers had more than two sessions. There were a couple of cases where speakers had more than two sessions – for these cases, we went back to the teams for additional selections.

That’s it. Well, I guess I mean, those are all of the steps – it’s a ton of work and I’m grateful to everyone involved for all of their hard work. We recognize that there are probably ways to improve the process and we’re in the process of setting up meetings with all of the teams to get their input. I hope this provides clarification to some of the questions that people might have about the abstract selection process.

 | Posted by tledwards | Categories: PASS | Tagged: , |

July is definitely a painful time to be in Tucson.  It’s hotter than all get out and monsoon season has usually started, so for a while we have heat AND humidity.  Oh joy.  Fortunately, we have some SQL Server based events coming up to take our minds off of the disagreeable weather.

At least this calendar has green on it...

 

Tim’s heading up the new incarnation of the PASS Performance VC.  On July 6, Jason Strate (Twitter/Blog)  is going to be presenting a webcast for them entitled: ‘Performance Impacts Related to Different Function Types’.  It should be a great session.

 

On July 17, Phoenix is having its first SQLSaturday.  That in and of itself is pretty exciting, but Tim and I are going to be presenting two sessions there.  This is our first time presenting, so it’ll be a great learning opportunity for us and a potential opportunity for up and coming hecklers.  If you’re somewhere around Phoenix, you should take advantage of the opportunity.  If you’re not around Phoenix, but want to see what it would feel like to step into an oven, come on out.  (See note below.)

 

Then on July 21st, Quest is holding another Virtual Training Event on Performance Monitor and Wait Events.  Brent Ozar (Twitter/Blog), Kevin Kline (Twitter/Blog), Buck Woody (Twitter/Blog) and Ari Weil (Twitter) will be presenting.  It should make for an interesting and potentially hilarious event.  Aside from being great training, it’s relevant here because they’ll be presenting live from beautiful Tucson.  Hopefully we’ll be able to meet them for dinner and take them to another top-notch Old Pueblo eatery.

 

One final note – the session lineup for the PASS Summit 2010 will be finalized in July.  This is due to a huge amount of great work by the volunteers on the Program Committee.  If it’s June and you’re reading this, send some good thoughts their way – they’re busy.

 

 

Update:  The SQLSaturday in Phoenix has been postponed until Jan/Feb 2011.  Hopefully many more people will want to come to Phoenix when it’s not 110 degrees out.

The ability to lock pages in memory is used on 64-bit systems to help prevent the operating system from paging out SQL Server’s working set.  This may be enabled when the DBA starts seeing errors like the following:

 

"A significant part of sql server process memory has been paged out. This may result in a performance degradation."

 

If you’re running SQL Server 2005/2008 Enterprise, you would take the steps to lock pages in memory and you’re done with it.  If you’re on SQL Server 2005/2008 Standard Edition, you still have a ways to go.  The ability to lock pages in memory for Standard Edition is enabled through a trace flag.  For SQL Server 2005 SP3, you need to apply CU4.  For SQL Server 2008 SP1, you need to apply CU2.  Once those CUs have been applied, set trace flag 845 as a startup parameter.  Here’s a good ServerFault question that explains how to set a trace flag as a startup parameter.

 

Once the trace flag was enabled, our memory issues were solved.  Day saved, once again. :)
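If you want to confirm that locked pages actually took effect after the restart, one option on SQL Server 2008 and later is the sketch below against sys.dm_os_process_memory; on SQL Server 2005 you can instead look near the top of the error log for a message along the lines of “Using locked pages for buffer pool.”

    -- Sketch (SQL Server 2008 and later): a non-zero locked_page_allocations_kb
    -- indicates the buffer pool is using locked pages.
    SELECT physical_memory_in_use_kb,
           locked_page_allocations_kb
    FROM   sys.dm_os_process_memory;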

 

As with anything, this has the potential to degrade system performance.   In this article, scroll to the section entitled “Important considerations before you assign “Lock Pages in memory” user right for an instance of a 64-bit edition of SQL Server”.  Read it thoroughly prior to making any changes to your production systems.