PASS Summit 2013
After taking a year off, I’m heading back to the PASS Summit in Charlotte. This year will be a little different for me since I’m attending to help represent my company, SQL Sentry. I’m looking forward to seeing some people that I haven’t seen in a couple of years. I’m also looking forward to seeing a bit of Charlotte – SQL Sentry is actually located in Huntersville, close to Charlotte, but I’ve never really been there.

The first year that I attended the Summit, in 2009, I went as a regular attendee. I met some people, attended a ton of sessions, and went to some evening events, most notably a very late-night breakfast on the last day of the Summit, where I met Allen Kinsel (Blog/Twitter).

As a result of that meeting, I worked on the Program Committee for the next two years. Working on the Program Committee is great: a ton of work and a smidge of stress, but I had the opportunity to meet so many people, including wonderful volunteers. It's also one of those volunteer experiences where you actually get to see the results of all of your hard work. Unfortunately, since there is still work to do during the Summit itself, I probably only made it to one or two sessions in each of those two years.

This year, I’ll be working at our booth throughout the Summit. I’ll be demoing our awesome software, spending time with our team, and talking with Summit attendees. We’ll probably also spend a decent amount of time trying to get Kevin (Blog/Twitter) into a kilt. I’m excited to talk with the folks that come to our booth, to let them know how SQL Sentry might help them, and even to pass on some knowledge that I may have gathered during my time as a production DBA. That is one of the things that I love best about my job: the opportunity to help people out. I doubt that I’ll make even one session this year, but I know that, once again, the Summit is going to be a great experience.

Before attending my first Summit, I had read that meeting other database professionals was equal in value to the knowledge that you get from the sessions. Admittedly, I was skeptical. The Summit has a huge number of great sessions by great speakers, and that was the reason that I wanted to attend. Now, though, heading into my fourth Summit, knowing that I won’t be attending sessions, I’m just as excited as I was before my first. I’m going to have the opportunity to see old friends and meet new people. The learning continues outside the session halls, in discussions with other professionals and in hearing about the challenges that they face in their jobs.

The word community, as much as it gets passed around, really applies at the Summit. I’m glad to be a part of it and I’m looking forward to participating in this new role. When you have a moment, come by and say hi to me and the rest of our team!


If you have used the Central Management Server (CMS) feature in SQL Server 2008+, by now you know that you can’t add the SQL Server instance that hosts the CMS as a registered server on that same CMS. That is, you can’t add it directly. There is, however, a workaround, as long as you connect to your servers via TCP/IP:

  1. Right-click on the CMS and select “New Server Registration”.

  2. In the “Server name:” text box, type the IP address of the server, a comma, and the port number that the instance is listening on (the query following these steps is a quick way to find the port). In this example, the test server’s IP address is 192.168.1.1 and the instance that I want to add, which also houses the CMS, is listening on port 1433.

  3. In the “Registered server name:” text box, type the name of the instance (in this case, Test) and voilà, we have now registered the CMS instance on the CMS.
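If you aren’t sure which port the instance is listening on, one quick way to check is to query sys.dm_exec_connections for your own session. This is a minimal sketch, and it assumes you are already connected to that instance over TCP/IP (other protocols will return NULLs):

-- Returns the IP address and TCP port of the current connection
SELECT local_net_address, local_tcp_port
FROM sys.dm_exec_connections
WHERE session_id = @@SPID;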

Over the last couple of years, I have met a number of you who, like me, are required to struggle daily to make CommVault work as the enterprise backup solution for your SQL Server databases. Given that, I thought I would share one of the issues that we have run into, one that any third-party product that uses Microsoft’s Volume Shadow Copy Service (VSS) could potentially run into as well. To be fair, I have to give the credit for the research and most of the write-up of this issue and solution to one of the DBAs who works for me (whom I am sure many of you know), Samson Loo (twitter: @ayesamson).

Problem:

I/O operations are frozen on one or more databases due to CommVault issuing a “BACKUP DATABASE WITH SNAPSHOT” command, and they remain frozen until the operation completes successfully or is cancelled. (This appears to be known behavior of VSS. If you wish to dig further into how VSS works, I would suggest reading these articles: http://msdn.microsoft.com/en-us/library/windows/desktop/aa384615(v=vs.85).aspx and http://msdn.microsoft.com/en-us/library/aa384589(v=VS.85).aspx.)

This particular message gets logged in the SQL Server error log whenever any backup service makes use of the SQL Server Virtual Device Interface (VDI) to back up the database with a snapshot. Microsoft Backup (ntbackup.exe), Volume Shadow Copy Services, Data Protection Manager, Symantec Backup Exec, and other third-party tools, in addition to CommVault, can cause this message to be logged.

If ntbackup.exe is configured to take a backup of a drive that happens to house SQL Server data files, then the command “BACKUP DATABASE WITH SNAPSHOT” is issued to ensure the backup is consistent since the data files are in use. During this time, the I/O for the database that is currently being backed up is frozen until the backup operation is complete.

The message that you will typically see logged is:

    Error:

I/O is frozen on database master. No user action is required. However, if I/O is not resumed promptly, you could cancel the backup.


    Note: In the example error message above, the database master was referenced, but this could be any database on your instance that is being backed up.
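If you want to confirm when these freezes have occurred, you can search the error log for the freeze and resume messages. This is a minimal sketch using xp_readerrorlog (an undocumented procedure, so treat it as a convenience rather than a contract), and it assumes the messages are still in the current log:

-- Arguments: log file (0 = current), log type (1 = SQL Server error log), search string
EXEC sys.xp_readerrorlog 0, 1, N'I/O is frozen';
EXEC sys.xp_readerrorlog 0, 1, N'I/O was resumed';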

Solution:

Disable VSS on the subclient by following the steps below:

  1. Open the CommVault client (often called the “Simpana CommCell Console”)
  2. Navigate to the target subclient
  3. Right-click anywhere in the white area
  4. Select Properties
  5. Uncheck “Use VSS” and click OK


Again, extreme thanks go out to Samson Loo (twitter: @ayesamson) for providing most of this content!

A couple of years ago, I wrote a blog post comparing a couple of different methods and products for SQL Server disaster recovery. Over the last couple of weeks, my company has had the opportunity to test the pre-release version of the latest version of Double-Take. I want to make it clear: this blog post is not an endorsement or criticism of the product, just some first impressions.

Some History

About four years ago, my company chose Double-Take as a disaster recovery solution because we were in the middle of relocating our data center across the country and we now owned a hotsite. This product, mind you, was chosen by a group of people that completely excluded any DBAs. We stumbled through what seemed to be a very kludgey installation process and finally got it to work on a couple of servers. We then proceeded to install this on all of our servers and were able to successfully use this to transfer all of our SQL Server instances to new hardware in our new data center. Compared to many of the options available four or so years ago, this was considered a big win.


Once the new data center was set up, we proceeded to attempt to get Double-Take installed between the new servers in our new data center and the servers in our hotsite. For many of our servers, the installation went as expected (at least based on our installation experiences from the data center move), and we quickly got Double-Take mirroring up and running on several servers. The problem came when we tried to use Double-Take to mirror several mission-critical servers that happened to sit in our DMZ. Because we were mirroring a server from our production data center that sat in the DMZ, we also had to mirror to a hotsite server that sat in a different DMZ. This exposed a huge weakness in the Double-Take product. Try as we might, we could not get the two servers talking across the two DMZs, because Double-Take depended on WMI calls. The ports used by WMI are dynamic, and you could not predict which of the almost 65,000 ports it would choose. That is not a good thing for a DMZ, and our network group was not going to open up all 65,000 ports in the DMZ (rightfully so) for these two servers just to get Double-Take to work.


Today

Fast-forward four years, and our DR strategy for our DMZ servers hadn’t really progressed much. That is, until we pressed our account management team at Vision Solutions (the company that now owns Double-Take), because we were very tempted to drop all of our licenses given the limitations of the software. After meeting with a couple of their engineers, we received a pre-release version of Double-Take 6, which has thankfully removed all dependence on WMI. With Double-Take 6, we only have to open a maximum of four ports to mirror an instance across the two DMZs. The test installation, after a couple of hiccups (this is pre-release software, after all), went fairly well, and it is looking promising. We still need to test against servers running SQL Server 2012 to compare Double-Take with SQL Server 2012’s AlwaysOn capabilities, and to compare the costs, to see which works best for us in the long run; but for now, I think we finally have a DR solution for our DMZ in Double-Take. And even if the AlwaysOn technology in SQL Server 2012 proves to be just as powerful or more so, there is no way that I will be moving 160+ SQL Server instances to SQL Server 2012 any time soon. So here is hoping for continued success with Double-Take as a DR solution in our environment.


The Generate Scripts task in SSMS is insanely handy.  A few quick clicks and you can have tables, stored procedures, users, etc. scripted out.  I’ve used it quite a bit, but I ran into an unusual situation yesterday.


I needed to create a database schema and thought I’d use the handy dandy Generate Scripts task.  Popped through the wizard, clicked finish and it errored!  Here was the error message:

[Image: scripts_error, the Generate Scripts error message]

I was thoroughly confused – I was running SSMS 2008 against a SQL Server 2008 server.  I wasn’t sure where SQL Server 2005 even came into play.

I went through the process again, this time paying closer attention and noticed this:

[Image: script_option, the wizard’s scripting option defaulting to SQL Server 2005]

Apparently the Script Wizard defaults to SQL Server 2005.  I changed it to SQL Server 2008 and everything ran as expected.  While I had run this task against other SQL Server 2008 instances, apparently none of them made use of the new data types in 2008 and, as a result, never generated errors.  Now why it would default to SQL Server 2005 is an entirely different question….
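For what it’s worth, a hypothetical table like the one below should reproduce the problem: with the wizard’s target version left at SQL Server 2005, scripting fails because the date/time types used here didn’t exist until SQL Server 2008.

-- Hypothetical repro table: the date/time columns use types new in SQL Server 2008
CREATE TABLE dbo.ScriptingRepro
(
    ReproID     INT IDENTITY(1,1) PRIMARY KEY,
    CreatedOn   DATETIME2(7) NOT NULL, -- new in 2008
    EffectiveOn DATE         NOT NULL, -- new in 2008
    StartTime   TIME(0)      NULL      -- new in 2008
);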


[Image: baby_facepalm]


Okay, maybe I’m being a little sarcastic.  I don’t troubleshoot dynamic SQL very often, so I don’t always see potential issues right away.  For those dear readers who work with it regularly, you should stop reading now – this is all pretty basic – but it took a few minutes out of my day.


[Image caption: This is the only dyna- that I like]

My troubleshooting methods consist of displaying the command created by the dynamic SQL and seeing if it runs correctly or if I’m missing a quotation mark or something along the way.  There is probably a better way to troubleshoot, but again, I play with it so rarely that I’m stuck in a rut.


Evaluating Dynamic SQL commands

Late last week, a developer sent the following block of dynamic SQL code because he was having issues getting it to work:

EXEC
('
USE [master];
BEGIN
ALTER DATABASE [random_dbname] SET ONLINE;
WAITFOR DELAY ''00:01'';
END
USE [random_dbname];
'
)


I followed my normal troubleshooting methods and everything worked fine.  When I tried to execute it as written above, though, I received the following error message:


Msg 942, Level 14, State 4, Line 7
 Database 'random_dbname' cannot be opened because it is offline.


On first glance, I was confused, because it was obvious that I had brought the database online.  I soon realized, though, that everything within the parentheses was compiled as a single batch before any of it was executed, and at compile time the database was still offline.  Apparently SQL Server has a shorter memory than I do.


Breaking it into two separate statements, as below, accomplishes what needed to happen:

EXEC
('
-- Bring the database online
USE [master];
BEGIN
ALTER DATABASE [random_dbname] SET ONLINE;
WAITFOR DELAY ''00:01'';
END
')
GO
EXEC
('
USE [random_dbname];
/*blah blah blah*/
')

Thinking that this ‘emergency’ had been handled, I went back to my other tasks. 


Database Context and Dynamic SQL

As these things happen, though, I received another call: after he ran all of this, the database context remained master.  Fortunately, this was easy to explain.  The database context switch exists only during the execution of the EXEC statement and does not persist after its completion.
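Here’s a minimal sketch that shows the behavior in isolation, assuming you run it from master:

USE [master];
EXEC ('USE [tempdb]; SELECT DB_NAME() AS InsideExec;'); -- returns tempdb
SELECT DB_NAME() AS AfterExec; -- still master; the context switch ended with the EXEC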


None of this is rocket science or even deep SQL knowledge, but maybe it’ll save a minute or two for some other DBA out there.


Last year my better half, Tim, suggested that we start a blog. It made sense for any number of reasons, but it scared the heck out of me. I couldn’t imagine that there was anything that I could ever blog about that hadn’t already been posted and probably by someone with much more experience than I have. I have a tendency (as I did today) to go out and search for other blog posts that cover the material that I’m about to write about to ensure that I’m at least adding something new with my post.


In school, if you’re assigned a paper on the pastoral imagery used in “The Adventures of Huckleberry Finn”, the instructor knows that there have been several works on that particular subject and assumes that you will be using (and referencing) information from those works. Blogging, for most people though, is not an assignment – it’s something that you make the choice to do. The people that read your blog assume that the ideas, tips and facts that you blog about are yours, unless you attribute them.


Over the past few months, there have been numerous tweets and blog posts about bloggers who have been plagiarizing other people’s work. In some cases the posts were lifted word-for-word; in other cases they were selectively reworded, but they were still identifiable. I have no idea whether it was intentional or whether the authors were uninformed about how to use information from other posts. K. Brian Kelley [Blog/Twitter] wrote a post ‘Avoiding Plagiarism’ a couple of weeks ago. I thought I’d take this opportunity to add a little more information. As a note, I’m not an expert in plagiarism, so if any of you reading this post find errors, please comment and I’ll update this post.

On dictionary.com, the Random House Dictionary definition of plagiarism is:
“1. the unauthorized use or close imitation of the language and thoughts of another author and the representation of them as one’s own original work.
2. something used and represented in this manner.”


Reading this definition clarifies the reasons for my fear of blogging. I would never lift language from another blog post, but there have been blog posts that have inspired me (like K. Brian Kelley’s) to write a post. Here are some ways that I handle referencing other works.


I think you should read this

Example: Kevin Kline wrote an excellent post about the pains of not being included in a meme. You should read it here: http://kevinekline.com/2010/01/14/goals-and-theme-word-for-2010/
In this case, I have nothing to add, but I want to give my audience the opportunity to read great posts that I’ve come across.


You can say it better than I can

Example: PowerShell is fabulous. It’s so awesome that it’s caused some otherwise contentious DBAs to wander astray. Colin Stasiuk [Blog/Twitter] admitted as much in a recent blog post: “…it’s no secret that I’ve been having an affair on TSQL. I’ve been seeing PowerShell behind her back and I gotta tell ya even after the initial excitement of a new language I’m still loving it.”
I know that I couldn’t have said it better than Colin, so in addition to linking to his post, I quoted his remark. Quotes should be used sparingly – if you find yourself quoting more than a sentence or two, you should probably use the example above.


Note: Blogs, whitepapers or other articles that are copyrighted require permission prior to their use. In addition, some online works have posted requirements on how they can be used. Brent Ozar (Blog/Twitter) has a good example of that here.


This is what I researched

Example: While sp_change_users_login has an auto_fix option, that action assumes that the username and login name match; if they don’t, it will fail. Using the Update_One option is a safer, preferable way to handle it. For SQL Server 2005/2008, the ALTER USER statement is the preferred method for mapping users to logins. Greg Low’s (Blog/Twitter) article ‘Much ado about logins and SIDs’ provides a good explanation of these methods.
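As a quick illustration (with hypothetical user and login names, so adjust for your environment), the two approaches look like this:

-- Older approach: explicitly remap one database user to one server login
EXEC sp_change_users_login 'Update_One', 'SomeUser', 'SomeLogin';

-- Preferred in SQL Server 2005/2008
ALTER USER SomeUser WITH LOGIN = SomeLogin;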


This is probably where unintentional plagiarism occurs most often. If, during your research, you read blog posts, articles, whitepapers, etc. and find useful information, your best bet is to attribute them. If you recall the definition of plagiarism above, it applies to both language and ideas, so if you learned something that you’re passing on in a blog post, or if you’re using that information to validate your ideas, it needs to be cited. Again, keep in mind any copyright laws that might apply.


What doesn’t need to be cited

Common knowledge/generally accepted facts

Items that are common knowledge or generally accepted facts do not need to be cited.  Examples of common knowledge are:

  • A table can only have one clustered index
  • SQL Server is an RDBMS
  • Most SQL Server based questions can be answered with “It Depends”

 There is a decent article on common knowledge here.


Results of personal research

If you’re blogging about an incident that occurred or the results of a test that you ran, they don’t require a citation. That is, unless you did research to solve the incident or used other information to validate your test results.


Fair Use

The term ‘Fair Use’ has been bandied about in the recent plagiarism incident. Fair use has no exact definition; rather, it is determined by a set of guidelines. There is a good definition at Plagiarism.org and a good article titled “The Basics of Fair Use” by Jonathan Bailey. According to Plagiarism.org, the guidelines look at:

  1. The nature of your use
  2. The amount used
  3. The effect of your use on the original

Fair use is difficult to pin down, and personally, I wouldn’t want to try to stand behind that argument.  The incident mentioned above definitely fell outside of those guidelines, in my opinion.


Public Domain

At some point, works fall out of their copyright term and become part of the public domain.  The Wikipedia article regarding the public domain can be found here.  While copyright law no longer applies to these works, they still require citations.  This point is moot for any SQL Server blogs since, at this time, there aren’t any works old enough to have fallen out of their copyright term.


Conclusion

There is a huge amount of helpful information in blogs. Blogging also provides an opportunity for us to share information and experiences. I think that it’s understood that we learn from other people – just ensure that you credit those people for their hard work.


As DBAs, we are increasingly being asked to manage more and more technology.  Some of that is the result of internal pressures (i.e., taking on additional roles) and some is the result of Microsoft bundling an ever-increasing array of different technologies within SQL Server.  Dealing with these various technologies has become a weekly issue for me, but it really came to light today when I had a SQL Server Reporting Services 2008 server that was consuming 100% of the CPU.  Doing some poking around, I realized that not only did I not really know anything about this beast called “SQL Server Reporting Services”, but the tools to manage it are extremely lacking (that is my opinion, coming from a position of complete ignorance about this technology).  I connected to the SSRS 2008 service with SSMS and, from there, I could only view three things: SSRS jobs, security, and shared schedules.  I determined that none of the shared schedules was responsible for the utilization, since nothing was scheduled to run anywhere near the time that the problem started, so that was a dead end.


Next, I connected to the Report Service by hitting http://[servername]/reports.  From here, I could look at all of the various reports that had been deployed to the instance, general site settings, security, both instance-wide and at a report level, and I could look at my own subscriptions.  The one thing that seemed to elude me was visibility into what, if anything, users were running on the SSRS server.


Frustrated, I connected through SSMS to the database instance that hosts the ReportServer database.  I figured there had to be something in the database I could query to give me some visibility into what my SSRS instance does all day.  Thinking like a DBA, the first thing I did was look under “System Views” in the ReportServer database.  I saw two views, ExecutionLog and ExecutionLog2, so I decided to do a simple SELECT TOP 100 * from each to see what they would give me.  This is where I stumbled upon my gold nugget for the day.  Right there in the ExecutionLog2 system view was all of the information that I had been looking for.  Running the following query, you can get a wealth of valuable information on what reports users are running, when they are running them, what parameters they used, and how long the report took to generate (broken down into data retrieval time, processing time, and rendering time) – all key information for trending the load that your server is under and <gasp> justifying new hardware, if needed.


DECLARE @StartDate DATE = '20100101'; -- some start date that you supply
DECLARE @EndDate   DATE = '20100201'; -- some end date that you supply

SELECT InstanceName
       , ReportPath
       , UserName
       , RequestType
       , Format
       , Parameters
       , ReportAction
       , TimeStart
       , TimeEnd
       , TimeDataRetrieval
       , TimeProcessing
       , TimeRendering
       , Source
       , Status
       , ByteCount
       , [RowCount]
       , AdditionalInfo
FROM ExecutionLog2
WHERE TimeEnd >= @StartDate                  -- compare the datetimes directly; mm/dd/yyyy
  AND TimeEnd < DATEADD(DAY, 1, @EndDate);   -- strings don't sort chronologically


To many of you who regularly use SSRS this may be very remedial, but I figured I would throw this out there for those DBAs who, like me, have to learn this stuff on the fly in a crisis.


By the way, as a side note, for those who are curious about why the ExecutionLog2 system view was used instead of the ExecutionLog system view, it appears that the ExecutionLog system view exists for backward compatibility for SQL Server 2005 Reporting Services upgrades.  The ExecutionLog2 system view provides much more information than the ExecutionLog system view.

[Image caption: It would be so nice if something made sense for a change.]

A couple of weeks ago, we had a database that had become corrupted due to a disk failure.  After running DBCC CHECKDB, it appeared that there was one table with a corrupted clustered index.  Paul Randal (Twitter/Blog) verified that this was the case and that it was, unfortunately, non-recoverable without data loss.  Fortunately, we had backups, and this database was a replication subscription, so between the backup and the publication we were able to rebuild the database without losing any data.  We then had to rebuild a kind of goofy replication setup, but that’s another story.


As I was doing some validation of the data that was contained in the backups and what needed to be pulled down from the publication, I realized that there was at least one other table that was corrupted.  This is where it became confusing.  I went back through the output from the CHECKDB and noticed that the table in question was not mentioned.  At this point, I matched up all of the tables that were listed in the CHECKDB output against the actual tables in the database and found that there were three tables in the database that were not listed in the DBCC output.  I ran DBCC CHECKTABLE against the missing tables, and while two of them came back with no errors, one was definitely corrupted.  The CHECKTABLE command actually “terminated abnormally” on the corrupted table.  I went back through the error logs and dump files that had been generated at that time.  Aside from the references to the CHECKTABLE that was run, the only object referenced was the table that had shown up in the CHECKDB output.
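If you ever want to cross-check CHECKDB’s coverage the same way, a rough sketch like the following will run DBCC CHECKTABLE against every user table in the current database. This is just my quick-and-dirty approach, not anything official:

DECLARE @cmd NVARCHAR(1000);
DECLARE table_cursor CURSOR FOR
    SELECT N'DBCC CHECKTABLE (''' + QUOTENAME(s.name) + N'.' + QUOTENAME(t.name) + N''') WITH ALL_ERRORMSGS;'
    FROM sys.tables AS t
    INNER JOIN sys.schemas AS s ON s.schema_id = t.schema_id;

OPEN table_cursor;
FETCH NEXT FROM table_cursor INTO @cmd;
WHILE @@FETCH_STATUS = 0
BEGIN
    PRINT @cmd; -- record which table is being checked
    EXEC sys.sp_executesql @cmd;
    FETCH NEXT FROM table_cursor INTO @cmd;
END;
CLOSE table_cursor;
DEALLOCATE table_cursor;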


As a note, this is a SQL Server 2008 SP1 database.  I know that, when running CHECKDB from SSMS, it will only display the first 1,000 errors, but there were only 274 errors mentioned in the output, so I didn’t think that was the issue.  I asked Paul Randal (mentioned above), and his thought was that perhaps the metadata had also been corrupted, and because of that, CHECKDB may not have seen those tables as existing.


We’ve recovered from the incident, but it was a curious experience.  Up until now, I’ve gone off of the CHECKDB output, but this gives me reason to think that there might be tables that aren’t included. I am interested to know whether anyone else has ever run into a similar incident.  Hopefully it was just an (odd) learning experience.

SQL Search – a review

2 February 2010

SQL Search, a new (free!) SSMS plugin from Red Gate, was released yesterday, February 1st.  This tool allows you to search for text within SQL Server objects.  I’ve had occasions where I needed to search legacy stored procedures for deprecated code (backup log … with no_log – egad!).  There have also been times where database servers were renamed and any references to the previous server name had to be changed.  Prior to the beta version of this tool, I’d used the syscomments system table as a way to find text within stored procedures, but the results definitely weren’t pretty.
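For comparison, here is roughly what the system-view approach looks like. In this sketch I’m using sys.sql_modules rather than syscomments, since syscomments splits long definitions into 4,000-character chunks and can miss a match that straddles two rows:

-- Find every module whose definition contains the search term
SELECT OBJECT_SCHEMA_NAME(m.object_id) AS SchemaName,
       OBJECT_NAME(m.object_id)        AS ObjectName
FROM sys.sql_modules AS m
WHERE m.definition LIKE '%no_log%';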


SQL Search allows you to enter a piece of text, specify either all objects or specific object types, and choose whether you want to search all databases or a single database.


In the example below, I searched for the term ‘Reject’ within all objects in the Reporting database.  The list of objects, their types, and how the match was made is displayed in the upper half of the window.  When the object highlighted in the top half is a stored procedure or function, its text is displayed in the bottom half of the screen, with the matching text highlighted.

[Image: sqlsearch, the SQL Search results window]


One of the most helpful features is the link in the middle of the screen to ‘Select object in Object Explorer’.  Just as it sounds, a click will take you directly to that object.


In working with it, I haven’t noticed many downsides.  It takes a bit of time the first time a database is searched, as SQL Search indexes it, but after that, searches seem to go pretty quickly.  The first time that you search a database on a server, SQL Search needs to be opened within the context of that server, but after that point, the server is added to the dropdown box of selectable servers.


SQL Search seems to be a helpful tool and, as I mentioned before, it’s free.  To download, just click here to go to the Red Gate site.
