We’ve been in the process of setting up reporting servers for some of our larger clients.  We’re also in the process of setting up DEV and QA environments (yes, Colin, I know, I know).  This involves restoring databases onto the new servers.

 

Transferring logins

There is a fairly detailed knowledge base article from Microsoft outlining how to transfer logins between servers.  This post outlines the gist of that article.  The first step is creating the sp_hexadecimal stored procedure in the master database.  This stored procedure will be used by sp_help_revlogin to create a hashed value for the login password.  The second step is to create the sp_help_revlogin procedure – there is a script for SQL Server 2000 and one for SQL Server 2005/2008.  The output from running the sp_help_revlogin procedure on the source server is a series of CREATE LOGIN statements for all of the existing logins.
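
Once both procedures exist in master, generating the script is a one-liner.  The output looks roughly like the sketch below – the login name, password hash and SID shown here are just placeholders; yours will be real names and long hexadecimal strings.

-- Run on the source server; copy the output into a query window on the destination server
EXEC master.dbo.sp_help_revlogin

-- The generated output looks something like this (values are placeholders):
-- CREATE LOGIN [AppLogin] WITH PASSWORD = 0x0100ABCD... HASHED, SID = 0x1234ABCD...,
--     DEFAULT_DATABASE = [master], CHECK_POLICY = OFF, CHECK_EXPIRATION = OFF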

 

There are three main benefits for using this method to transfer logins:

  1. It’s fast.  No scripting out individual logins.
  2. If the logins and users are from the same server, the user and login SIDs will match, avoiding any ‘orphaned users’.
  3. The passwords come across with the logins.  Definitely preferable to keeping an Excel spreadsheet tucked away with all of the logins and associated passwords.

 

As with any process that is automatically generated, it’s best to review the scripts carefully before running them on the destination server.  A couple of issues that you need to watch out for are:

  • If a login has a defined default database that doesn’t exist on the destination server, obviously the CREATE LOGIN statement will fail.
  • If there are existing logins on the destination server, there is the possibility that a SID may already be in use.  In that case, you will need to create a new login and map the existing database users to it (a quick query for spotting potential SID collisions is sketched below).
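
If you want to eyeball the destination server for potential SID collisions before running the generated script, a quick listing of the existing logins and their SIDs (SQL Server 2005/2008 – syslogins would be the 2000 equivalent) gives you something to compare the script against:

-- Existing logins and their SIDs on the destination server;
-- compare these against the SIDs in the CREATE LOGIN script from the source.
SELECT name, sid, type_desc
FROM sys.server_principals
WHERE type IN ('S', 'U', 'G')   -- SQL logins, Windows logins, Windows groups
ORDER BY name;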

Orphaned Users


One issue that comes up when transferring logins is that the database users may not be mapped to the server logins.  This occurs when there isn’t a corresponding login SID for the user SIDs in the database.  If the database(s) being restored are from a single server and the logins were transferred from that same server, the SIDs will match.  If the SIDs do not match, the database users will not be attached to a server login and the user is said to be ‘orphaned’.

 

There are some cases where orphaned users can’t be avoided.  In development environments, the databases on the destination server may be consolidated from more than one production server; logins may overlap and, as a result, the login and user SIDs won’t match.  Another instance would be when a production database is restored to a development server and the rights for the users in the development environment are different from their rights in the production environment.  For both of these cases, steps will need to be taken to match database users with their respective logins on the server.

 

 To determine whether there are any orphaned users, run the following stored procedure:

EXEC sp_change_users_login 'report'

 

There are a few ways to handle these orphaned users.  The sp_change_users_login stored procedure can be used to update the connection between logins and users in SQL Server 2000.  While sp_change_users_login has an Auto_Fix option, that action assumes that the username and login name match; if they don’t, it will fail.  Using the Update_One option is a safer, preferable way to handle it.  For SQL Server 2005/2008, the ALTER USER statement is the preferred method for mapping users to logins.  Greg Low’s article ‘Much ado about logins and SIDs’ provides a good explanation of these methods.
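
For a single orphaned user, the two approaches look roughly like this (the user and login names here are placeholders):

-- SQL Server 2000: explicitly map one database user to one server login
EXEC sp_change_users_login 'Update_One', 'AppUser', 'AppUser'

-- SQL Server 2005/2008: the preferred syntax
ALTER USER AppUser WITH LOGIN = AppUser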

 

Quick script to map users to logins

 

If you’re sure that the usernames match the login names that they map to, here are a couple of quick scripts that you can run in each database you restore from your production server to your development server to map the orphaned users.  Yes, they’re cursors, but they run quickly.

 

SQL Server 2000

DECLARE @Username varchar(100), @cmd varchar(100)
DECLARE userLogin_cursor CURSOR FAST_FORWARD
FOR
SELECT UserName = name FROM sysusers
WHERE issqluser = 1 and (sid IS NOT NULL AND sid <> 0x0)
    AND suser_sname(sid) IS NULL
ORDER BY name
FOR READ ONLY
OPEN userLogin_cursor

FETCH NEXT FROM userLogin_cursor INTO @Username
WHILE @@fetch_status = 0
  BEGIN
    SET @cmd = 'EXEC sp_change_users_login ' + char(39) + 'AUTO_FIX' + char(39) + ', ' + @Username
    EXECUTE(@cmd)
    FETCH NEXT FROM userLogin_cursor INTO @Username
  END
CLOSE userLogin_cursor
DEALLOCATE userLogin_cursor

 

SQL Server 2005/2008

DECLARE @Username varchar(100), @cmd varchar(100)
DECLARE userLogin_cursor CURSOR FAST_FORWARD
FOR
SELECT UserName = name FROM sysusers
WHERE issqluser = 1 and (sid IS NOT NULL AND sid <> 0x0)
    AND suser_sname(sid) IS NULL
ORDER BY name
FOR READ ONLY
OPEN userLogin_cursor
 
FETCH NEXT FROM userLogin_cursor INTO @Username
WHILE @@fetch_status = 0
  BEGIN
    SET @cmd = 'ALTER USER ' + @Username + ' WITH LOGIN = ' + @Username
    EXECUTE(@cmd)
    FETCH NEXT FROM userLogin_cursor INTO @Username
  END
CLOSE userLogin_cursor
DEALLOCATE userLogin_cursor
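
If you’d rather avoid the cursor on 2005/2008, a set-based sketch that simply builds the ALTER USER statements (so you can review them before executing anything) would look something like this:

DECLARE @sql nvarchar(max)
SET @sql = N''

SELECT @sql = @sql + N'ALTER USER ' + QUOTENAME(name)
            + N' WITH LOGIN = ' + QUOTENAME(name) + N';' + CHAR(13)
FROM sys.database_principals
WHERE type = 'S'                          -- SQL users only
    AND sid IS NOT NULL AND sid <> 0x0
    AND SUSER_SNAME(sid) IS NULL          -- no matching server login = orphaned

PRINT @sql       -- review the statements, then run them if they look right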

I went to install SQL Server 2008 on a Windows Server 2008 R2 box today for the first time and was greeted with the error “You must use the Role Management Tool to install or configure Microsoft .NET Framework 3.5.”  The text of this error was about as decipherable as the voice of the adults in the old Peanuts cartoons, so I fired up my old friend Google to find out what to really do.  It turns out that Windows Server 2008 R2 ships with .NET Framework 3.5.1, and in order to proceed you need to go to Server Manager (that window that pops up when you log in and stays unless you close it) and enable (really, install) the .NET Framework 3.5.1 feature and any prerequisites.


Last week Microsoft released CTP3 for SQL Server 2008 R2.  At first blush, it doesn’t sound like it should be a huge release.  It is really a minor version release (10.5) as the next major release is scheduled to be SQL Server 2011, so one would expect to see maybe some additional or increased functionality, but not really any huge changes. 

 

As it turns out, there are some additional features with this release, but the big change is coming in the editions and pricing of SQL Server.  Microsoft is adding two additional “premium” editions of SQL Server, Datacenter Edition and Parallel Data Warehouse Edition, which are not only going to be more expensive than the Enterprise Edition, but will be required to take advantage of some of the key features of SQL Server 2008 R2, such as multi-server management of more than 25 instances, StreamInsight (complex event processing), Star Join Queries and parallel database copy (Parallel Data Warehouse Edition). 

 

The retail pricing for both of these new “premium” editions is set at $57,498 per processor, with no option for per-server-plus-CAL pricing.  Enterprise Edition pricing is set at $28,749 per processor or $13,969 per server with 25 CALs, while Standard Edition pricing is set at $7,499 per processor or $1,849 per server with 25 CALs.

 

The Enterprise Edition will be required to take advantage of new features such as PowerPivot, multi-server management of up to 25 instances and Master Data Services.

 

As part of SQL Server 2008 R2, Microsoft has announced that features such as backup data compression (which can reduce the size of backups by up to 60%) will be available in the Standard Edition where it was previously only available in Enterprise Edition.

PASS Summit Kool Aid

9 November 2009

Last week, I had the opportunity to attend the PASS Summit in Seattle.   As I had mentioned in a previous post, I was fairly anxious about attending, because I knew that there were going to be around 2,000 people there and I had met two in person.  Yep, 2 out of 2000.  Let’s just say that I wasn’t worrying about how to fit in time for catching up with those folks.

 

I knew that the PASS Summit would be a great learning opportunity.  I’ve attended Tech-Ed, SQL Server launches and other similar SQL Server events – the learning that occurred at those events was extremely valuable.  In looking at the sessions for the Summit, I knew that it was possible (probable) that my head would explode with newly gained knowledge.  There are plenty of folks that will be blogging about the sessions and all of the excellent speakers – I may be doing that in a future post, but that’s not my focus here.

 

Ohhhh yeeaaaahhhh!

My focus is on the PASS community.  While I already knew that there were helpful, friendly people who were already a part of PASS, I never thought that it would pervade the entire conference.  I had the opportunity to meet an incredible number of people – those whom I was familiar with through Twitter, blogs or forums and those whose faces and names were new to me.  In every instance, they were accessible and welcoming.  In turn, these experiences encouraged me to go seek out and introduce myself to others.  This was truly a community in the best sense of the word.

 

Tim and I have been talking about getting more involved and have discussed starting a PASS chapter here in Tucson.  The experiences of last week have made me see that this is not only doable, but necessary.  I’ve supped of the PASS kool aid and it was not only yummy, it’s my new favorite drink.

 

I’m looking forward to keeping in touch with the people that I was fortunate enough to meet and becoming more involved in PASS, both locally and virtually.   My hope is to share this community with others and help it to grow.

 

On a more personal note, there were a few individuals that went above and beyond the call of  (professional) duty last week.  I hope that I’ve let you all know personally how much your thoughts and prayers meant.  Tim and I were pleasantly surprised and touched by your willingness to listen and help.  We truly thank you from the bottom of our hearts.

 

On a completely unprofessional note, I was overjoyed to be a part of the karaoke jollification (yeah, it’s a word) on Thursday night.  I was impressed with the singing (and dancing) talents of this crew.  I’m just hoping that there are no incriminating pictures…

I had been thinking of writing a blog post on the SQL Server community for the last couple of weeks.  Seeing Brent Ozar’s blog post “What Community Means to Me” helped me decide to go forward with it.

In my first draft of this post, I went into great detail about the beginning of my career, my quest for meaningful, reliable sources of information and my wish for a view of a larger community.  Unfortunately, I’m trying to get ready for a birthday party, Halloween, soccer games and, oh yes, the PASS Summit.  So that’s another story for another time.

 

When I first signed up to attend the PASS Summit, my hope was that my darling husband would be able to attend with me.  Regardless of what Tim might say, I’m not outgoing enough to walk up and talk with people I’ve never met.  Yet I know that those conversations will probably be the parts that I remember most and best from the Summit.  Unfortunately, it wasn’t in the cards for Tim, and I’ll be attending solo.  I pictured three days of wandering around, trying to make conversation and going back to the hotel room to eat room service.

 

Enter the happy-happy-joy-joy land that is Twitter.  Tim and I both started using Twitter in April of this year.  It was interesting getting started – kind of like walking into a conference – you all have the same interests, but you don’t know anyone.  Slowly but surely we got involved.  Had some lively IM conversations at the spring SSWUG vConference in the Quest chat room, tried to write a rap song, got involved with PASS Virtual Chapters, started a blog, shared meals with a couple of great DBAs and got the kind of SQL Server advice and help that you can’t pay for.

 

Twitter is obviously not the only method for getting involved with the SQL Server community, but I’ve found it extremely helpful for becoming familiar with other people who do what we do.  By reading tweets and blog posts throughout the day, I’ve picked up tips and tricks and been exposed to features and functionality that I might not have been aware of otherwise.

 

It's not quite this, but close...

Now, in addition to attending some excellent sessions, I’m also looking forward to meeting a number of people that I’ve ‘met’ through Twitter.  I’ve felt more a part of the SQL Server community in the last six months than in the previous 5 1/2 years of working as a DBA.  It’s a great community and I talk about the benefits of being involved any time I can.  I still wish Tim could have come along, but I also know that I won’t be feeling as alone.  Maybe when I’m there, I’ll meet someone who hasn’t yet had the chance to get involved in the community and be able to pass this along to them.


The company I work for is currently in the midst of solidifying their SQL Server high availability and disaster recovery scheme.  While doing this, I did a comparison of all the available options within SQL Server for HA/DR for recommendation to my management.  We eventually went with a third party tool and this blog isn’t an endorsement of that tool, but rather is intended to give insight into how one company (the one I work for) approached looking at our disaster recovery options.  Since I had to put this together, I figured that maybe some of this might be helpful to someone else out there, so I thought I would write it up as a blog post.

 

What Do The Terms High Availability And Disaster Recovery Mean?

Before we get too deep into this, I figured I would define the terms High Availability and Disaster Recovery since they are quite often used together and sometimes, mistakenly, interchangeably.

 

High Availability – Protects against hardware/software issues that may cause downtime.  An example could be a bad processor or memory issue.

Disaster Recovery – The ability to recover from a catastrophic event, such as a natural disaster or, on a more local level, a data center disaster (fire, flood, cooling outage, etc) and be able to continue business operations with minimal loss and minimal downtime.

 

These two concepts really go hand in hand in ensuring business continuity and both should be built into a business continuity solution.

 

High Availability/Disaster Recovery Options In SQL Server

SQL Server has a number of native features built in to provide some basic measure of high availability/disaster recovery.  These include:

 

  • Database backups – this is probably the most basic form of disaster recovery for SQL Server and one that should be practiced in every situation, regardless of what other HA/DR solutions are in place.
  • Clustering – this provides a way of binding two or more like Windows servers together in what is known as a cluster.  Each of the servers in the cluster is considered a node and, typically, one node is “active” (processing transactions) and the other nodes are “passive”.  There is a private network that runs between the nodes so that if the active node fails to deliver a “heartbeat” that the other node(s) can detect, an automatic failover is invoked and one of the passive nodes is promoted to active.
    • Pros of Clustering:  Clustering provides redundancy in the case of server hardware failure and provides a fairly quick (within 5 minutes), automatic solution to move processing to another server.
    • Cons of Clustering
      • Does not protect against disk issues since all nodes share the database on the same disk.
      • Only protects against issues with that specific server, not data center-wide since all nodes are located in same data center.
      • Only addresses availability, not disaster recovery
  • Database Mirroring – new in SQL Server 2005, database mirroring offers a way to mirror a database to another server (and disk).  All transactions are sent to the mirror server as they are committed on the production server.  Depending on how it is implemented, it can automate failover, similar to clustering.  (A quick status query is sketched after this list.)
    • Pros of Database Mirroring
      • Provides some form of both HA and DR since mirror can be located in another data center, thus protecting you from hardware failure and disaster.
      • Fast.  The mirror is updated virtually instantly.
    • Cons of Database Mirroring
      • Only done at the database level, not the instance level, and only user databases can be mirrored, not system databases.  This means that some other form of synchronizing logins and other system database objects has to be devised.
      • To get all of the features of database mirroring, Enterprise Edition has to be used.
      • Any SQL Agent jobs must be manually enabled on the mirror if a failover takes place.
  • Log Shipping – this is one of the oldest forms of DR available in SQL Server.  It involves setting up a warm standby server with a copy of the user database that is to be protected; backups of the transaction log from the production database are periodically shipped to the standby server and applied.
    • Pros of Log Shipping:
      • Tried and true technology that has been around for a long time.
      • At the database level, can provide both HA and DR protection because warm standby can be located in another data center.
    • Cons of Log Shipping:
      • The amount of potential data loss is higher than with the other options because logs are usually shipped no more frequently than every 5 minutes and, typically, more like every 30 minutes to an hour.
      • Failover is fairly manual and time intensive.  Takes longer than other options to bring warm standby online.
      • Like database mirroring, this only protects a database, not the entire instance.
      • For SQL Server 2000, this feature is only available in Enterprise Edition.  Available in Standard Edition from SQL Server 2005 forward.
      • Does not transfer non-logged transactions or schema changes (security, addition of database objects, etc.).
  • Replication – while not necessarily intended as an HA/DR solution, replication can be used in both scenarios.
    • Pros of Replication:
      • Real-time data updates.  Minimal data loss, if any.
      • Can be used for both HA/DR as publisher and subscriber can be in different data centers.
    • Cons of Replication:
      • Complicated to setup and maintain.
      • No provided failover mechanism.  This has to be created as part of the solution.
      • Again, a database-specific solution, not instance-specific.
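
If you do go down the mirroring or log shipping road, a couple of quick checks against the system views and procedures (SQL Server 2005/2008) show the current state of things at a glance:

-- Database mirroring: role, state and safety level for any mirrored databases
SELECT DB_NAME(database_id) AS database_name,
       mirroring_role_desc,
       mirroring_state_desc,
       mirroring_safety_level_desc
FROM sys.database_mirroring
WHERE mirroring_guid IS NOT NULL

-- Log shipping: last backup/copy/restore activity as recorded by the monitor
EXEC msdb.dbo.sp_help_log_shipping_monitor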

Given that these native solutions were really only database and not instance based, we chose to look at third party options.  The product that we settled on was Double-Take.  While certainly not an easy solution to implement, Double-Take was attractive to us because it allowed us to set up a stand-by server in our hot site for each of our mission critical SQL Servers and then continuously replicate the entire instance to the stand-by server.  It also provides for either automated (if the server stops responding) or manual failover (we have opted for manual) through a control applet that automatically swaps DNS entries between the production and the standby server when a failover is initiated.

 

Double-Take:  How It Works

Both the production and the standby server have to have the exact same SQL Server configuration (Edition, build, directory structure/location, instances, etc) installed.  The Double-Take software is then installed on both the production and standby server and then, through the Double-Take software, the production server is configured as the source and the standby server is configured as the target.

 

During the configuration process, you can configure Double-Take to compress the data before it replicates it over the WAN to the standby server.  This can save a ton of bandwidth and helps make sure that the transactions are queued on the target server as quickly as possible, ensuring minimal data loss in the event of a failure.

 

Additionally, Double-Take will also generate scripts for the failover, failback, and restore of the databases back to the production server when it is back in commission.  These scripts and/or the replication can be customized by overriding the automatically generated rules that Double-Take creates.

More tidbits

23 October 2009

Work has been hectic for both of us, so please pardon the absence of full-bodied blog posts.  Until we get our feet under us, here are a couple of (hopefully) helpful tidbits.

 

Since you've been good and patient, you can have a treat

Sys.dm_exec_requests

I’m not the most patient person in the world and have a tendency to think that a process has stopped when it’s actually processing away.  A few months ago, I needed to run a DBCC on a database with corruption errors.  This wasn’t a large database and the DBCC had already been running for about 2 1/2 hours.  I put something out on Twitter about whether I should continue the DBCC (which I knew that I should) or worry about whether something had gone horribly wrong.  Paul Randal quickly tweeted back to not stop the DBCC and to check to see how far it had gone.  That’s when he gave me the information that has helped me through a number of waiting games:

 

sys.dm_exec_requests is a DMV that returns a row for every request that is currently executing.  Since I was certain that this was the only DBCC CHECKDB running on this server, the following query eased my mind:

 

select percent_complete
 from sys.dm_exec_requests
 where command = 'dbcc checkdb'

 

Keeping an eye on that let me know that the process was progressing and kept me from making a stupid mistake.  I’ve since used it on long-running processes like backups and restores.  It also provides a way to tell users that, yes, this is running and it’s x% done.
One caveat – the percent_complete column is only populated for specific operations like backups, restores, rollbacks, DBCCs, etc.  It is not populated for ordinary queries.
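
A slightly fuller version of that query also gives you a rough time-to-completion – rough, because the estimate assumes the operation keeps progressing at a steady rate, which it rarely does exactly:

SELECT session_id,
       command,
       percent_complete,
       start_time,
       estimated_completion_time / 1000 / 60 AS estimated_minutes_remaining
FROM sys.dm_exec_requests
WHERE command IN ('DBCC CHECKDB', 'BACKUP DATABASE', 'RESTORE DATABASE')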

 

Altering a table in SSMS 2008 (without t-sql)

While I know that we’d all rather use an sqlcmd query through a DAC connection to do anything SQL Server related, there may be times that it might be more expeditious to make the change within SSMS using the Design option.

 

If you’re in SQL Server 2008 and the table is populated, making changes within the Design option may cause an error stating that saving changes is not permitted because the change would require the table to be dropped and re-created.


While we were able to do this in SQL Server 2005, it seems to be unavailable by default in 2008.  Fortunately, you can still do this in SSMS; you just need to set the option to make it available.

 

In SSMS, go to Tools->Options->expand Designers.  Then under Table and Database Designers, uncheck the box next to ‘Prevent saving changes that require table re-creation’.  Voila!  You can now make all the changes that you want. 

 

As a last minute bit of news, Tim just found out that he’s now a Friend of RedGate.  Congratulations, Tim!

This is a quickie blog post, but I thought I’d post it before I forgot about it.

 

We have transactional replication turned on for one of our production databases.  This database includes XML fields and we recently saw the following error:

 

           Length of LOB data (78862) to be replicated exceeds configured maximum 65536

 

Looking into it, this is a server setting and the default value for Max Text Replication Size is 65536 (bytes).   Because our XML data is of varying sizes, I made the decision to set this to the maximum allowable value – 2147483647 (bytes). 

 

To change it through SSMS:

  • Right click on the server and choose Properties
  • On the Advanced page, change the Max Text Replication Size to 2147483647

In T-SQL:

EXEC sp_configure 'max text repl size', 2147483647
GO
RECONFIGURE
GO
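
To double-check the change afterwards, sys.configurations shows both the configured value and the value currently in use:

SELECT name, value, value_in_use
FROM sys.configurations
WHERE name LIKE 'max text repl size%'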
 
Hope this helps someone out!
This weekend, one of my co-workers passed away.  He was 33 years old with a wife and toddler at home and a baby on the way.  I didn’t know him well, but we had worked together on a few projects and he was always very knowledgeable, thorough and helpful.  His passing was very unexpected and, as these things do, it caused me to think about my own life and my priorities.

 

Like many DBAs, the servers that Tim and I manage need to be available 24/7.  Tim is on call every third week, but being a lead, he often needs to step in on weeks that he isn’t on call.  I’m the sole DBA at my company.  We’re both proud of being dedicated professionals and we work hard to keep our current systems available as well as keeping our skills updated so that we can provide the best solutions possible.  I firmly believe that we’re setting a good example for our children in showing them the responsibility that we take in our positions.

 

The problem that we both face is in knowing when to step out of our work personas and focus on our family.  Tim is an excellent father and I work hard to be a good mom, but I know that there are many times that I’m talking about work, thinking about work, worrying about work when I should be more engaged as a wife and mother.  While I know that we both provide benefit to our businesses, I also recognize that, if we left, we would be replaced and work would continue as usual.  The time and effort that we put into our time together and our time as parents will shape all of us for the rest of our lives.

 

There isn’t an easy solution to this problem.  It’s not always that simple to walk out the door (especially for Tim, who works at home) and turn off the DBA part.  There will be times that I need to focus on an issue even after leaving work in order to sort it out, but I’m going to make the effort to do that only when it’s necessary.   I need to keep in mind that my first job is to take care of my family.  If I do that, the rest of life will work itself out.

A co-worker of mine had the following saying up on their wall:

 

                              Remember, the people that you work for are waiting for you at home.

 

I need to keep that in mind.


A couple of weeks ago, I tweeted a question out to the community about whether there was any way to run PowerShell scripts from within a SQL Server Reporting Services 2008 DataSet query.  The answers that I received back were discouraging.  Even though Microsoft is touting PowerShell as the best thing since sliced bread (I have to admit, it is pretty darned cool!) and integrating it into all of its current products, the integration with SQL Server 2008 seems to have stopped at giving you a mini PowerShell console (SQLPS) and a SQL Server provider that makes it easier to run SQL Server commands from within PowerShell.  This integration hasn’t gone beyond the database engine, so if you want to run PowerShell scripts from within Reporting Services, you have to get creative.  That posed a big problem for me because the report I was writing depended on some PowerShell scripts that I had written.

 

After walking away from the problem for an hour or two, it finally hit me: since a Reporting Services 2008 DataSet query runs T-SQL code, including stored procedures, why not just write a stored procedure that runs a PowerShell script?  Below is the stored procedure that I wrote.  It is really pretty simple.  It takes as a parameter the command line that you would normally type in at a PowerShell prompt to run your script.  This includes the script path\name and any parameters that the PowerShell script requires.

 

USE [MyDatabase]
GO

IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[Utility].[RunPowerShellScript]') AND type in (N'P', N'PC'))
DROP PROCEDURE [Utility].[RunPowerShellScript]
GO
 

USE [MyDatabase]
GO
 

SET ANSI_NULLS ON
GO
 SET QUOTED_IDENTIFIER ON
GO
 

CREATE PROCEDURE [Utility].[RunPowerShellScript]
      @Script           varchar(2000)
AS

set nocount on;

declare @cmd  varchar(2000);
set @cmd = 'sqlps -c ' + '"' + @Script + '"';
exec master..xp_cmdshell @cmd, no_output;
GO

 

In the above code, “[MyDatabase]” of course refers to the database that you want this stored procedure to live in.  Walking through the code, all this script really does is create a stored procedure called Utility.RunPowerShellScript that runs the xp_cmdshell extended stored procedure with a command string that calls the SQL Server PowerShell mini console (SQLPS) in command-line mode and passes along whatever you passed into the stored procedure as a parameter.  For my own purposes, I have created a schema called “Utility” so that I can easily identify stored procedures, such as this one, as helper stored procedures.  Feel free to omit this if you like.  Here is an example of how you could use this stored procedure after you have created it.  Say you wanted to run a PowerShell script called “Get-DiskSpace.ps1” that resides in the D:\Scripts folder and returns total size and free space information for the drives on a server that you pass in.  All you would need to do is type:

 

exec DBA_Console.Utility.RunPowerShellScript '"D:\Scripts\Get-DiskSpace.ps1 -Server MyServer"'

  

Where “MyServer” is the name of the server that you are passing into the PowerShell script as a parameter.
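
One prerequisite worth mentioning: xp_cmdshell is disabled by default in SQL Server 2005/2008, so it needs to be enabled (with the blessing of whoever owns your security policy) before this stored procedure will work:

-- Enable xp_cmdshell; both settings are instance-wide
EXEC sp_configure 'show advanced options', 1
RECONFIGURE
GO
EXEC sp_configure 'xp_cmdshell', 1
RECONFIGURE
GO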

 

That’s it.  Short and simple, and now I have a mechanism to call all of the PowerShell scripts that I want from within a Reporting Services DataSet query.