This post about an issue between SQL Server 2008 R2 vs SQL Server 2012 has been moved to blogs.sqlsentry.com

 | Posted by loriedwards | Categories: Speaking, T-SQL |

Unlike the similar Reese’s dilemmas, these results were far less favorable…

Reese's are yummy


As many of these issues do, this one began with a developer putting together a procedure.  When he ran it locally, it completed in a matter of seconds.  For some reason, though, he wanted the procedure run from a remote server.  When he attempted to run that same procedure from the remote server, it took a number of minutes.  It was apparent that the problem wasn’t the result set being returned, as there were OLEDB waits while the query was processing.

 

Looking at the query, it wasn’t anything spectacular.  It dumped some data into a temp table, joined that table to an existing table, and displayed the result.  The problem ended up being the temp tables.  While the query was being executed against the remote server, the temp tables were being created on the server the query was running on.  So pulling the data into the temp table and joining the two tables all had to occur with data being pulled across the network.

 

The fix was pretty simple – change the procedure to create a table on the remote server and drop it once the procedure was finished.  With that change, the procedure ran in the expected time.
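Just to illustrate the pattern (with hypothetical table and server names; the actual procedure isn’t shown here), the difference looks something like this, assuming a linked server with RPC Out enabled:

-- Slow pattern: the temp table lives locally, so every row crosses the network
SELECT col1, col2
INTO #Staging
FROM RemoteServer.SomeDb.dbo.BigTable;

-- Faster pattern: do the staging and the join where the data lives
EXEC ('
	SELECT col1, col2 INTO dbo.StagingWork FROM dbo.BigTable;
	SELECT s.col1, e.col3
	FROM dbo.StagingWork s
	JOIN dbo.ExistingTable e ON e.col1 = s.col1;
	DROP TABLE dbo.StagingWork;
') AT RemoteServer;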

 | Posted by tledwards | Categories: SQLServerPedia, T-SQL |

Okay, maybe I’m being a little sarcastic.  I don’t troubleshoot dynamic SQL very often, so I don’t always see potential issues right away.  For those dear readers who work with it regularly, you should stop reading now – this is all pretty basic – but it took a few minutes out of my day.

 

This is the only dyna- that I like


My troubleshooting methods consist of displaying the command created by the dynamic SQL and seeing if it runs correctly or if I’m missing a quotation mark or something along the way.  There is probably a better way to troubleshoot, but again, I play with it so rarely that I’m stuck in a rut.

 

Evaluating Dynamic SQL commands

Late last week, a developer sent the following block of dynamic SQL code because he was having issues getting it to work:

EXEC
('
USE [master];
BEGIN
ALTER DATABASE [random_dbname] SET ONLINE;
WAITFOR DELAY ''00:01'';
END
USE [random_dbname];
'
)

 

I followed my normal troubleshooting methods and the expanded command ran fine on its own.  Trying to execute it as above, though, I received the following error message:

 

Msg 942, Level 14, State 4, Line 7
 Database 'random_dbname' cannot be opened because it is offline.

 

At first glance, I was confused, because the batch clearly brought the database online before trying to use it.  I soon realized, though, that everything within the parentheses was being evaluated prior to being executed, so the final USE statement failed because the database was still offline at that point.  Apparently SQL Server has a shorter memory than I do.

 

Breaking it into two separate statements, like below, accomplishes what needed to happen:

EXEC
('
-- Bring the database online
USE [master];
BEGIN
ALTER DATABASE [random_dbname] SET ONLINE;
WAITFOR DELAY ''00:01'';
END
')
GO

EXEC
('
USE [random_dbname];
/*blah blah blah*/
')

Thinking that this ‘emergency’ had been handled, I went back to my other tasks. 

 

Database Context and Dynamic SQL

As these things happen, though, I received another call: after he ran all of this, the database context remained master.  Fortunately, this was easy to explain.  The database context switch exists only during the execution of the EXEC statement and does not persist after its completion.
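A minimal illustration of that behavior, assuming a database named random_dbname exists and is online:

USE [master];
EXEC ('USE [random_dbname]; SELECT DB_NAME() AS InsideExec;');
SELECT DB_NAME() AS AfterExec;	-- still returns 'master'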

 

None of this is rocket science or even deep SQL knowledge, but maybe it’ll save a minute or two for some other DBA out there.

 | Posted by tledwards | Categories: DBAs, SQLServerPedia, T-SQL |

Several weeks ago on Twitter, Colin Stasiuk (Blog | Twitter) asked if anyone had a script to pull back job statuses from the SQL Server Agent.  I had been doing some work on a SQL Server DBA console and had written some PowerShell scripts to gather various pieces of information, so I put together a PowerShell script that could be run as a SQL Agent job to periodically report the statuses of SQL Agent jobs.  Eventually, I think Colin went with a different solution, but I figured I would go ahead and post the PowerShell script that I came up with.  This solution has been tested against SQL Server 2000/2005/2008.

 

This first script is just a SQL script to create the SQL Server table that the PowerShell script writes the status information to and can be downloaded here, LastJobStatus_Table.sql.

CREATE TABLE [dbo].[LastJobStatus](
	[ServerName] [nvarchar](128) NULL,
	[Job_Name] [nvarchar](128) NULL,
	[Run_Date] [datetime] NULL,
	[Job_Duration] [time](7) NULL,
	[Run_Status] [varchar](50) NULL,
	[Sample_Date] [datetime] NULL
) ON [PRIMARY]

 

The next is the PowerShell script that does all of the work of bringing back the SQL Agent job statuses.  It takes a parameter of Server, which is the name of the SQL Server that you want job statuses from.  Make sure that you change the name “MgtServer” to the name of the SQL Server where you intend to store the results from this script.  You’ll also need to change the root directory where your scripts are located to match your environment.  This script can be downloaded here, Get-LastJobStatusServer.ps1.

param([string]$Server=$(Throw "Parameter missing: -Server ServerName"))
[void][reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")

##Point to library files
$scriptRoot = "D:\DBAScripts"

# Change the script root directory to match your environment
#$scriptRoot = Split-Path (Resolve-Path $myInvocation.MyCommand.Path)
. $scriptRoot\LibrarySmo.ps1	#Part of Chad Miller's SQLPSX project on CodePlex
. $scriptRoot\DataTable.ps1	    #Included with the scripts for this blog, also from a CodePlex project (http://www.codeplex.com/PSObject)

Set-Alias -Name Test-SqlConn -Value D:\DBAScripts\Test-SqlConn.ps1

##Define variables

## open database connection
$conn = New-Object System.Data.SqlClient.SqlConnection("Data Source=$server;
Initial Catalog=master; Integrated Security=SSPI")
$conn.Open()
$cmd = $conn.CreateCommand()
$cmd.CommandText= "	CREATE TABLE #JobsRun (	ServerName nvarchar(128),
											Job_Name nvarchar(128),
											Run_Date datetime,
											Job_Duration time(7),
											Run_Status varchar(50),
											Sample_Date datetime
										  );
					insert into #JobsRun
					select	@@SERVERNAME AS ServerName
							,j.name Job_Name
							,(msdb.dbo.agent_datetime(jh.run_date,jh.run_time)) As Run_Date
							,substring(cast(run_duration + 1000000 as varchar(7)),2,2) + ':' +
									substring(cast(run_duration + 1000000 as varchar(7)),4,2) + ':' +
									substring(cast(run_duration + 1000000 as varchar(7)),6,2) Job_Duration
							,case when run_status = 0
										then 'Failed'
								when run_status = 1
										then 'Succeed'
								when run_status = 2
										then 'Retry'
								when run_status = 3
										then 'Cancel'
								when run_status = 4
										then 'In Progress'
							end as Run_Status
							,GETDATE() As Sample_Date
					FROM msdb.dbo.sysjobhistory jh
						join msdb.dbo.sysjobs j
							on jh.job_id = j.job_id
					where	step_id = 0
					and		enabled = 1
					order by cast(cast(run_date as char) + ' ' +
								substring(cast(run_time + 1000000 as varchar(7)),2,2) + ':' +
								substring(cast(run_time + 1000000 as varchar(7)),4,2) + ':' +
								substring(cast(run_time + 1000000 as varchar(7)),6,2)  as datetime) desc

					delete from MgtServer.DBA_Console.dbo.LastJobStatus where ServerName = '$server'
					-- Change 'MgtServer' to the name of whatever SQL Server in your environment houses
					-- the LastJobStatus table that stores the results of this script
					insert into MgtServer.DBA_Console.dbo.LastJobStatus (ServerName, Job_Name, Run_Date, Job_Duration, Run_Status, Sample_Date)
					select	jr.ServerName,
							jr.Job_Name,
							jr.Run_Date,
							jr.Job_Duration,
							jr.Run_Status,
							jr.Sample_Date
					from	#JobsRun jr
					where	Run_Date = (	select	max(jr1.Run_Date)
											from	#JobsRun jr1
											where	jr1.Job_Name = jr.Job_Name)
					drop table #JobsRun; "
$cmd.ExecuteNonQuery()
$conn.Close()

 

There are references in the above script to a LibrarySMO.ps1 script that can be obtained from CodePlex (see the comments in the script for the URL) and a DataTable.ps1 script (also from CodePlex, but included in the download file for this blog for your convenience).

# Taken from out-dataTable script from the PowerShell Scripts Project
# http://www.codeplex.com/PsObject/WorkItem/View.aspx?WorkItemId=7915

Function out-DataTable {

  $dt = new-object Data.datatable
  $First = $true  

  foreach ($item in $input){
    $DR = $DT.NewRow()
    $Item.PsObject.get_properties() | foreach {
      if ($first) {
        $Col =  new-object Data.DataColumn
        $Col.ColumnName = $_.Name.ToString()
        $DT.Columns.Add($Col)       }
      if ($_.value -eq $null) {
        $DR.Item($_.Name) = "[empty]"
      }
      elseif ($_.Value -is [array]) {
        # Join array values into a single semicolon-delimited string
        $DR.Item($_.Name) = [string]::Join(";", $_.Value)
      }
      else {
        $DR.Item($_.Name) = $_.value
      }
    }
    $DT.Rows.Add($DR)
    $First = $false
  } 

  return @(,($dt))

}

We’ve been in the process of setting up reporting servers for some of our larger clients.  We’re also in the process of setting up DEV and QA environments (yes, Colin, I know, I know).  This involves restoring databases onto the new servers.

 

Transferring logins

There is a fairly detailed knowledge base article from Microsoft outlining how to transfer logins between servers.  This post is going to outline the gist of that article.  The first step is creating the sp_hexadecimal stored procedure in the master database.  This stored procedure will be used by sp_help_revlogin to create a hashed value for the login password.  The second step is to create the sp_help_revlogin procedure; there is a script for SQL Server 2000 and one for SQL Server 2005/2008.  The output from running the sp_help_revlogin procedure on the source server is a series of CREATE LOGIN statements for all of the existing logins.
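Once those two procedures exist in master on the source server, generating the script is a one-liner (a sketch; the procedures themselves come from the knowledge base article):

USE [master];
GO
-- Outputs CREATE LOGIN statements (with hashed passwords and original SIDs) for every login
EXEC sp_help_revlogin;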

 

There are three main benefits for using this method to transfer logins:

  1. It’s fast.  No scripting out individual logins.
  2. If the logins and users are from the same server, the user and login SIDs will match, avoiding any ‘orphaned users’.
  3.  The passwords come across with the logins.  Definitely preferable to keeping an Excel spreadsheet tucked away with all of the logins and associated passwords.

 

As with any process that is automatically generated, it’s best to review the scripts carefully before running them on the destination server.  A couple of issues that you need to watch out for are:

  • If a login has a defined default database that doesn’t exist on the destination server, obviously the CREATE LOGIN statement will fail (see the quick check after this list).
  • If there are existing logins on the destination server, there is the possibility that a SID may already be in use.  In that case, you will need to create a new login and map the existing users.
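One way to get ahead of that first issue is to list each login’s default database on the source server before running the generated script.  A sketch for SQL Server 2005/2008:

-- List each login's default database so you can spot any that won't exist on the destination
SELECT name, default_database_name
FROM sys.server_principals
WHERE type IN ('S', 'U', 'G')
ORDER BY default_database_name;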

Orphaned Users


One issue that comes up when transferring logins is that the database users may not be mapped to the server logins.  This occurs when there isn’t a corresponding login SID for the user SIDs in the database.  If the database(s) being restored are from a single server and the logins were transferred from that same server, the SIDs will match.  If the SIDs do not match, the database users will not be attached to a server login and the user is said to be ‘orphaned’.

 

 There are some cases where orphaned users can’t be avoided.  In development environments, the databases on the destination server may be consolidated from more than one production server, logins may overlap and, as a result, the login and user SIDs don’t match.  Another instance would be when a production database is restored to a development server and the rights for the users in the development environment are different from their rights in the production environment.  For both of these cases, steps will need to be taken to match database users with their respective logins on the server.

 

 To determine whether there are any orphaned users, run the following stored procedure:

EXEC sp_change_users_login ‘report’

 

There are a few ways to handle these orphaned users.  The sp_change_users_login stored procedure can be used to update the connection between logins and users in SQL Server 2000.  While sp_change_users_login has an Auto_Fix option, that action assumes that the username and login name match; if they don’t, it will fail.  Using the Update_One option is a safer and preferable way to handle it.  For SQL Server 2005/2008, the ALTER USER statement is the preferred method for mapping users to logins.  Greg Low’s article ‘Much ado about logins and SIDs’ provides a good explanation of these methods.
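As a quick illustration of both options, with hypothetical user and login names:

-- SQL Server 2000: explicitly map database user 'AppUser' to server login 'AppLogin'
EXEC sp_change_users_login 'Update_One', 'AppUser', 'AppLogin';

-- SQL Server 2005/2008: the preferred syntax
ALTER USER AppUser WITH LOGIN = AppLogin;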

 

Quick script to map users to logins

 

If you’re sure that the usernames match the login names that they map to, here are a couple of quick scripts that you can run in each database that you restore from your production server to your development server to map the orphaned users – yes, they’re cursors, but they run quickly.

 

SQL Server 2000

DECLARE @Username varchar(100), @cmd varchar(100)
DECLARE userLogin_cursor CURSOR FAST_FORWARD
FOR
SELECT UserName = name FROM sysusers
WHERE issqluser = 1 and (sid IS NOT NULL AND sid <> 0x0)
    AND suser_sname(sid) IS NULL
ORDER BY name
FOR READ ONLY
OPEN userLogin_cursor

FETCH NEXT FROM userLogin_cursor INTO @Username
WHILE @@fetch_status = 0
  BEGIN
    SET @cmd = 'EXEC sp_change_users_login ' + char(39) + 'AUTO_FIX' + char(39) + ', ' + @Username
    EXECUTE(@cmd)
    FETCH NEXT FROM userLogin_cursor INTO @Username
  END
CLOSE userLogin_cursor
DEALLOCATE userLogin_cursor

 

SQL Server 2005/2008

DECLARE @Username varchar(100), @cmd varchar(100)
DECLARE userLogin_cursor CURSOR FAST_FORWARD
FOR
SELECT UserName = name FROM sysusers
WHERE issqluser = 1 and (sid IS NOT NULL AND sid <> 0x0)
    AND suser_sname(sid) IS NULL
ORDER BY name
FOR READ ONLY
OPEN userLogin_cursor
 
FETCH NEXT FROM userLogin_cursor INTO @Username
WHILE @@fetch_status = 0
  BEGIN
    SET @cmd = 'ALTER USER ' + @username + ' WITH LOGIN = ' + @username
    EXECUTE(@cmd)
    FETCH NEXT FROM userLogin_cursor INTO @Username
  END
CLOSE userLogin_cursor
DEALLOCATE userLogin_cursor

More tidbits

23 October 2009

Work has been hectic for both of us, so please pardon the absence of full-bodied blog posts.  Until we get our feet under us, here are a couple of (hopefully) helpful tidbits.

 

Since you've been good and patient, you can have a treat


Sys.dm_exec_requests

I’m not the most patient person in the world and have a tendency to think that a process has stopped when it’s actually processing away.  A few months ago, I needed to run a DBCC on a database with corruption errors.  This wasn’t a large database and the DBCC had already been running for about 2 1/2 hours.  I put something out on Twitter about whether I should continue the DBCC (which I knew that I should) or worry about whether something had gone horribly wrong.  Paul Randal quickly tweeted back to not stop the DBCC and to check to see how far it had gone.  That’s when he gave me the information that has helped me through a number of waiting games:

 

Sys.dm_exec_requests is a DMV that returns a row for every request that is currently executing.    Since I was certain that this was the only dbcc checkdb running on this server, the following query eased my mind:

 

select percent_complete
 from sys.dm_exec_requests
 where command = 'dbcc checkdb'

 

Keeping an eye on that let me know that the process was progressing and kept me from making a stupid mistake.  I’ve since used it on long-running processes like backups and restores.  It also provides a way to tell users that, yes, this is running and it’s x% done.
One caveat – the percent_complete column is only populated for specific operations like backups, restores, rollbacks, DBCCs, etc.  It is not populated for ordinary queries.
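If you also want a rough idea of how long is left, the same DMV exposes estimated_completion_time (in milliseconds) for those same operations.  A sketch, filtered to backups and restores:

select command, percent_complete,
	estimated_completion_time / 60000 as estimated_minutes_remaining
from sys.dm_exec_requests
where command in ('BACKUP DATABASE', 'RESTORE DATABASE')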

 

Altering a table in SSMS 2008 (without t-sql)

While I know that we’d all rather use an sqlcmd query through a DAC connection to do anything SQL Server related, there may be times when it’s more expeditious to make the change within SSMS using the Design option.

 

If you’re in SQL Server 2008 and the table is populated, making any changes within the Design option may cause the following error:

[Screenshot of the SSMS error dialog: “Saving changes is not permitted. The changes you have made require the following tables to be dropped and re-created.”]

While we were able to do this in SQL Server 2005, it seems to be unavailable by default in 2008.  Fortunately, you can still do this in SSMS; you just need to set the option to make it available.

 

In SSMS, go to Tools->Options->expand Designers.  Then under Table and Database Designers, uncheck the box next to ‘Prevent saving changes that require table re-creation’.  Voila!  You can now make all the changes that you want. 

 

As a last minute bit of news, Tim just found out that he’s now a Friend of RedGate.  Congratulations, Tim!

A couple of weeks ago, I tweeted a question out to the community about whether there was any way to run PowerShell scripts from within a SQL Server Reporting Services 2008 DataSet query.  The answers that I received back were discouraging.  Even though Microsoft is touting PowerShell as the best thing since sliced bread (I have to admit, it is pretty darned cool!) and integrating it into all of its current products, the integration with SQL Server 2008 seems to have stopped at giving you a mini PowerShell console (SQLPS) and a SQL Server provider for easier access to running SQL Server commands from within PowerShell.  This integration hasn’t gone beyond the database engine, so if you want to run PowerShell scripts from within Reporting Services, you have to get creative.  That posed a big problem for me because the report I was writing depended on some PowerShell scripts that I had written.

 

After walking away from the problem for an hour or two, it finally hit me.  Since a Reporting Services 2008 DataSet query runs T-SQL code, including stored procedures, why not just write a stored procedure that I can use to run a PowerShell script?  Below is the stored procedure that I wrote.  It is really pretty simple.  It takes, as a parameter, the command-line command that you would normally type at a PowerShell command line to run your script.  This would include the script path\name and any parameters that the PowerShell script requires.

 

USE [MyDatabase]
GO

IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[Utility].[RunPowerShellScript]') AND type in (N'P', N'PC'))
DROP PROCEDURE [Utility].[RunPowerShellScript]
GO
 

USE [MyDatabase]
GO
 

SET ANSI_NULLS ON
GO
 SET QUOTED_IDENTIFIER ON
GO
 

CREATE PROCEDURE [Utility].[RunPowerShellScript]
      @Script           varchar(2000)
AS

set nocount on;

declare @cmd  varchar(2000);
set @cmd = 'sqlps -c ' + '"' + @Script + '"'
exec master..xp_cmdshell @cmd, no_output;
GO

 

In the above code, “[MyDatabase]” of course refers to the database that you would want this stored procedure to be stored in.  So, walking through the code, all this script really does is create a stored procedure called Utility.RunPowerShellScript that runs the xp_cmdshell extended stored procedure with a command string that calls the SQL Server PowerShell mini console (SQLPS) in command line mode and passes to that command line whatever you passed into the stored procedure as a parameter.  For my own purposes, I have created a schema called “Utility” so that I can easily identify stored procedures, such as this one, as helper stored procedures.  Feel free to omit this if you like.  So an example of how you could use this stored procedure after you have created it would be as follows.  Say you wanted to run a PowerShell script called “Get-DriveSpace” that returns total size and free space information for the drives on a server that you pass in and resides in the D:\Scripts folder.  All you would need to do is type:

 

exec DBA_Console.Utility.RunPowerShellScript '"D:\Scripts\Get-DriveSpace.ps1 -Server MyServer"'

  

Where “MyServer” is the name of the server that you are passing into the PowerShell script as a parameter.

 

That’s it.  Short and simple and now I have a mechanism to call all of the PowerShell scripts that I want from within a Reporting Service DataSet query.

Ever been frustrated with the way that SQL Server stores the run dates and times for jobs scheduled and run by the SQL Server Agent in the sysjobs, sysjobhistory, etc. tables?  SQL Server stores the date and the time in separate columns in these tables, as data type int, which can be a real pain to deal with if you need to compare them to an actual date.  In SQL Server 2005, there is a little-known and sparsely documented function called msdb.dbo.agent_datetime that you can use to create a real datetime out of these columns.  Using it is pretty simple.  Say you want to see if the run_date and run_time from the sysjobhistory table are greater than yesterday’s date.  The syntax for that would be:
 
select a.name, (msdb.dbo.agent_datetime(b.run_date, b.run_time)) as RunTime
from msdb.dbo.sysjobs a inner join msdb.dbo.sysjobhistory b
on a.job_id = b.job_id
where (msdb.dbo.agent_datetime(b.run_date, b.run_time) > getdate()-1)

It is pretty simple to use, but an extremely powerful tool.  Unfortunately, SQL Server 2000 stores the agent dates the same way, but it doesn’t come with this function.  No worries, below is the code should you ever need to create this function on a SQL Server 2000 instance:
CREATE FUNCTION agent_datetime(@date int, @time int)
RETURNS DATETIME
AS
BEGIN

RETURN
(
CONVERT(DATETIME,
CONVERT(NVARCHAR(4),@date / 10000) + N'-' +
CONVERT(NVARCHAR(2),(@date % 10000)/100) + N'-' +
CONVERT(NVARCHAR(2),@date % 100) + N' ' +
CONVERT(NVARCHAR(2),@time / 10000) + N':' +
CONVERT(NVARCHAR(2),(@time % 10000)/100) + N':' +
CONVERT(NVARCHAR(2),@time % 100),
120)
)
END

You can query everything…

16 September 2009
The possibilities are endless


Well, probably not everything, but there are some times that you might want to execute a query against a server other than the one that you’re currently on. Let’s go further and say that you don’t want to use the servername.dbname.schema.table model.

EXECUTE is used to pass commands to linked servers (http://msdn.microsoft.com/en-us/library/ms188332.aspx ). When executing a query, the syntax is:

EXEC('some random query') AT servername – note that the query within the parentheses needs to be quoted.

Now here is the part that I found cool (okay, I said it. It’s geeky and yet still cool). The server that you’re querying against can be passed as a variable. For example, I want to know how many databases are on MyServer.

declare @server varchar(30), @cmd varchar(100)
set @server = 'MyServer'
set @cmd = 'EXEC (''SELECT count(name) from sys.databases'') AT ' + @server
exec(@cmd)

Because I’m easily amused, finding that out pretty much made my day.  In my thirst for knowledge, though, I thought I’d dig deeper.  Maybe I have a table (server_inv – id identity, name varchar) that holds all of my servers and I want to figure out how many databases I have on each of them AND I want the server names to be returned as well.  I’m never satisfied.  So here’s how I’d do that:

SET NOCOUNT ON
declare @next_one int,
@this_one int,
@counter int,
@server varchar(20)

SELECT @counter = 1

SELECT @next_one = MIN(id)
FROM server_inv

SELECT @this_one = id,
@server = name
FROM server_inv
WHERE id = @next_one

WHILE @counter = 1
BEGIN
declare @sqlcmd varchar(1000)
set @sqlcmd = 'EXEC (''SELECT ''''' + @server + ''''', count(name) from sys.databases'') AT ' + @server
exec(@sqlcmd)

SELECT @next_one = NULL

SELECT @next_one = MIN(id)
FROM server_inv
WHERE id > @this_one

IF ISNULL(@next_one,0) = 0
BEGIN
BREAK
END

SELECT @this_one = id,
@server = name
FROM server_inv
WHERE id = @next_one
END
RETURN

Now, I understand that this is a pretty simplistic example, but hopefully it’s an example of what can be done. I’ll admit that I haven’t fully played with this EXECUTE command and there may be limitations to its functionality.

It’s probably far easier to put together a Central Management Server if you’re on SQL Server 2008, but that’s a post for another day.

 | Posted by tledwards | Categories: SQLServerPedia, T-SQL |