As many of you have probably noticed, I haven’t blogged in quite a while due to work, health and family commitments (I don’t want to go on too long with excuses :)), but I decided that a perfect first post back would be to build on the stored procedure that a friend and his consulting group have so graciously shared with the community. I am, of course, speaking of Brent Ozar’s sp_BLITZ stored procedure, intended to help a DBA see what they are inheriting when someone dumps a new server on them. I took a different twist on this and figured it would be a great tool to run periodically against all of the SQL Servers in an environment by creating a report around it.

Some Background

I work as the lead DBA in an environment with over 160 SQL Server instances (a number that seems to grow by at least five every quarter) and somewhere in excess of 2,000 databases, ranging in size from what some of you might consider laughably small to some rather large data warehouse databases, many of which are mission critical to our business. To manage this environment, I lead a team of two other DBAs. One started with the company the week after I did and the other, a junior DBA, has been with us just over a year. We have a great team, but even with three DBAs, it is hard to be proactive without some tools to tell you what is going on. Unfortunately, we don’t have the budget for the major monitoring tools, as the cost for an environment our size would be rather substantial. Needless to say, it is left to me and my team to be creative and build our own tools and instrumentation. That is where Brent’s sp_BLITZ script comes in. With a report written around it that my junior DBA can go through on a weekly or monthly basis, we can be much more proactive about the basic, fundamental settings that someone who shouldn’t have access inevitably changes without our knowledge.

 

The Report

So, the report itself is pretty simple. It does require a server that has linked servers to all of your servers (we have a centralized DBA server that we use for this), and the sp_BLITZ script, which can be downloaded from Brent’s site, has to be installed on each of those servers. This is a perfect complement to the SQL Server 2008 Central Management Server feature that we have set up on our DBA monitoring server. In the report, I created two datasets: one that queries a table we maintain with an inventory of all of our SQL Servers, which feeds the “Server Name” report parameter, and a second that actually runs the sp_BLITZ stored procedure on the server chosen from the dropdown. Brent has a great video on exactly what his script does at http://www.brentozar.com/blitz/. This report just gives you a format that you can run off of your Reporting Services site, or even schedule in a Reporting Services subscription and have it automatically emailed to you or posted to a document library on a SharePoint site if you are running Reporting Services in SharePoint integrated mode. Note that your Reporting Services service must be at least 2008 R2 for this report to work. One of the nice things about this report is that the URLs that Brent provides in the stored procedure’s output are active links, so if you click a URL cell, you will be taken to the page on Brent’s site that explains the Finding. Below are some screenshots of the report in collapsed and expanded form (all private information has been blacked out to protect the innocent, or at least those who sign my paycheck :)):
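As a rough sketch, the dataset that runs sp_BLITZ against the chosen server can be as simple as the following, where @ServerName is fed by the report parameter and resolves to a linked server name (your own report may differ):

-- Hypothetical main dataset query; @ServerName comes from the report parameter
DECLARE @sql nvarchar(500);
SET @sql = N'EXEC ' + QUOTENAME(@ServerName) + N'.master.dbo.sp_BLITZ;';
EXEC sp_executesql @sql;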

 

    

Figure 1 Collapsed Version of Report

 

    

    

Figure 2 Expanded View of Report

 

Setting Up The Report

So, to use the report, which is freely downloadable at the end of this blog post, all you need to do is go into the Data Source for the report and change it to the name of your monitoring SQL Server (or at least a server that has linked servers to all of the servers that you want to manage with this report), replacing the text <Type Your Monitoring Server Here> with the name of your monitoring server:

    

    

 

The next step is to make sure that you have a table on your monitoring server with an inventory list of all of the servers in your environment, and replace the <ServerName.database.schema.tablename> text in the query in the Servers dataset with the pertinent information for your environment. See below:
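In text form, the Servers dataset query amounts to something like this (the four-part name is a placeholder for your own inventory table):

-- Hypothetical Servers dataset query; feeds the "Server Name" parameter
SELECT ServerName
FROM [MonitoringServer].[DBA_Console].[dbo].[ServerInventory]
ORDER BY ServerName;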

 

    

 

 

From here, it is just a matter of deploying the report to your Reporting Services server and making sure that Brent’s stored procedure has been created on all of the servers that you wish to monitor.

 

The report can be downloaded here (you will need to go to Brent’s site mentioned earlier in this blog post to get the latest version of his sp_BLITZ script). I hope that you find this to be one of the many helpful tools in your tool chest to keep your environment in check.

    

The ability to lock pages in memory is used on 64-bit systems to help prevent the operating system from paging out SQL Server’s working set.  This may be enabled when the DBA starts seeing errors like the following:

 

"A significant part of sql server process memory has been paged out. This may result in a performance degradation."

 

If you’re running SQL Server 2005/2008 Enterprise, you would take the steps to lock pages in memory and you’re done with it.  If you’re on SQL Server 2005/2008 Standard Edition, you still have a ways to go.  The ability to lock pages in memory for Standard Edition is handled through a trace flag.  For SQL Server 2005 SP3, you need to apply CU4.  For SQL Server 2008 SP1, you need to apply CU2.  Once those CUs have been applied, set trace flag 845 as a startup parameter.  Here’s a good ServerFault question that explains how to set a trace flag as a startup parameter.
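Once the instance has been restarted with the flag, a quick sanity check never hurts. A minimal sketch (note that sys.dm_os_process_memory is available on SQL Server 2008 and later, not 2005):

-- Confirm the trace flag is active
DBCC TRACESTATUS (845);

-- On SQL Server 2008+, a non-zero value here means pages are being locked
SELECT locked_page_allocations_kb
FROM sys.dm_os_process_memory;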

 

Once the trace flag was enabled, the memory issues were solved.   Day saved, once again. :)  

 

As with anything, this has the potential to degrade system performance.   In this article, scroll to the section entitled “Important considerations before you assign “Lock Pages in memory” user right for an instance of a 64-bit edition of SQL Server”.  Read it thoroughly prior to making any changes to your production systems.

I always thought that Mars was a planet, but apparently it also has to do with multiple pending requests within a single SQL Server connection.  MARS (Multiple Active Result Sets) was introduced in SQL Server 2005 and provided the ability to handle these multiple requests.  Like anything else, though, it has to be used correctly.

 

About a week ago, I started seeing the following error on one of my servers:

 

DESCRIPTION:  The server will drop the connection, because the client driver has sent multiple requests while the session is in single-user mode. This error occurs when a client sends a request to reset the connection while there are batches still running in the session, or when the client sends a request while the session is resetting a connection. Please contact the client driver vendor.

 

After digging around and talking to some of the developers in house, I found that they were making use of MARS, but not always correctly.  To avoid the error above, “MultipleActiveResultSets=True” needs to be added to the connection string.  Adding that seems to have fixed the issues.
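For reference, a connection string with MARS enabled looks something like this (the server and database names are placeholders):

Data Source=MyServer; Initial Catalog=MyDatabase; Integrated Security=SSPI; MultipleActiveResultSets=True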

Several weeks ago on Twitter, Colin Stasiuk (Blog | Twitter) asked if anyone had a script to pull back job statuses from the SQL Server Agent.  I had been doing some work on a SQL Server DBA Console and had written some PowerShell scripts to give me various pieces of information, so I put together a PowerShell script that could be run as a SQL Agent job to periodically report the statuses of SQL Agent jobs.  Eventually, I think Colin went with a different solution, but I figured I would go ahead and post the PowerShell script that I came up with.  This solution has been tested against SQL Server 2000/2005/2008.

 

This first script is just a SQL script to create the SQL Server table that the PowerShell script writes the status information to and can be downloaded here, LastJobStatus_Table.sql.

CREATE TABLE [dbo].[LastJobStatus](
	[ServerName] [nvarchar](128) NULL,
	[Job_Name] [nvarchar](128) NULL,
	[Run_Date] [datetime] NULL,
	[Job_Duration] [time](7) NULL,
	[Run_Status] [varchar](50) NULL,
	[Sample_Date] [datetime] NULL
) ON [PRIMARY]
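Once the job has been populating this table, a quick query against it surfaces anything that needs attention. Here’s a minimal sketch, assuming the DBA_Console database used later in the PowerShell script:

-- Jobs whose most recent run did not succeed, across all monitored servers
SELECT ServerName, Job_Name, Run_Date, Run_Status
FROM DBA_Console.dbo.LastJobStatus
WHERE Run_Status <> 'Succeed'
ORDER BY ServerName, Job_Name;

Note that 'Succeed' matches the status label the collection script writes.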

 

The next is the PowerShell script that does all of the work bringing back the SQL Agent job statuses.  It takes a parameter of Server, which is the name of the SQL Server that you want job statuses from.  Make sure that you change the name “MgtServer” to whatever the name is of the SQL Server where you intend to store the results from this script.   You’ll also need to change the root directory for where your scripts are loaded to match your environment. This script can be downloaded here, Get-LastJobStatusServer.ps1.

param([string]$Server=$(Throw "Parameter missing: -Server ServerName"))
[void][reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")

##Point to library files
$scriptRoot = "D:\DBAScripts"

# Change the script root directory to match your environment
#$scriptRoot = Split-Path (Resolve-Path $myInvocation.MyCommand.Path)
. $scriptRoot\LibrarySmo.ps1	#Part of Chad Miller's SQLPSX project on CodePlex
. $scriptRoot\DataTable.ps1	    #Included with the scripts for this blog, also from a CodePlex project (http://www.codeplex.com/PSObject)

Set-Alias -Name Test-SqlConn -Value D:\DBAScripts\Test-SqlConn.ps1

##Define variables

## open database connection
$conn = New-Object System.Data.SqlClient.SqlConnection("Data Source=$server;
Initial Catalog=master; Integrated Security=SSPI")
$conn.Open()
$cmd = $conn.CreateCommand()
$cmd.CommandText= "	CREATE TABLE #JobsRun (	ServerName nvarchar(128),
											Job_Name nvarchar(128),
											Run_Date datetime,
											Job_Duration time(7),
											Run_Status varchar(50),
											Sample_Date datetime
										  );
					insert into #JobsRun
					select	@@SERVERNAME AS ServerName
							,j.name Job_Name
							,(msdb.dbo.agent_datetime(jh.run_date,jh.run_time)) As Run_Date
							,substring(cast(run_duration + 1000000 as varchar(7)),2,2) + ':' +
									substring(cast(run_duration + 1000000 as varchar(7)),4,2) + ':' +
									substring(cast(run_duration + 1000000 as varchar(7)),6,2) Job_Duration
							,case when run_status = 0
										then 'Failed'
								when run_status = 1
										then 'Succeed'
								when run_status = 2
										then 'Retry'
								when run_status = 3
										then 'Cancel'
								when run_status = 4
										then 'In Progress'
							end as Run_Status
							,GETDATE() As Sample_Date
					FROM msdb.dbo.sysjobhistory jh
						join msdb.dbo.sysjobs j
							on jh.job_id = j.job_id
					where	step_id = 0
					and		enabled = 1
					order by cast(cast(run_date as char) + ' ' +
								substring(cast(run_time + 1000000 as varchar(7)),2,2) + ':' +
								substring(cast(run_time + 1000000 as varchar(7)),4,2) + ':' +
								substring(cast(run_time + 1000000 as varchar(7)),6,2)  as datetime) desc

					delete from MgtServer.DBA_Console.dbo.LastJobStatus where ServerName = '$server'
					-- Change 'MgtServer' to the name of the SQL Server in your environment that
					-- houses the LastJobStatus table storing the results of this script
					insert into MgtServer.DBA_Console.dbo.LastJobStatus (ServerName, Job_Name, Run_Date, Job_Duration, Run_Status, Sample_Date)
					select	jr.ServerName,
							jr.Job_Name,
							jr.Run_Date,
							jr.Job_Duration,
							jr.Run_Status,
							jr.Sample_Date
					from	#JobsRun jr
					where	Run_Date = (	select	max(jr1.Run_Date)
											from	#JobsRun jr1
											where	jr1.Job_Name = jr.Job_Name)
					drop table #JobsRun; "
$cmd.ExecuteNonQuery()
$conn.Close()

 

There are references in the above script to a LibrarySMO.ps1 script that can be obtained from CodePlex (see the comments in the script for the URL) and a DataTable.ps1 script (also from CodePlex, but included in the download file for this blog for your convenience).

# Taken from out-dataTable script from the PowerShell Scripts Project
# http://www.codeplex.com/PsObject/WorkItem/View.aspx?WorkItemId=7915

Function out-DataTable {

  $dt = new-object Data.datatable
  $First = $true  

  foreach ($item in $input){
    $DR = $DT.NewRow()
    $Item.PsObject.get_properties() | foreach {
      if ($first) {
        $Col =  new-object Data.DataColumn
        $Col.ColumnName = $_.Name.ToString()
        $DT.Columns.Add($Col)       }
      if ($_.value -eq $null) {
        $DR.Item($_.Name) = "[empty]"
      }
      elseif ($_.IsArray) {
        # [string]::Join expects the separator first; the original had the arguments reversed
        $DR.Item($_.Name) = [string]::Join(";", $_.value)
      }
      else {
        $DR.Item($_.Name) = $_.value
      }
    }
    $DT.Rows.Add($DR)
    $First = $false
  } 

  return @(,($dt))

}

We’ve been in the process of setting up reporting servers for some of our larger clients.  We’re also in the process of setting up DEV and QA environments (yes, Colin, I know, I know).  This involves restoring databases onto the new servers.

 

Transferring logins

There is a fairly detailed knowledge base article from Microsoft outlining how to transfer logins between servers.  This post is going to outline the gist of that article.  The first step is creating the sp_hexadecimal stored procedure in the master database.  This stored procedure is used by sp_help_revlogin to create a hashed value for the login password.  The second is to create the sp_help_revlogin procedure – there is a script for SQL Server 2000 and one for SQL Server 2005/2008.  The output from running the sp_help_revlogin procedure on the source server is a series of CREATE LOGIN statements for all of the existing logins.
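Once both procedures exist on the source server, generating the script is a single call, and the output is then run on the destination server. A minimal sketch (the login name, hash and SID below are placeholders):

-- On the source server
EXEC master.dbo.sp_help_revlogin;

-- Produces statements along these lines for each SQL login:
-- CREATE LOGIN [AppLogin] WITH PASSWORD = 0x0100... HASHED,
--     SID = 0x23AB..., DEFAULT_DATABASE = [master],
--     CHECK_POLICY = OFF, CHECK_EXPIRATION = OFF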

 

There are three main benefits for using this method to transfer logins:

  1. It’s fast.  No scripting out individual logins.
  2. If the logins and users are from the same server, the user and login SIDs will match, avoiding any ‘orphaned users’.
  3. The passwords come across with the logins.  Definitely preferable to keeping an Excel spreadsheet tucked away with all of the logins and associated passwords.

 

As with any process that is automatically generated, it’s best to review the scripts carefully before running them on the destination server.  A couple of issues that you need to watch out for are:

  • If a login has a defined default database that doesn’t exist on the destination server, obviously the CREATE LOGIN statement will fail.
  • If there are existing logins on the destination server, there is the possibility that a SID may already be in use.  In that case, you will need to create a new login and map the existing users.

Orphaned Users


One issue that comes up when transferring logins is that the database users may not be mapped to the server logins.  This occurs when there isn’t a corresponding login SID for the user SIDs in the database.  If the database(s) being restored are from a single server and the logins were transferred from that same server, the SIDs will match.  If the SIDs do not match, the database users will not be attached to a server login and the user is said to be ‘orphaned’.

 

 There are some cases where orphaned users can’t be avoided.  In development environments, the databases on the destination server may be consolidated from more than one production server, logins may overlap and, as a result, the login and user SIDs don’t match.  Another instance would be when a production database is restored to a development server and the rights for the users in the development environment are different from their rights in the production environment.  For both of these cases, steps will need to be taken to match database users with their respective logins on the server.

 

To determine whether there are any orphaned users, run the following stored procedure:

EXEC sp_change_users_login 'Report'

 

There are a few ways to handle these orphaned users.  The sp_change_users_login stored procedure can be used to update the connection between logins and users in SQL Server 2000.  While sp_change_users_login has an Auto_Fix option, that action assumes that the username and login name match; if they don’t, it will fail.  Using the Update_One option is the safer and preferable way to handle it.  For SQL Server 2005/2008, the ALTER USER statement is the preferred method for mapping users to logins.  Greg Low’s article ‘Much ado about logins and SIDs’ provides a good explanation of these methods.
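For a single user, the calls look like this (the user and login names are placeholders):

-- SQL Server 2000: map one user to one login explicitly
EXEC sp_change_users_login 'Update_One', 'AppUser', 'AppLogin';

-- SQL Server 2005/2008: the preferred method
ALTER USER AppUser WITH LOGIN = AppLogin;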

 

Quick script to map users to logins

 

If you’re sure that the usernames match the login names that they map to, here are a couple of quick scripts that you can run in each database that you restore from your production server to your development server to map the orphaned users – yes, they’re cursors, but they run quickly.

 

SQL Server 2000

DECLARE @Username varchar(100), @cmd varchar(100)
DECLARE userLogin_cursor CURSOR FAST_FORWARD
FOR
SELECT UserName = name FROM sysusers
WHERE issqluser = 1 and (sid IS NOT NULL AND sid <> 0x0)
    AND suser_sname(sid) IS NULL
ORDER BY name
FOR READ ONLY
OPEN userLogin_cursor

FETCH NEXT FROM userLogin_cursor INTO @Username
WHILE @@fetch_status = 0
  BEGIN
    SET @cmd = 'EXEC sp_change_users_login ' + char(39) + 'AUTO_FIX' + char(39) + ', ' + @Username
    EXECUTE(@cmd)
    FETCH NEXT FROM userLogin_cursor INTO @Username
  END
CLOSE userLogin_cursor
DEALLOCATE userLogin_cursor

 

SQL Server 2005/2008

DECLARE @Username varchar(100), @cmd varchar(100)
DECLARE userLogin_cursor CURSOR FAST_FORWARD
FOR
SELECT UserName = name FROM sysusers
WHERE issqluser = 1 and (sid IS NOT NULL AND sid <> 0x0)
    AND suser_sname(sid) IS NULL
ORDER BY name
FOR READ ONLY
OPEN userLogin_cursor
 
FETCH NEXT FROM userLogin_cursor INTO @Username
WHILE @@fetch_status = 0
  BEGIN
    SET @cmd = 'ALTER USER ' + @username + ' WITH LOGIN = ' + @username
    EXECUTE(@cmd)
    FETCH NEXT FROM userLogin_cursor INTO @Username
  END
CLOSE userLogin_cursor
DEALLOCATE userLogin_cursor

I have been managing DBAs for over ten years now and one question that always seems to come up, usually from management, is what do DBAs do anyway?  Hopefully, you have read my prior post, “Yes, Production DBAs Are Necessary!” and you know where I stand on this issue. 

 

I fully believe in the role of a production DBA and, as someone who has been in this role for well over a decade, would like to define, based on my experience, what I think that role should be.  The most effective way of doing this, I believe, is to define what a production DBA should and shouldn’t be expected to do.  Granted, this won’t work perfectly for every situation and this is not intended to be an exhaustive list, but the following should be a good guideline for defining what a production DBA does and why they are critical to ensuring that your data is safe and available.

 

User data

A production DBA should not have anything to do with data. There, I said it. This statement alone is going to generate tons of controversy, but let me explain and specify that I’m referring to user data in user databases. This confuses many people because, after all, DBA does stand for database administrator. Let me clear up that confusion right now by saying that database is one word and is a manageable entity just like a server. Data lives inside a database, much like an application lives on a server. Typically, there is a lot of data in a database, just like there can be many applications on a server, each of which may have its own application administrator. Continuing the application analogy, just as we don’t expect the system/server administrator to manage all of the applications on the server, the production DBA should not be expected to manage the data in the database. The production DBA is responsible for the uptime of the database, the performance of the SQL Server instance and the safety (backups/DR) of the data in the database, not for the data itself. There should be other roles in the enterprise responsible for determining how the data gets into the database and how it is used from there, namely, data architects, database developers, etc.

 

Optimizing T-SQL

A production DBA should work with database developers to help them optimize T-SQL code. I emphasize work with because production DBAs should not be writing code, outside of administration automation. This goes hand in hand with the previous point and, when you think about it in practical terms, makes sense. A production DBA may be responsible for managing hundreds or thousands of user databases, all containing data for different uses. A production DBA can’t practically understand all of this data and be able to write code against it effectively. What the production DBA can do, though, is help the database developers optimize code by looking at available indexes and discussing with them how best to arrange joins and where clauses to make the most effective use of indexes and/or create additional indexes. Additionally, if your organization falls under the Sarbanes-Oxley or PCI regulations, this segregation may be mandatory. Your developers should not have access to production databases. Their code should be promoted by the production DBA, who does have access to the production databases. This also means that you should have a development environment that reasonably approximates your production environment.

 

Managing Instances

The production DBA should manage the SQL Server instance(s). This is a big job, and if your data is important and you have more than a handful of instances, it is a full-time job. Let me break down what this actually includes to illustrate just how large a job it is:

  1. Install/patch SQL Server instances – The production DBA is responsible for installing any new instances of SQL Server and making sure that existing instances stay current on Microsoft patches. This isn’t just a matter of running through an install wizard. A number of things have to be considered by the production DBA when a new instance of SQL Server is installed. These include:
    • Collation settings – Questions have to be asked about how the application that uses this data expects the database to handle the case of words, accents on words, and the code pages that the application expects to be used (this gets into localization and language for those shops in other countries or that are responsible for servers that reside in, or will be accessed by users in, other countries).
    • Drive/File Layout – To make the database instance run optimally, the production DBA has to consider what drive resources are going to be available and how the database files should be laid out across those resources. During this phase of the installation the production DBA has to consider whether only local storage is available or whether storage will be housed on a SAN. If it is going to be housed on a SAN, the production DBA needs to work with the SAN administrator to ensure that the LUNs are set up appropriately for optimal SQL Server access, which in and of itself requires a lot of knowledge and experience.
    • Scalability – The production DBA should be involved in developing the specifications for the hardware. Questions that the production DBA will be asking of the various parties include, how many concurrent users will be accessing the data, how many databases will there be, is the data static or changing and how does it change (i.e., batch load, transactional updates), etc. This will give the production DBA a better idea of what kind of resource utilization to expect and drive the hardware specification process. It will also help determine recovery model and backup strategies.
  2. Create, and periodically test, a backup and recovery scheme for your enterprise’s data. Things the DBA has to consider as part of this include:
    • Is the data development or production data? Yes, development data gets backed up and is considered in an effective backup/recovery scheme because, after all, you don’t want to lose any of your development effort; it just isn’t prioritized as highly as production data.
    • How often is data updated in the databases, and how is it updated (online transactions, batch, etc.)?  This determines the recovery model that should be used and leads the DBA to ask another question: what is the maximum acceptable data loss in terms of time?  With the answers to these questions, the DBA can more effectively determine the type and frequency of backups (full, differential, transaction log, etc.).
  3. While somewhat tied with backup/recovery, the DBA is also tasked with helping to come up with a disaster recovery strategy for the SQL Server environment, within the constraints of the enterprise’s available resources.
  4. Uptime/performance – The DBA is tasked with managing/monitoring those things that would impact the availability of the databases. This includes CPU utilization, I/O performance, disk space, reviewing event and SQL Server logs, etc.
  5. Help design the security model for SQL Server instances. Within SQL Server, there are a number of built-in security roles, both at the instance level and the database level, that users can be made members of, in addition to the ability to create custom roles. Additionally, SQL Server can operate in Windows Authentication mode, which integrates your domain security and uses the users’ Windows domain accounts, or Mixed Mode (SQL Server and Windows Authentication), which allows accounts to be either Windows domain or native SQL Server accounts. There are a number of factors to be considered here to make sure that your users can use your system while still protecting and safeguarding sensitive data and the SQL Server configuration (a couple of quick checks are sketched after this list).
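As a small illustration of that last item, here is a minimal sketch of how you might review the authentication mode and the most powerful role on an instance:

-- Authentication mode: 1 = Windows Authentication only, 0 = Mixed Mode
SELECT SERVERPROPERTY('IsIntegratedSecurityOnly') AS WindowsAuthOnly;

-- Members of the sysadmin fixed server role
EXEC sp_helpsrvrolemember 'sysadmin';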

 
So as you can see, the production DBA has plenty of things to contend with in order to effectively and efficiently maintain a SQL Server environment.  The bottom line comes down to how valuable your data is to you and how much you are willing to risk by not allowing your DBA to dedicate themselves to maintaining and safeguarding it.

I know that, for many, this is a controversial topic.  There are those who believe that there really is no such thing as a SQL Server production DBA and that DBAs should be jacks of all trades, doing anything from database development to OLAP/BI development to .NET programming to being a webmaster or network/server/system administrator.  It seems that everywhere I turn, in job postings, in horror stories shared with colleagues, and even in blogs from members of the SQL Server community, I see references to production DBAs being more than just a DBA.  It is as if, somewhere, managers are thinking that they need someone to manage their SQL Server databases, but they don’t know what that means, so the first thought is “let’s just make it part of the duties of someone we already have on staff.”  The question is constantly asked, “Why can’t the database developer handle that, he/she already has to use SQL Server?” or the common mistake, “let’s just let the server administrator handle it.”  There has even been a term coined for this: “the reluctant DBA.”  I have actually heard SQL Server compared to Access since it has wizards and, because Access doesn’t require a production DBA, why the heck should SQL Server?  In my experience, this perception is especially prevalent in the small and medium-sized business (SMB) market, but there are large companies, and managers who have grown up in medium-sized companies, that seem to reinforce this misconception.

 

Microsoft hasn’t really done anything to correct this misconception.  From a marketing perspective, I guess it is in their best interest for prospective SQL Server customers to think that it doesn’t take much to manage a production SQL Server environment.  When companies purchase Oracle or move their applications to Oracle databases, it is a foregone conclusion that dedicated production DBAs are necessary and so, these companies typically build that into their cost calculations.  I guess Microsoft feels that if customers don’t know a DBA is required, it makes their product look that much less expensive.

 

Now don’t get me wrong, I am not saying that database developers, server administrators, network administrators, etc. can’t become DBAs; they just can’t do it without training and experience, just as they didn’t fall into their current jobs without training and experience, and the job of production DBA certainly can’t be done (at least not effectively) in someone’s “spare time.”  As SQL Server has matured, it has become extremely flexible and full featured, but these features come with a cost: complexity.  SQL Server can be used and configured in a myriad of different ways for many different scenarios, but who is going to manage that and recommend how it should be configured and used for a particular purpose if you don’t have a production DBA?

 

In my next post, I will discuss what a production DBA does and what they shouldn’t do.

Ever been frustrated with the way that SQL Server stores run dates and times for jobs scheduled and run using the SQL Server Agent in the sysjobs, sysjobhistory, etc. tables?  SQL Server stores the date and the time in separate columns in these tables, as data type int, which can be a real pain to deal with if you need to compare them to an actual date.  In SQL Server 2005, there is a little known and sparsely documented function called msdb.dbo.agent_datetime that you can use to create a real date out of these columns.  Using it is pretty simple.  Say you want to see whether the run_date and run_time from the sysjobhistory table are greater than yesterday’s date.  The syntax for that would be:
 
select a.name, (msdb.dbo.agent_datetime(b.run_date, b.run_time)) as RunTime
from sysjobs a inner join sysjobhistory b
on a.job_id = b.job_id
where (msdb.dbo.agent_datetime(b.run_date, b.run_time) > getdate()-1)

It is simple to use, but an extremely powerful tool.  Unfortunately, SQL Server 2000 stores the agent dates the same way, but it doesn’t come with this function.  No worries; below is the code, should you ever need to create this function on a SQL Server 2000 instance:
CREATE FUNCTION agent_datetime(@date int, @time int)
RETURNS DATETIME
AS
BEGIN

RETURN
(
CONVERT(DATETIME,
CONVERT(NVARCHAR(4),@date / 10000) + N'-' +
CONVERT(NVARCHAR(2),(@date % 10000)/100) + N'-' +
CONVERT(NVARCHAR(2),@date % 100) + N' ' +
CONVERT(NVARCHAR(2),@time / 10000) + N':' +
CONVERT(NVARCHAR(2),(@time % 10000)/100) + N':' +
CONVERT(NVARCHAR(2),@time % 100),
120)
)
END

Replication tidbit

17 September 2009

I had set up replication a few times with SQL Server 2000, and there were times that I didn’t want to use a snapshot or a backup – I had the schema and data already available and just wanted to push new data over there.  Setting up the subscription was pretty easy.  In the Initialize Subscription dialog box, I just checked ‘No, the Subscriber already has the schema and data’.  Simple, eh?

[Screenshot: the Initialize Subscription dialog in SQL Server 2000]

Recently I ran into the same situation – couldn’t use a snapshot or a backup (don’t ask).  The subscription database was already in place and I just wanted to push any new transactions from the publication over there.  When I went through the handy dandy replication wizard, though, I didn’t see the option that I had remembered from SQL Server 2000.  After an initial panic attack, I did some research.  The option is still there, it just looks different.

[Screenshot: the Initialize Subscription dialog in SQL Server 2005]

All that you need to do now is uncheck the Initialize checkbox in the Initialize Subscription dialog box and away you go.
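If you prefer T-SQL to the wizard, the equivalent is to set the subscription’s sync type so that no initialization occurs. A minimal sketch, with placeholder publication, server and database names:

-- Run at the Publisher, in the publication database
EXEC sp_addsubscription
    @publication = N'MyPublication',
    @subscriber = N'SubscriberServer',
    @destination_db = N'SubscriberDB',
    @sync_type = N'replication support only';  -- the Subscriber already has the schema and data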

Live, learn and replicate.