As DBAs, we are increasingly being asked to manage more and more technology.  Some of that is the result of internal pressures (i.e., taking on additional roles) and some of it is the result of Microsoft bundling an ever-increasing array of technologies into SQL Server.  Dealing with these various technologies has become a weekly issue for me, but it really came to light today when I had a SQL Server Reporting Services 2008 server that was consuming 100% of the CPU.  Doing some poking around, I realized that not only did I not really know anything about this beast called “SQL Server Reporting Services”, but the tools to manage it are extremely lacking (now, that is my opinion coming from a position of complete ignorance about this technology).  I connected to the SSRS 2008 service with SSMS and, from there, I could only view three things: SSRS jobs, security, and shared schedules.  I determined that none of the shared schedules were responsible for the utilization, since nothing was scheduled to run anywhere near the time that the problem started, so that was a dead end.

 

Next, I connected to the Report Service by hitting http://[servername]/reports.  From here, I could look at all of the various reports that had been deployed to the instance, general site settings, security (both instance-wide and at the report level), and my own subscriptions.  The one thing that continued to elude me was visibility into what, if anything, users were running on the SSRS server.

 

Frustrated, I connected through SSMS to the database instance that hosts the ReportServer database.  I figured there had to be something in the database I could query to give me some visibility into what my SSRS instance does all day.  Thinking like a DBA, the first thing I did was look under “System Views” in the ReportServer database.  I saw two views, ExecutionLog and ExecutionLog2, so I decided to do a simple SELECT TOP 100 * from each to see what they would give me.  This is where I stumbled upon my gold nugget for the day.  Right there in the ExecutionLog2 system view was all of the information that I had been looking for.  Running the following query, you can get a wealth of valuable information on what reports users are running, when they are running them, what parameters they used, and how long the report took to generate (broken down into data retrieval time, processing time, and rendering time) – all key information for trending the load that your server is under and <gasp> justifying new hardware, if needed.

 

SELECT InstanceName
       , ReportPath
       , UserName
       , RequestType
       , Format
       , Parameters
       , ReportAction
       , TimeStart
       , TimeEnd
       , TimeDataRetrieval
       , TimeProcessing
       , TimeRendering
       , Source
       , Status
       , ByteCount
       , [RowCount]
       , AdditionalInfo
FROM ExecutionLog2
-- Comparing TimeEnd directly avoids the pitfall of converting to a
-- mm/dd/yyyy string, which sorts as text and misbehaves across years.
WHERE TimeEnd >= -- Some start date that you supply
AND   TimeEnd <  -- Some end date that you supply
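For trending purposes, the same view aggregates nicely.  Here is one way to slice it (the grouping and column choices are just my suggestion; the timing columns in ExecutionLog2 are reported in milliseconds):

```sql
-- Daily load per report: execution counts and average time per phase
SELECT CONVERT(DATE, TimeStart)   AS RunDate
     , ReportPath
     , COUNT(*)                   AS Executions
     , AVG(TimeDataRetrieval)     AS AvgDataRetrievalMs
     , AVG(TimeProcessing)        AS AvgProcessingMs
     , AVG(TimeRendering)         AS AvgRenderingMs
     , SUM(ByteCount)             AS TotalBytes
FROM ExecutionLog2
GROUP BY CONVERT(DATE, TimeStart), ReportPath
ORDER BY RunDate, Executions DESC;
```

Run daily and squirreled away in a table, this is exactly the kind of history that makes the hardware conversation easier.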

 

To many of you who regularly use SSRS this may be very remedial, but I figured I would throw this out there for those DBAs who, like me, have to learn this stuff on the fly in a crisis.

 

By the way, as a side note, for those who are curious about why I used the ExecutionLog2 system view instead of ExecutionLog: it appears that ExecutionLog exists for backward compatibility with SQL Server 2005 Reporting Services upgrades, while ExecutionLog2 provides much more information.

/*Note – I have yet to see if this breaks anything – this post is for observational use only.  If it does break replication, I’ll add some notes to the bottom of this post */

 

Late last week, we had the fabulous opportunity to see just what happens when the dual power supplies for your SAN are plugged into the same PDU and that PDU gets powered off.  As you can imagine, this caused all sorts of neat things to occur.  Fortunately, the situation was remedied fairly quickly, but not before breaking replication for one of my databases.

 

The error that I received was: “The concurrent snapshot for publication ‘xx’ is not available because it has not been fully generated…”.  There were probably different routes I could have taken to fix this issue, but I decided to reinitialize the subscription from a snapshot and went ahead and created the snapshot.  This particular database is a little under 500GB, so I knew that this would take a while.
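For reference, the reinitialize-and-snapshot step can be done in T-SQL from the publisher.  This is a sketch using the placeholder publication name from the error message, assuming the standard replication stored procedures:

```sql
-- Run at the publisher, in the published database.
-- Mark the subscription(s) for reinitialization...
EXEC sp_reinitsubscription @publication = N'xx'
                         , @article     = N'all'
                         , @subscriber  = N'all';

-- ...then kick off the Snapshot Agent to generate a fresh snapshot.
EXEC sp_startpublication_snapshot @publication = N'xx';
```

The same thing can be done through Replication Monitor or the subscription's context menu in SSMS, which is what I actually used.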

 

The next morning when I came to work, I could see that the snapshot had completed and the subscription had been reinitialized.   I knew that it would take some time for all of the commands that had queued up in the distribution database to be written to the subscription and so I went about my day.

 

Later, I noticed that the number of undistributed commands didn’t seem to be going down at all and was in fact growing.  This is a fairly heavily utilized database with writes going on very frequently, but it still seemed odd.   I did some querying to try to figure out just how behind we were, but, to my surprise, the subscription was completely up to date.  This is where it started to get strange.

 

I queried the MSdistribution_status view to get an idea of what was still outstanding and saw the following:

article_id  agent_id  UndelivCmdsInDistDB  DelivCmdsInDistDB
1            3        0                    2130
2            3        0                    3295
3            3        0                    8275
4            3        0                    2555
5            3        0                    8310
6            3        0                    2521
1           -2        2130                 0
2           -2        3295                 0
3           -2        8275                 0
4           -2        2555                 0
5           -2        8310                 0
6           -2        2521                 0
1           -1        2130                 0
2           -1        3295                 0
3           -1        8275                 0
4           -1        2555                 0
5           -1        8310                 0
6           -1        2521                 0

 

You’ll notice that, for agents -1 and -2, the number of undelivered commands for each article is the same as the number of delivered commands for agent 3.
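If you want to check your own distributor, the query behind that output is straightforward (run it in the distribution database; the ORDER BY is just my preference for grouping the agents together):

```sql
-- Per-article command counts for each distribution agent
SELECT article_id
     , agent_id
     , UndelivCmdsInDistDB
     , DelivCmdsInDistDB
FROM dbo.MSdistribution_status
ORDER BY agent_id DESC, article_id;
```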

 

When I queried the MSDistribution_Agents table, I saw three distribution agents.  One was named identically to the distribution agent job.  The other two had subscriber_ids with negative numbers and the subscriber_db was ‘virtual’. 
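A quick way to spot those agents yourself, assuming the standard distribution database layout:

```sql
-- Run in the distribution database; virtual subscriptions show up
-- with negative subscriber_ids and a subscriber_db of 'virtual'.
SELECT id
     , name
     , subscriber_id
     , subscriber_db
FROM dbo.MSdistribution_agents;
```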

 

I did some digging around on some unnamed search engine to try to figure out what virtual subscriptions were.  To be honest, there wasn’t a lot of information to be found.  I did find a question on Eggheadcafe.com from 2007 where it seemed like the person was having a similar issue.  Reading through, it looked like it had something to do with the settings for allowing anonymous subscriptions and immediately synchronizing the snapshot.  I looked at my publication and both of those were set to true.

 

Of course, I posted a question on Twitter at the same time and Hilary Cotter [Blog/Twitter], stated that the virtual subscriptions “are to keep your snapshot up to date, ignore them”.  That seemed to be consistent with what I was reading, but it still seemed odd that they appeared to be holding onto undistributed commands.  This, in turn, was causing my distribution database to grow.

 

Since, at this point, I figured that I was going to have to tear the whole thing down and rebuild it, I thought that I’d try to modify the publication first.  I knew that I didn’t want to allow anonymous subscriptions and I didn’t need to immediately synch from a snapshot, so I ran the following commands:

 

exec sp_changepublication @publication = 'xxxxx', @property = N'allow_anonymous', @value = 'false'
exec sp_changepublication @publication = 'xxxxx', @property = N'immediate_sync', @value = 'false'

 

I reran my query of the MSdistribution_status view and voila, the undistributed commands were gone.  When I queried MSdistribution_agents, there was still a virtual subscriber, but it was no longer filling up my distribution database with undelivered commands.

 

I still don’t know what caused this – it seems like something occurred when I reinitialized the snapshot.  So far, replication is continuing to run without errors.  I’d be interested to hear if anyone has run into a similar issue.

More tidbits

23 October 2009

Work has been hectic for both of us, so please pardon the absence of full-bodied blog posts.  Until we get our feet under us, here are a couple of (hopefully) helpful tidbits.

 

Since you've been good and patient, you can have a treat


Sys.dm_exec_requests

I’m not the most patient person in the world and have a tendency to think that a process has stopped when it’s actually processing away.  A few months ago, I needed to run a DBCC on a database with corruption errors.  This wasn’t a large database and the DBCC had already been running for about 2 1/2 hours.  I put something out on Twitter about whether I should continue the DBCC (which I knew that I should) or worry about whether something had gone horribly wrong.  Paul Randal quickly tweeted back to not stop the DBCC and to check to see how far it had gone.  That’s when he gave me the information that has helped me through a number of waiting games:

 

Sys.dm_exec_requests is a DMV that returns a row for every request that is currently executing.  Since I was certain that this was the only DBCC CHECKDB running on this server, the following query eased my mind:

 

select percent_complete
from sys.dm_exec_requests
where command = 'DBCC CHECKDB'

 

Keeping an eye on that let me know that the process was progressing and kept me from making a stupid mistake.  I’ve since used it on long-running processes like backups and restores.  It also provides a way to tell users that, yes, this is going and it’s x% done.
One caveat – the percent_complete column is only populated for specific operations like backups, restores, rollbacks, DBCCs, etc.  It is not populated for ordinary queries.
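Building on that, sys.dm_exec_requests also exposes an estimated_completion_time column (in milliseconds) for those same operations, so you can sketch a rough ETA.  Treat it as an estimate, not a promise:

```sql
-- Progress and rough finish time for operations that report progress
SELECT session_id
     , command
     , percent_complete
     , DATEADD(MILLISECOND, estimated_completion_time, GETDATE()) AS estimated_finish
FROM sys.dm_exec_requests
WHERE percent_complete > 0;
```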

 

Altering a table in SSMS 2008 (without t-sql)

While I know that we’d all rather use a sqlcmd query through a DAC connection to do anything SQL Server related, there may be times when it is more expeditious to make a change within SSMS using the Design option.

 

If you’re in SQL Server 2008 and the table is populated, making any changes within the Design option may cause the following error:

[alter_table screenshot: the “Saving changes is not permitted” dialog]

While we were able to do this in SQL Server 2005, it seems to be unavailable in 2008.  Fortunately, you can still do this in SSMS; you just need to set the option to make it available.

 

In SSMS, go to Tools->Options->expand Designers.  Then under Table and Database Designers, uncheck the box next to ‘Prevent saving changes that require table re-creation’.  Voila!  You can now make all the changes that you want. 
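If you’d rather leave that safety check turned on, the same kind of change can usually be made in plain T-SQL instead of the designer.  A hypothetical example (the table and column names here are made up for illustration):

```sql
-- Add a NOT NULL column to an already-populated table; the DEFAULT
-- constraint supplies a value for existing rows, so no drop/re-create
-- of the table is needed.
ALTER TABLE dbo.Orders
    ADD OrderPriority int NOT NULL
        CONSTRAINT DF_Orders_OrderPriority DEFAULT (0);
```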

 

As a last minute bit of news, Tim just found out that he’s now a Friend of RedGate.  Congratulations, Tim!

Ever been frustrated with the way that SQL Server stores run dates and times for jobs scheduled and run by the SQL Server Agent in msdb tables like sysjobs and sysjobhistory?  SQL Server stores the date and the time in separate columns in these tables, as data type int, which can be a real pain to deal with if you need to compare them to an actual date.  In SQL Server 2005, there is a little-known and sparsely documented function called msdb.dbo.agent_datetime that you can use to create a real date out of these columns.  Using it is pretty simple.  Say you want to see if the run_date and run_time from the sysjobhistory table are greater than yesterday’s date.  The syntax for that would be:
 
select a.name, msdb.dbo.agent_datetime(b.run_date, b.run_time) as RunTime
from msdb.dbo.sysjobs a
inner join msdb.dbo.sysjobhistory b
on a.job_id = b.job_id
where msdb.dbo.agent_datetime(b.run_date, b.run_time) > getdate() - 1

It is pretty simple to use, but an extremely powerful tool.  Unfortunately, SQL Server 2000 stores the agent dates the same way, but it doesn’t come with this function.  No worries, below is the code should you ever need to create this function on a SQL Server 2000 instance:
CREATE FUNCTION agent_datetime(@date int, @time int)
RETURNS DATETIME
AS
BEGIN
    RETURN
    (
        CONVERT(DATETIME,
            CONVERT(NVARCHAR(4), @date / 10000) + N'-' +
            CONVERT(NVARCHAR(2), (@date % 10000) / 100) + N'-' +
            CONVERT(NVARCHAR(2), @date % 100) + N' ' +
            CONVERT(NVARCHAR(2), @time / 10000) + N':' +
            CONVERT(NVARCHAR(2), (@time % 10000) / 100) + N':' +
            CONVERT(NVARCHAR(2), @time % 100),
        120)
    )
END