The SQL Server community never ceases to amaze me.  The number of people that are willing to take time out from their jobs and families to volunteer is especially impressive.

 

I’ve had the good fortune to be able to volunteer for the Program Committee this year.  My job is to pull together special projects and whatever other grunt work Allen thinks up for me.  A number of volunteers have put great work into our current project.  This project has multiple steps and has required a ton of coordination between the volunteers – but it is all coming together.  It’s something that’s been needed for a while and now it’s going to be a reality.  I’d name names, but I know that I’d forget someone.  So thank you to everyone who’s helped out.

 

A big (virtual) cake for all of you!


It’s not just me, though.  Tim’s in the process of re-starting the Performance VC.  He mentioned the need for volunteers through our blog and Twitter, and Blythe Morrow (Blog/Twitter) put out a call for volunteers on the PASS blog.  He’s been overwhelmed by the number of people that have asked to help out.

 

For all of you who volunteer for PASS – kudos to you!  For those of you who are thinking of volunteering but haven’t yet, get ahold of Tim or me or go here for additional volunteer opportunities.

The Generate Scripts task in SSMS is insanely handy.  A few quick clicks and you can have tables, stored procedures, users, etc. scripted out.  I’ve used it quite a bit, but I ran into an unusual situation yesterday.

 

I needed to create a database schema and thought I’d use the handy dandy Generate Scripts task.  Popped through the wizard, clicked finish and it errored!  Here was the error message:

 

[screenshot: scripts_error]

I was thoroughly confused – I was running SSMS 2008 against a SQL Server 2008 server.  I wasn’t sure where SQL Server 2005 even came into play.

 

I went through the process again, this time paying closer attention and noticed this:

[screenshot: script_option – the wizard’s “Script for server version” setting]

Apparently the Script Wizard defaults to SQL Server 2005.  I changed it to SQL Server 2008 and everything ran as expected.  While I had run this task against other SQL Server 2008 instances, apparently none of them made use of the new data types in 2008 and, as a result, didn’t generate errors.  Now why it would default to SQL Server 2005 is an entirely different question….
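For what it’s worth, it only takes one 2008-only data type to trip the wizard.  A hypothetical table like this would script fine with the option set to SQL Server 2008, but error when the option is left at the SQL Server 2005 default:

-- Hypothetical table using data types introduced in SQL Server 2008
CREATE TABLE dbo.ScriptTest
(
    ID        int IDENTITY(1,1) PRIMARY KEY,
    OrderDate date         NOT NULL,  -- new in 2008
    CreatedOn datetime2(7) NOT NULL,  -- new in 2008
    Region    geography    NULL       -- new in 2008
);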

 



Okay, maybe I’m being a little sarcastic.  I don’t troubleshoot dynamic SQL very often, so I don’t always see potential issues right away.  For those dear readers who work with it regularly, you should stop reading now – this is all pretty basic – but it took a few minutes out of my day.

 

This is the only dyna- that I like


My troubleshooting methods consist of displaying the command created by the dynamic SQL and seeing if it runs correctly or if I’m missing a quotation mark or something along the way.  There is probably a better way to troubleshoot, but again, I play with it so rarely that I’m stuck in a rut.
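For example, the usual approach is to PRINT the assembled string rather than EXEC it – a minimal sketch, with a made-up command:

DECLARE @sql nvarchar(max);
SET @sql = N'SELECT name FROM sys.databases;';  -- stand-in for the dynamically built command

PRINT @sql;     -- eyeball the command for missing quotes and the like
--EXEC (@sql);  -- run it once it looks right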

 

Evaluating Dynamic SQL commands

Late last week, a developer sent the following block of dynamic SQL code because he was having issues getting it to work:

EXEC
('
USE [master];
BEGIN
ALTER DATABASE [random_dbname] SET ONLINE;
WAITFOR DELAY ''00:01'';
END
USE [random_dbname];
'
)

 

I followed my normal troubleshooting methods and the command text itself ran fine.  Trying to execute it as above, though, I received the following error message:

 

Msg 942, Level 14, State 4, Line 7
 Database 'random_dbname' cannot be opened because it is offline.

 

At first glance, I was confused, because it was obvious that I had brought the database online.  I soon realized, though, that everything within the parentheses was being evaluated prior to being executed.  Apparently SQL Server has a shorter memory than I do.

 

Breaking it into two separate statements, like below, accomplishes what needed to happen:

EXEC
('
-- Bring the database online
USE [master];
BEGIN
ALTER DATABASE [random_db] SET ONLINE;
WAITFOR DELAY ''00:01'';
END
')

go

EXEC
('
USE [random_db];
/*blah blah blah*/
')

Thinking that this ‘emergency’ had been handled, I went back to my other tasks. 

 

Database Context and Dynamic SQL

As these things happen, though, I received another call: after he ran all of this, the database context remained master.  Fortunately, this was easy to explain.  The database context switch exists only during the execution of the EXEC statement and does not persist after its completion.
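A quick illustration (the database name is hypothetical):

USE [master];
EXEC ('USE [random_db]; SELECT DB_NAME() AS inside_exec;');  -- reports random_db
SELECT DB_NAME() AS after_exec;  -- back to master; the context switch died with the EXEC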

 

None of this is rocket science or even deep SQL knowledge, but maybe it’ll save a minute or two for some other DBA out there.


The Professional Association for SQL Server is restarting a Virtual Chapter focusing on SQL Server performance.  The goal of the PASS Virtual Chapters is to provide free and timely training focused on a particular SQL Server area or set of functionality (in this case, SQL Server performance).  I am honored to have been chosen to lead this particular Virtual Chapter but, as you can imagine, this can’t happen without volunteers from the community.  We are looking for individuals to serve on the Performance Virtual Chapter steering committee who are:

 

  • Passionate about SQL Server (and who isn’t, right? ;)
  • Interested in helping and serving the SQL Server community
  • Either have a blog or a Twitter presence
  • Willing to put in a couple of hours of work a week on such things as arranging speakers, putting together presentations, etc.  Generally, working to help get good education out to our SQL Server community on performance related topics.

If this sounds like you and you are interested in serving on the Performance Virtual Chapter steering committee, we want you!  Please contact Tim Edwards at sqlservertimes2@gmail.com.


Last year my better half, Tim, suggested that we start a blog. It made sense for any number of reasons, but it scared the heck out of me. I couldn’t imagine that there was anything that I could ever blog about that hadn’t already been posted and probably by someone with much more experience than I have. I have a tendency (as I did today) to go out and search for other blog posts that cover the material that I’m about to write about to ensure that I’m at least adding something new with my post.

 

In school, if you’re assigned a paper on the pastoral imagery used in “The Adventures of Huckleberry Finn”, the instructor knows that there have been several works on that particular subject and assumes that you will be using (and referencing) information from those works. Blogging, for most people though, is not an assignment – it’s something that you make the choice to do. The people that read your blog assume that the ideas, tips and facts that you blog about are yours, unless you attribute them.

 

Over the past few months, there have been numerous tweets and blog posts about bloggers that have been plagiarizing other people’s work. In some cases the posts were lifted word-for-word; in other cases they were selectively reworded, but still identifiable. I have no idea whether it was intentional or whether the authors were uninformed about how to use information from other posts. K. Brian Kelley [Blog/Twitter] wrote a post ‘Avoiding Plagiarism’ a couple of weeks ago. I thought I’d take this opportunity to add a little more information. As a note, I’m not an expert in plagiarism, so if any of you reading this post find errors, please comment and I’ll update this post.

On dictionary.com, the Random House Dictionary definition of plagiarism is:
“1. the unauthorized use or close imitation of the language and thoughts of another author and the representation of them as one’s own original work.
2. something used and represented in this manner.”

 

Reading this definition clarifies the reasons for my fear of blogging. I would never lift language from another blog post, but there have been blog posts that have inspired me (like K. Brian Kelley’s) to write a post. Here are some ways that I handle referencing other works.

 

I think you should read this

Example: Kevin Kline wrote an excellent post about the pains of not being included in a meme. You should read it here: http://kevinekline.com/2010/01/14/goals-and-theme-word-for-2010/
In this case, I have nothing to add, but I want to give my audience the opportunity to read great posts that I’ve come across.

 

You can say it better than I can

Example: PowerShell is fabulous. It’s so awesome that it’s caused some otherwise conscientious DBAs to wander astray. Colin Stasiuk [Blog/Twitter] admitted as much in a recent blog post: “…it’s no secret that I’ve been having an affair on TSQL. I’ve been seeing PowerShell behind her back and I gotta tell ya even after the initial excitement of a new language I’m still loving it.”
I know that I couldn’t have said it better than Colin, so in addition to linking to his post, I quoted his remark. Quotes should be used sparingly – if you find yourself quoting more than a sentence or two, you should probably use the example above.

 

Note: Blogs, whitepapers or other articles that are copyrighted require permission prior to their use. In addition, some online works have posted requirements on how they can be used. Brent Ozar (Blog/Twitter) has a good example of that here.

 

This is what I researched

Example: While sp_change_users_login has the option to auto_fix logins, that action assumes that the username and login name match. If they don’t, it will fail. Using the Update_One option is a safer and preferable way to handle it. For SQL Server 2005/2008, the ALTER USER statement is the preferred method for mapping users to logins. Greg Low’s (Blog/Twitter) article ‘Much ado about logins and SIDs’ provides a good explanation of these methods.
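To illustrate the difference (the user and login names here are made up):

-- Older approach: explicitly remap an orphaned user to a login
EXEC sp_change_users_login @Action = 'Update_One',
                           @UserNamePattern = 'AppUser',
                           @LoginName = 'AppLogin';

-- Preferred in SQL Server 2005/2008
ALTER USER AppUser WITH LOGIN = AppLogin;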

 

This is probably where unintentional plagiarism occurs most often. If, during your research, you read blog posts, articles, whitepapers, etc. and find useful information, your best bet is to attribute them. If you recall the definition of plagiarism above, it applies to both language and ideas, so if you learned something that you’re passing on in a blog post, or if you’re using that information to validate your ideas, it needs to be cited. Again, keep in mind any copyright laws that might apply.

 

What doesn’t need to be cited

Common knowledge/generally accepted facts

Items that are common knowledge or generally accepted facts do not need to be cited.  Examples of common knowledge are:

  • A table can only have one clustered index
  • SQL Server is an RDBMS
  • Most SQL Server based questions can be answered with “It Depends”

 There is a decent article on common knowledge here.

 

 

Results of personal research

If you’re blogging about an incident that occurred or the results of a test that you ran, they don’t require a citation. That is, unless you did research to solve the incident or used other information to validate your test results.

 

Fair Use

The term ‘Fair Use’ has been bandied about in the recent plagiarism incident. The idea of fair use has no exact definition, but is determined by a set of guidelines. There is a good definition at Plagiarism.org and a good article titled “The Basics of Fair Use” by Jonathan Bailey. According to Plagiarism.org, the guidelines look at:

  1. The nature of your use
  2. The amount used
  3. The effect of your use on the original

Fair use is difficult to pin down, and personally, I wouldn’t want to try to stand behind that argument.  The incident mentioned above definitely fell outside of those guidelines, in my opinion.

 

Public Domain

At some point, works fall out of their copyright term and become part of the public domain.  The Wikipedia article regarding the public domain can be found here.  While the copyright laws no longer apply, such works still require citations.  This point is moot for any SQL Server blogs since, at this time, there aren’t any works old enough to have fallen out of their copyright term.

 

Conclusion

There is a huge amount of helpful information in blogs. Blogging also provides an opportunity for us to share information and experiences. I think that it’s understood that we learn from other people – just ensure that you credit those people for their hard work.


As DBAs, we are increasingly being asked to manage more and more technology.  Some of that is the result of internal pressures (i.e. taking on additional roles) and some of that is the result of Microsoft bundling an ever increasing array of different technologies within SQL Server.  Dealing with these various technologies has become a weekly issue for me, but it really came to light today when I had a SQL Server Reporting Services 2008 server that was consuming 100% of the CPU.  Doing some poking around, I realized that not only did I not really know anything about this beast called “SQL Server Reporting Services”, but the tools to manage it are also extremely lacking (now, that is my opinion coming from a position of complete ignorance about this technology).  I connected to the SSRS 2008 service with SSMS and, from there, I could only view three things:  SSRS jobs, security, and shared schedules.  I determined that none of the shared schedules were responsible for the utilization since nothing was scheduled to run anywhere near the time that the problem started, so that was a dead end.

 

Next, I connected to Report Manager by hitting http://[servername]/reports.  From here, I could look at all of the various reports that had been deployed to the instance, general site settings, and security, both instance-wide and at a report level, and I could look at my own subscriptions.  The one thing that seemed to elude me was visibility into what, if anything, users were running on the SSRS server.

 

Frustrated, I connected through SSMS to the database instance that hosts the ReportServer database.  I figured there had to be something in the database I could query to give me some visibility into what my SSRS instance does all day.  Thinking like a DBA, the first thing I did was look under “System Views” in the ReportServer database.  I saw two views, ExecutionLog and ExecutionLog2, so I decided to do a simple SELECT TOP 100 * from each to see what they would give me.  This is where I stumbled upon my gold nugget for the day.  Right there in the ExecutionLog2 system view was all of the information that I had been looking for.  Running the following query, you can get a wealth of valuable information on what reports users are running, when they are running them, what parameters they used, and how long the report took to generate (broken down into data retrieval time, processing time, and rendering time) – all key information for trending the load that your server is under and <gasp> justifying new hardware, if needed.

 

SELECT InstanceName
       , ReportPath
       , UserName
       , RequestType
       , Format
       , Parameters
       , ReportAction
       , TimeStart
       , TimeEnd
       , TimeDataRetrieval
       , TimeProcessing
       , TimeRendering
       , Source
       , Status
       , ByteCount
       , RowCount
       , AdditionalInfo
FROM ExecutionLog2
WHERE CONVERT(VARCHAR(10),timeend,101) >= -- Some start date that you supply
AND CONVERT(VARCHAR(10),timeend,101) <= -- Some end date that you supply
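If you’d rather keep the comparison on the raw datetime column (and make it a bit friendlier to any index), something along these lines works too – the dates here are placeholders:

SELECT ReportPath, UserName, TimeStart, TimeEnd,
       TimeDataRetrieval, TimeProcessing, TimeRendering, Status
FROM ExecutionLog2
WHERE TimeEnd >= '20100101'  -- placeholder start date
  AND TimeEnd <  '20100108'  -- placeholder end date (exclusive)
ORDER BY TimeEnd DESC;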

 

To many of you who regularly use SSRS this may be very remedial, but I figured I would throw this out there for those DBAs who, like me, have to learn this stuff on the fly in a crisis.

 

By the way, as a side note, for those who are curious about why the ExecutionLog2 system view was used instead of the ExecutionLog system view, it appears that the ExecutionLog system view exists for backward compatibility for SQL Server 2005 Reporting Services upgrades.  The ExecutionLog2 system view provides much more information than the ExecutionLog system view.

/*Note – I have yet to see if this breaks anything – this post is for observational use only.  If it does break replication, I’ll add some notes to the bottom of this post */

 

Late last week, we had the fabulous opportunity to see just what happens when the dual power supplies for your SAN are plugged into the same PDU and that PDU gets powered off.  As you can imagine, this caused all sorts of neat things to occur.  Fortunately, the situation was remedied fairly quickly, but not before breaking replication for one of my databases.

 

The error that I received was: “The concurrent snapshot for publication ‘xx’ is not available because it has not been fully generated…”.  There were probably different routes to take to fix this issue, but I decided to reinitialize the subscription from a snapshot and went ahead and created the snapshot.  This particular database is a little under 500GB, so I knew that this would take a while.
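For reference, the reinitialization can be kicked off from the publisher with something along these lines – a sketch, with placeholder publication and subscriber names:

-- Run at the publisher, in the published database
EXEC sp_reinitsubscription @publication = 'xx',        -- placeholder
                           @article     = 'all',
                           @subscriber  = 'SUBSERVER'; -- placeholder

-- Then generate the new snapshot
EXEC sp_startpublication_snapshot @publication = 'xx';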

 

The next morning when I came to work, I could see that the snapshot had completed and the subscription had been reinitialized.   I knew that it would take some time for all of the commands that had queued up in the distribution database to be written to the subscription and so I went about my day.

 

Later, I noticed that the number of undistributed commands didn’t seem to be going down at all and was in fact growing.  This is a fairly heavily utilized database with writes going on very frequently, but it still seemed odd.   I did some querying to try to figure out just how behind we were, but, to my surprise, the subscription was completely up to date.  This is where it started to get strange.

 

I queried the MSdistribution_status view to get an idea of what was still outstanding and saw the following:

Article_id   Agent_id   UndelivCmdsInDistDB   DelivCmdsInDistDB
1            3          0                     2130
2            3          0                     3295
3            3          0                     8275
4            3          0                     2555
5            3          0                     8310
6            3          0                     2521
1            -2         2130                  0
2            -2         3295                  0
3            -2         8275                  0
4            -2         2555                  0
5            -2         8310                  0
6            -2         2521                  0
1            -1         2130                  0
2            -1         3295                  0
3            -1         8275                  0
4            -1         2555                  0
5            -1         8310                  0
6            -1         2521                  0
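(For reference, that output came from a query along these lines, run in the distribution database – assuming the default distribution database name:)

USE distribution;
SELECT article_id, agent_id, UndelivCmdsInDistDB, DelivCmdsInDistDB
FROM dbo.MSdistribution_status;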

 

You’ll notice that, for agents -1 and -2, the number of undistributed commands for each article was the same as the number of delivered commands for agent 3.

 

When I queried the MSDistribution_Agents table, I saw three distribution agents.  One was named identically to the distribution agent job.  The other two had subscriber_ids with negative numbers and the subscriber_db was ‘virtual’. 

 

I did some digging around on some unnamed search engine to try to figure out what virtual subscriptions were.  To be honest, there wasn’t  a lot of information to be found.  I did find a question on Eggheadcafe.com from 2007 where it seemed like the person was having a similar issue.  Reading through, it looked like it had something to do with the settings for allowing anonymous subscriptions and immediately synchronizing the snapshot.  I looked at my publication and both those were set to true.

 

Of course, I posted a question on Twitter at the same time and Hilary Cotter [Blog/Twitter] stated that the virtual subscriptions “are to keep your snapshot up to date, ignore them”.  That seemed to be consistent with what I was reading, but it still seemed odd that they appeared to be holding onto undistributed commands.  This, in turn, was causing my distribution database to grow.

 

Since, at this point, I figured that I was going to have to tear the whole thing down and rebuild it, I thought that I’d try to modify the publication first.  I knew that I didn’t want to allow anonymous subscriptions and I didn’t need to immediately synch from a snapshot, so I ran the following commands:

 

exec sp_changepublication @publication = 'xxxxx', @property = N'allow_anonymous', @value = 'false'

exec sp_changepublication @publication = 'xxxxx', @property = N'immediate_sync', @value = 'false'
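To verify that the changes took, sp_helppublication can be run in the published database (the publication name is a placeholder); the allow_anonymous and immediate_sync columns in its output should now both be 0:

EXEC sp_helppublication @publication = 'xxxxx';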

 

I reran my query of MSdistribution_status and voila – the undistributed commands were gone.  When I queried MSDistribution_agents, there was still a virtual subscriber, but it was no longer filling up my distribution database with undelivered commands.

 

I still don’t know what caused this – it seems like something occurred when I reinitialized the snapshot.  So far, replication is continuing to run without errors.  I’d be interested to hear if anyone has run into a similar issue.

It would be so nice if something made sense for a change.


A couple of weeks ago, we had a database that had become corrupted due to a disk failure.  After running a DBCC CHECKDB, it appeared that there was one table with a corrupted clustered index.  Paul Randal (Twitter/Blog) verified that this was the case and that it was, unfortunately, non-recoverable without data loss.  Fortunately, we had backups and this database was a replication subscription, so between the backup and the publication, we were able to rebuild the database without losing any data.  We then had to rebuild a kind of goofy replication setup, but that’s another story.

 

As I was doing some validation of the data that was contained in the backups and what needed to be pulled down from the publication, I realized that there was at least one other table that was corrupted.  This is where it became confusing.  I went back through the output from the CHECKDB and noticed that the table in question was not mentioned.  At this point, I went through and matched up all of the tables that were listed in the CHECKDB output against the actual tables in the database and found that there were three tables in the database that were not listed in the DBCC output.  I ran DBCC CHECKTABLE against the missing tables and, while two of them came back with no error, one was definitely corrupted.  The CHECKTABLE command actually “terminated abnormally” on the corrupted table.  I went back through the error logs and dump files that had been generated at that time.  Aside from the references to the CHECKTABLE that was run, the only object referenced was the table that had shown up in the CHECKDB output.
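For anyone who hasn’t had to run a single-table check, it looks something like this (the table name is made up):

DBCC CHECKTABLE ('dbo.MissingTable') WITH NO_INFOMSGS, ALL_ERRORMSGS;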

 

As a note, this is a 2008 SP1 database.  I know that, when running CHECKDB from SSMS, it will only display the first 1000 errors, but there were only 274 errors mentioned in the output, so I didn’t think that was the issue.  I asked Paul Randal (mentioned above) and his thought was that perhaps the metadata had also been corrupted and, because of that, CHECKDB may not have seen those tables as existing.

 

We’ve recovered from the incident, but it was a curious experience.  Up until now, I’ve gone off of the CHECKDB output, but this gives me reason to think that there might be tables that aren’t included. I am interested to know whether anyone else has ever run into a similar incident.  Hopefully it was just an (odd) learning experience.

I was tagged by TJay Belt (Twitter/Blog) in this latest series of blog stories.  I believe that it was started by Paul Randal (Twitter/Blog), carried on by Tom LaRock (Twitter/Blog) and then went viral.  Since ‘New Year’ seems to be synonymous with ‘everything going to heck in a handbasket’, it’s taken me a while to respond, but here goes.

 

You may ask yourself, well, how did I get here?

I’ll start by saying that if anyone had told me when I was in college that I’d be a DBA (or anything computer related, for that matter), I would have fallen down laughing.  My step-father was a biomechanical engineer and one of my main goals in life was not to be a geek like I thought he was.  I majored in Communications with a minor in English.  At the time of my graduation I had never touched a computer or even wanted to.  So, how did I get to be a DBA?  Sheer coincidence.

 

IBM

Back in my kid-free days, I worked for IBM.  I actually had to use a computer (odd for me), but my responsibilities were working with IBM’s resellers and the maintenance plans they resold.  It was all about soft skills and I spent a ton of time on the phone with resellers.  All of the information that we gathered was stored in a (wait for it…) DB2 database.  After a while, I took on the responsibility for putting together reports.  While there was definitely no administration going on, it was kind of fascinating to play with all of that data.  That all stopped, though, with my next life-changing event.

 

And they looked so sweet...


Kids

I left my job at IBM just before I gave birth to my first child and became a stay-at-home mom.  Around the time my second child was born, I started to feel the desire to go back to school.  The odd thing is that the field that I was drawn to was computer science.  I’m not sure if it was due to some strange chemical imbalance or the need to spend time with something that actually had logic behind it, but I began my computer science degree shortly after my youngest son turned one.

Going back to school with two little ones running around was definitely a challenge.  Getting to the end of an 800-line assembly language project and having my son smack his hand on the keyboard, deleting it, taught me the value of saving and saving often.  I’m sure that trying to learn recursion while dealing with a cranky toddler helped my ability to persevere.  Eventually, though, I completed the program and became a computer science instructor.  Teaching was and still is the field that provides me with the greatest amount of satisfaction.  I enjoyed it immensely and felt that I was good at it.  Unfortunately, though, by that time I was a single mother of two boys and job satisfaction doesn’t exactly pay the bills.

 

My first *real* job

After leaving my teaching position at the college, I was able to get a job teaching the medical staff at our local hospital the new order entry/documentation application.  I knew that this had to be temporary and that I needed to become a part of a more technical division.  During the process of keeping our training environment up to date, I ended up interfacing with our DBA group on a regular basis.  One of the DBAs left and that provided me the opportunity to join the team.  Our lead DBA was pure awesomeness and provided me with a good solid platform of knowledge.  That was back in 2003; I completed my MCDBA in 2005 and the rest is, well, the rest is now.  Still working, still learning.

 

It was a crazy, twisted road to get here and I’m looking forward to the road ahead.  I’m not tagging anyone with this, but I’m thankful to TJay for giving me the chance to share my story.

Several weeks ago on Twitter, Colin Stasiuk (Blog/Twitter) asked if anyone had a script to pull back job statuses from the SQL Server Agent.  I had been doing some work on a SQL Server DBA Console and had written some PowerShell scripts to give me various pieces of information, so I put together a PowerShell script that could be run as a SQL Agent job to periodically report the statuses of SQL Agent jobs.  Eventually, I think Colin went with a different solution, but I figured I would go ahead and post the PowerShell script that I came up with.  This solution has been tested against SQL Server 2000/2005/2008.

 

This first script is just a SQL script to create the SQL Server table that the PowerShell script writes the status information to and can be downloaded here, LastJobStatus_Table.sql.

CREATE TABLE [dbo].[LastJobStatus](
	[ServerName] [nvarchar](128) NULL,
	[Job_Name] [nvarchar](128) NULL,
	[Run_Date] [datetime] NULL,
	[Job_Duration] [time](7) NULL,
	[Run_Status] [varchar](50) NULL,
	[Sample_Date] [datetime] NULL
) ON [PRIMARY]

 

The next is the PowerShell script that does all of the work bringing back the SQL Agent job statuses.  It takes a parameter of Server, which is the name of the SQL Server that you want job statuses from.  Make sure that you change the name “MgtServer” to whatever the name is of the SQL Server where you intend to store the results from this script.   You’ll also need to change the root directory for where your scripts are loaded to match your environment. This script can be downloaded here, Get-LastJobStatusServer.ps1.

param([string]$Server=$(Throw "Parameter missing: -Server ServerName"))
[void][reflection.assembly]::LoadWithPartialName("Microsoft.SqlServer.Smo")

##Point to library files
$scriptRoot = "D:\DBAScripts"

# Change the script root directory to match your environment
#$scriptRoot = Split-Path (Resolve-Path $myInvocation.MyCommand.Path)
. $scriptRoot\LibrarySmo.ps1	#Part of Chad Miller's SQLPSX project on CodePlex
. $scriptRoot\DataTable.ps1	    #Included with the scripts for this blog, also from a CodePlex project (http://www.codeplex.com/PSObject)

Set-Alias -Name Test-SqlConn -Value D:\DBAScripts\Test-SqlConn.ps1

##Define variables

## open database connection
$conn = New-Object System.Data.SqlClient.SqlConnection("Data Source=$server;
Initial Catalog=master; Integrated Security=SSPI")
$conn.Open()
$cmd = $conn.CreateCommand()
$cmd.CommandText= "	CREATE TABLE #JobsRun (	ServerName nvarchar(128),
											Job_Name nvarchar(128),
											Run_Date datetime,
											Job_Duration time(7),
											Run_Status varchar(50),
											Sample_Date datetime
										  );
					insert into #JobsRun
					select	@@SERVERNAME AS ServerName
							,j.name Job_Name
							,(msdb.dbo.agent_datetime(jh.run_date,jh.run_time)) As Run_Date
							,substring(cast(run_duration + 1000000 as varchar(7)),2,2) + ':' +
									substring(cast(run_duration + 1000000 as varchar(7)),4,2) + ':' +
									substring(cast(run_duration + 1000000 as varchar(7)),6,2) Job_Duration
							,case when run_status = 0
										then 'Failed'
								when run_status = 1
										then 'Succeed'
								when run_status = 2
										then 'Retry'
								when run_status = 3
										then 'Cancel'
								when run_status = 4
										then 'In Progress'
							end as Run_Status
							,GETDATE() As Sample_Date
					FROM msdb.dbo.sysjobhistory jh
						join msdb.dbo.sysjobs j
							on jh.job_id = j.job_id
					where	step_id = 0
					and		enabled = 1
					order by cast(cast(run_date as char) + ' ' +
								substring(cast(run_time + 1000000 as varchar(7)),2,2) + ':' +
								substring(cast(run_time + 1000000 as varchar(7)),4,2) + ':' +
								substring(cast(run_time + 1000000 as varchar(7)),6,2)  as datetime) desc

					-- Change 'MgtServer' to the name of whatever SQL Server in your environment
					-- will house the LastJobStatus table which stores the results of this script
					delete from MgtServer.DBA_Console.dbo.LastJobStatus where ServerName = '$server'
					insert into MgtServer.DBA_Console.dbo.LastJobStatus (ServerName, Job_Name, Run_Date, Job_Duration, Run_Status, Sample_Date)
					select	jr.ServerName,
							jr.Job_Name,
							jr.Run_Date,
							jr.Job_Duration,
							jr.Run_Status,
							jr.Sample_Date
					from	#JobsRun jr
					where	Run_Date = (	select	max(jr1.Run_Date)
											from	#JobsRun jr1
											where	jr1.Job_Name = jr.Job_Name)
					drop table #JobsRun; "
$cmd.ExecuteNonQuery()
$conn.Close()
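Once the job has been run, pulling the latest statuses back from the management server is a simple query (this assumes the DBA_Console database used in the script above):

SELECT ServerName, Job_Name, Run_Date, Job_Duration, Run_Status
FROM DBA_Console.dbo.LastJobStatus
WHERE Run_Status <> 'Succeed'  -- 'Succeed' is the label the script assigns to run_status = 1
ORDER BY Run_Date DESC;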

 

There are references in the above script to a LibrarySMO.ps1 script that can be obtained from CodePlex (see the comments in the script for URL) and a DataTable.ps1 script (also from CodePlex, but included in the download file for this blog, for your convenience.)

# Taken from out-dataTable script from the PowerShell Scripts Project
# http://www.codeplex.com/PsObject/WorkItem/View.aspx?WorkItemId=7915

Function out-DataTable {

  $dt = new-object Data.datatable
  $First = $true  

  foreach ($item in $input){
    $DR = $DT.NewRow()
    $Item.PsObject.get_properties() | foreach {
      if ($first) {
        $Col =  new-object Data.DataColumn
        $Col.ColumnName = $_.Name.ToString()
        $DT.Columns.Add($Col)       }
      if ($_.value -eq $null) {
        $DR.Item($_.Name) = "[empty]"
      }
      elseif ($_.Value -is [array]) {
        # Join array values into a single semicolon-delimited string
        $DR.Item($_.Name) = [string]::Join(";", $_.Value)
      }
      else {
        $DR.Item($_.Name) = $_.value
      }
    }
    $DT.Rows.Add($DR)
    $First = $false
  } 

  return @(,($dt))

}