Not-for-profit organizations can be awesome and extremely tough at the same time.  About 15 years ago, I joined a not-for-profit community service organization.  Much like PASS, there were local groups, regions and an international level.  On all three levels, this organization was definitely able to make a difference.  I was fortunate to serve at both the club and regional levels and take part in an international event in Japan.  It was definitely a life-changing experience.

 

One of the problems that I noticed while I was serving on the regional board was that the more that we were able to accomplish, the more that was expected of us.  That was great and it was exciting to see the possibilities, but the problem that we faced was that our resources hadn’t changed.  Because of our tax status, we had to be very careful about how we used the funds raised during our fundraisers, and the vast majority of that went to other non-profit organizations that we supported.  In order to give the regions and clubs more funding, we had a couple of options: raise dues, raise conference attendance costs or have more fundraisers during our conferences.  Raising dues and raising conference attendance costs had the possible side effect of fewer members or fewer people attending conferences.  Additional fundraisers at the conferences took time away from what we were meeting about.  So we made the decision to make better use of what we had and revisit those ideas in the future.

 

I see a similar issue with PASS.  We’re incredibly fortunate to have an organization that provides as many resources as PASS does and with a free membership.  While there is a great staff at PASS, much of what gets done here is because of a community of dedicated volunteers.   I’ve been fortunate enough this year to be a part of the program committee and that has allowed me the opportunity to understand more of what goes on at PASS.

 

There have been many discussions about what PASS does well and less well, along with what it should be doing.  The latest discussions have been about the PASS Summit survey results.  There have been a number of blog posts about it - Brent Ozar (Blog/Twitter), Tom LaRock (Blog/Twitter), Steve Jones (Blog/Twitter) and Andy Warren (Blog/Twitter), to name a few.  I’m not picking on these particular bloggers or even this particular discussion topic.  They all make valid points.

 

The questions that I found myself asking (and answering) are:  Is this survey the best possible survey?  Probably not.  Did it provide PASS with valuable information?  Yes.  Are there people in the community that might be more skilled in writing/interpreting survey results?  Possibly.  Is paying for a company to write and interpret surveys for PASS the best use for our funds?  I don’t know.

 

If I were to look at the wish list for PASS, I’m sure that it would be huge.  Especially when it comes to items that require funding.  If additional things are needed that will require additional funding, that money needs to come from somewhere.  Do we increase the registration cost at the PASS Summit?  Do we institute dues?  Both of those choices have a direct effect on PASS membership and the members that are able to take part in the PASS Summit.  Short of that, we have to look at either companies or individuals that are willing to donate their time and resources.  Anyone that has volunteered for not-for-profit organizations knows that getting companies and/or people to donate isn’t always the easiest thing.

 

I believe that members should continue to provide (constructive) criticism of PASS when it’s needed.  I don’t believe that there should be a step up or shut up attitude.   But if you can’t volunteer, then understand that PASS can’t continue to grow without also growing its resources.  If you have ideas, provide them.  If you have time, volunteer.  If you find that leprechaun at the end of the rainbow, take him out, steal the pot of gold and donate some of it to PASS.

 

I think most of us will agree that PASS is a pretty amazing organization.  It’s up to us to make it even better.

 

 

The Generate Scripts task in SSMS is insanely handy.  A few quick clicks and you can have tables, stored procedures, users, etc. scripted out.  I’ve used it quite a bit, but I ran into an unusual situation yesterday.

 

I needed to create a database schema and thought I’d use the handy dandy Generate Scripts task.  Popped through the wizard, clicked finish and it errored!  Here was the error message:

 

I was thoroughly confused – I was running SSMS 2008 against a SQL Server 2008 server.  I wasn’t sure where SQL Server 2005 even came into play.

 scripts_error

 

I went through the process again, this time paying closer attention and noticed this:

script_option

Apparently the Script Wizard defaults to SQL Server 2005.  I changed it to SQL Server 2008 and everything ran as expected.  While I had run this task against other SQL Server 2008 instances, apparently none of them made use of the new data types in 2008 and, as a result, didn’t generate errors.  Now why it would default to SQL Server 2005 when run from SSMS 2008 is an entirely different question….

 


 | Posted by tledwards | Categories: DBAs, SQLServerPedia | Tagged: , |

Okay, maybe I’m being a little sarcastic.  I don’t troubleshoot dynamic SQL very often, so I don’t always see potential issues right away.  For those dear readers who work with it regularly, you should stop reading now – this is all pretty basic – but it took a few minutes out of my day.

 

This is the only dyna- that I like

My troubleshooting methods consist of displaying the command created by the dynamic SQL and seeing if it runs correctly or if I’m missing a quotation mark or something along the way.  There is probably a better way to troubleshoot, but again, I play with it so rarely that I’m stuck in a rut.

 

Evaluating Dynamic SQL commands

Late last week, a developer sent the following block of dynamic SQL code because he was having issues getting it to work:

EXEC
('
USE [master];
BEGIN
ALTER DATABASE [random_dbname] SET ONLINE;
WAITFOR DELAY ''00:01'';
END
USE [random_dbname];
'
)

 

I followed my normal troubleshooting methods and the extracted command ran fine on its own.  Trying to execute it as above, though, I received the following error message:

 

Msg 942, Level 14, State 4, Line 7
 Database 'random_dbname' cannot be opened because it is offline.

 

At first glance, I was confused, because it was obvious that I had brought the database online.  I soon realized, though, that the entire batch inside the parentheses was compiled before any of it was executed, so the USE statement failed because the database was still offline at compile time.  Apparently SQL Server has a shorter memory than I do.

 

Breaking it into two separate statements like below accomplished what needed to happen:

EXEC
('
-- Bring the database online
USE [master];
BEGIN
ALTER DATABASE [random_db] SET ONLINE;
WAITFOR DELAY ''00:01'';
END
')
GO
EXEC
('
USE [random_db];
/*blah blah blah*/
')

Thinking that this ‘emergency’ had been handled, I went back to my other tasks. 

 

Database Context and Dynamic SQL

As these things happen, though, I received another call because after he ran all of this, the database context remained master.  Fortunately, this was easy to explain.  The database context switch exists only during the execution of the EXEC statement and does not persist after its completion.
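A quick way to see this for yourself is a sketch like the following (tempdb stands in for any database here):

```sql
-- The USE inside the dynamic batch only affects that batch
EXEC ('USE [tempdb]; SELECT DB_NAME() AS db_inside_exec;');

-- Back in the calling session, the original database context is unchanged
SELECT DB_NAME() AS db_after_exec;
```

The second SELECT returns the database you started in, not tempdb.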

 

None of this is rocket science or even deep SQL knowledge, but maybe it’ll save a minute or two for some other DBA out there.

 | Posted by tledwards | Categories: DBAs, SQLServerPedia, T-SQL | Tagged: , |

The Professional Association for SQL Server is restarting a Virtual Chapter focusing on SQL Server performance.  The goal of the PASS Virtual Chapters is to provide free and timely training focused on a particular SQL Server area or set of functionality (in this case, SQL Server performance).  I am honored to have been chosen to lead this particular Virtual Chapter, but, as you can imagine, this can’t happen without volunteers from the community.  We are looking for individuals to serve on the Performance Virtual Chapter steering committee who are:

 

  • Passionate about SQL Server (and who isn’t, right?)
  • Interested in helping and serving the SQL Server community
  • Either have a blog or a Twitter presence
  • Willing to put in a couple of hours of work a week on such things as arranging speakers, putting together presentations, etc.  Generally, working to help get good education out to our SQL Server community on performance related topics.

If this sounds like you and you are interested in serving on the Performance Virtual Chapter steering committee, we want you!  Please contact Tim Edwards at sqlservertimes2@gmail.com.

 | Posted by tledwards | Categories: PASS, SQLServerPedia | Tagged: |

Last year my better half, Tim, suggested that we start a blog. It made sense for any number of reasons, but it scared the heck out of me. I couldn’t imagine that there was anything that I could ever blog about that hadn’t already been posted and probably by someone with much more experience than I have. I have a tendency (as I did today) to go out and search for other blog posts that cover the material that I’m about to write about to ensure that I’m at least adding something new with my post.

 

In school, if you’re assigned a paper on the pastoral imagery used in “The Adventures of Huckleberry Finn”, the instructor knows that there have been several works on that particular subject and assumes that you will be using (and referencing) information from those works. Blogging, for most people though, is not an assignment – it’s something that you make the choice to do. The people that read your blog assume that the ideas, tips and facts that you blog about are yours, unless you attribute them.

 

Over the past few months, there have been numerous tweets and blog posts about bloggers that have been plagiarizing other people’s works. In some cases the posts are lifted word-for-word and in other cases they have selectively reworded the blog posts, but they were still identifiable. I have no idea whether it was intentional or whether they were uninformed about how to use information from other posts. K. Brian Kelley [Blog/Twitter] wrote a post ‘Avoiding Plagiarism’ a couple of weeks ago. I thought I’d take this opportunity to add a little more information. As a note, I’m not an expert in plagiarism, so if any of you reading this post find errors, please comment and I’ll update this post.

On dictionary.com, the Random House Dictionary definition of plagiarism is:
“1. the unauthorized use or close imitation of the language and thoughts of another author and the representation of them as one’s own original work.
2. something used and represented in this manner.”

 

Reading this definition clarifies the reasons for my fear of blogging. I would never lift language from another blog post, but there have been blog posts that have inspired me (like K. Brian Kelley’s) to write a post. Here are some ways that I handle referencing other works.

 

I think you should read this

Example: Kevin Kline wrote an excellent post about the pains of not being included in a meme. You should read it here: http://kevinekline.com/2010/01/14/goals-and-theme-word-for-2010/
In this case, I have nothing to add, but I want to give my audience the opportunity to read great posts that I’ve come across.

 

You can say it better than I can

Example: PowerShell is fabulous. It’s so awesome that it’s caused some otherwise contentious DBAs to wander astray. Colin Stasiuk [Blog/Twitter] admitted as much in a recent blog post: “…it’s no secret that I’ve been having an affair on TSQL. I’ve been seeing PowerShell behind her back and I gotta tell ya even after the initial excitement of a new language I’m still loving it.”
I know that I couldn’t have said it better than Colin, so in addition to linking to his post, I quoted his remark. Quotes should be used sparingly – if you find yourself quoting more than a sentence or two, you should probably use the example above.

 

Note: Blogs, whitepapers or other articles that are copyrighted require permission prior to their use. In addition, some online works have posted requirements on how they can be used. Brent Ozar (Blog/Twitter) has a good example of that here.

 

This is what I researched

Example: While sp_change_users_login has the option to auto_fix logins, that action assumes that the username and login name match. If they don’t, it will fail. Using the Update_One option is safer and preferable. For SQL Server 2005/2008, the ALTER USER statement is the preferred method for mapping users to logins. Greg Low’s (Blog/Twitter) article ‘Much ado about logins and SIDs’ provides a good explanation of these methods.
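For illustration, the two approaches look like this (a sketch; the user and login names are placeholders):

```sql
-- SQL Server 2005+: remap an orphaned database user to a server login
ALTER USER [app_user] WITH LOGIN = [app_login];

-- The older approach, still required on SQL Server 2000
EXEC sp_change_users_login 'Update_One', 'app_user', 'app_login';
```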

 

This is probably where unintentional plagiarism occurs most often. If, during your research, you read blog posts, articles, whitepapers, etc. and find useful information, your best bet is to attribute them. If you recall the definition of plagiarism above, it applies to both language and ideas, so if you learned something that you’re passing on in a blog post or if you’re using that information to validate your ideas, those sources need to be cited. Again, keep in mind any copyright laws that might apply.

 

What doesn’t need to be cited

Common knowledge/generally accepted facts

Items that are common knowledge or generally accepted facts do not need to be cited.  Examples of common knowledge are:

  • A table can only have one clustered index
  • SQL Server is an RDBMS
  • Most SQL Server based questions can be answered with “It Depends”

 There is a decent article on common knowledge here.

 

 

Results of personal research

If you’re blogging about an incident that occurred or the results of a test that you ran, they don’t require a citation. That is, unless you did research to solve the incident or used other information to validate your test results.

 

Fair Use

The term ‘Fair Use’ has been bandied about in the recent plagiarism incident. The idea of fair use has no exact definition; it is determined by a set of guidelines. There is a good definition at Plagiarism.org and a good article titled “The Basics of Fair Use” by Jonathan Bailey. According to Plagiarism.org, the guidelines look at:

  1. The nature of your use
  2. The amount used
  3. The effect of your use on the original

The boundaries of fair use are pretty murky and, personally, I wouldn’t want to try to stand behind that argument.  The incident mentioned above definitely fell outside of those guidelines, in my opinion.

 

Public Domain

Some works fall out of their copyright term and become part of the public domain.  The Wikipedia article regarding public domain can be found here.  While copyright protection no longer applies, such works still require citation.  This point is moot for SQL Server blogs since, at this time, there aren’t any works old enough to have fallen out of their copyright term.

 

Conclusion

There is a huge amount of helpful information in blogs. Blogging also provides an opportunity for us to share information and experiences. I think that it’s understood that we learn from other people – just ensure that you credit those people for their hard work.

 | Posted by tledwards | Categories: DBAs, Discussion, SQLServerPedia | Tagged: , , |

As DBAs, we are increasingly being asked to manage more and more technology.  Some of that is the result of internal pressures (i.e. taking on additional roles) and some of that is the result of Microsoft bundling an ever increasing array of different technologies within SQL Server.  Dealing with these various technologies has become a weekly issue for me, but it really came to light today when I had a SQL Server Reporting Services 2008 server that was consuming 100% of the CPU.  Doing some poking around, I realized not only did I not really know anything about this beast called “SQL Server Reporting Services”, the tools to manage it are extremely lacking (now, that is my opinion coming from a position of complete ignorance about this technology).  I connected to the SSRS 2008 service with SSMS and, from there, I could only view three things:  SSRS jobs, security, and shared schedules.  I determined that none of the shared schedules were responsible for the utilization since nothing was scheduled to run anywhere near the time that the problem started, so that was a dead end.

 

Next, I connected to the Report Service by hitting http://[servername]/reports.  From here, I could look at all of the various reports that had been deployed to the instance, general site settings, security, both instance-wide and at a report level, and I could look at my own subscriptions.  The one thing that seemed to elude me was visibility into what, if anything, users were running on the SSRS server.

 

Frustrated, I connected through SSMS to the database instance that hosts the ReportServer database.  I figured there had to be something in the database I could query to give me some visibility into what my SSRS instance does all day.  Thinking like a DBA, the first thing I did was look under “System Views” in the ReportServer database.  I saw two views, ExecutionLog and ExecutionLog2, so I decided to do a simple SELECT TOP 100 * from each to see what they would give me.  This is where I stumbled upon my gold nugget for the day.  Right there in the ExecutionLog2 system view was all of the information that I had been looking for.  Running the following query, you can get a wealth of valuable information on what reports users are running, when they are running them, what parameters they used, and how long the report took to generate (broken down into data retrieval time, processing time, and rendering time) – all key information for trending the load that your server is under and <gasp> justifying new hardware, if needed.

 

SELECT InstanceName
       , ReportPath
       , UserName
       , RequestType
       , Format
       , Parameters
       , ReportAction
       , TimeStart
       , TimeEnd
       , TimeDataRetrieval
       , TimeProcessing
       , TimeRendering
       , Source
       , Status
       , ByteCount
       , [RowCount]
       , AdditionalInfo
FROM ExecutionLog2
WHERE TimeEnd >= -- Some start date that you supply
AND TimeEnd <= -- Some end date that you supply

 

To many of you who regularly use SSRS this may be very remedial, but I figured I would throw this out there for those DBAs who, like me, have to learn this stuff on the fly in a crisis.

 

By the way, as a side note, for those who are curious about why the ExecutionLog2 system view was used instead of the ExecutionLog system view, it appears that the ExecutionLog system view exists for backward compatibility for SQL Server 2005 Reporting Services upgrades.  The ExecutionLog2 system view provides much more information than the ExecutionLog system view.
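As an illustration of the kind of trending this view supports, here is a hedged sketch (the columns come from ExecutionLog2 as described above; the grouping is just one reasonable choice):

```sql
-- Summarize report executions per day for load trending
SELECT CONVERT(DATE, TimeStart) AS run_date
       , ReportPath
       , COUNT(*) AS executions
       , AVG(TimeDataRetrieval + TimeProcessing + TimeRendering) AS avg_total_ms
FROM ExecutionLog2
GROUP BY CONVERT(DATE, TimeStart), ReportPath
ORDER BY run_date, executions DESC;
```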

/*Note – I have yet to see if this breaks anything – this post is for observational use only.  If it does break replication, I’ll add some notes to the bottom of this post */

 

Late last week, we had the fabulous opportunity to see just what happens when the dual power supplies for your SAN are plugged into the same PDU and that PDU gets powered off.  As you can imagine, this caused all sorts of neat things to occur.  Fortunately, the situation was remedied fairly quickly, but not before breaking replication for one of my databases.

 

The error that I received was: “The concurrent snapshot for publication ‘xx’ is not available because it has not been fully generated…”.  There were probably different routes to go to fix this issue, but I decided to reinitialize the subscription from a snapshot and went ahead and created the snapshot.  This particular database is a little under 500GB, so I knew that this would take awhile.

 

The next morning when I came to work, I could see that the snapshot had completed and the subscription had been reinitialized.   I knew that it would take some time for all of the commands that had queued up in the distribution database to be written to the subscription and so I went about my day.

 

Later, I noticed that the number of undistributed commands didn’t seem to be going down at all and was in fact growing.  This is a fairly heavily utilized database with writes going on very frequently, but it still seemed odd.   I did some querying to try to figure out just how behind we were, but, to my surprise, the subscription was completely up to date.  This is where it started to get strange.

 

I queried the MSdistribution_status view to get an idea of what was still outstanding and saw the following:

Article_id  Agent_id  UndelivCmdsInDistDB  DelivCmdsInDistDB
1            3        0                    2130
2            3        0                    3295
3            3        0                    8275
4            3        0                    2555
5            3        0                    8310
6            3        0                    2521
1           -2        2130                 0
2           -2        3295                 0
3           -2        8275                 0
4           -2        2555                 0
5           -2        8310                 0
6           -2        2521                 0
1           -1        2130                 0
2           -1        3295                 0
3           -1        8275                 0
4           -1        2555                 0
5           -1        8310                 0
6           -1        2521                 0

 

You’ll notice that for agents -1 and -2, the number of undelivered commands for each article is the same as the number of delivered commands for agent 3.
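For reference, the output above came from a query along these lines (a sketch, run in the distribution database; the database name may differ on your distributor):

```sql
USE distribution;

SELECT article_id
       , agent_id
       , UndelivCmdsInDistDB
       , DelivCmdsInDistDB
FROM dbo.MSdistribution_status
ORDER BY agent_id DESC, article_id;
```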

 

When I queried the MSDistribution_Agents table, I saw three distribution agents.  One was named identically to the distribution agent job.  The other two had subscriber_ids with negative numbers and the subscriber_db was ‘virtual’. 
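The agents themselves can be inspected with something like this (a sketch; the column list is trimmed to the ones discussed here):

```sql
USE distribution;

SELECT id
       , name
       , subscriber_id
       , subscriber_db
FROM dbo.MSdistribution_agents;
```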

 

I did some digging around on some unnamed search engine to try to figure out what virtual subscriptions were.  To be honest, there wasn’t a lot of information to be found.  I did find a question on Eggheadcafe.com from 2007 where it seemed like the person was having a similar issue.  Reading through, it looked like it had something to do with the settings for allowing anonymous subscriptions and immediately synchronizing the snapshot.  I looked at my publication and both of those were set to true.

 

Of course, I posted a question on Twitter at the same time and Hilary Cotter [Blog/Twitter] stated that the virtual subscriptions “are to keep your snapshot up to date, ignore them”.  That seemed to be consistent with what I was reading, but it still seemed odd that they appeared to be holding onto undistributed commands.  This, in turn, was causing my distribution database to grow.

 

Since, at this point, I figured that I was going to have to tear the whole thing down and rebuild it, I thought that I’d try to modify the publication first.  I knew that I didn’t want to allow anonymous subscriptions and I didn’t need to immediately synch from a snapshot, so I ran the following commands:

 

exec sp_changepublication @publication = 'xxxxx', @property = N'allow_anonymous', @value = 'false'

exec sp_changepublication @publication = 'xxxxx', @property = N'immediate_sync', @value = 'false'

 

I reran my query of the MSdistribution_status and voila, the undistributed commands were gone.  When I queried MSDistribution_agents, there was still a virtual subscriber, but it was no longer filling up my distribution database with undelivered commands.

 

I still don’t know what caused this – it seems like something occurred when I reinitialized the snapshot.  So far, replication is continuing to run without errors.  I’d be interested to hear if anyone has run into a similar issue.

Unfortunately, it has been awhile since I have posted a blog up here, but spending several hours last night on Twitter with a number of esteemed members of the SQL Server community, trying to educate a blogger, John Dunleavy [Twitter/Blog], about the proper way to credit authors when you use their work, inspired me to get this post up.  Unfortunately, the issue of bloggers or web site operators using other people’s work without properly crediting them is becoming an increasingly frequent occurrence.  Earlier in the day yesterday, Aaron Bertrand [Twitter/Blog] had a very similar issue with a SQL Server MVP (you can read Aaron’s blog post and the associated comments here).

So, the point of my post today is not really to rehash the issue of plagiarism (intended or unintended), but rather to discuss why most of us give back to the community in the form of informative blog posts, volunteerism, and answering questions on the forums and to extend that a little bit, if I may, by offering the services and knowledge of the SQL Server community to educate new bloggers about how to get started, what is acceptable, and what is not.

 

First off, why do we do what we do?  Speaking for myself and, I think, many members of the SQL Server community, we do this because a) we were all in a position where we were just starting out and needed help; b) someone helped us, answered our questions, and we feel honored to be able to do the same; c) we take great pride in having one of the most open, collaborative, and philanthropically motivated communities in the world of technology.  These services are provided free of charge to anyone who visits any of the hundreds of great SQL Server blogs out there.  The number of books that you would have to purchase or expensive courses that you would have to attend to get anywhere near the content that is freely available on blogs in the SQL Server community would set you back many, many thousands of dollars, and it still wouldn’t provide you with all of the benefit of the experience that this community writes from.

Through our discussions last night, some barbs were thrown our way such as “how can you help if your[sic] linin[sic] your pockets” and “You guys are being selfish.”  I would like to address those, as I believe there must be a huge misconception about the motivation behind what we do.  First off, there is no “pocket lining” going on here.  Speaking for our site, http://sqlservertimes2.com, we pay for the domain registration and hosting out of our own pockets which, in my eyes, is an investment back into the community.  We receive no revenue from our site as there is no advertising or services sold from the site.  Now, that is just our choice, and I want to be clear that there is absolutely nothing wrong with hosting ads or offering services from your site, if you so choose, to help pay the bills, as long as the content of your site is original or you have at least obtained the permission of the original authors or copyright owners to host non-original material.  Our motivation is strictly a collaborative one.

Lori and I post issues that we have come across in our jobs and the solutions that we have come up with to solve them.  Throughout our careers, we have relied heavily on others’ blog posts for our professional development and feel honored to now be able to participate in that and provide something back.  Our compensation is solely the feedback we receive from readers that lets us know that we provided something that saved someone some time somewhere down the road, nothing more, nothing less.

 

So, where does this leave us?  As I have said many times, I think the SQL Server community is one of the greatest technical communities around.  One of the main reasons for this is the lack of egos and willingness to share.  The majority of us are not insecure and welcome new bloggers with open arms because all of us are constantly learning.  If a day goes by where I haven’t learned something new, it was not a very good day in my book and the more people out there sharing knowledge, the more likely I am to learn something new.  This is a very forgiving community and I believe that if you are a blogger who has or is plagiarizing the work of others because, for some reason, you didn’t realize that it was wrong or you don’t know how to get started blogging, reach out to the community and ask for help.  There are many of us who will gladly help you start sharing your own knowledge and gifts with the community, all you have to do is be willing to understand that plagiarism is stealing and wrong and be open to feedback from the group.  I know most technical professionals are proud and do not like to admit being wrong, but  many times being wrong is the first path to learning.

It would be so nice if something made sense for a change.

A couple of weeks ago, we had a database that had become corrupted due to a disk failure.  After running a DBCC CHECKDB, it appeared that there was one table with a corrupted clustered index.  Paul Randal (Twitter/Blog) verified that this was the case and that it was, unfortunately, non-recoverable without data loss.  Fortunately, we had backups and this database was a replication subscriber, so between the backup and the publication, we were able to rebuild the database without losing any data.  We then had to rebuild a kind of goofy replication setup, but that’s another story.

 

As I was doing some validation of the data that was contained in the backups and what needed to be pulled down from the publication, I realized that there was at least one other table that was corrupted.  This is where it became confusing.  I went back through the output from the CHECKDB and noticed that the table in question was not mentioned.  At this point, I matched up all of the tables that were listed in the CHECKDB output against the actual tables in the database and found that there were three tables in the database that were not listed in the DBCC output.  I ran DBCC CHECKTABLE against the missing tables and, while two of them came back with no errors, one was definitely corrupted.  The CHECKTABLE command actually “terminated abnormally” on the corrupted table.  I went back through the error logs and dump files that had been generated at that time.  Aside from the references to the CHECKTABLE that was run, the only object referenced was the table that had shown up in the CHECKDB output.
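For anyone wanting to repeat that kind of targeted check, it looks roughly like this (the table name is a placeholder):

```sql
-- Check a single table that was missing from the CHECKDB output
DBCC CHECKTABLE ('dbo.MissingTable') WITH ALL_ERRORMSGS, NO_INFOMSGS;
```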

 

As a note, this is a 2008 SP1 database.  I know that, when running CHECKDB from SSMS, it will only display the first 1000 errors, but there were only 274 errors mentioned in the output, so I didn’t think that was the issue.  I asked Paul Randal (mentioned above) and his thought was that perhaps the metadata had also been corrupted and because of that CHECKDB may not have seen those tables as existing. 

 

We’ve recovered from the incident, but it was a curious experience.  Up until now, I’ve gone off of the CHECKDB output, but this gives me reason to think that there might be tables that aren’t included. I am interested to know whether anyone else has ever run into a similar incident.  Hopefully it was just an (odd) learning experience.

SQL Search – a review

2 February 2010

SQL Search, a new (free!) SSMS plugin from Red Gate, was released yesterday, February 1st.  This tool allows you to search for text within SQL Server objects.  I’ve had occasions where I needed to search legacy stored procedures for deprecated code (backup log … with no_log – egad!).  There have also been times when database servers were renamed and any references to the previous server name had to be changed.  Prior to the beta version of this tool, I’d used the syscomments system table as a way to find text within stored procedures, but the results definitely weren’t pretty.
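For the curious, the old syscomments approach looked something like this (a sketch; syscomments is deprecated in favor of sys.sql_modules, and it splits long definitions into 4000-character chunks, which is part of why the results weren’t pretty):

```sql
-- Search object definitions for a string the old way
SELECT OBJECT_NAME(id) AS object_name, [text]
FROM syscomments
WHERE [text] LIKE '%no_log%';
```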

 

SQL Search allows you to enter a piece of text, specify either all objects or specific objects and whether you want to search all databases or a single database.

 

In the example below, I searched for the term ‘Reject’ occurring within all objects on the Reporting database.  The list of objects, their types and how the match was made is displayed in the upper half of the window.  When the object that is highlighted in the top of the screen is a stored procedure or function, the text is displayed in the bottom half of the screen, with the text highlighted.

sqlsearch

 

One of the most helpful features is the link in the middle of the screen to ‘Select object in Object Explorer’.  Just as it sounds, a click will take you directly to that object.

 

In working with it, I haven’t noticed many downsides.  It takes a bit of time the first time a database is searched, as SQL Search indexes it, but after that, searches seem to go pretty quickly.  The first time that you search a database on a server, SQL Search needs to be opened within the context of that server, but after that point, it is added to the dropdown box of selectable servers.

 

SQL Search seems to be a helpful tool and, as I mentioned before, it’s free.  To download, just click here to go to the Red Gate site.

 | Posted by tledwards | Categories: Administration, DBAs | Tagged: |