I had been thinking of writing a blog post on the SQL Server community for the last couple of weeks.  Seeing Brent Ozar’s blog post “What Community Means to Me” helped me decide to go forward with it.

In my first draft of this post, I went into great detail about the beginning of my career, my quest for meaningful, reliable sources of information and my wish for a view of a larger community.  Unfortunately, I’m trying to get ready for a birthday party, Halloween, soccer games and, oh yes, the PASS Summit.  So that’s another story for another time.

 

When I first signed up to attend the PASS Summit, my hope was that my darling husband would be able to attend with me.  Regardless of what Tim might say, I’m not outgoing enough to walk up and talk with people I’ve never met.  Yet I know that those conversations will probably be the parts that I remember most and best from the Summit.  Unfortunately, it wasn’t in the cards for Tim and I’ll be attending solo.   I pictured three days of wandering around, trying to make conversations and going back to the hotel room to eat room service.

 

Enter the happy-happy-joy-joy land that is Twitter.  Tim and I both started using Twitter in April of this year.  It was interesting getting started – kind of like walking into a conference – you all have the same interests, but you don't know anyone.  Slowly but surely we got involved.  We had some lively IM conversations at the spring SSWUG vConference in the Quest chat room, tried to write a rap song, got involved with PASS Virtual Chapters, started a blog, shared meals with a couple of great DBAs and got the kind of SQL Server advice and help that you can't pay for.

 

Twitter is obviously not the only method for getting involved with the SQL Server community, but I've found it extremely helpful for becoming familiar with other people who do what we do.  By reading tweets and blog posts throughout the day, I've picked up tips and tricks and been exposed to features and functionality that I might not otherwise have been aware of.

 

Now, in addition to attending some excellent sessions, I'm also looking forward to meeting a number of people that I've 'met' through Twitter.  I've felt more a part of the SQL Server community in the last six months than in the previous 5 1/2 years of working as a DBA.  It's a great community, and I talk about the benefits of being involved any time I can.  I still wish Tim could have come along, but I also know that I won't be feeling as alone.  Maybe when I'm there, I'll meet someone who hasn't yet had the chance to get involved in the community and be able to pass this along to them.


The company I work for is currently in the midst of solidifying its SQL Server high availability and disaster recovery scheme.  While doing this, I did a comparison of all the available options within SQL Server for HA/DR as a recommendation to my management.  We eventually went with a third-party tool, and this blog post isn't an endorsement of that tool, but rather is intended to give insight into how one company (the one I work for) approached evaluating its disaster recovery options.  Since I had to put this together, I figured that some of it might be helpful to someone else out there, so I thought I would write it up as a blog post.

 

What Do The Terms High Availability And Disaster Recovery Mean?

Before we get too deep into this, I figured I would define the terms High Availability and Disaster Recovery since they are quite often used together and sometimes, mistakenly, interchangeably.

 

High Availability – Protects against hardware/software issues that may cause downtime.  An example could be a bad processor or memory issue.

Disaster Recovery – The ability to recover from a catastrophic event, such as a natural disaster or, on a more local level, a data center disaster (fire, flood, cooling outage, etc) and be able to continue business operations with minimal loss and minimal downtime.

 

These two concepts really go hand in hand in ensuring business continuity and both should be built into a business continuity solution.

 

High Availability/Disaster Recovery Options In SQL Server

SQL Server has a number of native features built in to provide some basic measure of high availability/disaster recovery.  These include:

 

  • Database backups – this is probably the most basic form of disaster recovery for SQL Server and one that should be practiced in every situation, regardless of what other HA/DR solutions are in place.
  • Clustering – this provides a way of binding two or more like Windows servers together in what is known as a cluster.  Each of the servers in the cluster is considered a node and, typically, one node is “active” (processing transactions) and the other nodes are “passive”.  There is a private network that runs between the nodes so that if the active node fails to deliver a “heartbeat” that the other node(s) can detect, an automatic failover is invoked and one of the passive nodes is promoted to active.
    • Pros of Clustering:  Clustering provides redundancy in the case of server hardware failure and provides a fairly quick (within 5 minutes), automatic solution to move processing to another server.
    • Cons of Clustering
      • Does not protect against disk issues since all nodes share the database on the same disk.
      • Only protects against issues with that specific server, not data center-wide issues, since all nodes are located in the same data center.
      • Only addresses availability, not disaster recovery
  • Database Mirroring – new in SQL Server 2005, database mirroring offers a way to mirror a database to another server (and disk).  All transactions are sent to the mirror server as they are committed on the production server.  Depending on how it is implemented, failover can be automated, similar to clustering.  (A quick way to check mirroring status appears after this list.)
    • Pros of Database Mirroring
      • Provides some form of both HA and DR, since the mirror can be located in another data center, protecting you from both hardware failure and disaster.
      • Fast – the mirror is updated virtually instantly.
    • Cons of Database Mirroring
      • Only done at the database level, not the instance level, and only user databases can be mirrored, not system databases.  This means that some other form of synchronizing logins and other system database objects has to be devised.
      • To get all of the features of database mirroring, Enterprise Edition has to be used.
      • Any SQL Agent jobs must be manually enabled on the mirror if a failover takes place.
  • Log Shipping – this is one of the oldest forms of DR available in SQL Server.  It involves setting up a warm standby server with a copy of the user database to be protected; backups of the transaction log from the production database are periodically shipped to the standby server and applied.
    • Pros of Log Shipping:
      • Tried and true technology that has been around for a long time.
      • At the database level, can provide both HA and DR protection because warm standby can be located in another data center.
    • Cons of Log Shipping:
      • Amount of potential data loss is higher than with the other options because logs are usually shipped no more frequently than every 5 minutes, and more typically every 30 minutes to an hour.
      • Failover is fairly manual and time-intensive; it takes longer than the other options to bring the warm standby online.
      • Like database mirroring, this only protects a database, not the entire instance.
      • For SQL Server 2000, this feature is only available in Enterprise Edition.  Available in Standard Edition from SQL Server 2005 forward.
      • Does not transfer non-logged transactions or schema changes (security, addition of database objects, etc.).
  • Replication – while not necessarily intended as an HA/DR solution, replication can be used in both scenarios.
    • Pros of Replication:
      • Real-time data updates, with minimal (if any) data loss.
      • Can be used for both HA/DR as publisher and subscriber can be in different data centers.
    • Cons of Replication:
      • Complicated to setup and maintain.
      • No provided failover mechanism.  This has to be created as part of the solution.
      • Again, this is a database-specific solution, not an instance-specific one.
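As promised above, here is a quick way to check the state of database mirroring on an instance, using the sys.database_mirroring catalog view.  This is just a minimal sketch (the column list is simply the set I find most useful), not anything specific to a particular environment:

select db_name(database_id) as database_name,
       mirroring_role_desc,
       mirroring_state_desc,
       mirroring_safety_level_desc,
       mirroring_partner_instance
from sys.database_mirroring
where mirroring_guid is not null   -- NULL means the database is not mirrored

Run on either partner, it shows which role each mirrored database currently holds and whether the mirroring session is synchronized.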

Given that these native solutions really only protect individual databases and not entire instances, we chose to look at third-party options.  The product that we settled on was Double-Take.  While certainly not an easy solution to implement, Double-Take was attractive to us because it allowed us to set up a standby server in our hot site for each of our mission-critical SQL Servers and then continuously replicate the entire instance to the standby server.  It also provides for either automated (if the server stops responding) or manual failover (we have opted for manual) through a control applet that automatically swaps DNS entries between the production and the standby server when a failover is initiated.

 

Double-Take:  How It Works

Both the production and the standby server have to have the exact same SQL Server configuration (Edition, build, directory structure/location, instances, etc) installed.  The Double-Take software is then installed on both the production and standby server and then, through the Double-Take software, the production server is configured as the source and the standby server is configured as the target.

 

During the configuration process, you can configure Double-Take to compress the data before it replicates it over the WAN to the standby server.  This can save a ton of bandwidth and makes sure that the transactions are queued on the target server as quickly as possible, ensuring minimal data loss in the event of a failure.

 

Additionally, Double-Take will also generate scripts for the failover, failback, and restore of the databases back to the production server when it is back in commission.  These scripts and/or the replication can be customized by overriding the automatically generated rules that Double-Take creates.

More tidbits

23 October 2009

Work has been hectic for both of us, so please pardon the absence of full-bodied blog posts.  Until we get our feet under us, here are a couple of (hopefully) helpful tidbits.

 


sys.dm_exec_requests

I’m not the most patient person in the world and have a tendency to think that a process has stopped when it’s actually processing away.  A few months ago, I needed to run a DBCC on a database with corruption errors.  This wasn’t a large database and the DBCC had already been running for about 2 1/2 hours.  I put something out on Twitter about whether I should continue the DBCC (which I knew that I should) or worry about whether something had gone horribly wrong.  Paul Randal quickly tweeted back to not stop the DBCC and to check to see how far it had gone.  That’s when he gave me the information that has helped me through a number of waiting games:

 

sys.dm_exec_requests is a DMV that returns a row for every request that is currently executing.  Since I was certain that this was the only DBCC CHECKDB running on this server, the following query eased my mind:

 

select percent_complete
from sys.dm_exec_requests
where command = 'dbcc checkdb'

 

Keeping an eye on that let me know that the process was progressing and kept me from making a stupid mistake.  I've since used it on long-running processes like backups and restores.  It also provides a way to tell users that, yes, this is running and it's x% done.
One caveat – the percent_complete column is only populated for specific operations like backups, restores, rollbacks, DBCCs, etc.  It is not populated for ordinary queries.
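If you want a little more context than a bare percentage, the same DMV also exposes an estimated completion time (in milliseconds).  Here is a slightly expanded version of the query above – just a sketch, so adjust the WHERE clause to match whatever command you are watching:

select session_id,
       command,
       percent_complete,
       estimated_completion_time / 60000.0 as estimated_minutes_remaining
from sys.dm_exec_requests
where percent_complete > 0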

 

Altering a table in SSMS 2008 (without T-SQL)

While I know that we'd all rather use a sqlcmd query through a DAC connection to do anything SQL Server related, there may be times when it's more expeditious to make the change within SSMS using the Design option.

 

If you're in SQL Server 2008 and the table is populated, making changes within the Design option may produce an error telling you that saving changes is not permitted because the change would require the table to be dropped and re-created.

While we were able to do this in SQL Server 2005, it seems to be unavailable by default in 2008.  Fortunately, you can still do this in SSMS; you just need to set the option that makes it available.

 

In SSMS, go to Tools->Options->expand Designers.  Then under Table and Database Designers, uncheck the box next to ‘Prevent saving changes that require table re-creation’.  Voila!  You can now make all the changes that you want. 

 

As a last minute bit of news, Tim just found out that he’s now a Friend of RedGate.  Congratulations, Tim!

This is a quickie blog post, but I thought I’d post it before I forgot about it.

 

We have transactional replication turned on for one of our production databases.  This database includes XML fields and we recently saw the following error:

 

           Length of LOB data (78862) to be replicated exceeds configured maximum 65536

 

Looking into it, this is a server setting and the default value for Max Text Replication Size is 65536 (bytes).   Because our XML data is of varying sizes, I made the decision to set this to the maximum allowable value – 2147483647 (bytes). 

 

To change it through SSMS:

  • Right-click on the server and choose Properties
  • On the Advanced page, change Max Text Replication Size to 2147483647

In T-SQL:

EXEC sp_configure 'max text repl size', 2147483647
GO
RECONFIGURE
GO
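If you want to confirm that the change took effect, the sys.configurations catalog view shows both the configured value and the value currently in use.  A quick sanity check might look like this:

SELECT name, value, value_in_use
FROM sys.configurations
WHERE name LIKE 'max text repl size%'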
 
Hope this helps someone out! 
 
This weekend, one of my co-workers passed away.  He was 33 years old with a wife and toddler at home and a baby on the way.  I didn’t know him well, but we had worked together on a few projects and he was always very knowledgeable, thorough and helpful.  His passing was very unexpected and, as these things do, it caused me to think about my own life and my priorities.

 

Like many DBAs, the servers that Tim and I manage need to be available 24/7.  Tim is on call every third week, but being a lead, he often needs to step in on weeks that he isn’t on call.  I’m the sole DBA at my company.  We’re both proud of being dedicated professionals and we work hard to keep our current systems available as well as keeping our skills updated so that we can provide the best solutions possible.  I firmly believe that we’re setting a good example for our children in showing them the responsibility that we take in our positions.

 

The problem that we both face is in knowing when to step out of our work personas and focus on our family.  Tim is an excellent father and I work hard to be a good mom, but I know that there are many times that I’m talking about work, thinking about work, worrying about work when I should be more engaged as a wife and mother.  While I know that we both provide benefit to our businesses, I also recognize that, if we left, we would be replaced and work would continue as usual.  The time and effort that we put into our time together and our time as parents will shape all of us for the rest of our lives.

 

There isn’t an easy solution to this problem.  It’s not always that simple to walk out the door (especially for Tim, who works at home) and turn off the DBA part.  There will be times that I need to focus on an issue even after leaving work in order to sort it out, but I’m going to make the effort to do that only when it’s necessary.   I need to keep in mind that my first job is to take care of my family.  If I do that, the rest of life will work itself out.

A co-worker of mine had the following saying up on their wall:

 

                              Remember, the people that you work for are waiting for you at home.

 

I need to keep that in mind.


A couple of weeks ago, I tweeted a question out to the community about whether there was any way to run PowerShell scripts from within a SQL Server Reporting Services 2008 DataSet query.  The answers that I received back were discouraging.  Even though Microsoft is touting PowerShell as the best thing since sliced bread (I have to admit, it is pretty darned cool!) and integrating it into all of its current products, the integration with SQL Server 2008 seems to have stopped at giving you a mini PowerShell console (SQLPS) and a SQL Server provider to give you easier access to running SQL Server commands from within PowerShell.  This integration hasn't gone beyond the database engine, so if you want to run PowerShell scripts from within Reporting Services, you have to get creative.  That posed a big problem for me because the report I was writing depended on some PowerShell scripts that I had written.

 

After walking away from the problem for an hour or two, it finally hit me.  Since a Reporting Services 2008 DataSet query runs T-SQL code, including stored procedures, why not just write a stored procedure that I can use to run a PowerShell script?  Below is the stored procedure that I wrote.  It is really pretty simple.  It takes as a parameter the command that you would normally type at a PowerShell command line to run your script.  This information would include the script path\name and any parameters that the PowerShell script requires.

 

USE [MyDatabase]
GO

IF EXISTS (SELECT * FROM sys.objects WHERE object_id = OBJECT_ID(N'[Utility].[RunPowerShellScript]') AND type in (N'P', N'PC'))
DROP PROCEDURE [Utility].[RunPowerShellScript]
GO

USE [MyDatabase]
GO

SET ANSI_NULLS ON
GO
SET QUOTED_IDENTIFIER ON
GO

CREATE PROCEDURE [Utility].[RunPowerShellScript]
      @Script           varchar(2000)
AS

set nocount on;

declare @cmd  varchar(2000);
set @cmd = 'sqlps -c ' + '"' + @Script + '"'
exec master..xp_cmdshell @cmd, no_output;
GO

 

In the above code, “[MyDatabase]” of course refers to the database that you would want this stored procedure to be stored in.  So, walking through the code, all this script really does is create a stored procedure called Utility.RunPowerShellScript that runs the xp_cmdshell extended stored procedure with a command string that calls the SQL Server PowerShell mini console (SQLPS) in command line mode and passes to that command line whatever you passed into the stored procedure as a parameter.  For my own purposes, I have created a schema called “Utility” so that I can easily identify stored procedures, such as this one, as helper stored procedures.  Feel free to omit this if you like.  So an example of how you could use this stored procedure after you have created it would be as follows.  Say you wanted to run a PowerShell script called “Get-DriveSpace” that returns total size and free space information for the drives on a server that you pass in and resides in the D:\Scripts folder.  All you would need to do is type:

 

exec DBA_Console.Utility.RunPowerShellScript 'D:\Scripts\Get-DriveSpace.ps1 -Server MyServer'

  

Where “MyServer” is the name of the server that you are passing into the PowerShell script as a parameter.
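One assumption baked into the stored procedure above is that xp_cmdshell is available, and it is disabled by default in SQL Server 2005 and later.  If it hasn't already been enabled on your instance, something along these lines will turn it on (you need the appropriate server-level permissions, and you should weigh the security implications of enabling it first):

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
GO
EXEC sp_configure 'xp_cmdshell', 1;
RECONFIGURE;
GO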

 

That’s it.  Short and simple and now I have a mechanism to call all of the PowerShell scripts that I want from within a Reporting Service DataSet query.

I have been managing DBAs for over ten years now and one question that always seems to come up, usually from management, is what do DBAs do anyway?  Hopefully, you have read my prior post, “Yes, Production DBAs Are Necessary!” and you know where I stand on this issue. 

 

I fully believe in the role of a production DBA and, as someone who has been in this role for well over a decade, would like to define, based on my experience, what I think that role should be.  The most effective way of doing this, I believe, is to define what a production DBA should and shouldn’t be expected to do.  Granted, this won’t work perfectly for every situation and this is not intended to be an exhaustive list, but the following should be a good guideline for defining what a production DBA does and why they are critical to ensuring that your data is safe and available.

 

User data

A production DBA should not have anything to do with data. There, I said it. This statement alone is going to generate tons of controversy, but let me explain and specify that I'm referring to user data in user databases. This confuses many people because, after all, DBA does stand for database administrator. Let me clear up that confusion right now by saying that database is one word and is a manageable entity just like a server. Data lives inside a database, much like an application lives on a server. Typically, there is a lot of data in a database, just like there can be many applications on a server, each of which may have its own application administrator. Just as we don't expect the system/server administrator to manage all of the applications on the server, the production DBA should not be expected to manage the data in the database. The production DBA is responsible for the uptime of the database, the performance of that SQL Server instance and the safety (backups/DR) of the data in the database, not for the data itself. There should be other roles in the enterprise responsible for determining how the data gets into the database and how it is used from there, namely data architects, database developers, etc.

 

Optimizing T-SQL

A production DBA should work with database developers to help them optimize T-SQL code. I emphasize work because production DBAs should not be writing code, outside of administration automation. This goes hand in hand with the point above about user data and, when you think about it in practical terms, makes sense. A production DBA may be responsible for managing hundreds or thousands of user databases, all containing data for different uses. A production DBA can't practically understand all of this data and be able to write code against it effectively. What the production DBA can do, though, is help the database developers optimize code by looking at available indexes and discussing with them how best to arrange joins and WHERE clauses to make the most effective use of indexes and/or create additional indexes. Additionally, if your organization falls under the Sarbanes-Oxley or PCI regulations, this segregation may be mandatory. Your developers should not have access to production databases. Their code should be promoted by the production DBA, who does have access to the production databases. This also means that you should have a development environment that reasonably approximates your production environment.
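As a simple illustration of the kind of review I'm talking about, a production DBA might start by looking at what indexes already exist on the tables involved. Here is a minimal sketch (run in the user database in question; nothing about it is specific to any one environment):

select object_name(i.object_id) as table_name,
       i.name as index_name,
       i.type_desc
from sys.indexes as i
join sys.objects as o on o.object_id = i.object_id
where o.is_ms_shipped = 0
  and i.type > 0          -- skip heaps, which have no index name
order by table_name, index_name

From there, the conversation with the developer is about whether their joins and WHERE clauses can take advantage of those indexes, or whether an additional index is justified.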

 

Managing Instances

The production DBA should manage the SQL Server instance(s). This is a big job and, if your data is important and you have more than a handful of instances, it is a full-time job. Let me break down what this actually includes to illustrate just how large a job it is:

  1. Install/patch SQL Server instances – The production DBA is responsible for installing any new instances of SQL Server and making sure that existing instances stay current on Microsoft patches. This isn’t just a matter of running through an install wizard. A number of things have to be considered by the production DBA when a new instance of SQL Server is installed. These include:
    • Collation settings. Questions have to be asked about how the application that uses this data expects the database to handle the case of words, accents on words, code pages that the application expects to be used (this gets into localization and language for those shops in other countries or that are responsible for servers that reside or are going to be accessed by users in other countries).
    • Drive/File Layout – To make the database instance run optimally, the production DBA has to consider what drive resources are going to be available and how the database files should be laid out across those resources. During this phase of the installation the production DBA has to consider whether only local storage is available or whether storage will be housed on a SAN. If it is going to be housed on a SAN, the production DBA needs to work with the SAN administrator to ensure that the LUNs are set up appropriately for optimal SQL Server access, which in and of itself requires a lot of knowledge and experience.
    • Scalability – The production DBA should be involved in developing the specifications for the hardware. Questions that the production DBA will be asking of the various parties include, how many concurrent users will be accessing the data, how many databases will there be, is the data static or changing and how does it change (i.e., batch load, transactional updates), etc. This will give the production DBA a better idea of what kind of resource utilization to expect and drive the hardware specification process. It will also help determine recovery model and backup strategies.
  2. Create, and periodically test, a backup and recovery scheme for your enterprise's data (a minimal example appears after this list). Things the DBA has to consider as part of this include:
    • Is the data development or production data? Yes, development data gets backed up and is considered in an effective backup/recovery scheme because, after all, you don't want to lose any of your development effort; it just isn't prioritized as highly as production data.
    • How often is data updated in the databases, and how is it updated (online transactions, batch, etc.)? This determines the recovery model that should be used and leads the DBA to ask another question: what is the maximum acceptable data loss in terms of time? With the answers to these questions, the DBA can more effectively determine the type and frequency of backups (full, differential, transaction log, etc.).
  3. While somewhat tied to backup/recovery, the DBA is also tasked with helping to come up with a disaster recovery strategy for the SQL Server environment, within the constraints of the enterprise's available resources.
  4. Uptime/performance – The DBA is tasked with managing/monitoring those things that would impact the availability of the databases. This includes CPU utilization, I/O performance, disk space, reviewing event and SQL Server logs, etc.
  5. Help design the security model for SQL Server instances. Within SQL Server, there are a number of built-in security roles, both at the instance level and the database level, that users can be made members of, in addition to the ability to create custom roles. Additionally, SQL Server can operate in Windows Authentication mode, which integrates your domain security and uses the users' Windows domain accounts, or in Mixed Mode (SQL Server and Windows Authentication), which allows accounts to be either Windows domain or native SQL Server accounts. There are a number of factors to be considered here to make sure that your users can use your system while still protecting and safeguarding sensitive data and the SQL Server configuration.
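To make the backup and recovery item (#2 above) a little more concrete, here is a minimal sketch of a simple full-plus-log backup scheme. The database name and file paths are placeholders, and in practice these statements would be scheduled as SQL Agent jobs rather than run by hand:

-- Full backup, typically scheduled nightly or weekly
BACKUP DATABASE [MyUserDatabase]
TO DISK = N'D:\Backups\MyUserDatabase_full.bak'
WITH INIT, CHECKSUM;

-- Transaction log backup, scheduled as often as the acceptable data loss window dictates
-- (assumes the database is using the FULL recovery model; each log backup should go to a
-- uniquely named file so that earlier backups in the chain are not overwritten)
BACKUP LOG [MyUserDatabase]
TO DISK = N'D:\Backups\MyUserDatabase_log_0900.trn'
WITH CHECKSUM;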

 
So as you can see, the production DBA has plenty of things to contend with in order to effectively and efficiently maintain a SQL Server environment.  The bottom line comes down to how valuable your data is to you and how much you are willing to risk by not allowing your DBA to dedicate themselves to maintaining and safeguarding it.

I know that, for many, this is a controversial topic.  There are those that believe that there really is no such thing as a SQL Server production DBA and that DBAs should be jacks of all trades, doing anything from database development to OLAP/BI development to .NET programming to being a webmaster or network/server/system administrator.  Everywhere I turn anymore – job postings, horror stories shared with colleagues, even blogs from members of the SQL Server community – I see references to production DBAs being expected to be more than just a DBA.  It is as if, somewhere, managers are thinking that they need to have someone manage their SQL Server databases, but they don't know what that means, so the first thought is "let's just make it part of someone's duties that we already have on staff."  The question is constantly asked, "Why can't the database developer handle that, he/she already has to use SQL Server?" or the common mistake is made, "let's just let the server administrator handle it."  There has even been a term coined for this: "the reluctant DBA."  I have actually heard SQL Server compared to Access because it has wizards and, because Access doesn't require a production DBA, why the heck should SQL Server?  In my experience, this perception is especially prevalent in the small and medium-sized business (SMB) market, but there are large companies, and managers who have grown up in medium-sized companies, that seem to reinforce this misconception.

 

Microsoft hasn’t really done anything to correct this misconception.  From a marketing perspective, I guess it is in their best interest for prospective SQL Server customers to think that it doesn’t take much to manage a production SQL Server environment.  When companies purchase Oracle or move their applications to Oracle databases, it is a foregone conclusion that dedicated production DBAs are necessary and so, these companies typically build that into their cost calculations.  I guess Microsoft feels that if customers don’t know a DBA is required, it makes their product look that much less expensive.

 

Now don't get me wrong, I am not saying that database developers, server administrators, network administrators, etc. can't become DBAs; they just can't do it without training and experience, just as they didn't fall into their current jobs without training and experience, and the job of production DBA certainly can't be done (at least not effectively) in someone's "spare time."  As SQL Server has matured, it has become extremely flexible and full-featured, but these features come with a cost: complexity.  SQL Server can be used and configured in a myriad of different ways and for many different scenarios, but who is going to manage that and recommend how it should be configured and used for a particular purpose if you don't have a production DBA?

 

In my next post, I will discuss what a production DBA does and what they shouldn’t do.


Today's blog post is more of a question intended to generate some discussion and maybe, collectively, some ideas around how to track system dependencies.  When I say "system dependencies", I mean the components that are dependent on each other in your enterprise data infrastructure.  For example, how do you keep track of the fact that Database A is dependent on SQL Server instance A, which in turn is dependent on Server A, which may or may not be a physical server and, if not, is dependent on Virtual Host A?  And then going from the database out: what application(s) are dependent on Database A, what web servers or application servers are they dependent on, and what is the criticality of those applications to the enterprise?  Additionally, reports and/or cubes could be dependent on Database A, and those all have their own dependencies as well, and so on.

 

I would be very interested to hear your thoughts on this subject and how you are tackling this issue within your organization.

Being a helper

28 September 2009

Last Sunday, Tim and I started a four-week session helping out with the 4th and 5th grade Sunday School class at our church.  Our church prefers that there be more than one adult in each class, and we're switching off every four weeks with another member of the congregation.  Helping out in a Sunday School class is something that we had talked about doing for quite a while.

 

During the class, the main instructor was in her groove.  She's been teaching this class for some time and the kids are used to her and the curriculum.  While I know that it was helpful that we were there (at least in the kid-wrangling department), I wasn't sure that we were making that much of a difference.  We sat with the kids, played games with them, sang with them, but pretty much followed the teacher like they did.  I wondered (and I think that Tim did as well) if our being there mattered.

 

Before I go on, let me be clear on something.  We didn’t volunteer to help for the fame or glory or high-fives or whatever else volunteering for a Sunday School class might get you.  We know that it’s our responsibility and our honor to serve.  We’re also fallible humans…

 

In any case, after Sunday School, we went to the church service.  While we were there, I saw a couple of the students from class and they gave me huge, beaming smiles.  That’s when I remembered – the kids don’t measure what you did or how you did it, but that you took the time to be with them.  I thought back to when I went to Sunday School and, although I can’t remember who did what, I do remember the ‘grown-ups’ that participated.  I remembered thinking that it was great that they wanted to help us to learn. 

 

I’m glad that we’re taking part in this class and working with these kids.  The smiles from the kids are perks that you never get at the workplace.
