Jim McLeod

Feb 15, 2011

I’ve done five SQL Server Microsoft Certification exams in the past few years, passing each on the first try, and only using an Exam Preparation book for the first exam. I believe there are two methods of studying for an exam – either cramming and hoping you retain enough knowledge during the exam, or creating a targeted list of topics, and living and breathing the subject until you know it intimately.

This article will talk about the second method, which I believe results in a much deeper understanding of the exam contents.

Daily Practice

Use SQL Server daily. This is the most important step. If you want to take an exam on SQL Server, you should be using it daily (alright, you can have weekends off!). The point of this is to be continually using what you’ve been learning, reinforcing it, and completely internalising it. If you aren’t using SQL Server, why get certified in it?

I recommend spending 30-60 minutes each day using SQL Server in addition to your day job. Create a database in your development environment, and practice the techniques and commands you are learning, and then utilise them in production where appropriate (and approved). Even if you’re doing routine work and have no current need to use the new features that will be on the exam, you should still aim to practice each day. This study period can be at any time of the day – whatever works for you. I recommend either during lunch, or before starting work in the morning. Personally, evenings don’t work for me, as I’m less alert at the end of the day and can’t get started until 9:30pm due to family commitments.

If you don’t have the correct environment at work (for example, you’re studying for SQL Server 2008 certifications, but you only use 2000 at work), it may be harder to get the necessary practice. In this case, I recommend discussing the options with your boss. It’s in their interest to get you skilled in the latest version of SQL Server, given the technical debt that accumulates with older versions. If this is not possible, buy your own copy of SQL Server 2008 Developer Edition, and install it on your own computer (it runs fine on a netbook – just ask Noel McKinney!)

What’s Going To Be Examined?

How do you know which topics will be on the exam? The title of the exam is often not a very good indicator, and goes into no detail, but the exact requirements of the exam are freely available on the Microsoft Learning web site. For example, scan through the list of requirements for the 70-433 exam, available on the “Skills Measured” tab.

You’ll see that there are seven major sections, with four or five dot-points against each.  This tells you exactly which topics may be included, and as long as you have a good handle on each of these, you’re good to go!

I like to make a copy of this list, and cut out the topics that I already know well, leaving behind a list of subjects that I believe I need to work on. I then take one of these subjects, and throw myself into learning the topic.

This can be from:

  • reading Books Online
  • searching for online articles or blog posts on the topic
  • using one of the Exam Preparation guides
  • watching videos (e.g. from www.sqlshare.com)

Reproduce to Reinforce

When reading material with examples, reproduce each example in your own environment. Type out the code each time, run it, and check that it works.

After a period of time (an hour, a day), reproduce the example without looking at the original code. If you get stuck, use Books Online for syntax assistance.

This method of following the examples has two effects. Firstly, you are more likely to retain the syntax if you’re actively typing it, rather than passively copying and pasting. Secondly, by reproducing the example with minimal assistance, you’re proving to yourself that you now know the material.

Apply Your Knowledge

Finally, consider the pearls of wisdom that you’re learning, and see if you can apply them back to your own situation at work. If you’re learning about mirroring, and you currently have a cluster at work, consider the benefits of your current cluster, and contrast these with the benefits of switching to mirroring. You don’t actually have to change to a mirror, but consider how things would be different if you did. How does your DR plan change?

If you’re studying performance, and you don’t have a performance baseline for your servers, look into setting one up, and start to monitor performance more closely.

If you have a colleague available (preferably one who already knows the material), take them out for coffee and walk them through your reasoning. The act of explaining your thoughts will solidify the concepts in your head, and they can suggest things you might have missed.

If no colleagues are available, you can throw your ideas to the Internet. Find a SQL Server forum, search for related threads, and if none are found, start your own. Or, utilise the #sqlhelp tag on Twitter.

Rinse and Repeat

Once you’ve completed the above steps for a specific topic, go back and choose another topic. Once you’ve worked your way through the list, you should be ready for the exam. Good luck, and if these tips are helpful, please let me know!

Feb 14, 2011

I recently got around to doing exam 70-433 TS: Microsoft SQL Server 2008, Database Development.  As I had previously completed exam 70-451 PRO: Designing Database Solutions and Data Access Using Microsoft SQL Server 2008 while it was in beta two years previously, this had the result of granting me both the MCTS Database Development and the MCITP Database Developer 2008 certifications at the same time.


Next step – upgrade my MCITP Database Administrator from 2005 to 2008.

Jan 18, 2011

When you think of a craftsman, a likely image to immediately pop into your head would be one of an older man, working with hand tools on a piece of wood.  Each movement he makes is deliberate and precise, and he seems to know intuitively what needs to happen next.  His tools are so familiar to him, and used so effortlessly, that they seem like an extension of his body.

Although database work is a far cry from any form of woodwork, it is still a craft, albeit one that garners less sympathy at family gatherings than the classic crafts.  As with any craft, two of the main questions database professionals will ask each other are "what have you created or done?", and "which tools do you use?".

This second question is a great question, as you may discover a fantastic tool that you’d never heard of, and which can provide vast improvements in the efficiency of your regular time-consuming tasks, whether by providing necessary information more easily, or by automating tasks you currently do manually.

I frequently get odd looks when I state that I don’t regularly use third-party tools at all (with the exception of backup compression tools). 

What?  No Tools?

The primary reason for this stance is that I work for many different clients, and the majority do not have any tools over what is provided with SQL Server, and for various reasons, cannot justify acquiring any additional tools.

This is not to say that I am against third-party tools – I definitely do make use of them when available, but I believe that for every task that a tool provides assistance with, a database professional should be able to do without the tool at a pinch.

I liken this to our craftsman’s view of power tools.  There are many situations where power tools will greatly increase the speed and ease of creating some projects in the workshop, but for other situations hand-tools reign supreme.  Perhaps the craftsman is visiting family and is asked to repair a broken chair, or has a simple task to do that doesn’t justify setting up the power tools.

Let’s See Some Examples

As an example, Ola Hallengren’s Index Optimisation scripts (amongst other things) are an excellent, free utility for performing index maintenance more intelligently.  However, at a pinch, a DBA should be able to query the sys.dm_db_index_* family of DMVs to determine which indexes require maintenance, and issue the appropriate ALTER INDEX commands.
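A rough sketch of that manual approach (the index and table names are placeholders, and the fragmentation thresholds are common rules of thumb rather than hard limits):

```sql
-- Find fragmented indexes in the current database
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
  ON i.object_id = ips.object_id AND i.index_id = ips.index_id
WHERE ips.avg_fragmentation_in_percent > 10
  AND ips.page_count > 1000;  -- small indexes rarely benefit

-- Then, per index, reorganise or rebuild as appropriate:
ALTER INDEX IX_SomeIndex ON dbo.SomeTable REORGANIZE;  -- lighter fragmentation
ALTER INDEX IX_SomeIndex ON dbo.SomeTable REBUILD;     -- heavier fragmentation
```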

Automatic code-completion is another helpful tool that some people swear by.  As this is a relatively new feature in the SQL Server world (whether a third-party tool, or the one provided as part of SQL Server 2008’s Management Studio), many DBAs are used to doing without it.  In the future, however, we will become used to having code completion tools available.  It’s still important to know how to quickly query the database structure using sp_help, sp_helptext, sys.objects, INFORMATION_SCHEMA, etc. (Incidentally, I wow at least 5 people a year by using the Alt-F1 shortcut for sp_help. Simply highlight a table name in a query window in Management Studio, and hit Alt-F1.  Very useful.)
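For example, the following metadata queries (object names are placeholders) cover most of what code completion would otherwise tell you:

```sql
EXEC sp_help 'dbo.SomeTable';       -- columns, indexes, constraints (or just Alt-F1)
EXEC sp_helptext 'dbo.SomeProc';    -- source of a procedure, view, or function

-- Search for objects by name
SELECT name, type_desc
FROM sys.objects
WHERE name LIKE '%Order%';

-- Find a column across all tables
SELECT TABLE_NAME, COLUMN_NAME, DATA_TYPE
FROM INFORMATION_SCHEMA.COLUMNS
WHERE COLUMN_NAME LIKE '%CustomerID%';
```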

There are a number of tools available that can be used to trace expensive queries, and I do enjoy using a tool developed in-house to provide trace control and analysis.  If this is not available, or if it would take too much time to set up, I’m happy to pull out Profiler, script out a focussed trace, collect some data, and then analyse the resulting trace files with fn_trace_gettable().
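A minimal sketch of reading the collected files back (the path is a placeholder; note that in trace files from SQL Server 2005 onwards, Duration is recorded in microseconds):

```sql
SELECT TextData, Duration, Reads, Writes, CPU, StartTime
FROM fn_trace_gettable('C:\Traces\FocussedTrace.trc', DEFAULT)
WHERE Duration > 1000000     -- queries longer than one second
ORDER BY Duration DESC;
```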

There are numerous other examples.  Can you read a deadlock graph, or do you need Profiler to draw a picture?  Can you read a blocking chain in sp_who2, or do you need Adam Machanic’s (blog | twitter) Who Is Active?  What if you are called upon to help with a SQL Server 2000 instance, where this is unavailable?

Regardless of whether you primarily use the base set of tools, or primarily use the "power" tools, it pays to be familiar with both types, and to be able to use each tool in the proper way in the right situation.

Jan 11, 2011

Another T-SQL Tuesday has rolled around, and for once I was prepared for it.  This month’s topic, hosted by Jen McCown (blog|twitter) is on the topic of Resolutions.  I spent a week rolling around ideas in my head for this topic, hit upon some excellent angles, and then had fate completely change the direction of my post.


I’m not a big believer in "New Year’s Resolutions", mostly because they’re so clichéd and popular opinion is that you’re doomed to fail if you have one.  Instead, I appreciate the holiday period as a time to reflect and set more general life directions, as opposed to the focussed task of "exercise more". 

In terms of SQL Server related goals that I want to accomplish this year, a quick brainstorm came up with the following:

  • I really should blog more often, say once per week.   Something 95% of SQL bloggers say every few months.
  • I really should post a bit more on my favourite SQL-based forums.  Two quality posts a day sounds feasible.  This one mostly comes down to time.
  • Wow, that MCM certification looks awfully interesting, challenging, yet achievable.  Let’s do that!
  • 2011 will be the year where I get my head around recursive CTEs!
  • Can I please, please, please use the MERGE command in production code?
  • Denali, I’m playing with you.  Get ready to be swept off your feet!
  • I’m going to slow my handwriting down, and write legibly and neatly for the first time in 15 years. (Not a technical wish, but still important!)

That third point, the MCM should be enough to keep me busy for the first half of the year at least.  I’ll need to do the MCITP 2008: Upgrade from 2005 DBA exam (70-453), and then the MCTS 2008: Developer exam (70-433) (I’ve already done the 70-451, back when it was in beta).

I have already planned for these in a more serious way.  I changed different things in my life in November, December and January, so there’s nothing particularly significant about January 1.  Do it now – don’t wait until some arbitrary date!

However, it is important to consider the non-technical side of your life, which is something that hit home to me again last night as I received news of my wife’s family in flood-ravaged south-east Queensland.  Don’t be so fixated on SQL Server that you dilute or lose connections with your loved ones.  No matter how warm SQL Server may be (especially when you’re standing next to the server during business hours!), it’s a very poor substitute to family and friends.

Jan 07, 2011

When a building is designed, one of the main requirements is to have an appropriate foundation, and a larger and stronger building requires a larger and stronger foundation.  A one-storey house, for example, requires a much simpler foundation than an 85-storey office building.  The deeper the foundation is, the higher the building can be, and the more resistant it is to outside forces, such as strong winds.  The foundations are hidden from the casual glance, but are absolutely crucial to the success of the building.

Careers, particularly in SQL Server administration, are much the same as a building.  You must first create a solid foundation of skills, and, like a building’s foundation, these may not be visible.  This will include both technical knowledge and "soft" skills, such as:

  • The Windows operating system (including monitoring)
  • File system knowledge
  • Knowledge of networking protocols
  • Scripting basics for automation purposes (e.g., PowerShell)
  • Server architecture
  • Storage technologies and I/O concepts
  • Basic Windows/Active Directory security concepts
  • Interpersonal relationships and communication
  • Knowledge of table structure and basic SQL syntax
  • Data backups, restores, and recoverability
  • SQL Server security

(I hesitate to put SQL Server Internals in here, as they are a foundational skill for more in-depth SQL Server work, but sit at a much deeper level, and require the other foundations first.)

You’ll notice that there is not a lot on that list that is specific to SQL Server.  At the heart of it, a database administrator is a specialised system administrator, and requires similar base skills as a Windows sys admin, with the addition of some database knowledge.  At the least, a DBA needs to know how to configure security, query and change data, and provide a measure of data protection in the form of backups.

The analogy with a building breaks down a little when you realise that, unlike building construction, you don’t have to complete your foundations prior to working on the "visible" parts of your career.  This is a good thing, though.  You can get started with a DBA career without knowing much about Windows Server, and your limited knowledge will be sufficient to keep you from blowing over during normal weather, but will be inadequate if you attempt to configure a clustered SQL Server deployment.

This holiday period is a good time to reflect.  If you made any New Year’s Resolutions, after a week they’re either sticking or they’re shot, so you can reflect now without having to worry about the stigma of frequently broken New Year’s Resolutions hanging over your head.  What areas of your foundations are a little shaky, and could use some attention?  Which SQL Server features would you like to focus on this year, but would require an improvement in the foundations before you can really understand them?  As an example, if you want to improve your backup speeds, you may need to improve your knowledge about I/O throughput to your SAN before you can confidently use SQL Server’s backup performance enhancements (compression, striped backup files, etc).
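As a sketch of those backup enhancements (the database name and paths are placeholders; native backup compression requires SQL Server 2008 Enterprise Edition):

```sql
-- Stripe the backup across files on separate spindles, with compression
BACKUP DATABASE SalesDB
TO DISK = 'G:\Backups\SalesDB_1.bak',
   DISK = 'H:\Backups\SalesDB_2.bak'
WITH COMPRESSION, STATS = 10;   -- progress reported every 10%
```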

Personally, I need to improve my knowledge of SANs, get more up to speed with the features that they can bring to the table, and learn the basics of how each of these features works – enough to understand the pros and cons of each feature.

So, get to it. Build those foundations stronger and become a rock!

Dec 22, 2010

In my previous post, I mentioned that the superior error checking and correcting abilities of a solid state disk (SSD) may allow you to trust your database files to a single drive and avoid RAID.  Let’s have a closer look at the issues.

SSDs will, in theory, be more reliable due to their lack of moving parts and lower power and cooling requirements.  Additionally, their ability to detect and correct errors (via Hamming codes, allowing double error detection and single error correction) provides more protection than a RAID 5 array.  However, it is still possible for the drive to fail, and this issue must be considered prior to betting the farm on a single SSD.

Firstly, unless you’re happy to roll back to your previous full backup, transaction logs should be mirrored, whether they’re on magnetic hard disks or SSDs, and appropriate off-server log backups should be taken frequently.

In the case of using a single SSD for TempDB, what happens if the SSD fails?

  1. If you have another server to fail over to, great.  You’ll have a short outage while the other server picks up the workload, and hopefully you won’t lose any transactions.  If the standby server has no SSDs then performance may be slower, but still acceptable.  If there is no standby server, read on.
  2. You’ll have a SQL Server outage for the amount of time it takes to re-home TempDB to another disk.
  3. You’ll need to source a new home for TempDB.  You may already have space available on another attached drive, or you may need to provision more space from the SAN (if you have one).
  4. If you have no additional space, you’re in deeper trouble – you’ll need more disks, or move your databases to another server.
  5. Once you have the space, you’ll also need to consider performance.  Did you originally use a SSD for TempDB because the workload was so high that regular drives could barely handle it?  Your load might have grown since then, and you simply cannot handle the TempDB load without a SSD.
  6. How long does it take to get a new SSD shipped in?  Until then, your system may be down.
  7. If you have a spare SSD sitting on the shelf that you can quickly slot in, why didn’t you just RAID it in the first place, or put it in a secondary server?

In this case, we don’t even care about the contents of the SSD – TempDB will be recreated when we restart SQL Server, and things will be back to normal.  The main issues stem from how quickly you can re-home TempDB, and the restrictions on where you can place it.
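For reference, re-homing TempDB is a pair of ALTER DATABASE statements (the paths are placeholders, and tempdev/templog are the default logical file names); the change takes effect at the next SQL Server restart, when TempDB is recreated at the new location:

```sql
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'D:\Data\tempdb.mdf');

ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'D:\Data\templog.ldf');
```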

If you have TempDB on dual SSDs in a RAID configuration, and you have one fail, then you’ll continue running with no outage (although you may need to schedule one to replace the faulty SSD).  Of course, this is a much more expensive option, and it’s possible that neither SSD will ever fail – but that’s the insurance game.  You pay for more peace of mind.

Data files are a similar story, but there are some differences.  While the loss of TempDB will guarantee a full SQL Server outage, loss of a single data file will only result in that database having reduced availability.  If the data is critical, and your system cannot run without it, then you will have a problem until you failover to a standby, or restore the data elsewhere.  If you can survive without this data, and you can regenerate or restore it later, there’s less of an issue.

The summary is that you need to consider answers to the following questions for each of your SSDs, and then make an appropriate decision.

  1. If this SSD fails, what are the effects in terms of server availability, data loss, and the amount of time to recover?  What is the action plan, with estimated times to get this back online?
  2. During this downtime, what effect will this have on the business?  Will the entire organisation grind to a halt, or will a non-critical data warehouse be unavailable for 24 hours?  What is the cost of this (lack of productivity, lost sales, your job)?
  3. What is the additional cost associated with providing redundancy to ensure that this won’t fail, whether as an additional SSD in a RAID configuration, or a standby server (which may or may not have SSDs)?

Non-critical data on a single SSD can replace a RAID-5 array, but you need to thoroughly understand the risks first, and have a solid contingency plan in place.

Nov 17, 2010

I was at the Melbourne SQL Server user group last night, listening to Greg Linwood (of MyDBA) present on solid state disks (SSDs).  It was a very informative session, with Greg going into detail on how SSDs work, and the specific things you have to keep in mind when using them, such as providing sufficient OS memory to manage the SSD’s allocations, and over-provisioning to allow sufficient space to support the fact that updates involve writing to a new location and the space is not freed synchronously.

The first of three main points of the session was that SQL Server is a read oriented system, in that to perform any write operation, the rows to be updated/deleted must first be located and read in from disk.  In particular, indexes are the most likely objects to benefit from being on SSDs, as the access pattern is completely random – an insert into a clustered index may be predictable via an IDENTITY value, but the non-clustered indexes will experience inserts into the middle of the B-tree structure.

The second main point is that enterprise-level SSDs, such as Fusion IO’s offerings, provide a large reduction in space requirements (the devices can be much more densely packed), cooling requirements (there’s no moving parts, so things don’t get hot), and subsequently lower power requirements.  There is also less of a need to RAID SSDs together – individual hard disks fail frequently due to their mechanical nature, but SSDs are less likely to experience this due to their more robust nature.  Of course, the entire SSD could fail, but this is akin to losing your RAID controller.  If your transaction logs are mirrored on a hard disk array (which is ideally suited to sequential write operations), then the error checking and correction (Hamming codes, so single error correction, double error detection, which is superior to RAID-5’s single error detection) capabilities may allow you to trust your data files to a single device.  I should point out that this is not a recommendation – it is up to you to perform risk assessment and contingency.

In terms of cost, hard disks beat SSDs when comparing single units, but the superior performance of SSDs, coupled with the lower space, cooling, and power requirements, makes SSDs significantly cheaper overall.  Compare the TPC’s current TPC-C benchmarks – the best performing hard disk solution comes in at 6 million transactions per minute with $20 million for storage costs, while the best performing SSD solution comes in at 10 million transactions per minute with $3 million for storage costs. (It’s worth noting that both of these are DB2 systems, and not SQL Server, but the RDBMS isn’t as important here.)

SSDs are definitely the next big thing, and are now considered stable and mainstream enough to improve what can be considered the biggest bottleneck in database systems today – disk I/O.  Greg presented a captivating session and definitely cleared up a lot of misconceptions around SSD technology and their uses with database systems.

Jul 21, 2010

As mentioned in my last post, it is possible to use RAISERROR WITH NOWAIT in order to immediately send a message back to the client.  This is useful for long, procedural (i.e., not set-based) stored procedures that loop over many different rows.

Consider the following stored procedure:

    CREATE PROCEDURE dbo.InfoMsgTest
    AS
    BEGIN
        DECLARE @i int;
        SET @i = 1;
        WHILE @i < 100
        BEGIN
            RAISERROR('%d', 0, 1, @i) WITH NOWAIT;
            -- Do some processing!
            WAITFOR DELAY '00:00:01';
            SET @i = @i + 1;
        END;
    END;

This procedure is a simple loop that counts to 100.  Each time around the loop, a RAISERROR command is executed, passing out the value of @i.  Any message at all could be passed – you could include how many rows have been processed, how many to go, and what the primary key is of the current row.

On the client, consider the following C# console application.  All error handling has been removed, and I haven’t written any .NET code in two years, so your forgiveness is appreciated!

using System;
using System.Data.SqlClient;

namespace InfoMessages
{
    class Program
    {
        static void Main(string[] args)
        {
            SqlConnection conn = new SqlConnection(
                "Data Source=(local);Initial Catalog=AdventureWorks;"
                    + "Integrated Security=SSPI;");
            conn.InfoMessage +=
                new SqlInfoMessageEventHandler(InfoMessage);
            SqlCommand cmd = new SqlCommand("exec dbo.InfoMsgTest", conn);
            cmd.CommandTimeout = 120;
            Console.WriteLine("Processing starting.");
            conn.Open();
            cmd.ExecuteReader();   // ExecuteNonQuery() will not fire InfoMessage
            conn.Close();
            Console.WriteLine("Processing complete.");
        }

        private static void InfoMessage(object sender,
            SqlInfoMessageEventArgs e)
        {
            Console.WriteLine("Percent completed: " + e.Message + "%");
        }
    }
}

Note that it is vital to use a cmd.ExecuteReader().  cmd.ExecuteNonQuery() will not fire the InfoMessage handler. 

And the output:


There you have it!  A GUI application shouldn’t be too much harder.  Little things like this can make the difference between having a responsive application that informs the user as to what is happening, versus a black box that appears to hang for 30 seconds while the stored procedure is executed.

Jul 19, 2010

SQL Server provides two primary ways of communicating data to the client – Result Sets and Messages.  Typically, a client application will respond to Result Sets, and any error messages that are raised by SQL Server with a severity higher than 10.  For error messages with a severity of 10 or less, the .NET event SqlConnection.InfoMessage can be used to return information during query processing.

In Management Studio, the difference between a Message and an Error is that the Error is flagged in red on the Messages result panel, and may trigger rollbacks or break connections, depending on the severity of the error.


One use of communicating data back to the client is for stored procedures to let the user know where they are up to.  While this could be used for production code, it is usually used as a poor man’s debugger.  By sprinkling PRINT “Currently at point x” statements through your stored procedure, you can get an inkling of where the processing is up to.

However, PRINT has a noticeable drawback – the results are not returned immediately.  Instead, anything sent to PRINT will be buffered, and not released until the buffer is full, or the query completes.  This buffer is around 8KB in size.

“No problem!” I hear you cry. “I’ll just pad my PRINT message out to be 8KB!”  Nice try, but unfortunately, the PRINT statement will trim to varchar(8000) or nvarchar(4000), which isn’t enough.  For example:

PRINT 'A' + REPLICATE(' ', 8000)
PRINT 'B' + REPLICATE(' ', 124)
WAITFOR DELAY '00:00:05'

In this example, we’re using REPLICATE to try to pad out the PRINT’s message, but we need two PRINT statements to get anything back immediately.  By running the example, and flicking to the Messages screen in Management Studio, you can see if A is being returned before or after the WAITFOR DELAY statement.  In my tests, the 124 on the B line is not a static value – it was 134 for a different server. 

So, two PRINT messages does not really seem like an acceptable solution.


Enter RAISERROR. While the RAISERROR syntax is slightly more complicated, it’s also a lot more powerful (although the misspelling is quite annoying).

RAISERROR ('Message', 0, 1, ..., ...) WITH NOWAIT

The first parameter is simply a textual description of the error/message.  Next (0) is the Severity level.  If this value is 10 or less, it will be counted as a Message, and not as an Error.  The 1 indicates the State of the message – for a message, you’ll generally keep this at 1. After the State, you can list multiple parameters that will be inserted into the first parameter – more on this shortly.


Example 1 shows two methods of RAISERROR, one where the text of the message is stored in a variable, and one where it is included in the RAISERROR command.  This simply returns “Currently at position 56” in both instances.  Note the WITH NOWAIT.  This tells SQL Server to send the message back to the client immediately, effectively avoiding the problems PRINT has.

-- Example 1
DECLARE @msg nvarchar(200) = 'Currently at position %d.'
RAISERROR (@msg, 0, 1, 56) WITH NOWAIT
RAISERROR ('Currently at position %d.', 0, 1, 56) WITH NOWAIT

Note that the equivalent PRINT statement would be:

PRINT 'Currently at position ' + CONVERT(varchar(10), 56) + '.'


Example 2 shows how easy it is to output a text value.  This is useful for displaying the current value of a loop variable.

-- Example 2
DECLARE @somevalue varchar(200) = 'Melbourne'
DECLARE @msg nvarchar(200) = '@somevalue is currently %s.'
RAISERROR (@msg, 0, 1, @somevalue) WITH NOWAIT


Finally, Example 3 shows how you can combine multiple values in your output.

-- Example 3
DECLARE @somevalue varchar(200) = 'Melbourne'
DECLARE @msg nvarchar(200) = '@somevalue is currently "%s" at position %d.'
RAISERROR (@msg, 0, 1, @somevalue, 124) WITH NOWAIT



Another benefit of RAISERROR over PRINT is that it is much easier to trace RAISERROR in Profiler.  Simply capture “User Error Message” events for Error 50000, and you’ll get the messages.  Of course, you can always filter on the severity or the SPID, or any other filter that is appropriate.


So, there you have it!  RAISERROR is a much more sophisticated method of returning status messages to the client than using PRINT.

Jul 13, 2010

(It’s T-SQL Tuesday #008 – Gettin’ Schooled)

I learn by doing, and by teaching.

Studies have shown that the best way to learn a topic is to teach it to someone else.  I agree wholeheartedly with this – you don’t really know a topic until you’ve had to put it into your own words and get someone else to understand.  Helping out people on the MSDN SQL Forums and the SQLServerCentral forums is a great way of learning.  It’s a very ad-hoc method, as there is no guarantee what you’ll be looking at on any particular day.  Although I might not know the answer, a well written question will pique my interest, and, as one of my strengths is researching how to do things with SQL Server, I’ll attempt to ferret out the answer.  This results in a deeper understanding for me, and (hopefully) a thankful person on the other end. 

Although helping completely unknown people on the Internet can be fun, it’s a lot more satisfying when helping in person, either through teaching courses, giving presentations at user groups, or one-on-one mentoring.  These require you to know the topic thoroughly up front, as there is much less of an opportunity to dart off to Books Online.

I don’t read too many SQL Server books anymore, with a few notable exceptions, such as the Inside SQL Server series, and the SQL Server MVP Deep Dives. These are highly recommended due to their deep technical nature.  The MVP Deep Dives is especially interesting, as it contains a wide range of topics about what MVPs find interesting, as opposed to Books Online worded differently.  (This is not to bag authors – there’s definitely an audience for well written books – I’m just happy with Books Online!)  This is a very similar format to podcast interviews.  I don’t recall how many times I’ve heard Paul and Kim go over the same material once more, but it’s always an entertaining listen!  With 90 minutes on a train each day, podcasts are quite useful, as long as they’re not dry technical topics.  Videocasts are not my thing, as I rarely have the opportunity.

I keep up with blogs (thank you, Google Reader!) to see what current ideas are floating around, but it’s necessary to filter them – I don’t have time to read every blog post in detail, although many are deserving of that attention!  Instead, I’ll flick over the content to get a feel for the topic, and keep it in mind for later reference.  Blogs can be quite handy when searching, but it’s always worth remembering not to just blindly follow advice given.  Think through the offered steps and consider whether it makes sense before trying it out on your production system.

I believe in the value of certifications, although only as a supplement to experience.  I would love the opportunity to do the SQL Server MCM course as it appears to be an excellent test of all areas of SQL Server, but the wife and kids will insist on spending three weeks in Seattle!

If I had to pick one method of learning that I believe is optimal, I would choose mentoring.  It’s always important to have a mentor, even if you’re considered an expert, if only to bounce ideas off.  And it’s fantastic to give back by mentoring others.