Jan 27, 2009

I bumped into a former colleague down the street today, and he asked how to go about converting existing Perfmon data from .blg (binary) to a different format, such as CSV or into a SQL database. (Yes, that’s right.  I run into people I haven’t seen in weeks, and we talk about SQL Server.  That’s normal, right?)

Going off on a slight tangent, which format of Perfmon log should you use?  The four choices are Binary (BLG), Tab Delimited, Comma Delimited, and SQL Database.  Binary is useful for most purposes, but it’s harder to get the data back out if you want to run any further processing on it.  Tab and comma delimited have the benefit that you can view the data in Excel (if you don’t have too many counters), and graphing can be easier.  SQL Database logs straight into a SQL database, which allows you to write queries to process your Perfmon data – for example, an hourly average, or the dates/times of peak loads.  The drawback to using a SQL database is that it’s not as lightweight – you quickly store a lot of data in your database, and increase the load on your server.
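As a sketch of that last idea, assuming the CounterData and CounterDetails tables that Perfmon creates when logging to SQL, an hourly average per counter looks something like this:

-- Hourly average for each counter in a Perfmon SQL log
-- (CounterDateTime is stored as a string like '2009-01-06 10:15:00.000')
SELECT d.ObjectName, d.CounterName, d.InstanceName,
       LEFT(c.CounterDateTime, 13) AS LogHour,   -- 'YYYY-MM-DD HH'
       AVG(c.CounterValue) AS AvgValue
FROM CounterData c
JOIN CounterDetails d ON d.CounterID = c.CounterID
GROUP BY d.ObjectName, d.CounterName, d.InstanceName, LEFT(c.CounterDateTime, 13)
ORDER BY LogHour;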

A good compromise may be to log in binary format, and then use the relog.exe command-line tool to convert the data into SQL or comma/tab delimited format (which can then be opened in Excel or bulk-loaded into a database).

Relog.exe is a simple tool – it allows you to convert between Perfmon file formats, extract specific counters, or change the sample period.  There’s not really much to explain that isn’t mentioned in the Technet article at http://technet.microsoft.com/en-us/library/bb490958.aspx
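A few illustrative invocations (the file names, DSN and counter path below are made up for the example):

rem Convert a binary log to CSV for Excel
relog SqlCounters.blg -f CSV -o SqlCounters.csv

rem Push the same log into a SQL database via an ODBC DSN (format is SQL:DSN!LogSetName)
relog SqlCounters.blg -f SQL -o SQL:PerfDSN!SqlCounters

rem Keep one counter and every 4th sample, writing a smaller binary log
relog SqlCounters.blg -c "\Processor(_Total)\% Processor Time" -t 4 -o Trimmed.blg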

Jan 19, 2009

This tip might seem obvious, but I occasionally come into a situation where “something” (usually a third-party backup tool) is taking backups of the SQL Server databases, but the system administrators don’t know exactly where, or how to restore the file.  The case that triggered this blog post involved a SQL Agent job that generated an error while being modified, losing its final 32 steps.  Whoops.  My colleague rang me, and my suggestion was to restore a copy of MSDB from backup so he could script out the job again.

The initial phone call to the sys admin went something like this: “Do we take backups of the MSDB database?”  “Uh, err, let me check… … … No.”

I jumped on the server and identified that yes, something was backing up MSDB each night, but it was going directly to a device which I had no visibility over.  To cut a long story short, the backup was found, but the backup tool apparently only allows you to restore over the top of the database you backed up – not something we wanted to do.

Learning where your backups go, and how to restore your databases to any location is something you don’t have the luxury of doing when you’re under pressure.  You need a pre-generated sequence of steps to run through in an emergency to take the guesswork out of it.
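As a starting point, here’s a minimal sketch of that sequence for msdb – restoring a copy under a new name, with made-up paths (run RESTORE FILELISTONLY first to confirm the logical file names):

-- What files are inside the backup?
RESTORE FILELISTONLY FROM DISK = 'D:\Backups\msdb.bak';

-- Restore to a new database name and location (paths are illustrative)
RESTORE DATABASE msdb_copy
FROM DISK = 'D:\Backups\msdb.bak'
WITH MOVE 'MSDBData' TO 'D:\Data\msdb_copy.mdf',
     MOVE 'MSDBLog' TO 'D:\Data\msdb_copy.ldf';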

In this situation, my colleague ended up re-entering all the missing steps, using the description of each step from the job’s history.  What should have been a simple task (restore MSDB elsewhere, script out the job, re-install the job) became impossible, and this job had to be running again soon.

A similar situation exists if you haven’t tested every single backup: you may not know that your backup file is corrupt.  The odds of a dead backup are small, but it’s always possible.  The best way to be sure is to actually restore the backup to another location, but even running a backup verification is better than nothing.
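The verification option is a one-liner (path made up) – it confirms the backup is complete and readable, without restoring it:

RESTORE VERIFYONLY FROM DISK = 'D:\Backups\msdb.bak';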

Jan 19, 2009

Wordle seems to be the in thing going around the SQL Server blogging community this week, so I figured I’d knock one up for this site.  Click for a larger image.

[Wordle word cloud of this site]

Apparently I enjoy talking about tables, indexes, queries and Perfmon.  Sounds about right.

Jan 06, 2009

An important lesson for everyone who has ad-hoc access to SQL Server: before you run any query in Management Studio, think about the consequences first.

For example, if you want to view data in a table, your first thought is to SELECT * FROM Table, or to open it in Management Studio, which essentially performs the same function.  But before you run this, have a think about the possible consequences.

  • If this is a large table, you might start reading 10 GB of data out of SQL Server
  • If you’re reading 10 GB of data, you might destroy the buffer cache, causing queries to run slowly until everything’s cached again
  • Depending on lock escalation, you could end up with a shared table lock that blocks writers for as long as the statement runs, even under the default isolation level of read committed.  How long will the SELECT take to run?

It’s important as a DBA to do no harm.  In this situation, there are a couple of things you can do.  If you just want an idea of what’s in the table, grab a quick selection of rows:

  • SELECT TOP 100 * FROM Table
  • SELECT TOP 100 * FROM Table (nolock)
Note that an ORDER BY clause will need to sort all rows, so unless an index already provides that order, you’ve just loaded the entire table to perform the sort.

We can find out the number of rows by:

  • SELECT * FROM sys.sysindexes WHERE id = OBJECT_ID('Table')
  • SELECT COUNT(*) FROM Table (nolock) -- Not Recommended

The first option will give an approximate number of rows (look at the rows value where indid is 0 or 1), but it is fast.  (I like sysindexes as it works on both 2000 and 2005.)  Note that I don’t recommend the second option as, again, it loads all data pages into memory, destroying the buffer cache.

What about building an index, assuming you don’t have Enterprise Edition and can’t build the index online?  Let’s think:

  • How many rows in the table, and how wide will each row in the index be?  If it’s small, I might be able to build it immediately.
  • How active is the table?  If it’s very active, any blocking during an index build can be disastrous, and so it should wait for a maintenance window.
  • How long will the index take to build?  Can any users afford to be blocked for 20 seconds?  Some websites kill the query after 10 seconds, so users would see timeouts, whereas other applications might not mind if the query runs for up to 30 seconds (30 being the usual .NET timeout). 
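A quick pre-flight check along these lines (the table name is a placeholder) answers the first two questions:

-- How many rows, and how much space? A rough guide to index size and build time
EXEC sp_spaceused 'dbo.Table';

-- Who's active on the server right now?
EXEC sp_who2;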

By thinking of the possible consequences, you are more likely to keep the system running, and not accidentally get in the way of your users.  So, before hitting “Execute”, have a quick think about what could happen when you run that query (especially if you have a DELETE statement with no WHERE clause!)

Jan 05, 2009

In my previous post, I discussed using Logman to automatically start Perfmon counter logging.  The solution involved running logman.exe to start the collection, and if the counters were already running, you’d get an error message saying words to that effect.

The problem with this solution is that SQL Agent will report the job as failed, because the error code returned by Logman is non-zero.  You can tell SQL Agent a specific code to use as the “success” code, but the user interface will not allow big negative numbers, such as -2144337750 (which is what Perfmon returns on my Vista machine if the counter is already running; Windows 2003 returns a different code).  While you may be able to enter this value via script, I’m not sure how Management Studio will handle it, and it could cause problems down the line if you edit the job.
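For what it’s worth, the scripted route would go through sp_add_jobstep’s @cmdexec_success_code parameter – something like the below (job and step names are made up, and as I said, I haven’t verified how Management Studio copes with the job afterwards):

EXEC msdb.dbo.sp_add_jobstep
    @job_name = N'Start Perfmon Collection',   -- hypothetical job name
    @step_name = N'Start collection',
    @subsystem = N'CmdExec',
    @command = N'C:\Windows\System32\logman.exe start TestCollection -s SERVERNAME',
    @cmdexec_success_code = -2144337750;       -- treat "already running" as success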

Instead, a solution is to use a batch file that runs logman, checks the error code returned, and, if the error code means “already running”, exits the batch file with an error code that SQL Agent can handle (such as 0):

@echo off
REM Start the collection (see the previous post for the logman command)
C:\Windows\System32\logman.exe start TestCollection -s SERVERNAME
REM -2144337750 = "already running" on this Vista machine; other Windows versions return different codes
IF %errorlevel%==-2144337750 (
 echo Collection already started!
 exit /B 0
) ELSE (
 echo Exiting with error level: %errorlevel%
 exit /B %errorlevel%
)

That’s it!  Now the job will report as succeeded if the collection is already running.