Dec 19, 2008
 

Monitoring your database server is crucial for analysing its performance, and you want to be monitoring continually – it’s no good starting to monitor only when you detect a problem, because a) you might not know that you have a problem, and b) you won’t know what normal operation looks like.

Perfmon is the current standard tool (although SQL Server 2008 has some nifty features that can complement it).  I won’t go into detail about how to use Perfmon, as there’s plenty of information available on the web, but it can be tricky to get Perfmon to start automatically after a reboot, or to restart if it fails for some reason (such as a temporary out-of-disk-space error).

The simplest way to handle this is to use the logman.exe program, which is supplied with Windows 2003 and later and is available for Windows 2000.  This command-line program can start or stop Perfmon counter logs on either the local or a remote server, among many other tasks.

The way we use it is to schedule a task in SQL Agent or Windows Scheduler that runs every 5 or 10 minutes and attempts to start the Perfmon counter log specified.  If the counter log is already started, an error is returned, but the counter log is not affected.

The command to start a counter log is (note that logman.exe should be in your C:\Windows\System32 folder):

logman.exe start CounterCollectionName -s RemoteServerName
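If you’re using SQL Agent, a job along these lines could do it.  This is only a minimal sketch (the job name, counter log name and server name are placeholders), and the step simply shells out to logman via CmdExec:

USE msdb;
GO

EXEC dbo.sp_add_job @job_name = N'Start Perfmon counter log';

EXEC dbo.sp_add_jobstep
    @job_name  = N'Start Perfmon counter log',
    @step_name = N'logman start',
    @subsystem = N'CmdExec',
    @command   = N'logman.exe start CounterCollectionName -s RemoteServerName';

-- Run every 10 minutes, all day, every day.
EXEC dbo.sp_add_jobschedule
    @job_name             = N'Start Perfmon counter log',
    @name                 = N'Every 10 minutes',
    @freq_type            = 4,    -- daily
    @freq_interval        = 1,
    @freq_subday_type     = 4,    -- units of minutes
    @freq_subday_interval = 10;

EXEC dbo.sp_add_jobserver @job_name = N'Start Perfmon counter log';
GO

When the counter log is already running, logman returns an error and the job step reports a failure; as noted above, that’s expected and harmless.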

More information about Logman and its parameters is available at http://technet.microsoft.com/en-us/library/cc755366.aspx.

Dec 18, 2008
 

I spent time today analysing a system monitoring tool that collects a lot of information about your servers and stores the data in a table.  A lot of data is collected: the table had grown to 23 GB and 180,000,000 rows, with only a clustered index on a non-unique GUID column.  There was no primary key, although there was also an identity column.

A delete process runs nightly, and deletes anything older than 30 days.  This delete process was highly inefficient, and was timing out after 10 minutes, causing the transaction to roll back, so nothing was deleted.  Additionally, during these 10 minutes the table was locked and inserts could not be performed.

The code for the deletion looked like this (although this query is from AdventureWorks, and 2500 days is needed for demonstration purposes):

DELETE FROM Sales.SalesOrderHeader 
WHERE datediff(day, ModifiedDate, getdate()) >= 2500 

If you examine the query, you’ll notice two things.  Firstly, as there is no index on ModifiedDate, we’ll need to perform a table scan.  That’s fixable with a non-clustered index:

CREATE NONCLUSTERED INDEX ncix_moddate ON
    Sales.SalesOrderHeader(ModifiedDate)

We’ll ignore the fact that our new index on 180,000,000 rows will take up 4.8 GB – a fast database is a large database!  Changing the clustered index to (ModifiedDate, GUID) would remove the need for both the uniquifier and the new nonclustered index.
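As a rough sketch of that alternative (the table, column and index names here are hypothetical, since the real monitoring table isn’t shown, and DROP_EXISTING assumes the existing clustered index already has this name):

-- Rebuild the existing clustered index with ModifiedDate as the leading key.
CREATE CLUSTERED INDEX cix_MonitoringData
    ON dbo.MonitoringData (ModifiedDate, RowGuid)
    WITH (DROP_EXISTING = ON);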

The other thing to notice about the original query is that the datediff function must be evaluated for every row to determine the number of days between ModifiedDate and the current date.  That’s 180,000,000 function calls, and because the function wraps the column, the predicate can’t make use of an index.

A much better way of writing this query would be to determine the boundary date.  Any rows earlier than this date will be deleted.  By checking the ModifiedDate against this boundary date (which is calculated once at the start of the query), the new index on ModifiedDate can be used to find the rows to delete.

DELETE FROM Sales.SalesOrderHeader 
WHERE ModifiedDate <= dateadd(day, -2500, getdate())

If we compare the execution plans, we can see the following:

[Execution plan comparison: an index scan for the original query versus an index seek for the rewritten query]

Note that I’m using a SELECT instead of a DELETE, as AdventureWorks has cascading deletes on the Sales.SalesOrderDetail table.  The first query takes 61 reads (an entire scan of my new index), whereas the second query takes just 6 reads, seeking to only the parts of the index where the date matches.
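If you want to reproduce the comparison yourself, here’s a rough sketch using SET STATISTICS IO (your exact read counts will vary with your copy of AdventureWorks):

SET STATISTICS IO ON;

-- Non-sargable predicate: scans the whole ncix_moddate index.
SELECT SalesOrderID
FROM Sales.SalesOrderHeader
WHERE datediff(day, ModifiedDate, getdate()) >= 2500;

-- Sargable predicate: seeks into ncix_moddate.
SELECT SalesOrderID
FROM Sales.SalesOrderHeader
WHERE ModifiedDate <= dateadd(day, -2500, getdate());

SET STATISTICS IO OFF;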

The only bad news now is that I can’t modify this code, as it appears to be run directly by the service and not from a stored procedure.  I’m hoping there will be an option to turn off the tool’s own purging, so that I can run my own delete command.

Dec 17, 2008
 

I have a couple of quick questions for you.  Are you running SQL Server 32 bit?  Do you have more than 4 GB of memory in the server?  Do you know if you’re using as much memory as you can?

I often see servers that are experiencing performance problems due to a lack of memory.  A recent case was a SQL Server 2005 Standard Edition server with 16 GB of memory, but without AWE enabled, so SQL Server was limited to around 1700 MB for its buffer cache.  Amongst other things, it was constantly running a single query that scanned a 2.5 GB table, so the disks were flat out trying to service these reads.  Turning on AWE allowed most of the database to fit in memory, and the large scanning query dropped from 400 seconds to about 7 seconds.  An index was then added, and performance improved even further.

To find out how much physical memory is installed, and how much physical memory is still available, have a look in the Performance tab of Task Manager.  From within SQL Server, you can run the query:

SELECT * FROM sys.dm_os_performance_counters
WHERE counter_name IN ('Target Server Memory (KB)', 'Total Server Memory (KB)')

This will show you the amount of memory SQL Server is aiming to use (Target Server Memory), and how much it is currently using (Total Server Memory).  If AWE is not enabled, these values won’t go above 1700 MB.

For information on how to configure AWE, see my previous post for 2005, or for SQL Server 2000.
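For the impatient, here’s a minimal sketch of enabling AWE on 32-bit SQL Server 2005.  Note that the service account also needs the Lock Pages in Memory privilege, ‘awe enabled’ only takes effect after a restart, and the max server memory figure below is just an example for a 16 GB server:

EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

EXEC sp_configure 'awe enabled', 1;                  -- requires a SQL Server restart
EXEC sp_configure 'max server memory (MB)', 12288;   -- example value; leave headroom for the OS
RECONFIGURE;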

Dec 16, 2008
 

There are a few examples of DDL triggers available on the web that use XML methods to query the trigger’s EventData().  EventData() returns an XML document containing properties of the event.  For example, Books Online describes the ALTER TABLE event as:

<EVENT_INSTANCE>
    <EventType>type</EventType>
    <PostTime>date-time</PostTime>
    <SPID>spid</SPID>
    <ServerName>name</ServerName>
    <LoginName>name</LoginName>
    <UserName>name</UserName>
    <DatabaseName>name</DatabaseName>
    <SchemaName>name</SchemaName>
    <ObjectName>name</ObjectName>
    <ObjectType>type</ObjectType>
    <TSQLCommand>command</TSQLCommand>
</EVENT_INSTANCE>

By using XML query methods, we can extract data out of this XML structure.  For example:

DECLARE @eventdata xml
SELECT @eventdata = EventData()
SELECT @eventdata.value('(/EVENT_INSTANCE/EventType)[1]'
    , 'nvarchar(100)')

This works well in most cases, but there is one little gotcha that occasionally springs up.  To use the XML methods, you need a specific set of SET options applied to your session.  If you have a DDL trigger that uses XML methods to parse the EventData, and your session has incompatible SET options, the trigger will generate an error and roll back the entire transaction.

Now, although you can change SET options inside a trigger (they will automatically revert to their previous values once the trigger ends), you may not be aware that you need to do this, and once you deploy the trigger into production you’ll start getting errors from some clients.  The easiest way around this is to avoid using the XML methods in your trigger at all: just retrieve EventData() and store it in an XML column, then use a separate process, with the correct session options set, to process the XML in greater detail.
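A minimal sketch of that approach (the table and trigger names are placeholders); the trigger stores the raw XML and never calls an XML method itself:

CREATE TABLE dbo.DDLEventLog
(
    EventID   int IDENTITY(1,1) PRIMARY KEY,
    PostTime  datetime NOT NULL DEFAULT (getdate()),
    EventData xml NOT NULL
);
GO

CREATE TRIGGER trg_LogDDLEvents ON DATABASE
FOR DDL_DATABASE_LEVEL_EVENTS
AS
    -- No XML methods in here, so the trigger is immune to the session's SET options.
    INSERT INTO dbo.DDLEventLog (EventData)
    VALUES (EVENTDATA());
GO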

The alternative is to ensure you set the SET options correctly inside your trigger.  The correct settings to use are:

SET CONCAT_NULL_YIELDS_NULL ON
SET QUOTED_IDENTIFIER ON
SET ANSI_NULLS ON
SET ANSI_PADDING ON
SET NUMERIC_ROUNDABORT OFF
SET ANSI_WARNINGS ON

If you change any of these and attempt to run an XML method, you’ll get an error message telling you which option needs changing:

DECLARE @xml xml
SET @xml = '<Root><Product ID="1" Name="Widget, 25cm">
</Product></Root>'
SELECT @xml.query('/Root/Product')

Msg 1934, Level 16, State 1, Line 4
SELECT failed because the following SET options have incorrect settings: 'ANSI_NULLS, QUOTED_IDENTIFIER, CONCAT_NULL_YIELDS_NULL, ANSI_WARNINGS, ANSI_PADDING'. Verify that SET options are correct for use with indexed views and/or indexes on computed columns and/or query notifications and/or xml data type methods.

Applying a DDL trigger into production without first checking that the session values are compatible could be disastrous!

Special thanks to Jon Kehayias for technical reviewing of this post.

Dec 15, 2008
 

I’m planning this “Quick Tip” section just to throw out any tips that I think might come in handy.  A lot of you may already know them, but I’m sure over time I’ll find something that has escaped your notice!

Management Studio 2005 and 2008 (and Query Analyzer, but no-one’s still using that out of choice, are they?) provide three default shortcut keys that run T-SQL commands, and allow you to customise up to 12 different commands.  The defaults are:

  1. Alt-F1, runs sp_help.  If you have an object name highlighted, it will run sp_help on that item.  Very useful for seeing the columns, constraints and indexes on a table.
  2. Control-1, runs sp_who.  If you highlight a login name, the output will be for that single login.
  3. Control-2, runs sp_lock.  If you highlight a number, sp_lock will be run for that session ID.  If you highlight two numbers separated by a comma (e.g. "51, 52", without quotes), you’ll pass both parameters into sp_lock.

Additionally, you can customise these commands in Management Studio from the Tools | Options | Environment | Keyboard dialog.  For example, you could set Control-F1 to be “sp_helptext”, or modify sp_who to be sp_who2.  Note that any changes will only apply when you open a new query window.

A good example of a custom Query Shortcut would be the following query:

SELECT TOP 10 * FROM AdventureWorks.Sales.SalesOrderHeader 
ORDER BY SalesOrderID DESC

If you assign it to a shortcut key, say Control-3, every time you press Control-3 you’ll see the 10 most recent SalesOrderHeaders. 

Personally, I don’t customise the shortcuts, as I deal with many different systems, and am constantly using different installations of Management Studio, so it’s not worth configuring the environment.  However, if you use a single installation of Management Studio, custom query shortcuts could save you lots of time.

Dec 15, 2008
 

I experienced an issue recently where a cluster could not start SQL Server because I had modified the startup parameters and entered one of them incorrectly.  I didn’t notice until after I’d stopped SQL Server and was unable to restart it.  I changed the value again in SQL Server Configuration Manager, but discovered that every time I attempted to bring SQL Server back up using Cluster Administrator, the incorrect value was back in place.

The reason for this is that the cluster controls the startup parameters, and will replicate the value through all nodes of the cluster.  The only way to change this parameter is to change it on the active SQL Server node.  However, if you don’t have an active node, because you’ve just broken it and it won’t start…

Now that you understand the Catch-22, what can you do to fix it?  There are really only two solutions: tell the cluster to replicate a different value to all the nodes (impossible without PSS’s help, as far as I know), or get SQL Server running without the cluster.

There are two ways of starting SQL Server in this situation.  The first is to start SQL Server (sqlservr.exe) from the command line, specifying the paths to the master database (remember, we can’t rely on the registry values, because the cluster keeps overwriting them).  The second option is to modify the registry and then start SQL Server via the Services control panel.  Note that modifying the registry directly is most definitely not recommended, but it may be your only option.
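For the first option, the command looks something like this (the paths are examples for a default 2005 instance and will differ on your server; add -sInstanceName for a named instance):

cd "C:\Program Files\Microsoft SQL Server\MSSQL.1\MSSQL\Binn"
sqlservr.exe -c -dS:\SQLData\master.mdf -lS:\SQLLogs\mastlog.ldf -eS:\SQLLogs\ERRORLOG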

Once SQL Server has started locally, use Cluster Administrator to start SQL Server on the cluster.  As SQL Server is already running, the Cluster Administrator hooks right into it.  From there, you can use Configuration Manager to fix the startup parameter, have it replicate through the cluster, and restart SQL Server.

Phew.  The moral of the story is to double check your parameters before shutting down SQL Server!

I’m happy to hear any other possible solutions to this problem that I might have missed.

Dec 14, 2008
 

I haven’t posted in a while, due to a combination of work that hasn’t been very blog-worthy and family life.  However, I did attempt the SQL Server 2008 Beta exam 71-451: Designing Database Solutions and Data Access Using Microsoft SQL Server 2008 in late September.  It wasn’t a bad exam, and afterwards I received an invitation to participate in a workshop in Redmond in October to help finalise the exam.

It was my first trip to the US, and although I spent more time in the air than I did on the ground, it was a fantastic four days.  I was hoping to meet Jonathan Kehayias over there,  but he wasn’t able to make it.  Instead, I met a whole group of SQL Server guys, mostly from Europe, and they showed me around Seattle.  Seattle reminded me a lot of Melbourne, but with wonderful shades of autumn red, orange and yellow that Melbourne just doesn’t have.

I received the results of the exam this week, and the news was quite pleasant!