Aug 01, 2007

I recently received an email from a colleague asking if he was missing something in an email he had received from a vendor. His application was running a report that was taking too long, and the vendor had suggested shrinking his transaction log (TLog) from 60 GB to 1 GB and taking hourly TLog backups. The vendor had apparently run a test showing that with a 60 GB log file the report took around 3 minutes, whereas after shrinking the TLog it took just 20 seconds. My colleague was confused, as he couldn't see any reason why this would be so.

The TLog is simply a circular file that holds a record of every transaction that changes the database. The TLog has a head marker, which keeps track of where the most recent log record was written; when the head reaches the end of the file, it wraps around to the start. You get a "transaction log full" error when the head catches up to the tail. When you take a TLog backup, you back up everything from the tail to the head (as at the time of the backup), mark that region as reusable empty space, and move the tail forward to where the head was when the backup started.
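As a rough illustration, this is what an hourly TLog backup job boils down to; the database name SalesDB and the backup path are placeholders for illustration, not anything from the original email:

    -- Show how full each database's log file is (size and percent used)
    DBCC SQLPERF(LOGSPACE);

    -- Back up the log: everything from the tail to the head is written to
    -- the backup file, and that portion of the log is marked as reusable.
    BACKUP LOG SalesDB
        TO DISK = N'D:\Backups\SalesDB_log.trn';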

If your transaction log is on a single physical disk that is reserved for ONLY the TLog, then there is no performance benefit to having a smaller log file. Log writes are sequential, so the drive head simply sits wherever the log's head is; the amount of file around it makes no difference. In fact, the log may as well take up the entire disk, if that's all that's running on it.
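If anything, the sensible move is the opposite of shrinking: pre-size the log once so it never has to autogrow mid-workload. A minimal sketch, assuming a database called SalesDB whose logical log file name turns out to be SalesDB_log:

    -- Find the logical name of the log file first
    USE SalesDB;
    SELECT name, type_desc, size FROM sys.database_files;

    -- Grow the log once, up front, rather than shrinking it and
    -- paying for repeated autogrow events later
    ALTER DATABASE SalesDB
        MODIFY FILE (NAME = SalesDB_log, SIZE = 60GB);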

So, what could cause a report to run faster after shrinking the TLog? Two thoughts spring to mind:

a) It's a report. Reports typically don't touch the TLog at all, because they only read data rather than change it. A report will often create some temporary tables, but those live in TempDB (and are logged in TempDB's log), unless the application creates its working tables as permanent tables in its own database. (See the first sketch after this list.)

b) Was SQL Server fully warmed up for both runs? It's possible the vendor tested by attaching the database, running the report, shrinking the log, setting up hourly TLog backups, and then running the report again. In that case, the first run had to load every page physically off disk, while on the second run most of those pages were still in memory, so only logical reads were required. Logical (memory) reads are measured in nanoseconds, while physical (disk) reads are typically measured in milliseconds. (See the second sketch below.)
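To see the first point in action, here's a minimal sketch; the table and column names are made up for illustration. A #-prefixed table lands in TempDB regardless of which database the report is connected to:

    -- Temp tables created by a report live in TempDB, so their writes
    -- are logged in TempDB's log, not the application database's TLog
    CREATE TABLE #ReportWork (OrderID int, Total money);

    -- The object exists in TempDB, not in the current database
    SELECT OBJECT_ID('tempdb..#ReportWork') AS InTempDB,    -- non-NULL
           OBJECT_ID('#ReportWork')         AS InCurrentDB; -- NULL

    DROP TABLE #ReportWork;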
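And to test the second point fairly, you can force a cold cache between runs; dbo.BigReport is a stand-in name for the actual report query, and DBCC DROPCLEANBUFFERS should only ever be run on a test server:

    SET STATISTICS IO ON;    -- shows logical vs. physical reads per statement
    SET STATISTICS TIME ON;  -- shows CPU and elapsed time

    CHECKPOINT;              -- write any dirty pages to disk first
    DBCC DROPCLEANBUFFERS;   -- then empty the buffer pool (test servers only!)

    EXEC dbo.BigReport;      -- run 1: cold cache, mostly physical reads
    EXEC dbo.BigReport;      -- run 2: warm cache, mostly logical reads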

I think there's a large gap between most developers' knowledge of their database engine of choice and how it actually operates – it's a little more complicated than just sticking things in 3rd normal form and understanding SELECT, INSERT, UPDATE and DELETE statements. This example illustrates the point well: the person recommending a smaller TLog understood neither the principles of what a TLog does nor the concept of SQL Server warming up.
