Aug 24 2012

Sometimes, you just need to get a list of filenames into Management Studio. I typically need this when choosing files to restore for non-production backups, but there’s plenty of possible uses.

The easiest way I’ve found of getting a list of files into SSMS (or any other editor) is via the clipboard, specifically using clip.exe, found in C:\Windows\System32 on Windows 7 and Windows Server 2008.

Clip.exe allows you to pipe any cmd.exe or Powershell output directly into the clipboard, without having to copy out of the console window. For example, we have the following directory with 5 long filenames:

 Volume in drive D is Seagate 2TB
 Volume Serial Number is E6B1-FEB0

 Directory of D:\SQLBackup

24/08/2012  04:39 AM    <DIR>          .
24/08/2012  04:39 AM    <DIR>          ..
16/08/2012  09:39 PM       193,052,672 AdventureWorks_Full_20120806_1833.bak
16/08/2012  09:39 PM       193,052,672 AdventureWorks_Full_20120807_1834.bak
16/08/2012  09:39 PM        93,052,672 Northwind_Full_20120806_1830.bak
16/08/2012  09:39 PM        93,122,304 Northwind_Full_20120807_1834.bak
16/08/2012  09:39 PM        23,052,672 Pubs_Full_20120806_1833.bak
               5 File(s)    595,332,992 bytes
               2 Dir(s)  1,417,170,239,488 bytes free

By using dir’s /b (bare format) switch, and piping the results to clip.exe, we get:

dir /b | clip

Of course, you can use any filter on the dir command, to show all .bak files for a specific date (based on filename):

dir *20120806*.bak /b | clip

The results are now in the clipboard, and can be pasted into any application.
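
Since my usual use for this is picking files to restore, the pasted filenames can go straight into a RESTORE statement. A sketch only – the logical file names and target paths here are illustrative, not taken from the backup above:

```sql
RESTORE DATABASE AdventureWorks
FROM DISK = 'D:\SQLBackup\AdventureWorks_Full_20120806_1833.bak'
WITH MOVE 'AdventureWorks_Data' TO 'D:\SQLData\AdventureWorks.mdf',
     MOVE 'AdventureWorks_Log'  TO 'D:\SQLData\AdventureWorks_log.ldf',
     REPLACE;
```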


In Powershell, you can use the following (where “dir” can be replaced with gci, ls, or Get-ChildItem):

dir *20120806*.bak -name | clip

But I’m on Windows Server 2003!

If you don’t have clip.exe available, you can get similar functionality by adding one extra step – redirecting the output to a text file, from where you can easily copy and paste. Note that using the “start” command will open the file in whichever program is set as the default for .txt files.

The cmd version is:

dir *20120806*.bak /b > tmp.txt
start tmp.txt

Powershell allows you to keep the entire contents on a single line:

dir *20120806*.bak -name > tmp.txt; start tmp.txt

Remember to go back and delete tmp.txt.

That’s it!

It’s a simple trick, but one that can be incredibly useful, and well worth a place in your toolbox.

Jun 07 2012

It’s one of those simple things that you’re sure you should know, but are pleasantly surprised when you discover it.

The scenario was that a vendor-supplied batched audit history deletion job was running, and the transaction log was growing… and growing… The batching appears to work well from Management Studio, but not so well from the application – my guess is that the application is creating an outer transaction, preventing the inner transactions in the batch process from actually committing.

After an hour of running, the job failed, running out of log space with a 60 GB transaction log (no major issue – it was a test server), and started to rollback.

How long will this rollback take? In theory, it should be less than the hour we’d already spent – we don’t have to grow the log, and we don’t have to run through the batching process repeatedly to determine which rows to delete.

The solution? The KILL command can retrieve the status of the rollback. Normally you’d use this to see how long a killed process will take to rollback, but it will work fine for out-of-space rollbacks too.
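
Assuming the rolling-back session is spid 101 (as in the output below), the command is:

```sql
KILL 101 WITH STATUSONLY
```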


With the result being:

spid 101: Transaction rollback in progress. Estimated rollback completion: 65% Estimated time left: 933 seconds.

Note that running the command repeatedly won’t make the rollback happen any faster, but it’s not likely to slow it down any either.

Sep 02 2009

(Updated due to noeldr’s comment being a better option)

We all know that SELECT * is frowned upon when writing queries, but some tables have so many columns that you don’t want to type out every column name.  What’s the fastest way of getting a column list?

I used to script out the CREATE statement for the table, and then remove the extra square brackets and types.  This involved a lot of editing, something Management Studio 2008 relieves with the “Select Top 1000 Rows” option when right-clicking a table – although this still produces square brackets.  Another option is to put the results into text mode and copy the header line.  However, the fastest method I’ve found is the Query Designer, which I’d otherwise never had a use for.

To get a list of columns for a table, create a SELECT * statement for it:

USE AdventureWorks
SELECT * FROM Production.Product

Next, highlight the SELECT statement, and press Control-Shift-Q or use the “Design Query in Editor…” item under the “Query” menu to bring up the Query Designer.


Deselect the columns you don’t want, and copy the column list to the clipboard.  That’s it!

Noeldr’s option, from the comments below is even better, and works in either Management Studio or Query Analyzer.  Simply open the Object Explorer up to the table, expand the table so that the “Columns” node is visible, and drag this item from the Object Explorer into your query window.  I love how you can work with a product for years and still find little tips like this!
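
For completeness, the catalog views can also build the list in T-SQL. A sketch that works on 2005 and later (substitute your own table name):

```sql
-- Comma-separated column list, built via the FOR XML PATH trick
SELECT STUFF((
    SELECT ', ' + name
    FROM sys.columns
    WHERE object_id = OBJECT_ID('Production.Product')
    ORDER BY column_id
    FOR XML PATH('')
), 1, 2, '') AS column_list;
```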

Are there any other easy options I’ve missed, or should I always use square brackets when using column names?  Let me know!

Jan 19 2009

This tip might seem obvious, but I occasionally come into a situation where “something” (usually a third party backup tool) is taking backups of the SQL Server databases, but the system administrators don’t know exactly where, or how to restore the file.  The case that triggered this blog post involved a SQL Agent job that generated an error when being modified, and losing the final 32 steps.  Whoops.  My colleague rang me, and my suggestion was to restore a copy of MSDB from backup so he could script out the job again.

The initial phone call to the sys admin went something like this “Do we take backups of the MSDB database?”.  “Uh, err, let me check… … … No.”

I jumped on the server and identified that yes, something was backing up MSDB each night, but it was going directly to a device which I had no visibility over.  To cut a long story short, the backup was found, but the backup tool apparently only allows you to restore over the top of the database you backed up – not something we wanted to do.

Learning where your backups go, and how to restore your databases to any location is something you don’t have the luxury of doing when you’re under pressure.  You need a pre-generated sequence of steps to run through in an emergency to take the guesswork out of it.

In this situation, my colleague ended up re-entering all the missing steps from the description of each step from the job’s history.  What should have been a simple task (restore MSDB elsewhere, script out the job, re-install the job) became impossible, and this job had to be running soon.

A similar situation exists with backups you haven’t tested: if you haven’t tested every single backup, you may not know that a backup file is corrupted.  The odds of a dead backup are small, but it’s always possible.  The best way to be sure is to actually restore the backup to another location, but even running a backup verification (such as RESTORE VERIFYONLY) is better than nothing.

Jan 06 2009

An important lesson for everyone with ad-hoc access to SQL Server: think about the consequences before you run any query in Management Studio.

For example, if you want to view data in a table, your first thought is to SELECT * FROM Table, or to open it in Management Studio, which essentially performs the same function.  But before you run this, have a think about the possible consequences.

  • If this is a large table, you might start reading 10 GB of data out of SQL Server
  • If you’re reading 10 GB of data, you might destroy the buffer cache, causing queries to run slowly until everything’s cached again
  • You’ll take shared locks as you read, and a large scan can escalate to a shared table lock – under the default read committed isolation level, that can block writers for as long as the statement runs.  How long will the SELECT take to run?

It’s important as a DBA to do no harm.  In this situation, there are a couple of things you can do.  If you just want an idea of what’s in the table, grab a quick selection of rows:

  • SELECT TOP 100 * FROM Table
  • SELECT TOP 100 * FROM Table (nolock)

Note that an ORDER BY clause will need to sort all rows, so unless the data is already sorted by an index, you’ve just loaded the entire table to perform the sort.

We can find out the number of rows with:

  • SELECT rows FROM sys.sysindexes WHERE id = OBJECT_ID('Table') AND indid IN (0, 1)
  • SELECT COUNT(*) FROM Table (nolock) -- Not Recommended

The first option will give an approximate number of rows, but it is fast. (I like sysindexes as it works on both 2000 and 2005.)  Note that I don’t recommend the second option as, again, it loads all data pages into memory, destroying the buffer cache.
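
On 2005 and later, a supported alternative to sys.sysindexes is sys.dm_db_partition_stats, which also returns row counts from metadata without touching the data pages. A minimal sketch ('Table' is a placeholder for your own table name):

```sql
-- Approximate row count from metadata; fast, no table scan
SELECT SUM(row_count) AS approximate_rows
FROM sys.dm_db_partition_stats
WHERE object_id = OBJECT_ID('Table')
  AND index_id IN (0, 1);  -- heap or clustered index only
```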

What about building an index, assuming you don’t have Enterprise Edition and can’t build the index online?  Let’s think:

  • How many rows in the table, and how wide will each row in the index be?  If it’s small, I might be able to build it immediately.
  • How active is the table?  If it’s very active, any blocking during an index build can be disastrous, and so it should wait for a maintenance window.
  • How long will the index take to build?  Can any users afford to be blocked for 20 seconds?  Some websites kill the query after 10 seconds, so users would see timeouts, whereas other applications might not mind if the query runs for up to 30 seconds (30 being the usual .NET timeout). 

By thinking of the possible consequences, you are more likely to keep the system running, and not accidentally get in the way of your users.  So, before hitting “Execute”, have a quick think about what could happen when you run that query (especially if you have a DELETE statement with no WHERE clause!)

Dec 17 2008

I have a couple of quick questions for you.  Are you running SQL Server 32 bit?  Do you have more than 4 GB of memory in the server?  Do you know if you’re using as much memory as you can?

I often see servers that are experiencing performance problems due to a lack of memory.  A recent case was a SQL Server 2005 Standard Edition server with 16 GB of memory, but without AWE enabled, so SQL Server was limited to 1700 MB in its buffer cache.  It was running (amongst others) a single query constantly which scanned a 2.5 GB table, and so the disks were flat out trying to service these requests.  Turning on AWE allowed most of the database to fit in memory, and the large scanning query dropped from 400 seconds to about 7 seconds.  An index was then added, and performance was even better.

To find out how much physical memory is installed, and how much physical memory is still available, have a look in the Performance tab of Task Manager.  From within SQL Server, you can run the query:

SELECT * FROM sys.dm_os_performance_counters WHERE counter_name IN ('Target Server Memory (KB)','Total Server Memory (KB)')

This will show you the amount of memory SQL Server is aiming to use (Target Server Memory), and how much it is currently using (Total Server Memory).  If AWE is not enabled, these values won’t go above 1700 MB.

For information on how to configure AWE, see my previous post for 2005, or for SQL Server 2000.
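
For reference, the 2005 configuration boils down to a few sp_configure steps. This is a sketch only – the service account also needs the “Lock Pages in Memory” privilege, the ‘awe enabled’ change requires a service restart, and the 12288 MB cap is just an example value:

```sql
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'awe enabled', 1;                 -- takes effect after a restart
RECONFIGURE;
EXEC sp_configure 'max server memory (MB)', 12288;  -- example cap; size to your server
RECONFIGURE;
```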

Dec 15 2008

I’m planning this “Quick Tip” section just to throw out any tips that I think might come in handy.  A lot of you may already know them, but I’m sure over time I’ll find something that has escaped your notice!

Management Studio 2005 and 2008 (and Query Analyzer, but no-one’s still using that out of choice, are they?) provide shortcut keys that run T-SQL commands, with up to 12 slots you can customise.  The defaults are:

  1. Alt-F1, runs sp_help.  If you have an object name highlighted, it will run sp_help on that item.  Very useful for seeing the columns, constraints and indexes on a table.
  2. Control-1, runs sp_who.  If you highlight a login name, the output will be for that single login.
  3. Control-2, runs sp_lock.  If you highlight a number, sp_lock will be run for that session ID.  If you highlight two numbers separated by a comma (e.g. “51, 52”, without quotes), you’ll pass both parameters into sp_lock.

Additionally, you can customise these commands in Management Studio from the Tools | Options | Environment | Keyboard dialog.  For example, you could set Control-F1 to be “sp_helptext”, or modify sp_who to be sp_who2.  Note that any changes will only apply when you open a new query window.

A good example of a custom Query Shortcut would be the following query:

SELECT TOP 10 * FROM AdventureWorks.Sales.SalesOrderHeader ORDER BY OrderDate DESC

If you assign it to a shortcut key, say Control-3, every time you press Control-3 you’ll see the 10 most recent SalesOrderHeaders. 

Personally, I don’t customise the shortcuts, as I deal with many different systems, and am constantly using different installations of Management Studio, so it’s not worth configuring the environment.  However, if you use a single installation of Management Studio, custom query shortcuts could save you lots of time.