Apr 09 2013

It’s a Tuesday, and that means there’s a decent chance that there’s a T-SQL Tuesday event on today. This month’s event is brought to us by Bob Pusateri (blog|twitter), on the topic of presenting.

Like most people in the world, I never liked presenting. Who really wants to stand up in front of an audience for 15+ minutes and have people watching you… Staring… Judging… Just wanting to tell you how wrong you are? Thankfully, audiences are rarely like that – you might have one person that’s being critical, but on the whole, I find people fall into one of the following categories:

  • Interested in what you’re saying, as you know more about the topic than they do
  • Disinterested, but can’t or don’t want to leave
  • Know as much as you, but are too polite to call you out on minor mistakes

While in university, I landed a job at a Darkzone laser tag centre. I went on to write software (primarily Delphi code) for the laser tag game, which is probably as close to being a game developer as I’ll ever get. One of the main tasks when working at Zone was to perform pre-game briefings for up to 60 people, although usually the audience would be around 15. After performing these briefings 10-15 times each week, you completely lose the fear of standing up in front of people and telling the briefing story one more time, and it certainly reassures you that if you practice what you’re presenting enough, you can easily deliver it to an audience. Even if you’re making it up off the top of your head, you are still comfortable enough to stand there. That skill came in handy a couple of months ago, when I had the pleasure of making a cultural gaffe in front of 400 non-English speakers.

I’m still relatively new to presenting technical ideas in groups greater than about 5 people, but have given a number of presentations, with more planned – hello SQL Saturday 186 in Auckland! That will allow me to make the claim of being an international speaker, but the reality is that I know I have a long way to go to improve my skills. Every time I present, I go through a phase about a week beforehand, where I think “Why did I sign up for this?” I could have stayed at home, relaxing, rather than volunteering to put myself through this scrutiny – completely forgetting that most people want you to succeed. Darkzone briefings are easy – you practice the 5-10 minute talk regularly, and with no great technical accuracy. A technical presentation, however, needs to be correct (as much as possible), and due to its length, you can’t rehearse hundreds of times. At the end of the presentation, however, I’ve come through unscathed, improved myself by researching a topic well, and have imparted some knowledge to other people.

I think that’s worth it, and it’s a great feeling when someone gives you positive feedback about how they were able to use the information you gave them, even 18 months later.

My biggest tip for presenting would be to respect the audience, and put in a ton of effort in developing and rehearsing. They’re giving you an hour of their time – make it worth it, and you’ll reap the rewards.

Feb 12 2013

This month’s T-SQL Tuesday is brought to us by Wayne Sheffield (blog | twitter), who asks for some awesome uses of Powershell to manage SQL Server.

As with most scripts, I had a flash of inspiration that I could use Powershell to make my life a little bit easier. I spent 2 hours to save 10 seconds of effort, but those 10 seconds keep repaying themselves. Six months ago, I needed to sort a list of server names. Nothing too exciting, and most people would fire up Excel to perform the sort.

You would first copy your unsorted list into the clipboard, paste it into Excel, highlight the rows (or the entire column) and hit “Sort”. Then you’d select the newly sorted rows, copy them back into the clipboard, and paste them elsewhere.

That’s way too much effort.

Let’s try this workflow instead: You copy your unsorted list. You click on an icon in your taskbar, watch a Powershell window flash up for half a second, and then paste, noting that the contents are now sorted.

The script isn’t too long, complicated only by the fact that we need to hook into System.Windows.Forms to gain access to the clipboard:

## This will only work if you run Powershell with the -sta switch:
#    Powershell -sta

function Get-ClipboardText() {
    # A multiline TextBox gives us an easy way to read the clipboard:
    # paste into it, then read its Text property.
    Add-Type -AssemblyName System.Windows.Forms
    $tb = New-Object System.Windows.Forms.TextBox
    $tb.Multiline = $true
    $tb.Paste()
    $tb.Text
}

function Set-ClipboardText($text) {
    # Load System.Windows.Forms assembly.
    $null = [Reflection.Assembly]::LoadWithPartialName("System.Windows.Forms")

    $dataObject = New-Object Windows.Forms.DataObject

    # Add generated string to data object.
    $dataObject.SetData([Windows.Forms.DataFormats]::UnicodeText, $true, $text)

    [Windows.Forms.Clipboard]::SetDataObject($dataObject, $true)
}

$data = Get-ClipboardText

# Split into lines and sort, ignoring any blank lines.
$data -split "`r`n" | Where-Object { $_ -ne "" } | Sort-Object | Set-Variable -Name sorted

[string] $output = ""

foreach ($line in $sorted) {
    $output += $line + "`r`n"
}

Set-ClipboardText $output
I’m sure there must be an easier way to perform the sort than my solution, but this was the best I was able to come up with in the limited time I allowed myself.

In order to set this up as an icon, we simply create a shortcut with the target:

powershell.exe -sta ./Sort-Clipboard.ps1

making sure to “Start In” the path with the Sort-Clipboard.ps1 file. The -sta flag is necessary to run Powershell in Single Threaded Apartment mode, in order to gain access to the Clipboard class.

If you want to use the Sort-Clipboard.ps1 script from within Powershell, you’ll need to spin up a new Powershell session in -sta mode, which can be done using a simple one-line script:

Start-Process powershell -args "-sta -noprofile .\Sort-Clipboard.ps1"

Credit and code for these techniques must go to Brian Reiter, Dave Regal, and the MSDN Powershell blogging team. Thank you all!

Server names are one example, but there are many more situations where you might want to sort a list in day-to-day database management. You could adapt the code to sort anything – the really useful part here is the access to the clipboard.

Aug 14 2012

It’s T-SQL Tuesday again, and this month’s gracious host is Mike Fal (Blog | Twitter), urging us to deliver wonderful stories about SQL Server trick shots. The first thing that comes to mind is the Yo Gabba Gabba “Cool Tricks” segment. You should probably read the rest of my post before watching the video, or you’re liable to end up watching Flight of the Conchords shorts for the next twenty minutes.

My trick shot is nothing particularly revolutionary, but my hope is that at least one person hasn’t seen this trick before. I’ve used it often enough that I’ve got it down to a fine art.

The trick is using SQL Server Profiler to discover the inner workings of SQL Server Management Studio. Almost everyone knows about the “Script” button at the top of most dialog boxes, which generates the T-SQL commands to perform a specific action, but occasionally you want to see exactly how SSMS came up with something it is displaying on screen. There’s nothing particularly special about SSMS – it doesn’t appear to interface with SQL Server in any way that you or I can’t, using primarily T-SQL commands with a bit of SMO thrown in. This generally means that anything you can see or do in SSMS has an equivalent command, and you can capture that activity.

The trick in action

For example, one of my preferred methods to see how much space is available in a file is to use the Shrink File dialog (right click the database in Object Explorer, Tasks, Shrink, Files).


There’s 220 MB allocated, and 36.75 MB available. That could be a very useful thing to know, and to use in my own set of diagnostic queries. How does SSMS determine this? Time to bring out the trick shot.

Fire up SQL Server Profiler from the Tools menu, and connect to a non-production machine. You can keep all the default events, but the important part is the filter. If you’re on a machine by yourself, a simple filter on Application Name (like ‘%Management Studio%’) is sufficient, but you can add extra filters for your login name if necessary.


Once the trace is running, perform the fewest steps necessary to trigger what you want to capture, clearing the trace window if need be. For example, right clicking on a database in the Object Explorer causes 39 lines worth of activity to appear. The minimum here is to get into the Shrink File dialog, then change the drop down lists to choose which file you want to display. The fewer rows to wade through, the easier it will be to find the T-SQL you’re looking for. In our case:

exec sp_executesql N'use [AdventureWorks];select 
CAST(CASE s.type WHEN 2 THEN s.size * CONVERT(float,8) ELSE dfs.allocated_extent_page_count*convert(float,8) END AS float) AS [UsedSpace],
CASE s.type WHEN 2 THEN 0 ELSE @_msparam_0 - dfs.allocated_extent_page_count*convert(float,8) END AS [AvailableSpace] 
from sys.filegroups AS g
inner join sys.database_files AS s on ((s.type = 2 or s.type = 0) and (s.drop_lsn IS NULL)) AND (s.data_space_id=g.data_space_id)
left outer join sys.dm_db_file_space_usage as dfs ON dfs.database_id = db_id() AND dfs.file_id = s.file_id
where s.name = @_msparam_1 and g.data_space_id = @_msparam_2
',N'@_msparam_0 nvarchar(4000),@_msparam_1 nvarchar(4000),@_msparam_2 nvarchar(4000)',@_msparam_0=N'225280',@_msparam_1=N'AdventureWorks2008R2_Data',@_msparam_2=N'1'

This returns a UsedSpace of 187648 KB (183.25 MB), and AvailableSpace of 37632 KB (36.75 MB), the sum of which is the 220 MB shown above. We’ve managed to discover that SSMS uses sys.dm_db_file_space_usage to determine free space.
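Out of interest, the captured query can be reshaped into a standalone diagnostic by inlining the parameters. This is just a sketch, assuming SQL Server 2012 – before 2012, sys.dm_db_file_space_usage only reported on tempdb:

```sql
-- Run inside the database you're interested in.
SELECT s.name AS [FileName],
       s.size * CONVERT(float, 8) AS [SizeKB],
       dfs.allocated_extent_page_count * CONVERT(float, 8) AS [UsedSpaceKB],
       (s.size - dfs.allocated_extent_page_count) * CONVERT(float, 8) AS [AvailableSpaceKB]
FROM sys.database_files AS s
LEFT OUTER JOIN sys.dm_db_file_space_usage AS dfs
    ON dfs.database_id = DB_ID() AND dfs.file_id = s.file_id
WHERE s.type = 0;  -- data files only
```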

What have I learned here?

I’ve learned many things in the past using this trick, but the most recent was while writing up this post, which was done using SQL Server 2012 (with an upgraded 2008 R2 AdventureWorks database). I originally tested using SSMS 2008 R2, connecting to a 2008 instance, and those tests showed that SSMS was actually using the FILEPROPERTY function to get the space used, with code similar to:

SELECT FILEPROPERTY('AdventureWorks2008R2_Data', 'SpaceUsed') * CONVERT(float, 8)

When I ran the test again to write it up, I was a little surprised to see sys.dm_db_file_space_usage appear. This has been available since SQL Server 2005, but this part of SSMS just hadn’t been updated to take advantage of the DMVs until now.

Parting words

There you have it – if you see any interesting value come out of SSMS, use Profiler to see exactly how they did it, and you can learn from it.

My name’s Jim, and my cool trick is using my Profiler x-ray glasses (go watch the video now).

Apr 12 2011

Welcome to T-SQL Tuesday for April 12, 2011. This month is generously hosted by Matt Velic (Blog | Twitter), who poses the topic of APPLY, one of the many excellent additions to the T-SQL language in SQL Server 2005.

If you read other T-SQL Tuesday posts from today, you’ll get some excellent tips on how to use APPLY, and a list of the excellent things you can do with it. I’m going to go in a slightly different direction, and examine what your options are when using APPLY with database compatibility level 80.

Books Online states that the database must be at least compatibility level 90 to use APPLY.

It turns out that you can create a query that uses APPLY, and have it run in compatibility level 80. Consider the following setup:


CREATE DATABASE Compat80 
GO 
ALTER DATABASE Compat80 SET COMPATIBILITY_LEVEL = 80 
GO 
USE Compat80 
GO 
CREATE TABLE A (i int, name varchar(20)) 
CREATE TABLE B (i int, amount decimal(15,3)) 
INSERT INTO A (i, name) VALUES (1, 'Jack') 
INSERT INTO A (i, name) VALUES (2, 'Bob') 
INSERT INTO A (i, name) VALUES (3, 'Sally') 
INSERT INTO B (i, amount) VALUES (1, 25.0) 
INSERT INTO B (i, amount) VALUES (1, 50.5) 
INSERT INTO B (i, amount) VALUES (2, 16.0) 
INSERT INTO B (i, amount) VALUES (3, 110.0) 

Can we use an APPLY in Compatibility 80?

We’ve created a database in compatibility level 80, and created two tables, A and B. Let’s try a query with CROSS APPLY:

SELECT A.Name, dv.Amount FROM A 
CROSS APPLY (SELECT i, SUM(Amount) Amount 
             FROM B WHERE B.i = A.i GROUP BY i) dv 
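For the record, the result set from the setup data above (row order may vary) is:

```
Name    Amount
------  --------
Jack    75.500
Bob     16.000
Sally   110.000
```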


There’s not much to say here, except that I proved Books Online wrong – I can do a CROSS APPLY in Compatibility 80. I don’t know the exact reason why I can do this, but it’s likely to be the simplistic nature of the query (which makes it less interesting). I’ll do a quick little victory dance, and we’ll move on.

A more useful use of APPLY

Where APPLY really shines is when functions are used. Let’s go ahead and create a simple function to test with:

CREATE FUNCTION dbo.fn_getB(@i AS int) 
RETURNS @result TABLE (
      i   int NOT NULL 
    , amount decimal(15,3) 
)
AS
BEGIN
    INSERT INTO @result 
    SELECT i, amount FROM B WHERE i = @i 
    RETURN
END

The logic of this function is slightly different from the previous query – it returns the individual rows from B rather than their sum – but that only serves to make it more interesting. Let’s go ahead and use this function in an APPLY, remembering that we’re still in Compatibility 80:

SELECT A.name, dv.amount FROM A 
CROSS APPLY dbo.fn_getB(A.i) dv 

Msg 102, Level 15, State 1, Line 2

Incorrect syntax near ‘.’.

That’s disappointing, and rather cryptic. Let’s try again after switching to Compatibility 90:

SELECT A.name, dv.amount FROM A 
CROSS APPLY dbo.fn_getB(A.i) dv 
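This time it works. From the setup data above, the result set (row order may vary) is:

```
name    amount
------  --------
Jack    25.000
Jack    50.500
Bob     16.000
Sally   110.000
```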



Getting around this restriction

We’ve seen that APPLY using functions is a handy tool, but can we still use it in databases that are in Compatibility 80? The following is a workaround, but it’s a bit of a hack. I just like thinking outside the box.

The limitation here is that we cannot let a Compatibility 80 database execute the query. We can, however, run the query from within a different database, such as Master (ignoring permission issues) by using sp_executesql.

EXEC sp_executesql N'USE Master; 
        SELECT * FROM Compat80.dbo.A 
        CROSS APPLY Compat80.dbo.fn_getB(A.i) dv ' 

The main reason I’d want to use APPLY is to use the DMFs with the DMVs provided by SQL Server 2005, such as sys.dm_exec_*. However, these DMVs typically supply a database_id column, meaning you can run the query from within Master anyway. With that in mind, I can’t think of any real benefit of this technique – it’s almost always a better option to simply upgrade your databases to compatibility 90 or above – we are in 2011, after all.
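For reference, this is the sort of DMV-plus-DMF pairing I mean – CROSS APPLY is the natural way to pass each row’s sql_handle into the function:

```sql
-- What is every active request currently running?
SELECT r.session_id, t.text
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) AS t;
```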

Thanks again for this month’s edition of T-SQL Tuesday, Matt!

Mar 08 2011

The subject of T-SQL Tuesday #016, kindly hosted by Jes Borland (blog | twitter), is Aggregation. Although it’s a fine topic, I could not figure out what I could write about aggregation that would capture my interest, and more importantly, dear reader, yours.

I thought about the topic ceaselessly while doing dishes, feeding children, and cleaning up the house. I considered aggregate functions, and optional grouping parameters. I twisted the topic inside and out, trying to figure out an interesting angle. I considered telling a story about the Tuples vs the Aggregates on their way to Query Station (and spent an embarrassingly long time on that train of thought). Finally, I went out for a run, thought of a topic pretty quickly and started turning the idea over in my mind for the next 7 km.

Columnstore Indexes

A columnstore index is a new feature of Denali, whereby an index is stored vertically – one column per page.  Here is the whitepaper download – highly recommended, and only 5 pages of text. Imagine a row of a typical covering index (in this case, AdventureWorks’ Person.Person table, and I’ve covered every column):


Each row in this particular index has all of its columns stored together on the same page. As each row is quite wide, only 5 rows will fit per 8 KB page.

A columnstore index, however, stores each column on a separate set of pages. I am unsure of the ordering of the rows* but each page is compressed. As the column is likely to contain similar data, this means that each page can have excellent compression ratios – a factor of 4-15 is quoted in the whitepaper – and thousands of rows can fit on a single page.

* Due to a lack of internals documentation so far. Are they ordered by the column, or by the clustering key? I can see arguments for either way, but haven’t yet dived deep enough to decide one or the other.

This allows for fantastic performance gains when performing aggregates, as fewer pages need to be read to service the query, and only the columns actually required by the query need to be accessed.

However, because columnstore indexes are read only, they really only have applications in data warehouses.
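For the curious, creating a columnstore index in Denali is a single statement – the index name and column list below are just for illustration:

```sql
-- Cover the columns you expect your aggregate queries to touch.
CREATE NONCLUSTERED COLUMNSTORE INDEX ix_Person_columnstore
ON Person.Person (BusinessEntityID, FirstName, LastName, ModifiedDate);
```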

Can we reproduce columnstore indexes in SQL Server 2008?

The question of whether it is possible to reproduce columnstore indexes in SQL Server 2008 popped into my mind, and the short answer is “No, not a chance”. Sorry.

Let’s look at the properties of a columnstore index.

Read Only – SQL Server 2008 can certainly build index structures on read-only data. And, as it’s read only, separate aggregation steps can be created in 2008 to summarise data ahead of time. However, one of columnstore’s selling points is to remove the need to aggregate. Columnstore gives you a massive performance benefit with little effort.
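As a sketch of that 2008-style pre-aggregation, an indexed view can do the summarising ahead of time (the view and index names here are illustrative, based on AdventureWorks):

```sql
CREATE VIEW dbo.v_SalesByProduct
WITH SCHEMABINDING
AS
SELECT ProductID,
       SUM(OrderQty * UnitPrice) AS TotalSales,
       COUNT_BIG(*) AS RowCnt  -- required in an indexed view with GROUP BY
FROM Sales.SalesOrderDetail
GROUP BY ProductID;
GO
-- Materialise the view by giving it a unique clustered index.
CREATE UNIQUE CLUSTERED INDEX ix_v_SalesByProduct
ON dbo.v_SalesByProduct (ProductID);
```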

Highly Compressed – SQL Server 2008 can compress indexes. However, to get compression ratios similar to columnstore’s, you’ll only want a single column in the index, which makes the index less useful, and forces you to join frequently.
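For example, a single-column compressed index in 2008 looks like this (index name assumed):

```sql
CREATE INDEX ix_Person_LastName
ON Person.Person (LastName)
WITH (DATA_COMPRESSION = PAGE);
```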

Only select required columns – A query accessing a columnstore index will only access those columns that are needed by the query. To get the same sort of functionality in 2008, you would need to either create a covering index for each query you want (nothing like columnstore!), or join multiple indexes together (resulting in a large amount of logical reads).

Different table structure – Denali is able to use a completely different page structure, particularly as this data is read-only. This is not possible in SQL Server 2008 while remaining usable from T-SQL.

No code changes – Denali’s implementation allows us to create a single columnstore index on all columns in the table, and we don’t need to modify any queries to use these columnstore indexes. It’s completely seamless.

I tried to create an example where we could create similar compressed indexes in 2008, but I was never able to merge multiple indexes together to query the data efficiently. Denali has specific optimisations for columnstore indexes, and these cannot be reproduced. About the best we can do is to create a read-only compressed covering index, which is nowhere near as impressive.

After a few hours considering this, my conclusion is that you are better off optimising your data warehouse for those queries that you know are going to come, and ad-hoc queries will need to continue to run slowly, at least until you’ve upgraded to Denali. Frequently run queries will benefit from extra customisation (pre-aggregation, specialised indexes with compression), but there’s no chance of a similar payoff inside the database engine.

Not too long to wait now, and you’ll be able to take advantage of columnstore indexes!

Jan 11 2011

Another T-SQL Tuesday has rolled around, and for once I was prepared for it.  This month’s topic, hosted by Jen McCown (blog|twitter) is on the topic of Resolutions.  I spent a week rolling around ideas in my head for this topic, hit upon some excellent angles, and then had fate completely change the direction of my post.


I’m not a big believer in "New Year’s Resolutions", mostly because they’re so clichéd and popular opinion is that you’re doomed to fail if you have one.  Instead, I appreciate the holiday period as a time to reflect and set more general life directions, as opposed to the focussed task of "exercise more". 

In terms of SQL Server related goals that I want to accomplish this year, a quick brainstorm came up with the following:

  • I really should blog more often, say once per week.   Something 95% of SQL bloggers say every few months.
  • I really should post a bit more on my favourite SQL-based forums.  Two quality posts a day sounds feasible.  This one mostly comes down to time.
  • Wow, that MCM certification looks awfully interesting, challenging, yet achievable.  Let’s do that!
  • 2011 will be the year where I get my head around recursive CTEs!
  • Can I please, please, please use the MERGE command in production code?
  • Denali, I’m playing with you.  Get ready to be swept off your feet!
  • I’m going to slow my handwriting down, and write legibly and neatly for the first time in 15 years. (Not a technical wish, but still important!)

That third point, the MCM, should be enough to keep me busy for the first half of the year at least.  I’ll need to do the MCITP 2008: Upgrade from 2005 DBA exam (70-453), and then the MCTS 2008: Developer exam (70-433) (I’ve already done the 70-451, back when it was in beta).

I have already planned for these in a more serious way.  I changed different things in my life in November, December and January, so there’s nothing particularly significant about January 1.  Do it now – don’t wait until some arbitrary date!

However, it is important to consider the non-technical side of your life, which is something that hit home to me again last night as I received news of my wife’s family in flood-ravaged south-east Queensland.  Don’t be so fixated on SQL Server that you dilute or lose connections with your loved ones.  No matter how warm SQL Server may be (especially when you’re standing next to the server during business hours!), it’s a very poor substitute to family and friends.