SQL Saturday 496 – Exeter 2016 – Announcement Notes

We’re alive! We’re live! And we’re planning the next SQL Saturday Exeter.

Here are some links and some information that might help.

SQL Saturday 496 Event Homepage

Dates: 11th / 12th March 2016.

Venue: Same as last time, and the time before – Jury’s Inn, Exeter.

Transport stuff: multistorey car park opposite, open air car park round the corner. There are also trains and planes, and there are rumours of buses / coaches too. Lots of options.

Speakers wanted – for both the full day precon on the Friday, and the Saturday itself. Fancy a go? We’re all in favour of getting new speakers. Don’t be shy. You’ve got until 8th January 2016 to sign up as a speaker.

Format: for those of you who haven’t been to one of these events before, Friday 11th will be a choice of full day training courses, very reasonably priced, but you need to book in advance. The Saturday will be a day of short sessions (we run 50 minutes per session, rather than an hour), and you’re free to go to whatever sessions you want.

Nervous about crowds? Don’t be – the SQL Saturdays in Exeter aren’t huge events, but they are friendly and the atmosphere is relaxed.

Want to make a weekend of it with the family? There are various attractions in the area that are child-friendly. I recommend Pennywell Farm, about half an hour away.

Any questions? Feel free to get in touch with the committee – we’re friendly. Twitter’s your best bet. This year, the committee consists of Jonathan Allen (@fatherjack), Annette Allen (@mrs_fatherjack), Mark Pryce-Maher (@tsqltidy), Rob Sewell (@fade2blackuk) and me (@thomasrushton). The event itself has a twitter account (@SQLSatExeter), as does the SQL PASS Chapter organising it – @SQLSouthWest. If you’re local, you might want to get to SQL South West User Group – regular meetings.


T-SQL Tuesday #72 – Data Modelling Gone Wrong

This is the 72nd (72? Nearly 6 years…) T-SQL Tuesday, started by Adam Machanic. Each month, someone sets a topic – this month, Mickey Stuewe (Blog|@SQLMickey) has set the topic “Data Modelling Gone Wrong”.

I noticed this rather too late to do a proper post, so I’ll just mention a couple of things I’ve seen. These are from the past – I’m sure nobody’s doing anything quite so unpleasant any more!

Field named Field

Nothing to do with agriculture, I promise! I came across a table that had, in addition to an ID field and various more useful fields, the following group of fields (in this order):

  • Field3
  • Field1
  • Field2

Yes, those really were the names, and that really was the order in which they appeared in the table definition. I have no idea how that came to be. Supporting documentation was never found.

Multi-purpose / Overloaded Fields

Once upon a time, I was working on a system which had a big table full of address-type data, for multiple different entity types (banks, solicitors, customers, clients etc) – something over a dozen different address “types”. The tricky part was that sometimes fields were repurposed – for example, the Fax Number field for a particular address type would actually be some other internal reference number. Oh, and the only documentation for this was to be had by reading the source code of the applications that interacted with this data.

My Plea

Please, for the sake of everyone’s sanity, don’t do this sort of thing.


Using Redgate SQL Monitor with Azure Virtual Machines

We have an environment built out of SQL Servers installed on Azure Virtual Machines, and we want to use Redgate’s SQL Monitor to monitor it. However, as of version 4.x, SQL Monitor wasn’t completely supported on the Azure platform – mainly, I suspect, because of the network requirements…

Our basic architecture was:

  • Multiple VLANs containing SQL Servers to be monitored
  • VLAN containing the monitoring server

Probably not the best for what we were wanting to do, but you work with what you’re given. I installed SQL Monitor, fired it up, and nothing worked.

After much trial and error, and a lot of network monitoring by a very enthusiastic young infrastructure guy, here are the inbound rules that we needed to put in place on each SQL Server VLAN to get this working:

Name         Action  Destination      Protocol
SQL          Allow   *:1433           TCP
SMB          Allow   *:445            TCP
RPC          Allow   *:135            TCP
Netlogon     Allow   *:137            TCP
RPC_Dynamic  Allow   *:49152-65535    TCP
TCP          Deny    *:*              TCP
UDP          Deny    *:*              UDP
ALL          Allow   *:*              *

Columns omitted from the table (they’re the same for every rule):

  • Type – all inbound
  • Source address – the IP address of the monitoring server
  • Source port – * (there’s probably a more elegant way of doing this)

Those last three rules – deny all other TCP, deny all other UDP, then allow everything else – are there purely to let PING (ICMP) traffic through; without that, Redgate SQL Monitor (and probably most other monitoring systems) won’t work.
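For reference, rules like these could be scripted rather than clicked through. This is a sketch only, assuming the classic (Service Management) Azure PowerShell module of the time; the NSG name, priority, and monitoring-server IP here are illustrative, not from our environment:

```powershell
# Hedged sketch - classic Azure (Service Management) module assumed.
# 'SQL-VLAN-NSG', priority 100, and the source IP are hypothetical values.
Get-AzureNetworkSecurityGroup -Name 'SQL-VLAN-NSG' |
    Set-AzureNetworkSecurityRule -Name 'SQL' -Type Inbound -Priority 100 `
        -Action Allow -SourceAddressPrefix '10.0.1.4/32' -SourcePortRange '*' `
        -DestinationAddressPrefix '*' -DestinationPortRange '1433' -Protocol TCP
```

The remaining rules would follow the same pattern, with ascending priorities so the two deny rules and the final allow-everything rule evaluate last.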

Everything is now fine…

…except version 5 introduced functionality to also monitor SQL Server Availability groups by talking directly to the SQL AG IP address, and we haven’t yet managed to get that working properly. But we’re working on it, and will update this post when ready.


I am a friend of Red Gate, and I do get free stuff to play with; however, this isn’t one of those tools, and Redgate hasn’t asked / paid me to write this post. I’ve written partly so that I have notes for the next time, and partly because it might help someone else.


SQL Server TempDB, Azure Virtual Machines, and the D: Drive

Why put the TempDB on the D: drive?

In the wonderful world of Azure Virtual Machines, for certain VM types, the D: drive is hosted on SSDs – great for performance. However, in this world, the D: drive is “non-persistent storage”, meaning that there’s no guarantee that what you put there will still be there after you reboot the server. Heck, MS even leaves a little note on D:\ to that effect in the file called (another hint) DATALOSS_WARNING_README.TXT:


Any data stored on this drive is SUBJECT TO LOSS and THERE IS NO WAY TO RECOVER IT.

Please do not use this disk for storing any personal or application data.

For additional details please refer to the MSDN documentation at : http://msdn.microsoft.com/en-us/library/windowsazure/jj672979.aspx

Sounds ideal for SQL Server’s TempDB. Indeed, there’s a recommendation to do this in the Azure documentation.

However, this non-persistent nature of the Azure Virtual Machine’s D: drive causes a problem for SQL Server, as for some reason, it can’t cope with building the path to the TempDB files on boot. It’s quite happy rebuilding the TempDB files themselves, but it relies on the folder structure being built already.

Obvious question: why not put the TempDB (and the SSAS equivalent) in the root of D:? In the immortal words of the ancient philosopher Clarksonius, how hard can it be?


We’re using a scripted SQL Server installation, so it’s easily and quickly repeatable, and we have control over the configuration script used to build SQL Server without having to go through the individual setup steps.

In these attempts, where I suggest a change to the “SQLTEMPDBDIR” setting, you might also want to change the “ASTEMPDIR” setting if you’re also doing an SSAS installation.

First attempt

First, and most obvious, thing to try: change the line




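The configuration-file line itself appears to have gone missing from this copy of the post; judging by the error message below (an invalid `"` character in SQLTEMPDBDIR), the first attempt was presumably something like this – my reconstruction, not the original:

```ini
; Reconstructed from the error below - quoted path with trailing backslash,
; which leaves the escaped quote character inside the value:
SQLTEMPDBDIR="D:\"
```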
This bombs out quite quickly with this message in the summary report:

Overall summary:
  Final result:                  Failed: see details below
  Exit code (Decimal):           -2061893624
  Exit facility code:            1306
  Exit error code:               8
  Exit message:                  The input setting "SQLTEMPDBDIR" has invalid character " " ". Retry setup with valid value for "SQLTEMPDBDIR". 
  Start time:                    2015-07-31 08:15:44
  End time:                      2015-07-31 08:18:30
  Requested action:              Install
  Exception help link:           http://go.microsoft.com/fwlink?LinkId=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=12.0.4100.1&EvtType=0x5225C3D0%400x4BDAF9BA%401306%408&EvtType=0x5225C3D0%400x4BDAF9BA%401306%408

Next attempt

Let’s try without the quotes. Change the SQLTEMPDBDIR line to:


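Again, the snippet itself is missing from this copy; from the context (“without the quotes”), it was presumably:

```ini
; Reconstruction - unquoted, still with the trailing backslash:
SQLTEMPDBDIR=D:\
```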

Overall summary:
  Final result:                  Failed: see details below
  Exit code (Decimal):           -2068578304
  Exit facility code:            1204
  Exit error code:               0
  Exit message:                  The folder path specified is invalid. Please enter a valid folder path.. Folder Path=D:"
  Start time:                    2015-07-31 08:23:40
  End time:                      2015-07-31 08:24:37
  Requested action:              Install

Third attempt

Try getting a bit cocky here, with the “.” notation:


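(The line itself is missing here too; from the error below, it was presumably:)

```ini
; Reconstruction - the "." notation for the current/root directory:
SQLTEMPDBDIR=D:\.
```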

Overall summary:
  Final result:                  Failed: see details below
  Exit code (Decimal):           -2068578304
  Exit facility code:            1204
  Exit error code:               0
  Exit message:                  The folder path specified is invalid. Please enter a valid folder path.. Folder Path=D:\.
  Start time:                    2015-07-31 08:31:04
  End time:                      2015-07-31 08:31:54
  Requested action:              Install

Hang on…

Hang on. I missed something. Didn’t the result of attempt two – the one without the quotes – have something else in the error message? “Folder Path=D:” That’s not what I put in. Let’s try one more:


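(This snippet is missing as well; given the observation about the vanishing trailing backslash, the fourth attempt was presumably the drive letter with no trailing slash at all:)

```ini
; Reconstruction - no trailing backslash this time:
SQLTEMPDBDIR=D:
```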
Oh, this looks hopeful. It’s not bombing out so quickly. Time for a coffee?

Some time later… Fail. Multiple occurrences of the following set of error messages:

  Feature:                       Database Engine Services
  Status:                        Failed: see logs for details
  Reason for failure:            An error occurred during the setup process of the feature.
  Next Step:                     Use the following information to resolve the error, uninstall this feature, and then run the setup process again.
  Component name:                SQL Server Database Engine Services Instance Features
  Component error code:          0x84CF0004
  Error description:             While updating permission setting for folder 'D:\' the permission setting update failed for file 'D:\pagefile.sys'. The folder permission setting were supposed to be set to 'D:P(A;OICI;FA;;;BA)(A;OICI;FA;;;SY)(A;OICI;FA;;;CO)(A;OICI;FA;;;S-1-5-80-3880718306-3832830129-1677859214-2598158968-1052248003)'.
  Error help link:               http://go.microsoft.com/fwlink?LinkId=20476&ProdName=Microsoft+SQL+Server&EvtSrc=setup.rll&EvtID=50000&ProdVer=12.0.4100.1&EvtType=0xB11EAE11%400xE9BC3D64&EvtType=0xB11EAE11%400xE9BC3D64

For the following features:

  • Feature: Reporting Services – Native
  • Feature: Database Engine Services
  • Feature: Data Quality Services
  • Feature: Full-Text and Semantic Extractions for Search
  • Feature: SQL Server Replication

To put it another way, the SQL Server process tries to take control of the folder that contains TempDB, and, in this case, fails miserably.

And in the end

What I ended up doing was, as recommended in this post, creating a Scheduled Task to run a PowerShell script at boot time that rebuilds the folder structure I wanted for TempDB, SSASTempDB, etc, and then starts the SQL Server services.
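I won’t reproduce the full script here, but a minimal sketch of that boot-time PowerShell looks something like the following – the folder paths are illustrative, and the service names assume a default instance:

```powershell
# Minimal sketch of a boot-time Scheduled Task script: recreate the TempDB
# folder structure on the non-persistent D: drive, then start the services.
# Folder paths are illustrative; service names assume a default instance.
$folders = 'D:\SQLTempDB', 'D:\SSASTempDB'
foreach ($folder in $folders) {
    if (-not (Test-Path -Path $folder)) {
        New-Item -Path $folder -ItemType Directory | Out-Null
    }
}
Start-Service -Name 'MSSQLSERVER'
Start-Service -Name 'SQLSERVERAGENT'
```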

What have we learned?

  • SQL Server really should be clever enough to rebuild the path to TempDB – after all, that’s probably the easiest part of rebuilding TempDB
  • TempDB probably can’t be in the root directory
  • We need another plan
  • There’s only so many build failures I can take in a morning
  • I really needed to be a bit cannier about when to Google – I found that Scheduled Task recommendation about an hour after I started down this route.

Notes on Instant File Initialization and Running Scripts on Multiple Servers

Instant File Initialization (IFI)

IFI can be a useful performance booster if you’re in an environment where your database files grow regularly. (There’s an argument that if that happens regularly, you should instead grow the files massively up front, in a controlled out-of-hours way, rather than piecemeal – but that’s a discussion for another time.)

What normally happens when a file grows is that space is allocated to the file and then filled with zeroes. With IFI, that zeroing step is skipped, allowing files to be created and resized significantly more quickly. Note that this applies only to SQL Server database data files, not to transaction log files.
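As an aside, one common way (on SQL Server versions of this era) to check whether IFI is actually in effect is to enable trace flags 3004 and 3605, create a scratch database, and look in the error log – a sketch, with an arbitrary database name:

```sql
-- Check whether Instant File Initialization is in effect.
-- TF3004 reports file-zeroing operations; TF3605 routes that output to the error log.
DBCC TRACEON (3004, 3605, -1);
CREATE DATABASE IFI_Check;   -- scratch database; the name is arbitrary
EXEC sp_readerrorlog 0, 1, N'Zeroing';
-- With IFI enabled, only the log (.ldf) file should appear in the "Zeroing"
-- messages; if the data file is listed too, IFI is not working.
DBCC TRACEOFF (3004, 3605, -1);
DROP DATABASE IFI_Check;
```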

Configuring IFI Manually

To configure IFI manually, you need to look in the “Local Security Policy” tool – Start->Run “secpol.msc”. In the treeview on the left, go to Security Settings -> Local Policies -> User Rights Assignment.

Security Policy - Perform Volume Maintenance

In the listview on the right, go to “Perform Volume Maintenance Tasks”, double click, and add the user that runs the SQL Server service.

Easy enough, for one server. What if you’ve got many?

Multiple servers

What if you’ve got many servers to deal with? Well, you could do them one at a time, going through the whole rigmarole – log in, fire up secpol, rummage around, find the user ID for the SQL Server service, and grant it the right – but that would be a real pain in the backside.

What you should do is have this as part of your build script (what do you mean, you’re not doing scripted deployments?), and Mike Fal demonstrates this in this blog post on Powershell and Automating SQL Server Builds.

What if you don’t use the same service account for each server? Or what if you don’t trust the guy who set the servers up to have used the same user ID? Then you’ll need to detect the service account being used by the SQL Server service, and for that, I’m indebted to the partial script in The Scripting Guy’s post on finding service accounts with Powershell.

As for the whole “looping round all the servers” bit, well, I’ll leave that to you. In my case, I have an Azure environment wherein all the SQL Servers (and only the SQL Servers) have a name that matches the pattern “*SQLSRV*”, so I can use the Get-AzureVM method, and a bit of a lashed-together script to loop through and create remote sessions on each server.

The final script

$SQLServers = get-azureVM | where-object {$_.name -like '*SQL*'}
$UserName = '<<admin user name>>'
$UserDomain = '<<domain name>>'
$UserPassword = '<<admin user password>>'
$UserFullName = $UserDomain + '\' + $UserName
$UserCred = New-Object -TypeName System.Management.Automation.PSCredential `
                         -ArgumentList $UserFullName, `
                                       (ConvertTo-SecureString -String $UserPassword -AsPlainText -Force)
$so = New-PSSessionOption -SkipCACheck -SkipCNCheck
$SQLServers | ForEach-Object -Process {
    $SQLServer = $_
    $s = New-PSSession -ComputerName $SQLServer.Name -UseSSL -SessionOption $so -Cred $UserCred
    Invoke-Command -Session $s -scriptblock {
        #Configure Instant File Initialization – from MikeFal
        $svcaccount = (get-wmiobject win32_service | where-object {$_.Name -ilike 'MSSQLSERVER'} ).startname  #This bit from Scripting Guy
        secedit /export /areas USER_RIGHTS /cfg C:\Source\templocalsec.cfg
        $privline = (Get-Content C:\Source\templocalsec.cfg | Select-String 'SeManageVolumePrivilege').Line
        (Get-Content C:\Source\templocalsec.cfg).Replace($privline,"$privline,$svcaccount") | Out-File C:\Source\newlocalsec.cfg
        secedit /configure /db secedit.sdb /cfg C:\Source\newlocalsec.cfg
    }
    Remove-PSSession -Session $s  # tidy up the remote session
}

Yes, I know, there are some weaknesses here – no error handling, and the user credentials should be obtained in a more secure manner – but this is a quick one-off hack. Make sure you know what you’re doing before you run it. Back your environment up, check your will is up to date, and don’t drink and drive.


Possible SQL Server Trace Flag Gotcha

I was trying to turn on trace flag 3226 in order to suppress the swarm of successful transaction log backup messages that floods the SQL Server (Error) Logs (& Windows Application logs). After all, if you’ve got more than a few databases, and are running regular frequent transaction log backups due to a tight recovery point objective (RPO), you’ll see a lot of these messages getting in the way of more useful and important information.

To see which trace flags are currently active, use the DBCC TRACESTATUS statement:

DBCC TraceStatus 1
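For reference, the statement behind that screenshot would have been along these lines:

```sql
-- Show all globally enabled trace flags for the instance:
DBCC TRACESTATUS (-1);
```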

Oh, look. We’ve got one already – 1222. Googling tells me that this is the flag that’s enabled by default by Microsoft to provide useful deadlock information (an XML graph) rather than just reporting the fact that a deadlock has occurred.

The way to add a new global “permanent” traceflag is with SQL Server Configuration Manager. Go into the SQL Server properties, and head to the startup parameters page.

SQL Server Properties

Add the trace flag by typing “-T3226” into the “Specify a startup parameter” box and clicking the “Add” button. Unfortunately, you’ll now need to restart SQL Server to pick up this change.
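As an aside, if you can’t restart straight away, you can also turn the flag on globally from a query window – though, unlike the startup parameter, this won’t survive the next restart:

```sql
-- Enable trace flag 3226 globally for the running instance
-- (takes effect immediately; lost on restart):
DBCC TRACEON (3226, -1);
```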

And now check that the change was successful:

DBCC TraceStatus 2

Hang on. What happened to the trace flag 1222, so kindly provided by MS? Good question. It looks as though you lose any MS-provided defaults if you specify your own additional traceflags. Back to configuration manager to add the other flag:

SQL Server Properties 2

Re-restart, and re-check:

DBCC TraceStatus 3

NB: T1222 seems to be enabled by default on the SQL Servers (SQL Server 2014) I’ve built that have either MSDN or “real” licences, but only sporadically enabled on machines running evaluation / developer editions. Which is a bit odd, as the installations on those latter machines were all run through the same script and configuration file…



SQL Server Performance Monitor counters missing

Another day, another batch of servers to build and add to those being monitored in Redgate’s SQL Monitor.

Reminder – these servers are being built in a consistent process, using a brace of PowerShell / Azure scripts and a pre-prepared SQL Server configuration file.

Went to add them to SQL Monitor, and got an error: “Connection failed (Bad data)”. That doesn’t sound good. One nice feature of SQL Monitor is that you can see the full error log – in this case, we see:

Redgate Monitor Connection Failed bad data

Fortunately, the error message is nice and specific (although I’m not sure what the “#Rnv” exception code means). Basically, Perfmon didn’t have all the counters that were required – I could see counters for SQLAgent and for SSIS, but no sign of SQL Server.

To Google!

Pinal Dave had this post on missing Perfmon SQLServer counters, which shows the problem in PerfMon very nicely. I followed the steps he went through (in summary: checking sys.dm_os_performance_counters and the registry keys in HKLM\SYSTEM\CurrentControlSet\Services\MSSQLSERVER\Performance), and everything looked as it should. I went through the unload/load process too, but that didn’t fix the problem either. Hmm.

A TechNet article led me to try lodctr /R – a forced rebuild of *all* performance counters. That threw up an error along the lines of “Unable to rebuild performance counter – error 2”, which wasn’t helpful either, but it did give me something else to search for. That search turned up an article on the Geeks With Blogs site, which led me to a discrepancy between the “Last Counter” value in HKLM\Software\Microsoft\Windows NT\CurrentVersion\PerfLib and the last number (not the highest number – these things aren’t in numeric order…) in the multi-line registry value Counter in HKLM\Software\Microsoft\Windows NT\CurrentVersion\Perflib\009. Fix the former with the value from the latter, run lodctr /R again, and then reboot – that seemed to do the job. Thankfully.
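The comparison itself can be scripted. This is a hedged sketch, not the exact check I ran at the time – the registry paths are as in the post, but the assumption that the final numeric index sits in the second-to-last element of the Counter list may need adjusting if the value has trailing blank entries:

```powershell
# Sketch: compare the PerfLib "Last Counter" value with the final index in
# the 009\Counter multi-string. Paths as in the post; index position assumed.
$perflib     = 'HKLM:\SOFTWARE\Microsoft\Windows NT\CurrentVersion\Perflib'
$lastCounter = (Get-ItemProperty -Path $perflib).'Last Counter'
$counterList = (Get-ItemProperty -Path "$perflib\009").Counter
# Counter holds alternating index/name strings, so the last index is assumed
# to be the second-to-last element (the list is not in numeric order).
$lastListed  = [int]$counterList[-2]
"Last Counter registry value : $lastCounter"
"Last index in 009\Counter   : $lastListed"
if ($lastCounter -ne $lastListed) { 'Mismatch - Last Counter may need updating.' }
```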

Back to Redgate SQL Monitor, back to the server, retry connection, and all is good.
