Skype for Business – SQL Server Frustrations

Scenario

The environment is new and secure.

Assumptions: You are a DBA who knows how to build WSFCs and AGs. If you need help with that, then you’re in the wrong place for this post.

The Windows Server 2016 / SQL Server 2016 environment has been built: SQL Server 2016 Enterprise Edition installed, the Windows Failover Cluster configured, HA enabled on the SQL Servers, and the Skype for Business server software installed (but not configured). All is well.

Now to configure / create the Skype For Business topology. How hard can it be?

NB

This is being done from memory, so the actual implementation detail may be a little bit hazy, and I haven’t got any screenshots or actual error messages as this was done on someone else’s computer.

What went wrong, though – that’s real. And the reasoning is also as near as I can remember.

Creating the Topology

Configuring for Availability Groups

Launch the Topology Configuration wizard/app. By default it wants to go with mirroring, so you tell it to use Availability Groups. This causes the app to ask up front for the availability group name, which doesn’t exist, and then ask for a server name.

Don’t assume this is a mistake and give it the server name instead; the wizard really does want the name of the (not-yet-existing) availability group here. You’ve got to get this right up front. If you don’t, you’ll have to strip the whole thing out and start over.

Use PowerShell rather than the GUI

The GUI feels nicer (barring the obvious confusion above), but doesn’t adequately report errors, warnings, or problems. We were experiencing problems with the installation, and all the messages the Skype engineers were seeing led them to view it as a “problem with the database server”.

Using the Install-CsDatabase cmdlet gives a whole heap more useful information, including showing that a connection was made to the server and that the configuration settings (for database file paths) were read correctly. It’s not a problem with the server.

Oh, and use the -UseDefaultSqlPaths parameter, assuming you’ve configured your server correctly, as it’s easier than trying to figure out the various other path options.
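Something along these lines (a sketch only; the server FQDN is hypothetical, and -Verbose is what gets you the useful detail):

```powershell
# Sketch - server name is hypothetical; check Get-Help Install-CsDatabase for your scenario
Install-CsDatabase -ConfiguredDatabases -SqlServerFqdn SkypeSql01.contoso.com -UseDefaultSqlPaths -Verbose
```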

What $ share?

The output from Install-CsDatabase showed that the installer was attempting to copy database files to the appropriate locations, but had translated the value retrieved from SQL Server, say, E:\MSSQL\SQLData to \\SkypeServer1\E$\MSSQL\SQLData – the transaction log path had suffered similar indignities. Yes, that’s using the $ / “admin” shares. The problem is that, for various security reasons, these shares have been removed.

If you try creating the blank databases, the installer complains that the database is there, but it’s version 0, and that the installer can’t upgrade from version 0 to version 12. So drop those databases as they’re not doing any good.

The only way forward on this appeared to be to get an exemption logged and create those shares. Installation can (finally) proceed. Hurrah.

When the databases have all been created: create the AG, fail over, rerun the topology push to create the users & jobs on the secondary, and fail back.

That was a painful afternoon.

Still, all is not entirely well.

The Next Day

The following day, you’ll notice that various SQLAgent jobs have failed. Checking the server, you will notice that there are the following jobs:

  • LcsCDR_Purge
  • LcsCDR_UsageSummary
  • LcsLog_Purge
  • QoEMetrics_Purge
  • QoEMetrics_UsageSummary

These jobs have been set up in a sensible way – step 1 is a check to see if the database / server is the primary or the secondary, and step 2 actually does the work (if this is the primary). Here’s that step 1 in full:

declare @_MirroringRole int
set @_MirroringRole = (
    select mirroring_role
      from sys.database_mirroring
     where database_id = DB_ID('LcsCDR')
     )
if (@_MirroringRole is not NULL and @_MirroringRole = 2) begin
    raiserror('Database is mirrored', 16, 1)
end
else begin
    print ('Database is not mirrored')
end

That’s great, if you’re using mirroring. But we’re not. And Skype-for-Business *knows* that we’re not, as we told it that right up front. What this check should be looking at is the sys.dm_hadr_database_replica_state DMV and working against that. This code seems to work:

declare @_AGPrimary bit
if exists (select * from sys.dm_hadr_database_replica_states where database_id = DB_ID('LcsCDR'))
  BEGIN
    set @_AGPrimary = (SELECT is_primary_replica FROM sys.dm_hadr_database_replica_states WHERE database_id = DB_ID('LcsCDR') and is_local = 1)
    IF (@_AGPrimary = 0)
      BEGIN
        RAISERROR('Database is AG secondary',16,1);
      END
    ELSE
      BEGIN
        print ('Database is AG primary');
      END
  END

Your choice – either replace the existing step 1, or add this code to it, and the jobs now work. Depending on how nit-picky you are about these things, you might want to change the two references to ‘LcsCDR’ to match the database mentioned in the job… Given that the databases are / should be in the same AG, though, this might be overkill/paranoia. I’m paranoid. It goes with the territory.
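If you’d rather script the swap than click through the Agent GUI five times, msdb’s sp_update_jobstep will do it (a sketch, shown for one job; repeat per job, and test somewhere safe first):

```sql
-- Sketch: replace step 1 of one of the Skype jobs with the AG-aware check.
DECLARE @newstep NVARCHAR(MAX) = N'declare @_AGPrimary bit
if exists (select * from sys.dm_hadr_database_replica_states where database_id = DB_ID(''LcsCDR''))
begin
    set @_AGPrimary = (select is_primary_replica from sys.dm_hadr_database_replica_states
                        where database_id = DB_ID(''LcsCDR'') and is_local = 1)
    if (@_AGPrimary = 0)
        raiserror(''Database is AG secondary'', 16, 1)
    else
        print (''Database is AG primary'')
end';

EXEC msdb.dbo.sp_update_jobstep
    @job_name = N'LcsCDR_Purge',
    @step_id  = 1,
    @command  = @newstep;
```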

Creating the Listener – Active Directory Permissions

Stepping back a bit, one other thing that bit us was that we had problems creating the availability group listener – the cluster didn’t have the rights to do this. This is fixed by either:

  1. (Preferred) Grant the failover cluster virtual server object (the CNO) permission to create computer objects in the OU in which it lives.
  2. (Alternative; should work, but untested by me) Pre-create the listener computer object, and grant the CNO rights to control / edit that object.
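For option 2, the pre-staging looks something like this (untested by me, and every name here is hypothetical; note the trailing $ on the cluster CNO account):

```powershell
# Sketch: pre-create the listener object (disabled), then give the cluster CNO full control of it.
New-ADComputer -Name "SQLAGListener" -Path "OU=SQLServers,DC=contoso,DC=com" -Enabled $false
dsacls "CN=SQLAGListener,OU=SQLServers,DC=contoso,DC=com" /G "CONTOSO\SQLCluster01$:GA"
```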

All done

The engineers can now carry on with configuring / building out the skype-for-business environment. Rather them than me. I need a beer after all that!


What A Difference An Index Makes

M’colleagues over the way were having problems with an upgrade / feature installation of McAfee EPO UDLP (no, I have no idea what that means). Looking at what was happening on the SQL Server that’s at the back of that system, with the aid of SolarWinds and the excellent (and free) sp_WhoIsActive, we quickly found a possible culprit query:

SELECT
    *
FROM
    dbo.UDLP_EventUserGroups
WHERE
    UserId = @UserID
    AND GroupId = @GroupID ;

As many as 70 simultaneous occurrences of that query, according to sp_WhoIsActive. Here’s what SolarWinds says for that query:


100k runs of the query in a ten minute interval? yeowch. Yeah, this should be optimised if possible. The primary wait type was CPU – indicating that the data was all in RAM, but the CPU was having to schlep through the entire table to find what it needed. Or to find that it didn’t need anything. Or something.

The Table

The table itself is very simple – just three integer fields. However, the indexing was purely a clustered index on the primary key (an ID). The full schema of the table is:

CREATE TABLE [dbo].UDLP_EventUserGroups
(
    [AutoId]  [INT] IDENTITY(1, 1) NOT NULL ,
    [UserId]  [INT] NOT NULL ,
    [GroupId] [INT] NOT NULL ,
    CONSTRAINT [PK_UDLP_EventUserGroups]
        PRIMARY KEY CLUSTERED ( [AutoId] ASC )
        WITH ( PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90 ) ON [PRIMARY]
) ON [PRIMARY] ;
GO

(As generated by SSMS…)

SQL Server wasn’t providing any index recommendations on this 250k row table. But I wondered if we could speed things up a little…

I took a copy of the table onto a development instance of SQL Server:

CREATE TABLE [dbo].UDLP_EventUserGroups_Indexed
(
    [AutoId]  [INT] IDENTITY(1, 1) NOT NULL ,
    [UserId]  [INT] NOT NULL ,
    [GroupId] [INT] NOT NULL ,
    CONSTRAINT [PK_UDLP_EventUserGroups_Indexed]
        PRIMARY KEY CLUSTERED ( [AutoId] ASC )
        WITH ( PAD_INDEX = OFF, STATISTICS_NORECOMPUTE = OFF, IGNORE_DUP_KEY = OFF, ALLOW_ROW_LOCKS = ON, ALLOW_PAGE_LOCKS = ON, FILLFACTOR = 90 ) ON [PRIMARY]
) ON [PRIMARY] ;
GO
 
SET IDENTITY_INSERT dbo.UDLP_EventUserGroups_Indexed ON ;
 
INSERT INTO dbo.UDLP_EventUserGroups_Indexed
(
    AutoId ,
    UserId ,
    GroupId
)
SELECT
    AutoId ,
    UserId ,
    GroupId
FROM
    <LinkedServer>.<EPODatabase>.dbo.UDLP_EventUserGroups AS ueug ;
 
SET IDENTITY_INSERT dbo.UDLP_EventUserGroups_Indexed OFF ;

Let’s build an index on the UserID and include the GroupID.

CREATE NONCLUSTERED INDEX ncidx_EUG_UserPlusGroup ON dbo.UDLP_EventUserGroups_Indexed(UserId) INCLUDE (GroupId);

That didn’t take long…

…and onto the testing:

SET STATISTICS IO ON ;
 
DECLARE @GroupID INT = 1 ;
DECLARE @UserID INT = 2937 ;
 
SELECT
    *
FROM
    dbo.UDLP_EventUserGroups
WHERE
    UserId = @UserID
    AND GroupId = @GroupID ;
 
SELECT
    *
FROM
    dbo.UDLP_EventUserGroups_Indexed
WHERE
    UserId = @UserID
    AND GroupId = @GroupID ;

The output:

Logical reads down from 719 to 3. Win!

Officially

McAfee’s guidance for creating indexes on their databases amounts to “go ahead, but you’ll have to delete them before we’ll support you.” I can live with that. After all, the create & drop index commands aren’t exactly complicated…
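For reference, the pair worth keeping to hand for support calls, using the index from above:

```sql
-- Drop before raising a support case...
DROP INDEX ncidx_EUG_UserPlusGroup ON dbo.UDLP_EventUserGroups;

-- ...and re-create afterwards:
-- CREATE NONCLUSTERED INDEX ncidx_EUG_UserPlusGroup
--     ON dbo.UDLP_EventUserGroups (UserId) INCLUDE (GroupId);
```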

We’re leaving the index in place for the moment – the guys say that it’s improved the general performance as well as just during the upgrade…

Pictures

Going back to SolarWinds. The index was cleared for creation in production, and implemented, as near as makes no odds, at 2:50 that afternoon, so we have some nice before & after pictures.

Remember that summary shot from earlier? Here’s the post-index-creation version:

So we’ve gone from 100k runs of the query each taking 0.286s to 1.25M runs of the query, each taking something under 0.002s. Just one little index did that.

Reminder: That last ten minute window has 12x the number of runs of that query, but uses way less server resources.

Here’s the CPU usage graphs from SolarWinds.

Again, remember that the index was created at 2:50, and there were significantly more queries run after that time than before – you can see that on the log flushes / transaction log throughput:


Database Migration / Upgrade problem – Fulltext Wordbreaker, filter, or protocol handler does not exist

Long title. Sorry about that.

What?

I was migrating a few databases to a slightly upgraded server – new hardware, same OS/SQL main version, but the new one was fully service packed / updated etc. The environment dictated that I was doing this by the backup-restore method.

Restoring 27 databases; they all restored properly, but 15 of them gave a warning along these lines:

Warning: Wordbreaker, filter, or protocol handler used by catalog ‘FOOBARBAZ’ does not exist on this instance. Use sp_help_fulltext_catalog_components and sp_help_fulltext_system_components check for mismatching components. Rebuild catalog is recommended.

NB: the name displayed doesn’t necessarily match the name of any of the databases in use, just to annoy – they were (in this case) close, which is why I thought they might be…

What to do?

Firstly, check there’s really a problem by doing what the error message says – use sp_help_fulltext_catalog_components and sp_help_fulltext_system_components to check the versions of libraries that are expected by your database, and those installed on your system, respectively.

In the database that’s complaining, run the following:

EXEC sp_help_fulltext_catalog_components;

The versions the database is expecting are that the Wordbreaker is v12.0.6828.0, and the protocol handler v2005.90.3042.0 (ie, the version of SQL that the server was running…)

Now, run the following to check which components / versions are actually installed on your server:

EXEC sys.sp_help_fulltext_system_components @component_type='all';

(Edited / abbreviated results):

As you can see, the versions don’t match – the installed versions of those components are v12.0.9736.0 for the wordbreaker, and v2005.90.5324.0 for the protocol handler. And, yes, I know it’s an old version of SQL Server…

How do we fix this?

(The version mis-match thing, that is – upgrading SQL Server isn’t in scope for this project, unfortunately…)

Read the error message again. It tells you what to do.

Rebuild catalog is recommended.

Rebuild the catalog. OK.
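For a single database that’s simple enough (catalog name taken from the warning above):

```sql
-- One-off rebuild of a named fulltext catalog:
ALTER FULLTEXT CATALOG [FOOBARBAZ] REBUILD;
```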

Picking one of the databases at random, run:

SELECT * FROM sys.fulltext_catalogs AS fc

In this particular situation, I had about 15 to do, and going through each one of those 27 similarly-named databases looking for each of the fifteen full-text catalogs that may (or may not) have the same name as the database struck me as being not much fun. So, let SQL Server do the heavy lifting by generating the ALTER FULLTEXT CATALOG statements for you.

The query that needs to be executed in each database is relatively simple:

DECLARE @sql NVARCHAR(MAX);
SELECT @sql = ISNULL(@sql, N'') + N'USE ' + QUOTENAME(DB_NAME()) + N'; ALTER FULLTEXT CATALOG ' + QUOTENAME(fc2.name) + N' REBUILD;
'
FROM sys.fulltext_catalogs AS fc2;
PRINT @sql;
--EXEC (@sql); -- yes, I'm paranoid, and want to see what query is going to be run before I actually run it.

This can be wrapped up in a call to the undocumented sp_MSforeachdb to go through and generate that query on every database with fulltext catalog(s):

EXEC sp_MSforeachdb
    '
DECLARE @sql NVARCHAR(MAX);
USE [?];
IF EXISTS (SELECT * FROM sys.fulltext_catalogs AS fc)
BEGIN
       SELECT @sql = ISNULL(@SQL, N'''') + N''USE '' + QUOTENAME(DB_NAME()) + N''; ALTER FULLTEXT CATALOG '' + QUOTENAME(fc2.name) + N'' REBUILD;
''
    FROM sys.fulltext_catalogs AS fc2;
       PRINT @sql
       --EXEC (@sql) -- Uncomment this EXEC when happy
END
' ;

NB: The last line in the command being processed is commented out, to make sure you’re happy with what’s going on before you actually do anything you might regret. Such as running code you don’t understand against your production environments. Because you would never do that, would you?

When you’re happy with the query you’re seeing, uncomment the EXEC (@sql) line.

The Results

Check your work by running the original query again:

EXEC sp_help_fulltext_catalog_components;

Success! The versions now match what we see in the results from sp_help_fulltext_system_components.

Job done. I hope. But, y’know, back up those databases before doing anything else…


SQL Server Non-Unique Clustered Indexes Are Evil!

Non-Unique Clustered Indexes Are Evil And We Have The Proof.

You won’t believe it when you see it.

Face it – I can’t write click-bait…

I was reading Grant Fritchey’s (Blog|@gfritchey) comments in the SQL Server Central forum thread “Advice on Creating Unique Clustered Index“, and was reminded of something I saw in Kalen Delaney’s Precon at SQLBits 11 (Nottingham).

Here’s how to prove the evil-ness of nonunique clustered indexes.

Preparing for the evil

Quick! Grab a SQL Server instance, and create a new table. Don’t worry, it’s tiny. At least, to start off…

CREATE TABLE ClusteredIndexCrash ( i TINYINT NOT NULL ) ;
CREATE CLUSTERED INDEX ClusteredIndexCrashIdx ON dbo.ClusteredIndexCrash( i ) ;

The theory behind clustered indexes is that they are (usually) unique – after all, they define the logical layout of your table on disk. And if you have multiple records with the same clustering index key, then which order would they be in? If you don’t define the CI as unique, then SQL Server will add (behind the scenes) a so-called “Uniqueifier” (or maybe “uniquifier”) to fix that. Grant’s first post in the thread referenced above gives some information about how to see this Uniqu[e]ifier in the table structure itself.
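If you want to see the uniquifier for yourself, the usual route is the undocumented DBCC IND / DBCC PAGE pair. A sketch (assuming the demo table lives in tempdb; the page id is a placeholder you fill in from the DBCC IND output – test box only, please):

```sql
-- Undocumented commands; sketch only, for a test box.
DBCC TRACEON(3604);                              -- route DBCC output to the client
DBCC IND('tempdb', 'ClusteredIndexCrash', 1);    -- list the index's pages; note a data page (PageType = 1)

-- Then dump that page; rows with duplicate keys show a UNIQUIFIER entry:
-- DBCC PAGE('tempdb', 1, <data_page_id>, 3);
```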

Add data

INSERT INTO dbo.ClusteredIndexCrash
SELECT TOP 1000000
    1
FROM
    syscolumns c1 ,
    syscolumns c2 ,
    syscolumns c3 ;

1 million rows – that’s not going to be enough. Let’s add a few more, in batches.

INSERT INTO dbo.ClusteredIndexCrash
SELECT TOP 1000000
    1
FROM
    syscolumns c1 ,
    syscolumns c2 ,
    syscolumns c3 ;
GO 2146

That is going to take a while. Fire up another window so you can run a different query to keep an eye on how far it has got.

SELECT
    OBJECT_NAME(i.object_id) ,
    p.rows / 1000000 ,
    p.rows ,
    POWER(CONVERT(BIGINT, 2), 31) ,
    POWER(CONVERT(BIGINT, 2), 31) / 1000000
FROM
    sys.partitions p
INNER JOIN
    sys.indexes i ON i.index_id = p.index_id
                      AND i.object_id = p.object_id
WHERE
    i.object_id = OBJECT_ID('dbo.ClusteredIndexCrash')
    AND i.index_id IN ( 0, 1 ) ;

Got to love the confusion caused by columns being named with function names…

When that query above has finished, assuming SSMS hasn’t crashed from the effort of displaying “1000000 rows affected” (or whatever) 2k+ times…

Generating The Evil

Try running the insert one more time.

INSERT INTO dbo.ClusteredIndexCrash
SELECT TOP 1000000
    1
FROM
    syscolumns c1 ,
    syscolumns c2 ,
    syscolumns c3 ;

And now (finally) we get the error of the beast:

Msg 666, Level 16, State 2, Line 21
The maximum system-generated unique value for a duplicate group was exceeded for index with partition ID <foo>. Dropping and re-creating the index may resolve this; otherwise, use another clustering key.

What have we learned?

Apart from the fact that a table containing 2 billion tinyints takes up a lot of time and disk space?

Internally, the implementation of the uniquifier appears to be equivalent to an IDENTITY(1,1), just like your average DB designer would use to create an internal ID. However, because the range starts at 1, you lose half the range of an INT, limiting the uniquifier to 2.14-something billion rows per duplicate key value. Again, just like an IDENTITY field, if you’ve deleted records from the middle of the table, you don’t automatically get to reuse those values. Hence the suggestion that dropping and recreating the index may resolve this – by effectively removing the gaps and shoving everything back up again.

Rebuilding the index probably won’t have much effect, is just delaying the inevitable, and you should be rethinking your clustering strategy for this table.

Oh, and don’t do this on a production server. I mean, what were you thinking? Really? Intentionally generating massive tables with big errors in? Tsk.


Windows 2012 R2 KB2883200 / KB2919355

In case you ever get round to trying to install vNext (or SQL2016SP1) on a Windows 2012R2 server, you may encounter some problems with missing updates. At least, I did, while following these steps:

  • Create new Windows 2012 R2 VM
  • Apply all updates
  • Apply all updates
  • Wonder why KB2883200 isn’t installing


Grovelling around on MS’s website, you’ll find a link that says that KB2883200 has been superseded by KB2919355, which is hidden from Windows Update by default. WHY?

So you think “I’ll just install it manually”. Which also fails.

You first need to install KB2919442.

This then causes KB2919355 to appear in Windows Updates. (I’m fairly sure I saw it and KB2883200 at the same time, but I’ve not been able to replicate that; if this one does appear, then don’t bother with it.)

Confused? You will be. Particularly when you realise that 2919355 < 2919442, implying that this is a fix to fix a cockup that was intended to fix another cockup. Or something like that.

I also liked / enjoyed the way that the link under "this update fixes some issues" just takes you to the Microsoft Support homepage, rather than anything more useful. Helpful.

So, install that update, reboot, re-run windows updates. Oh look. Another 184 updates to install. Deep joy.

I should feel the need to rant, but this happened a couple of months ago, and I’ve only just got round to writing a quick blogpost about it. And even then, I cheated – this is just a regurgitation of an email to a colleague…


Setting SQL Server Maximum Memory

One of the dafter defaults in SQL Server is the Maximum Server Memory (MB) – set to a value of 2147483647. Who has a server with 2PB RAM? Give me a call if you do – I want to have a look!

SQL Server’s standard operating procedure is to gradually take over all the memory it can, up to the limit specified, and then some – SQL Server also puts some objects outside of that pool. However, when it does that, there’s a risk that the operating system (and anything else running on the server) will be starved of resources, and SQL Server will be forced to yield memory back to the OS, which can result in a performance hit while that happens. [I blogged about a Memory Pressure query last year.] Generally, it’s a Good Thing that SQL Server does this – after all, the bulk of this is cached data – previously read, and retained in memory in case someone else wants to use it later – improves performance by reducing the number of times the server has to go to disk to get data.

The thing to do, ideally, is to configure the maximum server memory when you build the server; however, sometimes you walk into a place where there are many servers where this hasn’t been done, or are otherwise looking for a quick way to determine what the setting should be. Jonathan Kehayias of SQLSkills blogged about a sensible SQL Server Maximum memory calculation (in response to a post elsewhere about a really dodgy memory config advisor, but I’m not going to link to that…)

What I’ve done below is codify that knowledge into a nice friendly T-SQL query that you can run, below. It makes use of the sys.dm_os_sys_info DMV to get the memory physically in the server; that DMV, though, has changed form between SQL 2008R2 and SQL 2012, the new version reporting physical_memory_kb whereas the previous version had physical_memory_in_bytes. Hence a bit of dynamic SQL nastiness at the start of the query.

DECLARE @CurMemoryMB BIGINT ;
DECLARE @PhysMemMB BIGINT ;
DECLARE @CalcdMemMB BIGINT ;
 
SELECT
    @CurMemoryMB = CONVERT(BIGINT, value)
FROM
    sys.configurations
WHERE
    name LIKE 'max server memory%' ;
 
DECLARE @physmemtable TABLE ( PhysMemMB BIGINT ) ;
 
DECLARE @ver INT ;
DECLARE @ProductVersion VARCHAR(20) ;
DECLARE @memoryquery VARCHAR(4000) ;
 
SELECT
    @ProductVersion = CAST(SERVERPROPERTY('ProductVersion') AS VARCHAR(20)) ;
 
SELECT
    @ver = CAST(LEFT(@ProductVersion, CHARINDEX('.' ,
                                                   @ProductVersion
                                               ) - 1) AS INT) ;
 
SELECT
    @memoryquery = 'SELECT ' + CASE
                                   WHEN @ver >= 11
                                       THEN 'physical_memory_kb / 1024 '
                                   ELSE
                                       'physical_memory_in_bytes / (1024*1024) '
                               END + ' AS PhysMemMB FROM sys.dm_os_sys_info' ;
 
INSERT INTO @physmemtable EXEC(@memoryquery) ;
 
SELECT
    @PhysMemMB = MAX(p.PhysMemMB)
FROM
    @physmemtable AS p ;

Yeah, it’s not the prettiest bit of code I’ve ever written, but it does the job.

This next chunk is where we take the @PhysMemMB and calculate what Jonathan recommends it should be (approximately – I’ve tweaked it a little for low memory servers, of which we have enough to depress me). His recommendations can be summarised as:

  • reserve 1GB RAM for the OS
  • reserve a further 1GB RAM for every 4GB between 4GB and 16GB
  • reserve a further 1GB RAM for every 8GB above 16GB
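As a quick sanity check of the arithmetic (hypothetical 64GB box): 1GB for the OS, 3GB for the 4GB–16GB band, and 6GB for the 48GB above 16GB leaves 54GB (55296MB) for SQL Server. The top band of the CASE expression below agrees:

```sql
-- 64GB = 65536MB; top band of the calculation
SELECT 12288 + (( 65536 - 16384 ) * 7 / 8 ) AS CalculatedMB ;   -- 55296
```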

My calculation routine:

SELECT
    @CalcdMemMB = CASE
                      WHEN @PhysMemMB <= 2048
                          THEN 1024
                      WHEN @PhysMemMB <= 4096
                          THEN @PhysMemMB - 1024
                      WHEN @PhysMemMB <= 16384
                          THEN @PhysMemMB * 3 / 4
                      ELSE
                          12288 + (( @PhysMemMB - 16384 ) * 7 / 8 )
                  END ;

Close enough. 🙂

The last chunk of the query is based around calling sp_configure to alter the “max server memory (MB)” setting. You may need to enable advanced settings first; I’ve been polite, and written code to check whether the advanced options are visible, make them visible if not, and re-invisiblificate them if applicable…

The full query:

DECLARE @CurMemoryMB BIGINT ;
DECLARE @PhysMemMB BIGINT ;
DECLARE @CalcdMemMB BIGINT ;
 
SELECT
    @CurMemoryMB = CONVERT(BIGINT, value)
FROM
    sys.configurations
WHERE
    name LIKE 'max server memory%' ;
 
DECLARE @physmemtable TABLE ( PhysMemMB BIGINT ) ;
 
DECLARE @ver INT ;
DECLARE @ProductVersion VARCHAR(20) ;
DECLARE @memoryquery VARCHAR(4000) ;
 
SELECT
    @ProductVersion = CAST(SERVERPROPERTY('ProductVersion') AS VARCHAR(20)) ;
 
SELECT
    @ver = CAST(LEFT(@ProductVersion, CHARINDEX('.' ,
                                                   @ProductVersion
                                               ) - 1) AS INT) ;
 
SELECT
    @memoryquery = 'SELECT ' + CASE
                                   WHEN @ver >= 11
                                       THEN 'physical_memory_kb / 1024 '
                                   ELSE
                                       'physical_memory_in_bytes / (1024*1024) '
                               END + ' AS PhysMemMB FROM sys.dm_os_sys_info' ;
 
INSERT INTO @physmemtable EXEC(@memoryquery) ;
 
SELECT
    @PhysMemMB = MAX(p.PhysMemMB)
FROM
    @physmemtable AS p ;
 
SELECT
    @CalcdMemMB = CASE
                      WHEN @PhysMemMB <= 2048
                          THEN 1024
                      WHEN @PhysMemMB <= 4096
                          THEN @PhysMemMB - 1024
                      WHEN @PhysMemMB <= 16384
                          THEN @PhysMemMB * 3 / 4
                      ELSE
                          12288 + (( @PhysMemMB - 16384 ) * 7 / 8 )
                  END ;
 
SELECT
    @CurMemoryMB AS CurrentMaxMemory ,
    @PhysMemMB   AS PhysicalMB ,
    @CalcdMemMB  AS CalculatedMB ;
 
DECLARE @Advanced INT ;
 
SELECT
    @Advanced = CONVERT(INT, value_in_use)
FROM
    sys.configurations
WHERE
    name = 'show advanced options' ;
 
IF @Advanced <> 1
BEGIN
    EXEC sp_configure 'show advanced options', 1 ;
 
    RECONFIGURE WITH OVERRIDE ;
END ;
 
EXEC sp_configure 'max server memory (MB)', @CalcdMemMB ;
 
RECONFIGURE WITH OVERRIDE ;
 
IF @Advanced <> 1
BEGIN
    EXEC sp_configure 'show advanced options', 0 ;
 
    RECONFIGURE WITH OVERRIDE ;
END ;

That’s it. A nice, easy query to set maximum memory to a sensible value. Of course, it may need tuning for certain servers if they’re running SSRS, SSAS, or any other software, but for a general SQL Server config, this is a good starting point. I’ve tested it on multiple versions of SQL from 2005RTM upwards, with memory from sub-4GB to 128GB+ machines.

A couple of options for running this quickly on multiple servers – either use the SQL Server Management Studio Registered Servers to run a query against multiple servers, or, for a little more control, open the query in SQLCMD Mode in SSMS, put a :CONNECT <> line at the top, and just change the servername.
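In SQLCMD mode that looks like this (the server name here is hypothetical; change it per server):

```sql
:CONNECT SQLProd01\Instance1
-- paste the full max-memory query from above here
```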

Disclaimer: Don’t run code you don’t understand. No, really. Do not open it. Apparently this one is pretty nasty. It will not only erase everything on your hard drive, but it will also delete anything on disks within 20 feet of your computer.

It demagnetizes the stripes on ALL of your credit cards. It reprograms your ATM access code, screws up the tracking on your VCR and uses subspace field harmonics to scratch any CD’s you attempt to play. It will re-calibrate your refrigerator’s coolness settings so all your ice cream melts and your milk curdles. It will program your phone autodial to call only your ex-spouses’ number. This virus will mix antifreeze into your fish tank. It will drink all your beer. It will leave dirty socks on the coffee table when you are expecting company. Its radioactive emissions will cause your bellybutton fuzz (be honest, you have some) to migrate behind your ears. It will replace your shampoo with Nair and your Nair with Rogaine, all while dating your current boy/girlfriend behind your back and billing their hotel rendezvous to your Visa card. It will cause you to run with scissors and throw things in a way that is only fun until someone loses an eye. It will give you Dutch Elm Disease and Psitticosis. It will rewrite your backup files, changing all your active verbs to passive tense and incorporating undetectable misspellings which grossly change the interpretations of key sentences. It will leave the toilet seat up and leave your hair dryer plugged in dangerously close to a full bathtub. It will not only remove the forbidden tags from your mattresses and pillows, but it will also refill your skim milk with whole milk. It will replace all your luncheon meat with Spam. It will molecularly rearrange your cologne or perfume, causing it to smell like dill pickles. It is insidious and subtle. It is dangerous and terrifying to behold. It is also a rather interesting shade of mauve. These are just a few signs of infection. Bad times, indeed


T-SQL Tuesday #85 – Backup Compression

T-SQL Tuesday time again. First one I’ve taken part in for over a year – oops. This time, Kenneth Fisher (blog|@sqlstudent144) is hosting a blog party about backups.

So here’s a quick post about Backup Compression.

Backup Compression

Told you.
Available since SQL Server 2008 Enterprise edition, or SQL Server 2008 R2 in other editions, this neat little bit of functionality allows you to compress your backups as they are made, but it needs to be enabled at the server level. And, by default, it’s not enabled. Thanks, Microsoft.

I’m not going to talk about the details of enabling backup compression here. Just one of the side-effects of enabling it. It’s OK, it’s a good one.

What does it do?

SQL Server backup compression – does what it says on the tin. Instead of SQL Server taking a backup by reading pages / extents of data from the database file and writing them out to the backup file, it compresses the data before it writes. It’s not the best compression you’ll get, as it won’t read the entire file before compressing it; however, it’s good enough that on OLTP databases that contain normal varchar / numeric data you could see savings of over 75% – indeed, I have some databases that save 90% of disk space. And that’s made my storage guys happy. Well, less unhappy.
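Even without the server-level default enabled, an individual backup can ask for it explicitly (a minimal sketch; database name and path are hypothetical):

```sql
BACKUP DATABASE MyDatabase
    TO DISK = N'X:\Backups\MyDatabase.bak'   -- hypothetical path
    WITH COMPRESSION, STATS = 10 ;
```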

You mentioned a side-effect…

Yes…

As well as the obvious not-taking-up-so-much-disk-space, there’s another little benefit – the backup jobs, now they don’t have to write out so much data, are rather quicker.

How much quicker?

Well, it depends. (Sorry.)

Remember that thing I did a while ago about pulling SQL Server job history into an Outlook calendar? Well, here’s what our backups looked like a couple of weeks ago – at least, some of ’em…

This is what the backups looked like a few weeks ago, before we enabled compression globally. Yes, I know, all the jobs (well, a lot of ’em) start at the same time, but that’s kinda the point – hammering the storage…

These are the timings for Monday-Wednesday, 21st-23rd November:

(Screenshot: backup job timings, before compression)

And here, two weeks later, after compression was enabled everywhere, Monday-Wednesday, 5th-7th December:

(Screenshot: backup job timings, after compression)

Spot the difference.
