This applies to the CTP – it might not apply to any RTM version
Have you been following the SQL Server 2016 site at Microsoft.com? I really haven’t had much time to pay attention to it recently, but one thing in the list of benefits caught my eye.
Stretch Database technology keeps more of your customer’s historical data at your fingertips by transparently stretching your warm and cold OLTP data to Microsoft Azure in a secure manner without application change
Oh, this sounds like it could be useful for just about everyone with a reasonable transaction history to look after for whatever reason – get the data out of your main OLTP database, but still have it accessible for reports, auditing or compliance. Just about every client I’ve ever worked for could benefit from this.
And look – there’s a white paper to download!
I’m a sucker for a good whitepaper, so I downloaded and had a little read of the SQL Server 2016 Hyper-scale Cloud technical white paper, as that seemed to be the right one.
Oh, man, this is good stuff. Pages 10-12 of the white paper talk big numbers – a billion-row table, a 1TB table, 5TB databases, tiered storage. All good, sensible arguments for looking into data archiving. Mind you, it does feel a bit magic-bullet-ish.
With SQL Server 2016 Stretch Databases, you can stretch an on-premises table to Azure transparently for near-infinite capacity with low TCO.
The headlines are that operations like backup, restore and index maintenance are done only on the local database – the warm/hot data – and not on the data that’s been migrated up to the cloud (the cold, archived data). Massive potential speed improvements could result here for the OLTP and maintenance work; but anything that uses the cloud data is going to be (not surprisingly) a bit slower.
There are moderate performance reductions when accessing the cold data
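For the record, enabling Stretch in the CTP comes down to a handful of DDL statements once the instance-level option is on. This is only a sketch based on the CTP-era documentation – the database name, server name and table name below are placeholders, and the syntax may well change before RTM:

```sql
-- Allow this instance to use Stretch Database (CTP-era option name)
EXEC sp_configure 'remote data archive', 1;
RECONFIGURE;
GO

-- Point the database at an Azure SQL Database server
-- (the wizard will create one for you, at Standard/S3 by default)
ALTER DATABASE SalesDb
    SET REMOTE_DATA_ARCHIVE = ON
    (SERVER = 'myserver.database.windows.net');
GO

-- Stretch a single table; eligible rows migrate in the background
ALTER TABLE dbo.TransactionHistory
    ENABLE REMOTE_DATA_ARCHIVE WITH (MIGRATION_STATE = ON);
```

Once that last statement runs, queries against the table transparently span the local and Azure copies of the data.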
The MSDN documentation for Stretch Database makes this seem less like the magic bullet – well, once you start digging a little.
There are various limitations around this functionality. The one that drew my eye was that you aren’t allowed to have foreign keys referencing the table that’s being stretched into the cloud. That’s probably not an issue for transaction history records, which tend not to be referenced by other tables anyway.
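If you want to check whether a candidate table has any inbound foreign keys before trying to stretch it, the catalog views will tell you. A minimal sketch – `dbo.TransactionHistory` is just a placeholder name:

```sql
-- List any foreign keys that reference the candidate table;
-- an empty result means this particular limitation doesn't apply
SELECT OBJECT_NAME(fk.parent_object_id) AS referencing_table,
       fk.name                          AS foreign_key_name
FROM sys.foreign_keys AS fk
WHERE fk.referenced_object_id = OBJECT_ID('dbo.TransactionHistory');
```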
Every time you run the wizard to enable Stretch for a database, the wizard creates a new Azure SQL Database server. By default, the number of servers that you can create per subscription is limited.
By default, the wizard creates a SQL Database server with the Standard service tier and the S3 performance level
Now, I’ve been doing some digging in this area recently, and that didn’t seem to me to be so promising. And indeed it isn’t. According to the Azure SQL Database Service Tiers and Performance Levels documentation, the S3 tier has a maximum size for the database of 250GB. (The highest level, Premium/P3, gives 500GB.) Oh, and the maximum size for a blob in storage is also 500GB.
From the white paper talking about multi-terabyte / near-infinite archives to the maximum size of an Azure SQL Database being 250GB (or 500, I know…) – something seems to me to be missing, and I (as yet) can’t see what it is.
Yes, this is only a CTP of SQL Server 2016, and things might change in the Azure architecture. But that’s a big shortfall.