We had a communication from an Ops team saying that they intended to apply Windows patches to all our SQL Servers in one six-hour period. This just felt wrong, on so many levels, including:
- QA, UAT, DR, Production all being done at once
- Only giving me 12 hours' notice of their intention to do this
- No, that webserver you told me about *still* doesn’t have SQL Server installed on it…
- …neither does that telephony server. Y’know, the one that’s running Linux. You’re going to find it hard to patch that, Mr Windows.
Am I being unreasonable in expecting more notice than this? Particularly given that we have to give a certain amount of notice to end users and clients that there may be some instability of service. I have now been given the link to the patching schedule, but it’s only got a couple of weeks of information in it…
Also, all environments in one fell swoop? Really? Does this seem remotely sane? Particularly given that we’re talking about less-than-modern systems with some really strict support criteria?
My proposal was that there should be a phased rollout, roughly as follows:
- QA & UAT systems first – those servers aren’t the fastest in the world, and will probably take most of that six-hour window by themselves. And, more to the point, that gives us a chance to make sure that the patching hasn’t broken something – always important!
- Next, do the DR environment
- Finally, after everything has been tested sufficiently, and the testing team have signed off, do the production environment. This is complicated by all the production servers being clustered, but it’s not that difficult to do… Patch the current passive server, test, failover, patch the now-passive server.
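That last step is easier to reason about as a sequence. Here's a minimal Python sketch of the rolling cluster-patch order – the `Cluster` class and node names are hypothetical stand-ins for whatever actually drives your failover (in the real world, probably the Windows failover cluster cmdlets), shown purely to illustrate why the service never goes fully dark:

```python
# Hypothetical model of a two-node cluster, illustrating the
# patch-passive / failover / patch-again sequence from the list above.

class Cluster:
    def __init__(self, nodes):
        self.active = nodes[0]    # node currently serving traffic
        self.passive = nodes[1]   # standby node
        self.patched = set()      # nodes that have been patched so far

    def patch_passive(self):
        # Safe: the passive node serves no traffic while it patches/reboots.
        self.patched.add(self.passive)

    def failover(self):
        # Swap roles; traffic now runs on the freshly patched node.
        self.active, self.passive = self.passive, self.active


def rolling_patch(cluster):
    cluster.patch_passive()   # 1. patch the current passive server
    # 2. test the patched node here, before trusting it with traffic
    cluster.failover()        # 3. fail over onto it
    cluster.patch_passive()   # 4. patch the now-passive (old active) server


cluster = Cluster(["sql-node-a", "sql-node-b"])
rolling_patch(cluster)
assert cluster.patched == {"sql-node-a", "sql-node-b"}  # both ends patched
assert cluster.active == "sql-node-b"                   # one node up throughout
```

At every point in that sequence, exactly one unpatched-or-tested node is carrying the load, which is the whole argument for doing it this way rather than patching everything at once.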
C’mon, guys, how hard can it be to just think this stuff through?