Meme Monday: I got 99 SQL problems and the disk ain’t one

OK, so I’m late to the party. In my defence, yesterday was a Bank Holiday here in the UK, and for personal reasons I wasn’t even thinking about work. But now I’m back to it, and as I’d been wondering what to write about, it’s nice to see that Thomas “SQL Rockstar” LaRock (blog | twitter) has put up a meme to work with.

Yesterday’s meme sets out to identify 99 things that can go wrong with a SQL Server that aren’t disk-related.  Tom asks each of us to identify 9 problems that we see frequently.  Let’s have a think…

  1. Unknown purpose.  “What’s this for?  What’s it doing?  Why’s it here?  Who’s the owner?” – questions I have asked in the past, and distressingly often the people being asked (the DBAs) don’t know.  Document your systems, guys!  (There’s a cheap trick for this in the first sketch after the list.)
  2. Lack of backups.  In a production system, this is not at all a good thing to find.  I remember working at one place where I found over half a terabyte of SQL Server databases that weren’t being backed up at all.  (A query for spotting these follows the list.)
  3. Performance issues due to round trips.  Too many applications make too many separate calls to the database when they could either cache the data at the client side or write queries that return everything they need in one go – see the sketch below.
  4. Performance issues due to lack of load testing.  Again, many applications don’t appear to be tested against a realistic volume of data.  “It works fine in development” – yes, right, but you’ve only got a couple of thousand records in dev.  When the code gets into production, where there are a few million records, does it still respond as snappily?  (Generating test data is cheap – there’s a sketch after the list.)
  5. Old / unused data cluttering up the system.  Do you really need to keep the data for closed cases in the same place as the live caseload?  This can be a relatively quick performance win: put together a process that migrates out the data for cases that have been closed for a set length of time and are no longer referenced with any regularity (the basic move is sketched after the list).  The last time I did this, the “live” data volume dropped by 80%, and the system became much more responsive.
  6. Users.  Specifically, those users who claim that they need sa rights.  And it’s OK to give them that, because they’ve been on a training course…  (The last sketch after the list shows the narrower grants that usually suffice.)
  7. SOD.  That’s “Segregation of Duties”, not a capitalised swearword – although it could be, given the amount of hassle it can cause.  The theory’s all well and good: the infrastructure people provide the server environment and disk space on which you build your SQL Server estate, they look after the OS and hardware, and all you (the DBA) look after is SQL Server itself.  Except you can’t read the old log files to work out what’s just caused the problem, because you don’t have sufficient permissions on the server…
  8. Old third party applications, old server software.  I switched off a couple of SQL Server 7 instances as recently as last month, and I last used SQL Server 6.5 only last year.  These are systems that have been obsolete for a decade, and yet there are places that depend upon them, because they either can’t or won’t get their application vendors to provide an updated platform.  Get maintenance contracts in place that cover this sort of thing, so that you can keep running on newer, supported software.
  9. Lack of communication between management and team, and between teams.  This “silo” mentality can get in the way of actually getting the job done.
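
For point 1, one cheap way to make the answers live with the database itself is extended properties.  A minimal sketch, with a made-up database name and my own choice of property names:

    USE SomeDatabase;  -- hypothetical database name
    GO
    -- Record the purpose and owner as database-level extended properties
    EXEC sp_addextendedproperty @name = N'Purpose',
         @value = N'Order history for the web shop, loaded nightly';
    EXEC sp_addextendedproperty @name = N'Owner',
         @value = N'Finance team / J Bloggs';
    GO
    -- Read them back later (class 0 = database-level properties)
    SELECT name, value FROM sys.extended_properties WHERE class = 0;

It’s no substitute for proper documentation, but at least the next DBA has somewhere to look.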
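
For point 2, here’s a minimal sketch of the sort of check I mean: list every database whose most recent full backup is over a day old, or which has never been backed up at all (adjust the threshold to suit your own backup schedule):

    SELECT   d.name,
             MAX(b.backup_finish_date) AS last_full_backup
    FROM     sys.databases d
    LEFT JOIN msdb.dbo.backupset b
          ON b.database_name = d.name
         AND b.type = 'D'              -- 'D' = full database backup
    WHERE    d.name <> 'tempdb'        -- tempdb can't be backed up
    GROUP BY d.name
    HAVING   MAX(b.backup_finish_date) IS NULL
          OR MAX(b.backup_finish_date) < DATEADD(DAY, -1, GETDATE());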
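
To illustrate point 3, here’s the chatty pattern at its simplest, with made-up table and column names.  The fix is nothing clever – just ask for everything you need in one call:

    DECLARE @id INT = 42;  -- stand-in for the application's parameter

    -- The chatty version: three round trips for three fields
    -- SELECT Surname  FROM dbo.Customers WHERE CustomerID = @id;
    -- SELECT Forename FROM dbo.Customers WHERE CustomerID = @id;
    -- SELECT Postcode FROM dbo.Customers WHERE CustomerID = @id;

    -- One round trip does the same job
    SELECT Surname, Forename, Postcode
    FROM   dbo.Customers
    WHERE  CustomerID = @id;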
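
For point 4, there’s little excuse for a tiny dev data set when padding a table out is a one-off statement.  A minimal sketch that generates a million throwaway rows, assuming a made-up dbo.Orders table:

    -- Generate 1,000,000 sequential numbers by cross-joining a large
    -- system view against itself, then use them to pad out the table
    ;WITH n AS (
        SELECT TOP (1000000)
               ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS i
        FROM   sys.all_objects a
        CROSS JOIN sys.all_objects b
    )
    INSERT INTO dbo.Orders (OrderRef, Amount)
    SELECT 'TEST-' + CAST(i AS VARCHAR(10)), i % 1000
    FROM   n;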
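
The mechanics of point 5 can be as simple as a single atomic statement, assuming an archive table already exists with the same columns as the live one (all the names here are invented):

    -- Move cases closed more than two years ago into the archive.
    -- DELETE ... OUTPUT copies and removes in one transaction, so
    -- nothing can be lost between the copy and the delete.
    DELETE FROM dbo.Cases
    OUTPUT  deleted.* INTO archive.Cases
    WHERE   Status = 'Closed'
      AND   ClosedDate < DATEADD(YEAR, -2, GETDATE());

In practice you’d batch it (DELETE TOP (10000) … in a loop) to keep the transaction log sane, but the shape is the same.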
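
And for point 6: when you dig into what the “I need sa” user actually does, it usually boils down to something you can grant narrowly.  A sketch with invented database and login names:

    USE SalesDB;  -- hypothetical database
    GO
    -- Give the user read access plus execute on the dbo schema,
    -- rather than the keys to the whole instance
    CREATE USER [DOMAIN\PowerUser] FOR LOGIN [DOMAIN\PowerUser];
    EXEC sp_addrolemember 'db_datareader', 'DOMAIN\PowerUser';
    GRANT EXECUTE ON SCHEMA::dbo TO [DOMAIN\PowerUser];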

So.  Have I brought anything new to the party?  Probably not!
