An early start this morning, as I need to back up a big reporting database before the guys at work start running some heavy optimisation process to rationalise and compress the data. However, I don’t necessarily know how long the backup is going to take.
It’s relatively straightforward to see how long a scheduled task has taken in the past – take a look at the sysjobhistory table in the MSDB database. That is, after all, what the SQL Server Management Studio Job History section of the Log File Viewer GUI does. A simple query here would be:
SELECT step_id, step_name, run_status, run_date, run_time, run_duration
FROM sysjobhistory
WHERE job_id = (SELECT job_id
                FROM sysjobs
                WHERE name = 'User Databases - Full Backup') -- change this to match your job name
ORDER BY run_date, run_time, step_id
The above query brings back the following results (edited for brevity):
Note that the run_date, run_time and run_duration fields are integer representations of the actual date, time and duration. The run_date field gives us an eight-digit number with the date the job step started as yyyymmdd; the run_time field is a six-digit number (you’ll have to imagine the leading zero(s) if fewer than six digits are shown) with the time at which the job step started as hhmmss; run_duration is encoded similarly to run_time. Not helpful for doing any meaningful maths on without slicing & dicing first. And I have no idea what the run_duration field looks like if a job runs for longer than a day – the MSDN page referenced above likewise seems to think it’s not an option…
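If you do want to do maths on these values, the integer arithmetic is easy enough to unpick yourself. A sketch (assuming, per the caveat above, that no step runs for longer than a day, and using the same job name as before):

```sql
-- Decode run_duration (hhmmss packed into an integer) into total seconds,
-- then average it per job step.
SELECT step_id,
       step_name,
       -- hours * 3600 + minutes * 60 + seconds
       AVG((run_duration / 10000) * 3600
         + ((run_duration / 100) % 100) * 60
         +  (run_duration % 100)) AS avg_duration_seconds
FROM sysjobhistory
WHERE job_id = (SELECT job_id
                FROM sysjobs
                WHERE name = 'User Databases - Full Backup')
GROUP BY step_id, step_name
ORDER BY step_id
```

(MSDB also ships with an undocumented helper function, msdb.dbo.agent_datetime(run_date, run_time), which turns the date/time pair into a proper datetime – handy, but being undocumented it comes with the usual caveats.)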
Results here indicate that the Backup Databases step takes anywhere between 38:54 and 40:02. The thing to bear in mind here is that your backup job may also include a step to verify the backup. Mine does. And that the backup step may be backing up multiple databases – in this case, I’m backing up 17 databases, only one of which is of any significant size.
If you’re only interested in the backup time, then you could also take a look at a previous backup file. (You can see that the database I’m looking at is pretty much static, given the size of the differential backups.)
Here, we can see that the difference between the file creation date and the file modified date (ie the time at which the backup was completed) is between 33 and 34 minutes.
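Rather than squinting at file timestamps, you can also get the same information out of MSDB’s backup history – assuming your backup software registers its backups there. The backupset table records start and finish times per backup, so something like this (change the database name to suit) gives the durations directly:

```sql
-- Backup history for one database, newest first.
-- type: D = full, I = differential, L = log
SELECT database_name,
       type,
       backup_start_date,
       backup_finish_date,
       DATEDIFF(second, backup_start_date, backup_finish_date) AS duration_seconds
FROM msdb.dbo.backupset
WHERE database_name = 'YourBigReportingDB' -- hypothetical name; change to suit
ORDER BY backup_start_date DESC
```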
…Is No Guarantee Of Future Performance
Of course, none of these “guesstimates” may be available to you. Or the dataset could have changed significantly since the last full backup. What do we do then?
We can query the sys.dm_exec_requests DMV. Go on – take a look. You’ll see a lot of useful information about what queries are running, and the most recent query for any spid that’s not currently running anything. What I’m interested in, however, is running a query like this:
SELECT start_time,
       percent_complete,
       estimated_completion_time,
       DATEADD(ms, estimated_completion_time, GETDATE()) AS ETA,
       GETDATE() AS Current_DateTime,
       command,
       DB_NAME(database_id) AS Database_Name
FROM sys.dm_exec_requests
WHERE command LIKE 'BACKUP%'
Here, I’m getting information about all BACKUP statements currently running. Most of what I’m selecting is self-explanatory, except be aware that the estimated_completion_time field is a millisecond countdown of the time remaining, so I’m adding that to the current system date/time (which I’m also showing, in case you’re not in the same timezone as your server). The DMV also provides the database ID rather than the database name… but that’s easy to fix with that call to the DB_NAME() function. I’m lucky in that the backup software (Quest – see below) I’m using does seem to switch the reported database to the one currently being backed up.
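If you’d rather not keep re-running that query by hand while you wait, a quick polling loop does the job. A sketch (adjust the delay and iteration count to taste):

```sql
-- Poll running BACKUP statements every 30 seconds, ten times over.
DECLARE @i int = 0;
WHILE @i < 10
BEGIN
    SELECT percent_complete,
           DATEADD(ms, estimated_completion_time, GETDATE()) AS ETA
    FROM sys.dm_exec_requests
    WHERE command LIKE 'BACKUP%';

    WAITFOR DELAY '00:00:30';
    SET @i += 1;
END
```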
If you think these figures are slower than they might be, then you’re right. I’m backing up a 240(ish)GB database using Quest Software’s LiteSpeed for SQL Server, with maximum compression and encryption. The backup is simultaneously being written out to another datacenter. The server is not new and the disk layout is, erm, “sub-optimal”.
The size of data being backed up? Well, the database’s MDF is around 240GB; there’s about 140GB of actual data and 100GB of slack space (the optimisation process does some weird things). And, as you can see in the screenshot from Windows Explorer earlier on in this post, that’s being backed up to a 10GB file.
…back to work.