By: Ben Snaidero | Updated: 2011-11-17
Database systems seem to grow almost exponentially every year, and as DBAs one of our primary responsibilities is to make sure this data is backed up regularly. In this tip we will look at options in SQL Server 2008 that can decrease the duration of your backups and/or reduce the overhead this process puts on your system.
Note: All tests were done on a SQL Server 2008 database with a single .mdf (16GB) and server specifications as follows: 6GB RAM, 2x2.83GHz CPU, single 1TB SCSI disk device. Test scripts were run 3 consecutive times and an average taken for each data value. Tests should be run in your own environment as results may vary based on server hardware/configuration.
For this performance test we are going to look at the following database backup parameters/options and their effect on CPU usage and backup duration. Details on each of these options/parameters can be found here:
- multiple backup devices
- parameter - maxtransfersize
- parameter - buffercount
- parameter - compression
Before we get into the results of the test I want to outline a few things I came across during my testing.

First, the extra memory used during the backup process can be estimated from the options selected above as roughly "maxtransfersize x buffercount + some small overhead". Unless you are using really large values for buffercount (the maximum maxtransfersize is 4MB), this works out to only a small amount of extra memory for the backup process, so I decided not to track it during this test.

Second, the default value of the buffercount parameter can be calculated using the following formula: "NumberofBackupDevices * 3 + NumberofBackupDevices + (2 * NumberofVolumesInvolved)" (also found here). In my case this meant a buffercount of 7, 12, 17, 22 and 27 for 1, 2, 3, 4 and 5 disk devices respectively. You can also check these values for any backup command you run by enabling trace flags 3605 and 3213 and then checking the SQL Server error log.
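To make the memory math concrete, here is a small Python sketch of the "maxtransfersize x buffercount" estimate. The helper is my own illustration, not part of the original test scripts; it simply shows that even the largest combination used in this test stays well under a couple hundred MB.

```python
# Rough estimate of the extra memory a backup uses, per the
# "maxtransfersize x buffercount + some small overhead" rule above.
# Illustrative helper only, not a SQL Server API; ignores the overhead term.

def backup_buffer_memory_mb(maxtransfersize_bytes, buffercount):
    """Approximate backup buffer memory in MB."""
    return maxtransfersize_bytes * buffercount / (1024 * 1024)

# Largest combination used in this test: 4MB transfers x 47 buffers
print(backup_buffer_memory_mb(4 * 1024 * 1024, 47))  # 188.0 (MB)
```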
Testing backup to nul device without compression
For our first test we will test the first 3 options above, using the 'nul' device as the backup location to eliminate any I/O or network bottlenecks at the destination. I normally run all my backups to a CIFS share across the network, so network latency and/or I/O can sometimes be an issue. The script used for this test can be found here and below are the results. The first two charts show the results of varying only the maxtransfersize parameter. We can see that as we increase the maxtransfersize the duration of the backup decreases, while CPU usage increases, although not by much relatively speaking. From these results we get the best performance when using a 4MB maxtransfersize and 5 disk files for the backup.
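A minimal sketch of the kind of statement this test runs (the database name is a placeholder; the author's actual script is linked above):

```sql
-- Sketch only: [TestDB] is a placeholder, not the author's database.
-- Striping across several 'nul' devices discards the output but still
-- exercises the read and buffering side of the backup.
BACKUP DATABASE [TestDB]
TO DISK = 'nul', DISK = 'nul', DISK = 'nul', DISK = 'nul', DISK = 'nul'
WITH MAXTRANSFERSIZE = 4194304;  -- 4MB, the maximum allowed
```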
In the next 4 charts we will show the effect of varying the buffercount parameter and see how much, if at all, we can improve performance. The results below are from the tests that used a 1MB maxtransfersize (first 2 charts) and a 4MB maxtransfersize (second 2 charts), as these two values yielded the best performance in the results above.
From this test we can see that the largest buffercount, 47, yields the best performance regardless of the maxtransfersize parameter. One thing to note is that with the smaller maxtransfersize we use less CPU to complete the process. Another is that in this case the number of disk files used for the backup seemed to be irrelevant. This can probably be attributed to the fact that SQL Server increases the default buffercount in the background based on the number of disk files you specify.
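Overriding the default buffercount can be sketched as follows (again, [TestDB] is a placeholder); the trace flags are the ones mentioned earlier for verifying the values SQL Server actually used:

```sql
-- Write trace output and backup/restore configuration details
-- (effective buffercount, maxtransfersize) to the SQL Server error log.
DBCC TRACEON(3605, -1);
DBCC TRACEON(3213, -1);

-- Sketch only: [TestDB] is a placeholder.
BACKUP DATABASE [TestDB]
TO DISK = 'nul'
WITH MAXTRANSFERSIZE = 1048576,  -- 1MB
     BUFFERCOUNT = 47;           -- ~47MB of buffer memory (47 x 1MB)
```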
Testing backup to physical file device without compression
For our second test we will perform the exact same test scenario as above, the only difference being that we write the backup to a physical file instead of the 'nul' device. The script used for this test can be found here and below are the results.
We can see from the results of this test that we get the best performance by using a large value for maxtransfersize (1MB or greater) and the default value for buffercount (which is based on the number of disk files used for the backup). With these settings our full 16GB backup completes in about 98 seconds. Under the default scenario, a backup to a single disk file with defaults for the other parameters takes about 138 seconds, so that's quite an improvement. Also interesting to note: when we use a physical file and vary the buffercount we don't see the nice tight CPU pattern we saw in the other tests.
Testing backup to physical file device with compression
This final test uses the same scenario as the previous 2 tests, only this time we add the 4th option listed above, compression, to all of the backup statements. The script used for this test can be found here and below are the results.
We can see from these results that our biggest improvement comes from adding compression. With a large value for maxtransfersize (1MB or greater) and a large value for buffercount (27 or greater) we can get this backup to run in under 60 seconds, more than twice as fast as a single disk file with the default parameters.

The only drawback to using compression, which comes out in the graphs, is the amount of CPU needed to perform compressed backups. Our initial tests without compression used about 400-1000ms of CPU during a backup; with compression we use anywhere from about 1000-3000ms (considering only the CPU usage for the faster backup durations). Since this extra CPU usage is spread over the full duration of the backup I would not be concerned: an extra 2000ms of CPU over a 60 second backup is really not that significant.

Another benefit of compression not shown in the charts is the size of the backup. Without compression our backup file was about 16GB; with compression it was only 5GB. Be mindful, though, that the data/datatypes used in your database will determine just how much disk space savings you actually see when adding this option.
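Putting the best-performing options from this test together might look like the following sketch (database name and file paths are placeholders, not the author's script):

```sql
-- Sketch only: names and paths are placeholders.
BACKUP DATABASE [TestDB]
TO DISK = 'D:\Backup\TestDB_1.bak',
   DISK = 'D:\Backup\TestDB_2.bak'
WITH COMPRESSION,
     MAXTRANSFERSIZE = 4194304,  -- 1MB or greater performed best here
     BUFFERCOUNT = 27;           -- 27 or greater performed best here
```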
While this test case showed that we were able to decrease our backup duration from 138 seconds to 54 seconds and our backup file size from 16GB to 5GB, with the only drawback being an increase in CPU usage from at most 1000ms to about 3000ms, testing should be done in your own environment to find the optimal settings for your backups.
Next Steps
- Use differential backups to reduce backup times
- Run similar tests on different hardware configurations (e.g. multiple physical backup disks)
- Run similar tests on different database configurations (e.g. multiple .mdf files)
- Use above parameters for RESTORE command
- Download all of the scripts