Auto Grow, Auto Shrink and Physical File Fragmentation in SQL Server

By: Armando Prato   |   Comments (4)   |   Related: Fragmentation and Index Maintenance


Problem

I've read many articles about defragmenting indexes and how it's a good practice. However, I've noticed that my physical database files are terribly fragmented too. What can cause this? Can I benefit from defragmenting them?

Solution

SQL Server only reports on logical index and data page fragmentation and does not have a built-in utility to check actual physical file fragmentation. It's up to you to check the fragmentation level of your physical disks using the disk defragmenting utility of your choice.
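
For example, on SQL Server 2005 and later the sys.dm_db_index_physical_stats DMV reports logical fragmentation (on SQL Server 2000 and earlier, DBCC SHOWCONTIG served the same purpose). A minimal sketch for the current database; the query is illustrative and the objects it returns depend on your own schema:

-- Report logical fragmentation for all indexes in the current database
SELECT OBJECT_NAME(ips.object_id) AS table_name,
       i.name AS index_name,
       ips.avg_fragmentation_in_percent,
       ips.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ips
JOIN sys.indexes AS i
    ON i.object_id = ips.object_id
   AND i.index_id = ips.index_id
ORDER BY ips.avg_fragmentation_in_percent DESC;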

Physical database files that are heavily fragmented can certainly hurt performance, since the index and data pages a query needs are initially read from disk before being placed into the buffer cache. Finding those pages on heavily fragmented physical disks makes disk reads inefficient. By defragmenting heavily fragmented database files, you can improve system I/O when reading physical pages from disk. As an added benefit, the time it takes for maintenance operations such as index rebuilds can be greatly reduced if the database files are contiguous. Note that physically defragmenting data files does not repair logical fragmentation; the defrag simply makes the physical data file contiguous. Re-indexing is still required to repair any logical fragmentation that is internal to the file.
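
As a hedged illustration of that last point, logical fragmentation is repaired by rebuilding or reorganizing the indexes themselves; the table and index names below are placeholders, and the thresholds are common rules of thumb rather than anything prescribed in this tip:

-- Repair logical fragmentation inside the data file (placeholder names)
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REORGANIZE;  -- lighter-weight option for modest fragmentation
ALTER INDEX IX_Orders_OrderDate ON dbo.Orders REBUILD;     -- full rebuild for heavier fragmentation
-- On SQL Server 2000, DBCC INDEXDEFRAG and DBCC DBREINDEX were the equivalent commands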

Physical database file fragmentation is generally caused by database events that grow and shrink the physical files, such as auto grow and auto shrink. These options can be set when a database is created or altered with T-SQL, or via the SQL Server UI tools (i.e. Management Studio or Enterprise Manager). Manually shrinking or growing files using T-SQL commands can also contribute to physical file fragmentation.
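
To see where a given database stands, you can check these options and the current file growth settings in the catalog views (SQL Server 2005 and later; 'MyDatabase' is a placeholder name):

-- Is auto shrink turned on?
SELECT name, is_auto_shrink_on
FROM sys.databases
WHERE name = 'MyDatabase';

-- How are the data and log files set to grow?
-- growth is a count of 8 KB pages unless is_percent_growth = 1
SELECT name, type_desc, size, growth, is_percent_growth
FROM sys.master_files
WHERE database_id = DB_ID('MyDatabase');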

Auto shrink is a database option that is used to conserve disk space. It creates a thread that runs periodically to detect when the database has more than 25% free disk space allocated to it. The database is then shrunk, leaving it with 25% free space, while the rest of the previously allocated disk space is released back to the Windows operating system. I personally don't like using the auto shrink option as it can do more harm than good. When it executes, it can heavily consume system resources such as CPU and disk, which can severely impact server performance. In addition, you have no control over when it executes; it's done automatically. More than likely, the freed disk space will end up being needed by the database again. If you have auto shrink enabled on your database to conserve disk space, consider moving the database to a new server with more disk space or adding new disks, and turning the setting off. Disk is cheap these days!
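
Turning the option off is a one-line change; a minimal sketch, with 'MyDatabase' as a placeholder:

-- Stop the periodic shrink thread from releasing and re-acquiring file space
ALTER DATABASE MyDatabase SET AUTO_SHRINK OFF;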

Auto grow is a database file option that can be set to allow your database to automatically grow its data and log files, in either percentage or fixed size increments, as more disk space is required. It kicks in when a database operation such as a large transaction occurs and the file does not have enough space to accommodate the data required. Auto growing can be another performance killer for the same reasons as auto shrinking. I do set my databases to auto grow. However, I try to minimize auto growth by sizing my database files with enough space when they're initially created to accommodate future growth. In addition, I try to make my auto growth increment large enough so this feature kicks in as infrequently as possible. Sizing your databases as large as possible at creation can greatly minimize the effects of physical file fragmentation. Allowing frequent auto growth with auto shrink enabled could also cause an interesting scenario where files are constantly growing and shrinking, leading to rapid physical fragmentation as well as server performance issues. Monitoring your server's disk space on a regular basis should be a part of your maintenance activities.
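
A sketch of pre-sizing the files and using a fixed, reasonably large growth increment instead of a small percentage; the database name, logical file names, and sizes are placeholders you would replace based on your own capacity planning:

-- Pre-grow the files and set fixed growth increments (placeholder names and sizes)
ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_Data, SIZE = 50GB, FILEGROWTH = 1GB);

ALTER DATABASE MyDatabase
MODIFY FILE (NAME = MyDatabase_Log, SIZE = 10GB, FILEGROWTH = 1GB);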

If you decide to defragment the physical disks that your SQL Server data files reside on, you should take the precaution of first backing up all databases that have files on the disks to be defragmented. Stop the SQL Server service along with its related services and then run your defragmenter. After the disks have been defragmented, you can restart all services. If you're a shop that cannot afford to take your databases offline, there are commercial utilities that can defragment your disk files while the database is still online.
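
A minimal sketch of that sequence, assuming a default (unnamed) SQL Server instance; the database name and backup path are placeholders:

-- 1. Back up every database with files on the disks to be defragmented
BACKUP DATABASE MyDatabase TO DISK = N'D:\Backups\MyDatabase.bak' WITH INIT;

-- 2. From an administrative command prompt, stop SQL Server and its related services
--      net stop MSSQLSERVER
-- 3. Run the disk defragmenting utility of your choice against the affected drives
-- 4. Restart the services
--      net start MSSQLSERVER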

Next Steps
  • Examine your database settings and consider turning off auto shrink if it's enabled
  • Examine your database file sizes and consider manually expanding them to minimize future auto-growth
  • Read about the other database options
  • Read this tip about rebuilding indexes





About the author
Armando Prato has close to 30 years of industry experience and has been working with SQL Server since version 6.5.

This author pledges the content of this article is based on professional experience and not AI generated.




Comments For This Article




Monday, April 21, 2014 - 4:43:21 PM - Vijay Kumar

But in the case of VLDBs, how do you perform index maintenance? Suppose my database size is around 2 TB, and index maintenance takes around 8 hrs on the production DB.

In this case, how do you plan for maintenance and how do you run update statistics?

Generally, what are the recommended maintenance plans for a VLDB in a production environment?

This is one of the questions I faced in an interview.

 


Thursday, May 1, 2008 - 7:41:24 AM - aprato

 Hi Ami

 http://www.sql-server-performance.com/article_print.aspx?id=974&type=art

 The above article by Brad McGehee was one I bookmarked some time back. 

Refer to the section: Disk Fragmentation Level

Also, this is a good whitepaper on SANs and RAIDs and fragmentation in general 

http://whitepapers.zdnet.com/abstract.aspx?&scname=RAID&docid=131601 


  


Tuesday, April 29, 2008 - 8:05:17 AM - Ami Levin

I think you were wrong in your basic assumption that "physical file fragmentation hurts performance".

Physical fragmentation of files is relevant only on a single physical disk or in a disk mirroring array.

Once you create a disk RAID array (5, 10...) which "slices" the data and spreads it over different spindles, you are actually creating a deliberate physical fragmentation. The OS is "unaware" of this and will treat it "as if" it was a single spindle but actually, nothing gets really "defragmented".

Since most DB files reside on RAID 5/10 or similar arrays, you get no benefit by "virtually" defragmenting the files so that the OS will "think" it is contiguous when in fact, it's not.


Monday, April 21, 2008 - 4:16:38 PM - GreyDBA

Yup, taking a database offline is an issue.  But typically large databases are "fragmented" anyway, at least in big chunks all over the disk array.  For those with databases that can be backed up and restored in a reasonable timeframe, instead of spending gobs of money on a third party 'tool' to physically defragment the database, try this....

1.  Make sure you disable database access for this.
2.  Back up the database.
3.  Flush the transaction log.  A full backup will ensure that everything is on the disk.
4.  Set the recovery mode to simple and shrink the transaction log file.  It should go to its smallest size.
5.  Back up the database again (this is the one you will be using, so keep it safe!)
6.  Leap of faith time.  Delete the database.  Yes, I know it hurts, but it's necessary for the OS to reclaim the file space on the disk.
7.  Now restore the database from the backup taken in step 5.  The first part of the restore is to create and reserve file space for the database, in as contiguous a manner as the OS will allow.  The restore then serially places the data back into the new file (a T-SQL sketch of these steps follows below).
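
A hedged T-SQL sketch of the steps above; the database name, logical log file name, and backup paths are placeholders, and this follows the commenter's procedure rather than anything from the tip itself:

-- Step 1: disable general access (one way to do it)
ALTER DATABASE MyDatabase SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

-- Steps 2-5: back up, switch to simple recovery, shrink the log, then take the backup you will restore from
BACKUP DATABASE MyDatabase TO DISK = N'D:\Backups\MyDatabase_initial.bak' WITH INIT;
ALTER DATABASE MyDatabase SET RECOVERY SIMPLE;
DBCC SHRINKFILE (MyDatabase_Log, 1);   -- shrink the log file toward its smallest size
BACKUP DATABASE MyDatabase TO DISK = N'D:\Backups\MyDatabase_final.bak' WITH INIT;

-- Step 6: drop the database so the OS can reclaim the file space
DROP DATABASE MyDatabase;

-- Step 7: restore; the restore allocates the files as contiguously as the OS allows
RESTORE DATABASE MyDatabase FROM DISK = N'D:\Backups\MyDatabase_final.bak' WITH RECOVERY;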

Now to make sure it stays in a contiguous state. 
1.  Turn AUTO Shrink OFF.  Turn AUTOGrow OFF.
2.  Some capacity planning is required here, but now increase the size of the data file to accommodate, say, 6 months' growth (this very much depends on the use of the database).
3.  Increase the size of the transaction log to accommodate what you consider to be a workable size.  Rule of Thumb 25-30% of the Database, but some may need more.

Make sure you place appropriate alerts on the size of the Transaction log and have TxLog backups in place to manage its %used - IMPORTANT.

Now the secret.  

1.  Manually grow the data file size and the log file size, in large enough chunks to accommodate forecasted growth, usage and available disk space.
Also don't be afraid of adding filegroups to the database and additional log files.
2.  Recreate the indexes in the database

Major benefits.

1.  You control where the data and log files reside and how big they are.
2.  Transaction throughput is improved as SQL server doesn't have to shuffle file space before attending to your needs.
3.  For those people who use triggers (and who doesn't?).  Because the 'Inserted' and 'Deleted' tables are read from the transaction log, guess what, you are not flogging the disk to death trying to pull all the data together for your trigger to finish.  And yuk, if you are using a cursor inside a trigger, I'm sorry, but no end of fragmentation fixing will improve that.
4.  Your backups are faster as there are fewer I/O operations on the disk searching for all of the parts of the file.
5.  Yes, the restores are faster as the OS doesn't need to search for all the fragments to delete before creating the restoration database.

Yes, the maintenance overhead is a little bigger, but hey, you're the DBA!!

Hope that helps.














