SQL Server Fragmentation: How to Address It (Part 8 of 9)
In this second-to-last post in the fragmentation series, we'll discuss the options for addressing and correcting fragmentation. In the next and final post, we'll finish with a full-fledged SQL script that walks through everything we've covered in the series, so you can get some hands-on experience with it.
Once you've detected that you have fragmentation, understood what it impacts in your workload, and identified the type of fragmentation you're trying to correct, you realistically have three options to address it:
- DEFRAG / REORGANIZE - A defrag operation (called a reorganize in 2005) addresses logical fragmentation by increasing page density in an index and by re-ordering the existing pages in place into proper logical order as best it can. A defrag will not, however, create contiguous pages if the existing pages are not contiguous: no new pages are allocated during the operation; it only shuffles pages already allocated to the index. A defrag/reorg therefore does not address extent fragmentation at all, only logical fragmentation. Since logical fragmentation is traditionally the most intrusive type, a defrag/reorg is often the best option anyway.
- REINDEX / REBUILD - A reindex/rebuild operation does everything a defrag/reorganize does, and also builds fully contiguous pages by allocating new pages to the index where appropriate. A rebuild is a completely atomic operation (i.e., all or nothing) and is an 'offline' operation as well (though 2005 Enterprise Edition adds a new online rebuild option). One side benefit of a rebuild is that the statistics related to the index are rebuilt along with the index itself (unlike with a defrag).
- DROP and CREATE - Functionally, there is typically no difference between a full rebuild and a drop/create operation. One benefit of a drop/create is that you can also change the index keys and included columns (SQL 2005) while you're at it.
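The three options above map to the following commands; this is a sketch using a hypothetical table `dbo.Orders` and index `IX_Orders_CustomerId`, with the SQL 2000 syntax shown alongside the 2005 equivalents:

```sql
-- Option 1: defrag / reorganize (online)
DBCC INDEXDEFRAG (0, 'dbo.Orders', 'IX_Orders_CustomerId');   -- SQL 2000 (0 = current database)
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REORGANIZE;    -- SQL 2005

-- Option 2: reindex / rebuild (offline by default; also rebuilds statistics)
DBCC DBREINDEX ('dbo.Orders', 'IX_Orders_CustomerId', 80);    -- SQL 2000, fillfactor of 80
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REBUILD;       -- SQL 2005

-- Option 3: drop and create (lets you change keys / included columns)
DROP INDEX IX_Orders_CustomerId ON dbo.Orders;                -- SQL 2005 syntax
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
    ON dbo.Orders (CustomerId)
    INCLUDE (OrderDate);                                      -- INCLUDE is new in SQL 2005
```

Note that dropping an index leaves the table without that index (and its query benefits) until the CREATE completes, which is one reason a rebuild is usually preferred over drop/create.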
The most frequently used options are the first two. Here's a matrix comparing some of the key functional differences between the rebuild and reorganize options:
| Functionality | REBUILD / DBREINDEX | REORGANIZE / DEFRAG |
| --- | --- | --- |
| Online / offline | Offline; online possible with 2005 Enterprise Edition | Online |
| Faster when logical fragmentation is | High | Low |
| Can be stopped/started without losing completed work | No | Yes |
| Able to untangle interleaved pages | Yes | No |
| Additional free space required in data file | Yes (1.2x to 2x existing size) | No |
| Faster on larger indexes | Yes | No |
| Log space used | Full recovery: high; bulk-logged/simple recovery: low | Depends on work performed |
| May skip pages on a busy system | No | Yes |
| Can specify additional options (fillfactor, etc.) | Yes | No |
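As a sketch of the last row of the matrix, the rebuild path accepts extra options that reorganize does not (the table and index names here are hypothetical):

```sql
-- Rebuild with an explicit fillfactor, keeping the index online
-- (ONLINE = ON requires SQL 2005 Enterprise Edition)
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
    REBUILD WITH (FILLFACTOR = 80, ONLINE = ON, SORT_IN_TEMPDB = ON);

-- Reorganize accepts no fillfactor; it compacts pages toward the
-- fillfactor the index was originally built with
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders REORGANIZE;
```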
Of course, one other option for 'addressing' fragmentation is to choose not to fix it. Hopefully you're not the type of engineer who rebuilds or defrags indexes every day simply for the sake of doing so, or because it makes you feel better. Instead, work out a plan to understand which indexes in your schema actually matter most and which are genuinely impacted by the different kinds of fragmentation, and leave the rest alone.
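One way to target only the indexes that matter is to check fragmentation levels first. Here's a sketch using the 2005 DMV; the 10% and 1,000-page thresholds are a common rule of thumb, not a hard rule:

```sql
-- Find fragmented indexes worth acting on in the current database
SELECT OBJECT_NAME(ps.object_id)         AS table_name,
       i.name                            AS index_name,
       ps.avg_fragmentation_in_percent,
       ps.page_count
FROM sys.dm_db_index_physical_stats(DB_ID(), NULL, NULL, NULL, 'LIMITED') AS ps
JOIN sys.indexes AS i
  ON i.object_id = ps.object_id AND i.index_id = ps.index_id
WHERE ps.avg_fragmentation_in_percent > 10
  AND ps.page_count > 1000               -- small indexes rarely benefit
ORDER BY ps.avg_fragmentation_in_percent DESC;
-- Common guidance: REORGANIZE between roughly 10% and 30%, REBUILD above 30%
```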
Let's take a look at the internals of a defrag/reorganize. Reorganizing an index is performed in two stages:
- The page compaction stage - this stage tries to bring pages to a 'fullness' near the original fillfactor specification for the index. It operates at the leaf level of the index only: rows are compacted by shuffling them toward the left side of the B-tree, ghosted records are dropped, and pages left empty are freed.
- The page defragmenting stage - this stage makes the logical order of pages match the allocation order. Again it operates at the leaf level only, by performing a logical-ordered scan and an allocation-ordered scan in lockstep with one another, re-establishing the scan position after every page. This is a fully online operation, aside from an exclusive (X) lock held on each page for the duration of that page's re-ordering. Remember that no new pages are allocated during a defrag (aside from a single page used as temporary space), so logical re-ordering is achieved only by shuffling pages already allocated to the index. This is why a defrag operation can't untangle interleaved pages: only existing pages are shuffled.
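The effect of the compaction stage can be observed through leaf-level page density before and after a reorganize. A sketch, assuming the same hypothetical table and index as above; the `DETAILED` (or `SAMPLED`) mode is needed because `LIMITED` returns NULL for page density:

```sql
-- Page density (fullness) and logical fragmentation at each index level
SELECT index_level,
       avg_page_space_used_in_percent,
       avg_fragmentation_in_percent
FROM sys.dm_db_index_physical_stats(
         DB_ID(), OBJECT_ID('dbo.Orders'), NULL, NULL, 'DETAILED');

-- Reorganize; LOB_COMPACTION = ON (the default) also compacts off-row LOB data
ALTER INDEX IX_Orders_CustomerId ON dbo.Orders
    REORGANIZE WITH (LOB_COMPACTION = ON);
```

Running the density query again after the reorganize should show `avg_page_space_used_in_percent` at the leaf level (index_level 0) climbing toward the index's fillfactor.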
That wraps up our second-to-last post in the series. In the next and final post, we'll finish with an all-in-one script you can use in a test environment of your own to verify everything we've discussed.
Article Last Updated: 2008-02-23