

Incrementally shrinking a large SQL Server data file using PowerShell

By: Richard Vantrease   |   Last Updated: 2014-03-06   |   Comments (9)   |   Related Tips: More > Database Administration


I have a large database on a SQL Server instance with more than 70% free space due to data cleanup. Unfortunately the instance is running out of disk space and I don't have any free drive bays left to add more. The database is not expected to ever need the free space it has available, because data retention policies have changed. Therefore, in this case it makes sense to shrink the database files and reclaim the space.

I am hoping to recover more than 200 GB of free space from the database I will shrink. I have tested that shrinking the database by 100 MB takes roughly 10 minutes. This will clearly take quite some time, so I don't want to just tell SQL Server to shrink the database by 200 GB and walk away. What is a good way to shrink a database by roughly 200 GB?

I should note that normally it is a VERY BAD idea to shrink a database. Usually you would be much better off adding more storage than shrinking an existing database. This article explains why not to shrink a database: Why you should not shrink your data files. Essentially, the problem is that when SQL Server shrinks a database, it first clears space at the end of the file. It does this by moving the data closest to the end of the file into the free space closest to the beginning of the file, and repeats this until it has cleared enough space to truncate the file, thereby shrinking it. This can cause near perfect fragmentation, as Paul points out. There is also a tip from MSSQLTips.com author Andy Novick on Issues with running DBCC SHRINKFILE on your SQL Server data files.


Let's start by getting size and free space information for our database. In my previous tip, Check SQL Server Database Consistency and Backups Without Impacting Performance, I demonstrated how to retrieve a database object using PowerShell and Microsoft SQL Server Management Objects (SMO). Arshad Ali talks about SMO in his tip: Getting started with SQL Server Management Objects (SMO). We can use the SMO database object to get size and free space information for the database.

For my example, I have created a sample database called MyBigDB that is 50 GB with roughly 153 MB used.

$server = New-Object Microsoft.SqlServer.Management.Smo.Server 'V2012'
$Database = $Server.Databases['MyBigDB']

Unfortunately the database object does not expose the size property I am looking for. What I really need is size and free space information for a file. To get that, I first need to access the filegroups for the database; in this case there is only one, the PRIMARY filegroup. Luckily for us, the SMO database object has a FileGroups property, which is a collection of filegroup objects. The filegroup object has a Files property, which is a collection of datafile objects. In our case we need the PRIMARY filegroup, and the only file in that filegroup, which has a logical name of MyBigDB.

$server = New-Object Microsoft.SqlServer.Management.Smo.Server 'V2012'
$Database = $Server.Databases['MyBigDB']
$Filegroup = $Database.filegroups['Primary']
$File = $Filegroup.Files['MyBigDB']

Now that I have the data file, I can get the size and free space information I need. A datafile object has three size properties:

  • AvailableSpace - Gets the amount of available space in the data file in KB.
  • Size - Gets or sets the current size of the data file in KB.
  • UsedSpace - Gets the amount of used space in the data file in KB.

The properties AvailableSpace, Size and UsedSpace are in KB. To convert them to MB, divide them by 1024. Here is what I have so far:

[System.Reflection.Assembly]::LoadWithPartialName('Microsoft.SqlServer.Smo')
$server = New-Object Microsoft.SqlServer.Management.Smo.Server 'V2012'
$Database = $Server.Databases['MyBigDB']
$Filegroup = $Database.FileGroups['Primary']
$File = $Filegroup.Files['MyBigDB']
'Datafile size properties'
$File | Select-Object AvailableSpace, Size, UsedSpace

GAC    Version      Location 
---    -------      -------- 
True   v2.0.50727   C:\Windows\assembly\GAC_MSIL\Microsoft.SQLServer.Smo\11.0.0...
Datafile size properties

My sample database (MyBigDB) is 50 GB with just over 49.8 GB free. MyBigDB has only one data file, MyBigDB (not really a good idea, but it keeps my tip much simpler). I could just issue a single DBCC shrink command to shrink the database to 160 MB.
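Such a one-shot command might look like this (a sketch; I use DBCC SHRINKFILE here since the database has a single data file, and 160 MB is the target from my example):

```sql
USE MyBigDB;
-- Shrink the single data file down to 160 MB in one operation
DBCC SHRINKFILE (MyBigDB, 160);
```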


Unfortunately, this would mean waiting while all of the free space is cleared before the file is shrunk. I don't like to do this because it means you don't recover any free space until all of it is available. In some cases things are getting pretty urgent, and waiting a long time is not an option. What I prefer to do instead is shrink a small amount at a time, so that I get back each small chunk as soon as it becomes available. A series of DBCC SHRINKFILE commands, each with a slightly smaller target size, would accomplish this, but I would have a couple hundred commands to write and run.
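Such a series might look like the following sketch (the target sizes are illustrative, stepping down 20 MB at a time from the 50 GB starting size toward the final target):

```sql
USE MyBigDB;
DBCC SHRINKFILE (MyBigDB, 51180);  -- reclaim the first 20 MB
DBCC SHRINKFILE (MyBigDB, 51160);  -- then the next 20 MB
DBCC SHRINKFILE (MyBigDB, 51140);
-- ...a couple hundred more commands, stepping down to the final target...
DBCC SHRINKFILE (MyBigDB, 48000);
```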


Let's create a quick script to accomplish this task without all the repetition.

param(
    [Parameter(Mandatory=$True)][string]$SQLInstanceName,
    [Parameter(Mandatory=$True)][string]$DatabaseName,
    [Parameter(Mandatory=$True)][string]$FileGroupName,
    [Parameter(Mandatory=$True)][string]$FileName,
    [Parameter(Mandatory=$True)][int]$ShrinkSizeMB,
    [Parameter(Mandatory=$False)][int]$ShrinkIncrementMB = 10
)
$Server = New-Object Microsoft.SqlServer.Management.Smo.Server $SQLInstanceName
$Database = $Server.Databases[$DatabaseName]
$Filegroup = $Database.FileGroups[$FileGroupName]
$File = $Filegroup.Files[$FileName]
$SizeMB = $File.Size/1024
"Starting Size = $SizeMB"
# Proceed only if the used space will fit within the target size
if (($SizeMB - $File.AvailableSpace/1024) -le $ShrinkSizeMB) {
    while ($SizeMB -gt $ShrinkSizeMB) {
        # Shrink one increment at a time, but never below the target size
        $File.Shrink([math]::Max($SizeMB - $ShrinkIncrementMB, $ShrinkSizeMB), [Microsoft.SqlServer.Management.Smo.ShrinkMethod]::Default)
        $Server.Refresh(); $Database.Refresh(); $File.Refresh()  # re-read the file size
        $SizeMB = $File.Size/1024
        "Size after shrink = $SizeMB"
    }
}

Notice that I created some parameters so the script is re-usable. Parameters that are marked as mandatory will prompt the user for a value when the script is run, if one is not provided. Also notice that after each shrink operation I refresh the server object, the database object and the datafile object, which lets the next size check see the file's new size before performing the next shrink.

Here is a sample command to run the above script when it is saved as ShrinkFile.ps1.

.\ShrinkFile.ps1 -SQLInstanceName 'v2012' `
                 -DatabaseName 'MyBigDB' `
                 -FileGroupName 'Primary' `
                 -FileName 'MyBigDB' `
                 -ShrinkSizeMB 48000 `
                 -ShrinkIncrementMB 20
Next Steps
  • Defragment datafile after shrinking
  • Deal with Large Objects for shrinking file
  • Set the SQL Server recovery model to simple (see: SQL Server Recovery Models). Shrink operations are logged, so switching to the simple recovery model makes the shrink go MUCH faster.
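Switching the recovery model around the shrink can be scripted as well (a sketch; note, as discussed in the comments below, that leaving the full recovery model breaks the log backup chain, so take a full or differential backup after switching back):

```sql
ALTER DATABASE MyBigDB SET RECOVERY SIMPLE;
-- ...run the incremental shrink here...
ALTER DATABASE MyBigDB SET RECOVERY FULL;
```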



About the author
Richard Vantrease is a lead SQL Server DBA for a large company. His specialties include architecture, the data engine and automation.



Thursday, September 17, 2015 - 2:35:43 AM - Meet

Cannot index into a null array.
At C:\Users\meet.parikh\Desktop\ShrinkFile.ps1:13 char:1
+ $Filegroup = $Database.filegroups[$FileGroupName]
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (:) [], RuntimeException
    + FullyQualifiedErrorId : NullArray

Cannot index into a null array.
At C:\Users\meet.parikh\Desktop\ShrinkFile.ps1:14 char:1
+ $File = $Filegroup.Files[$FileName]
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    + CategoryInfo          : InvalidOperation: (:) [], RuntimeException
    + FullyQualifiedErrorId : NullArray

I am getting the above error after executing the script and entering the parameters. Can you suggest a solution?


Thursday, March 06, 2014 - 10:37:16 PM - Richard Vantrease

Csaba Toth

What did he experience? Does SHRINK grow the transaction log file in the full recovery model? Does reindexing grow the transaction log file in the full recovery model? How much do these operations increase the log file size in the simple recovery model?
There is a good TechNet article: Choosing a Recovery Model for Index Operations that states the following:

Some fully logged, large-scale index operations can generate large data loads that can cause the transaction log to fill quickly whether the operation is executed offline or online.

Thursday, March 06, 2014 - 10:34:29 PM - Richard Vantrease

SQLCereal, to answer your question:

Could you please explain how being in SIMPLE mode would help?

It is my understanding that data page moves are logged.  Unfortunately, I cannot currently find a reference for that.  However, I can tell you from experience that when I don't put the database in simple recovery mode, the log file is extremely active.  Usually my log files don't have to grow, because I usually have plenty of space for them.

I'm fairly certain I read an article about why page moves need to be logged that was in relation to a clustered index rebuild, maybe someone else has read the article I am thinking of.

Thursday, March 06, 2014 - 9:26:27 PM - Richard Vantrease

David, in response to your question:

Can you confirm this statement is accurate: "It does this by moving the data closest to the end of the file to the free space closest to the beginning of the file."?

If you look at the link above Why you should not shrink your data files, which was written by Paul Randal, who used to own the shrink code, he verifies the way pages are moved.

Wow! After the shrink, the logical fragmentation is almost 100%. The shrink operation *completely* fragmented the index, removing any chance of efficient range scans on it by ensuring the all range-scan readahead I/Os will be single-page I/Os.

Why does this happen? A data file shrink operation works on a single file at a time, and uses the GAM bitmaps (see Inside The Storage Engine: GAM, SGAM, PFS and other allocation maps) to find the highest page allocated in the file. It then moves it as far towards the front of the file as it can, and so on, and so on. In the case above, it completely reversed the order of the clustered index, taking it from perfectly defragmented to perfectly fragmented.

Thursday, March 06, 2014 - 4:52:39 PM - Csaba Toth

I'm eager to see what Richard Vantrease will say about LOB data. I have some!

Reference to my SHRINK sufferings: http://dba.stackexchange.com/questions/47310/shrinkfile-best-practices-and-experience

Note: just as the author says, I know SHRINK is a very bad practice as part of a maintenance plan, but as in the case described in this article, there are rare occasions when it needs to be done. Notice how I didn't get any useful advice on StackOverflow/DBA Stack Exchange at all! What I got was a lot of downvotes and commotion. I kept logging my experiences at the bottom of that question though.

Thursday, March 06, 2014 - 4:48:32 PM - Csaba Toth

SQLCereal: it must be because of log file growth, and I really advise everyone to watch out! The writer is also in a situation where there isn't much free space on the server! The log file would "explode", especially during the 0th step (when you free up the space), if you were in the full recovery model. If you don't switch from the full recovery model, you'll need a lot of free space for your log file, since you can only reclaim the freed-up space from the DB later.

Once you are in simple recovery model, I'd stay there for the SHRINK and the reindex phases too. Switching to simple mode breaks the backup chain though. I don't know what to do if someone (a client with a problematic DB) doesn't want to switch to simple mode.

I'm curious about the insights of the article writer about that.

What did he experience? Does SHRINK grow the transaction log file in the full recovery model? Does reindexing grow the transaction log file in the full recovery model? How much do these operations increase the log file size in the simple recovery model?

Thursday, March 06, 2014 - 4:41:50 PM - Csaba Toth

Thank you for your article, I cannot wait for the next parts!

I'm in a similar situation with several databases (e.g. I can free up 19 GB from a 34 GB database). Everywhere I tried to get advice I didn't get valuable information. I'm happy to read from someone who has experienced the same issues (what I mean is not the incremental shrinking, but SHRINK itself).

My experiences:

1. For the shrink, I separately do a DBCC SHRINKFILE with NOTRUNCATE, and then a DBCC SHRINKFILE with TRUNCATEONLY and the target size. For some reason the actual file shrink didn't happen for me otherwise.

2. For reindexing, the DBCC commands (DBCC SHOWCONTIG, DBCC DBREINDEX) are considered outdated, so I use the sys.dm_db_index_physical_stats DMV to query the fragmentation percentage, and ALTER INDEX REORGANIZE or REBUILD depending on the fragmentation, as advised by the TechNet article "sys.dm_db_index_physical_stats (Transact-SQL)", see Example D: http://technet.microsoft.com/en-us/library/ms188917.aspx



Thursday, March 06, 2014 - 4:34:12 PM - SQLCereal

I am curious as to why changing the recovery model to SIMPLE would increase the speed of the shrinks (from Next Steps). SHRINKFILE is not listed as a minimally logged action, so it would not reduce what needs to be written to the log. The one thing it may help with is reducing the potential growth of the log, since you are shrinking using multiple transactions and in between those transactions VLFs should be getting marked for reuse.

Could you please explain how being in SIMPLE mode would help?

Thursday, March 06, 2014 - 11:55:46 AM - David Wanta

Thanks for the article and helpful information.  I understand your dilemma. 

Can you confirm this statement is accurate: "It does this by moving the data closest to the end of the file to the free space closest to the beginning of the file."?

We have always operated with the concern that splitting the shrink into multiple commands could cause the data to be even more messed up than a single shrink command.  We realize you can stop a shrink command at any time with very little concern, but have been hesitant to run successive shrinks like you suggest.  In the case you described, I might have done an initial shrink to clear some drive space and then run a subsequent shrink to get me to the point I desired.  And hopefully be able to run our index maintenance routines right after.

Thanks again for your help and have a great day,

