Options to Move a Big SQL Server Database to a New Drive with Minimal Downtime



Problem

My company bought new storage and we were instructed to move our SQL Server databases to it. The problem arose when we needed to move a 2 TB database with a downtime window of only 15 minutes. In this tip I will show you how I managed to do it successfully with native SQL Server options.

Solution

There are several ways to move a database, and each one has its pros and cons. There is no single right or wrong method; rather, one method will always be more suitable than the others for a given scenario.

I will briefly cover the most common methods to move a database and enumerate the pros and cons of each.

The SQL Server Database Detach and Attach Method to Move a Database

This is the first method that comes to mind when we need to move a database. It consists of executing the sp_detach_db system stored procedure, moving the database files, and then issuing a CREATE DATABASE…FOR ATTACH statement.

Pros

  • It is the easiest way to move a database.
  • It doesn't produce fragmentation on tables and indexes.

Cons

  • Database is inaccessible while files are being moved.
  • For a replicated database to be detached, it must be unpublished.
  • When a database is detached, all its metadata is dropped. If the database was the default database of any login accounts, master becomes their default database. Also cross database ownership chaining is broken.
  • Before detaching a database, you must drop all of its snapshots.
  • If the database is being mirrored, it cannot be detached until the database mirroring session is terminated.
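
The detach, move, and attach sequence can be sketched as follows. This is an illustrative outline only, using the TestDB sample database created later in this tip; the drive letters and file paths are hypothetical:

```sql
USE [master];
GO
-- Detach the database; this fails if there are active connections,
-- snapshots or replication publications
EXEC sp_detach_db @dbname = N'TestDB', @skipchecks = 'false';
GO
-- ...move TestDB.mdf and TestDB.ldf to the new drive at the operating system level...

-- Attach the database from its new location
CREATE DATABASE [TestDB]
    ON (FILENAME = N'G:\MSSQL\TestDB.mdf'),
       (FILENAME = N'G:\MSSQL\TestDB.ldf')
    FOR ATTACH;
GO
```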

The SQL Server Database Backup and Restore Method to Move a Database

The second easiest method, in my opinion, is to back the database up and restore it using the WITH MOVE option to specify the new file locations.

Pros

  • Anybody who knows how to back up and restore a database can do it.
  • It doesn't produce fragmentation on tables and indexes.

Cons

  • Database or filegroup is inaccessible until the database restore finishes.
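
A sketch of this method against the TestDB sample database created later in this tip; the backup folder and new drive letter are hypothetical:

```sql
USE [master];
GO
-- Take a full backup of the database
BACKUP DATABASE [TestDB]
    TO DISK = N'E:\Backup\TestDB.bak'
    WITH INIT;
GO
-- Restore it in place, relocating every file to the new drive with MOVE
RESTORE DATABASE [TestDB]
    FROM DISK = N'E:\Backup\TestDB.bak'
    WITH REPLACE,
         MOVE N'TestDB' TO N'G:\MSSQL\TestDB.mdf',
         MOVE N'TestDB_1' TO N'G:\MSSQL\TestDB_1.ndf',
         MOVE N'TestDB_2' TO N'G:\MSSQL\TestDB_2.ndf',
         MOVE N'TestDB_log' TO N'G:\MSSQL\TestDB.ldf';
GO
```

Every logical file in the database needs its own MOVE clause, otherwise that file is restored to its original location.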

The SQL Server OFFLINE Database Method to Move a Database

By setting the database offline we can manually move the files, issue an ALTER DATABASE…MODIFY FILE command to update the database's metadata, and then set the database back online.

Pros

  • It is much like the Detach – Attach method.
  • It doesn't produce fragmentation on tables and indexes.
  • It preserves database metadata unlike the Detach – Attach method.
  • It works with published and mirrored databases.

Cons

  • Database is inaccessible while files are being moved.
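
In condensed form the method looks like this (paths are illustrative; the Sample Code section at the end of this tip walks through a fuller version step by step):

```sql
USE [master];
GO
-- Take the database offline, killing open connections
ALTER DATABASE [TestDB] SET OFFLINE WITH ROLLBACK IMMEDIATE;
GO
-- ...copy the files to the new drive at the operating system level...

-- Point the metadata at the new file locations
ALTER DATABASE [TestDB] MODIFY FILE
    (NAME = N'TestDB', FILENAME = N'G:\MSSQL\TestDB.mdf');
ALTER DATABASE [TestDB] MODIFY FILE
    (NAME = N'TestDB_log', FILENAME = N'G:\MSSQL\TestDB.ldf');
GO
ALTER DATABASE [TestDB] SET ONLINE;
GO
```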

The SQL Server New Filegroup Method to Move a Database

This method requires a little more effort. It consists of creating a new filegroup and moving objects into the new filegroup by rebuilding tables and indexes. After that, you can delete the old filegroup unless it was the primary filegroup.

Pros

  • It doesn't produce fragmentation on tables and indexes.
  • It preserves database metadata.

Cons

  • This method won't work for the primary filegroup.
  • Since you are altering the database design, it may be prudent to speak with someone responsible for the application like the developer or perform a lab test prior to executing this task.
  • Requires more work than the earlier mentioned methods.
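
The idea can be sketched like this, using the dbo.SampleTable table from the Sample Code section and assuming its clustered primary key index is named PK_SampleTable (the filegroup name NewFG is also just an example):

```sql
USE [master];
GO
-- Create a filegroup backed by a file on the new drive
ALTER DATABASE [TestDB] ADD FILEGROUP [NewFG];
ALTER DATABASE [TestDB]
    ADD FILE (NAME = N'TestDB_NewFG',
              FILENAME = N'G:\MSSQL\TestDB_NewFG.ndf',
              SIZE = 4096KB, FILEGROWTH = 1024KB)
    TO FILEGROUP [NewFG];
GO
USE [TestDB];
GO
-- Rebuild the clustered index onto the new filegroup;
-- this moves the table's data pages along with it
CREATE UNIQUE CLUSTERED INDEX PK_SampleTable
    ON dbo.SampleTable (ID)
    WITH (DROP_EXISTING = ON)
    ON [NewFG];
GO
```

Each table and index has to be rebuilt this way, which is where the extra effort comes from.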

The SQL Server Log Shipping Method to Move a Database

For this method, you configure your database as a log shipping primary, and during a maintenance window you switch databases (remove log shipping and rename the databases). Of course, if your database is in the simple recovery model you need to change it to the full or bulk-logged recovery model prior to configuring log shipping.  Here is a tip that explains Step By Step SQL Server Log Shipping.

Pros

  • It doesn't produce fragmentation on tables and indexes.
  • It preserves database metadata.
  • Database remains online, so users can continue accessing it.

Cons

  • It needs the SQL Server Agent service up and running.
  • Since log shipping uses a folder to store backups, you may need to review folder permissions if you are running the SQL Server Agent and SQL Server services with different accounts.
  • Although this method doesn't require much work compared to other options, it does require a bit more knowledge: you need to know how to set up log shipping and how to fail over to the database on the new storage successfully.

Manual SQL Server Log Shipping to Move a Database

Log shipping automates the process of keeping a synchronized secondary database, which is great if you need to maintain that synchronization over time. In this case, however, we can do the same without the automation, because we only need a synchronized database for perhaps a day before the switchover. To save steps, we can restore a full backup as a new database without recovery; then, during the maintenance window, take a log backup, restore it to the secondary with recovery, and perform the switchover (rename the databases) to our new database.

Pros

  • It doesn't produce fragmentation on tables and indexes.
  • It preserves database metadata.
  • Database remains online, so users can continue accessing it.
  • It requires less configuration than the Log Shipping method.

Cons

  • You must remember to use WITH MOVE when restoring the full backup, or the files will be created in their original locations.
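
A sketch of the manual approach; the database name TestDB_New, the backup folder, and the new drive paths are all illustrative:

```sql
USE [master];
GO
-- Restore the full backup as a new database on the new drive, without recovery,
-- so further log backups can still be applied
RESTORE DATABASE [TestDB_New]
    FROM DISK = N'E:\Backup\TestDB.bak'
    WITH NORECOVERY,
         MOVE N'TestDB' TO N'G:\MSSQL\TestDB.mdf',
         MOVE N'TestDB_log' TO N'G:\MSSQL\TestDB.ldf';
GO
-- During the maintenance window: block new activity, take a final log backup,
-- apply it with recovery, and switch the names over
ALTER DATABASE [TestDB] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
BACKUP LOG [TestDB] TO DISK = N'E:\Backup\TestDB_final.trn';
RESTORE LOG [TestDB_New]
    FROM DISK = N'E:\Backup\TestDB_final.trn'
    WITH RECOVERY;
GO
ALTER DATABASE [TestDB] MODIFY NAME = [TestDB_Old];
ALTER DATABASE [TestDB_New] MODIFY NAME = [TestDB];
GO
```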

The DBCC SHRINKFILE Method to Move a Database

This option consists of emptying each file by executing DBCC SHRINKFILE with the EMPTYFILE argument. You have probably heard warnings about shrink operations and fragmentation, but this is not something you will do every day; you will rarely change database file locations. Books Online states that DBCC SHRINKFILE with the EMPTYFILE argument "empties a file by moving the data from the specified file to other files in the same filegroup, allowing the file to be removed from the database".

This is the slowest of all methods; just to give you an idea, it took almost 2 weeks to move my 2 TB database. The long running times of shrink operations are mainly because they are single-threaded, unlike table or index rebuilds, which can be parallelized. Furthermore, if HyperThreading is enabled on your CPU it can take even longer, because the two execution threads on each core share that core's cache. Take for example a CPU with 4 cores and 8 MB of cache memory. With HyperThreading disabled, each core runs a single execution thread, so each thread has 2 MB of cache memory (8 MB divided by 4 cores). In contrast, with HyperThreading enabled, each core hosts two execution threads, which leaves roughly 1 MB of cache memory per thread (8 MB divided by 4 cores and then by the 2 threads per core).

If you use this method then you should also consider the impact on performance. Even though this is a single-threaded operation, it can have a profound effect on performance: not because of the data movement per se, but because of the fragmentation it causes. Shrinking a database is one of the worst things you can do for index performance due to the logical fragmentation it introduces. Of course, the impact will depend on your environment and on factors like table sizes and query types.

Also, if you use this method, remember that shrinking a data file generates transaction log records, so be careful if your database is the primary database in a log shipping configuration.

Pros

  • Database remains online, so users can continue accessing it.

Cons

  • It may take longer than the other methods.
  • It produces a lot of fragmentation on tables and indexes.
  • It is a logged operation, so if your database is in the FULL recovery model this method may not be suitable.
  • Not recommended when using SQL Server Log Shipping.
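
If you do go this route, you can keep an eye on the shrink's progress from another session. This query is a sketch built on sys.dm_exec_requests; it assumes the shrink is the only DBCC file operation currently running:

```sql
-- Check the progress of a running DBCC SHRINKFILE operation
SELECT session_id,
       command,
       percent_complete,
       estimated_completion_time / 1000 / 60 AS estimated_minutes_left
FROM sys.dm_exec_requests
WHERE command IN ('DbccFilesCompact', 'DbccSpaceReclaim');
GO
```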

Sample Code

1. First we create our sample Database

USE [master]
GO
CREATE DATABASE [TestDB]
 CONTAINMENT = NONE
 ON  PRIMARY 
( NAME = N'TestDB', FILENAME = N'E:\MSSQL\TestDB.mdf' , 
 SIZE = 4096KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB ), 
( NAME = N'TestDB_1', FILENAME = N'E:\MSSQL\TestDB_1.ndf' , 
 SIZE = 4096KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB ), 
( NAME = N'TestDB_2', FILENAME = N'E:\MSSQL\TestDB_2.ndf' , 
 SIZE = 4096KB , MAXSIZE = UNLIMITED, FILEGROWTH = 1024KB )
 LOG ON 
( NAME = N'TestDB_log', FILENAME = N'E:\MSSQL\TestDB.ldf' ,
 SIZE = 4096KB , MAXSIZE = 2048GB , FILEGROWTH = 1024KB)
GO
ALTER DATABASE [TestDB] SET RECOVERY SIMPLE 

2. This will be our test table

USE TestDB
GO
IF OBJECT_ID('dbo.SampleTable', 'U') IS NOT NULL
    DROP TABLE dbo.SampleTable
GO
CREATE TABLE dbo.SampleTable
    (
      ID INT IDENTITY(1, 1) ,
      Data DATETIME ,
      CONSTRAINT PK_SampleTable PRIMARY KEY CLUSTERED ( ID )
    )
GO

3. Now we insert some test data.

Change the number after GO to suit your needs; GO followed by a number repeats the batch that many times.

USE TestDB
GO
INSERT INTO dbo.SampleTable
        ( Data )
VALUES  ( GETDATE()
          )
GO 1000000

4. In this step we create the files on our new drive.

USE [master]
GO
ALTER DATABASE [TestDB]
 ADD FILE ( NAME = N'TestDB_1_New',
    FILENAME = N'G:\MSSQL\TestDB_1_New.ndf' ,
     SIZE = 4096KB , FILEGROWTH = 1024KB 
    ) TO FILEGROUP [PRIMARY]
GO
ALTER DATABASE [TestDB]
 ADD FILE ( NAME = N'TestDB_2_New',
    FILENAME = N'G:\MSSQL\TestDB_2_New.ndf' ,
     SIZE = 4096KB , FILEGROWTH = 1024KB 
    ) TO FILEGROUP [PRIMARY]
GO

5. We must change the autogrow settings of the old files so that when we move data it goes to the new files and not the old ones.

USE [master]
GO
ALTER DATABASE [TestDB] MODIFY FILE ( NAME = N'TestDB', FILEGROWTH = 0)
GO
ALTER DATABASE [TestDB] MODIFY FILE ( NAME = N'TestDB_1', FILEGROWTH = 0)
GO
ALTER DATABASE [TestDB] MODIFY FILE ( NAME = N'TestDB_2', FILEGROWTH = 0)
GO

6. It's time to move the data.

If your primary file is too big you may want to truncate its free space so the file can be moved more quickly. Also, when emptying the primary file you will receive an error message telling you that system table data can only reside in the primary file and cannot be moved; this is expected, and the user data is still moved out. Since our sample database is configured with the simple recovery model, to empty our transaction log we execute a CHECKPOINT and then truncate the log.

USE TestDB
GO
DBCC SHRINKFILE('TestDB_1', EMPTYFILE)
GO
DBCC SHRINKFILE('TestDB_1', TRUNCATEONLY)
GO
DBCC SHRINKFILE('TestDB_2', EMPTYFILE)
GO
DBCC SHRINKFILE('TestDB_2', TRUNCATEONLY)
GO
DBCC SHRINKFILE('TestDB', EMPTYFILE)
GO
DBCC SHRINKFILE('TestDB', TRUNCATEONLY)
GO
CHECKPOINT
DBCC SHRINKFILE('TestDB_log', TRUNCATEONLY)
GO

7. We put our database into the OFFLINE state so we can move the primary data file and transaction log. While the database is offline, copy these files to their new location at the operating system level.

USE master
GO
ALTER DATABASE TestDB SET OFFLINE WITH ROLLBACK IMMEDIATE

8. Now we update the primary file and log file locations in our database's metadata by executing ALTER DATABASE commands.

USE master
GO
ALTER DATABASE TestDB MODIFY FILE 
(
NAME = N'TestDB',
FILENAME = N'G:\MSSQL\TestDB.mdf'
);
ALTER DATABASE TestDB MODIFY FILE 
(
NAME = N'TestDB_log',
FILENAME = N'G:\MSSQL\TestDB.ldf'
);

9. We put our Database back into ONLINE state.

USE master
GO
ALTER DATABASE TestDB SET ONLINE

10. In this step we remove the old files.

USE [TestDB]
GO
ALTER DATABASE [TestDB] REMOVE FILE [TestDB_1]
GO
USE [TestDB]
GO
ALTER DATABASE [TestDB] REMOVE FILE [TestDB_2]
GO

11. This step renames the logical file names to match the originals. You can omit it if you don't want to keep the original logical names. Note that MODIFY FILE with NEWNAME changes only the logical name, not the physical file name.

USE [TestDB]
GO
ALTER DATABASE [TestDB] MODIFY FILE (NAME=N'TestDB_1_New', NEWNAME=N'TestDB_1')
GO
USE [TestDB]
GO
ALTER DATABASE [TestDB] MODIFY FILE (NAME=N'TestDB_2_New', NEWNAME=N'TestDB_2')
GO
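
Finally, as a sanity check (not part of the original steps), we can confirm that every file now points at the new drive by querying the file metadata:

```sql
USE [TestDB];
GO
-- All physical_name values should now reference the new drive
SELECT name, physical_name, state_desc
FROM sys.database_files;
GO
```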

About the author
Daniel Farina was born in Buenos Aires, Argentina. Self-educated, since childhood he has shown a passion for learning.



Comments For This Article




Monday, March 16, 2015 - 6:34:52 AM - Govind

It is better to use ROBOCOPY from the Windows command prompt after taking the SQL Server instance down. Once copying is over, reassign the same drive letter. The copying will take around 3 hours for 1 TB of data. There is no need to change the data file paths in SQL Server; just bring the SQL Server services back up.


Sunday, September 14, 2014 - 12:33:52 PM - Hugo Bernachea

Why not mirroring?


Thursday, August 21, 2014 - 5:30:06 PM - Jeremy Kadlec

Sri,

Thank you for the post.  We have updated the tip.

Thank you,
Jeremy Kadlec
Community Co-Leader


Thursday, August 21, 2014 - 4:28:52 PM - Sri

Thanks Daniel for yet another nice piece of article.

Just one correction though -- When you say "if your database is in single recovery model...." you actually mean "if your database is in simple recovery model......."...!!!


Wednesday, July 30, 2014 - 1:04:17 PM - Trinidad

I mean, What is the way to move big DB?



Friday, May 2, 2014 - 9:17:55 AM - Bernie Black

This was a great, informative article.  The comments were thought provoking.  I never thought of some of these methods.


Thursday, May 1, 2014 - 9:03:28 AM - Jeremy Kadlec

Shilipi,

Thank you for the feedback.  As the tip title states, the focus for this tip was to outline options to move a large database. 

I think Daniel provided a few different options and community members chimed in with other options related to replication, SAN tools and I believe there may be some additional techniques available which are specific to particular hardware/software vendors.

As is the case with a number of business\SQL Server problems, there are numerous ways to solve the problem.  As Mike states, the best solution for you is dependent on your environment. 

Although in these situations I have used log shipping, I think it is valuable to know what other options exist to select the best option based on the circumstances.

HTH.

Thank you,
Jeremy Kadlec

MSSQLTips.com Community Co-Leader


Thursday, May 1, 2014 - 1:14:15 AM - Michael Oberhardt

One method I've used several times is setting up a merge publication to the new location. When ready to cutover, do a final sync, drop the replication, and take the old DB offline.


Wednesday, April 30, 2014 - 7:14:48 PM - Daniel Farina

Shilpi,

I never stated that this is the best way to move a database; I just listed the available options with their pros and cons. As Mike Lindstrom wrote earlier, "It all depends on the tools available and having adequate test systems to find the best method". The code is for illustrative purposes only, and it is up to you as a DBA to choose the best way to move a database according to your environment.

Best Regards!


Wednesday, April 30, 2014 - 9:19:47 AM - Mike Lindstrom

This is all dependent on your infrastructure.  We just moved 2.5 TB of databases in less than 3 hours to a new SAN.  We utilized the SAN utilities to do block level copies.  The LS method is definitely the way to go, if you don't have to worry about time.  We attached the new SAN and presented the new LUNs to the server during one 15-minute downtime.  Then during the maintenance window, we shut down SQL Server, did a SAN to SAN block level copy over a 10GB connection, removed the old LUNs, assigned the new LUNs the same drive letters from the new SAN, then restarted SQL.  Total time including the first downtime was 2.5 hours.  It all depends on the tools available and having adequate test systems to find the best method.


Wednesday, April 30, 2014 - 7:47:19 AM - Shilpi Balan

I too vouch for the LS shipping method to move data. The author is taking us down the wrong path. I don't understand what he is trying to convey here.

Just that he did a move, or is it a better way of doing things? I hope the MSSQLTips moderators look into the kind of articles that are being published.


Monday, April 28, 2014 - 8:45:34 PM - Daniel Farina

Exactly! I wrote about the DBCC method just to give a use for the EMPTYFILE option.

Best Regards!


Monday, April 28, 2014 - 2:03:36 PM - jeff_yao

I think the author's way (i.e. using DBCC SHRINKFILE) is not recommended for a big database, especially one with a heavy workload. When you run DBCC SHRINKFILE it will cause big performance issues, such as blocking and huge I/O, and I think it may also generate a huge amount of transaction log.

It is far easier and more elegant to use the log shipping way.


Friday, April 25, 2014 - 9:12:28 AM - MH

"we needed to move a 2 TB database with a time frame of 15 minutes"

I think you mean downtime of 15 minutes, since you later state it took you 2 weeks to move the DB :)


Friday, April 18, 2014 - 10:08:04 PM - Krishna

The best way to reduce the downtime is by restoring the full backup and differential backups on the new volumes as a new database, then taking a log backup and restoring it during the downtime. Rename the old database, rename the restored database to the exact name, and fix the logins. You don't need to wait for a file copy or a long backup/restore time.

Friday, April 18, 2014 - 10:07:29 PM - Daniel Berber

For VLDBs I would normally go with LS, since downtime can be as short as just a couple of minutes. Set your LS to apply logs every 15 min, and on the day of the go-live just set your primary database to read only so no further transactions occur, take one final tlog backup using the LS backup job, copy it over to the secondary using the copy job, and restore that last tlog file with recovery. You are all set to go! ;)














