Problem
Many of our SQL Server releases include data changes, not just code changes. In some respects the data changes are harder to manage than the code changes, because code changes can easily be isolated and rolled back as needed. If a data change adds a simple lookup value to a table and updates particular records, we can do some detective work and trace the data back; it takes time, but it is typically possible. Unfortunately, when we change data by a percentage or make numerous changes in a table, those changes are much more difficult to trace back, because rolling back a percentage is not as precise as we require. Thus far our SQL Server rollback plan has been to simply restore a preliminary SQL Server database backup, but this is a time-consuming proposition if we have only one small issue. Can you offer a better approach to isolate the SQL Server data changes and roll back only specific data?
Solution
Preparing for a SQL Server rollback is in many respects just as important as building and testing the implementation scripts. The rollback code is truly an insurance policy in case an issue occurs. Let's outline some of the options available from a rollback perspective.
Option 1 - Full Database Backup
Oftentimes, the rollback plan is based on a full database backup issued prior to the implementation. In many respects this is an all-encompassing means to ensure the rollback is accurate, because it restores the database to a pre-implementation point in time. In general, this approach is best if numerous dependent changes are made, if no time is available to build other scripts, or if an application is making the changes as opposed to T-SQL scripts.
The downside of using a preliminary full backup as the rollback plan is that it is an all-or-nothing proposition; you would figuratively take one step forward and two steps back. If a small issue arises, it cannot be corrected without reverting all of the changes. Depending on the situation, this may or may not be acceptable.
Option 2 - Backup Tables
A more granular approach is to back up the original data on a per-table basis to a separate table. If the original data is needed for rollback purposes, then roll back only the needed data. Here are the general steps:
Create Backup Table - Script out the original table
Add columns to the script to manage the original data
Change table name in the script to reflect it is a backup table
Populate Backup Table - Insert the data into the backup table
Data Changes - Update the base table
Rollback - If applicable, roll back the base table to the original data
Drop Backup Table - Once the data is no longer needed, drop the backup table
Here is an example to demonstrate the rollback script technique with the AdventureWorks sample SQL Server 2005 database:
Step 1 - Create the backup table with administrative columns
USE [AdventureWorks]
GO

CREATE TABLE [dbo].[zBck_Product_20071220](
    [zBPID] [int] IDENTITY(1,1) NOT NULL,
    [ProductID] [int] NOT NULL,
    [Name] [dbo].[Name] NOT NULL,
    [ProductNumber] [nvarchar](25) COLLATE Latin1_General_CI_AS NOT NULL,
    [StandardCost] [money] NOT NULL,
    [ListPrice] [money] NOT NULL,
    [ModifiedDate] [datetime] NOT NULL,
    [DateInserted] [datetime] NOT NULL,
    [InsertedBy] [nvarchar](25) COLLATE Latin1_General_CI_AS NOT NULL,
    [BatchName] [nvarchar](25) COLLATE Latin1_General_CI_AS NOT NULL,
    CONSTRAINT [PK_zBck_Product_20071220_ProductID] PRIMARY KEY CLUSTERED ([ProductID] ASC)
        WITH (PAD_INDEX = OFF, IGNORE_DUP_KEY = OFF) ON [PRIMARY]
) ON [PRIMARY]
GO
Step 2 - Issue an INSERT...SELECT statement to populate the backup table
USE [AdventureWorks]
GO

INSERT INTO [dbo].[zBck_Product_20071220]
    ([ProductID], [Name], [ProductNumber], [StandardCost], [ListPrice],
     [ModifiedDate], [DateInserted], [InsertedBy], [BatchName])
SELECT [ProductID], [Name], [ProductNumber], [StandardCost], [ListPrice],
    [ModifiedDate], GETDATE(), SUSER_SNAME(), 'Batch1'
FROM [Production].[Product]
WHERE [MakeFlag] = 1
GO
Step 3 - Production data changes
USE [AdventureWorks]
GO

UPDATE P
SET P.[StandardCost] = P.[StandardCost] * 1.07,
    P.[ListPrice] = P.[ListPrice] * 1.07
FROM [Production].[Product] P
WHERE P.[MakeFlag] = 1
GO
Step 4 - If needed, rollback the data to its original state
USE [AdventureWorks]
GO

UPDATE P
SET P.[StandardCost] = B.[StandardCost],
    P.[ListPrice] = B.[ListPrice]
FROM [Production].[Product] P
INNER JOIN [dbo].[zBck_Product_20071220] B
    ON P.[ProductID] = B.[ProductID]
WHERE P.[MakeFlag] = 1
GO
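After the rollback runs, it is worth confirming that the base table truly matches the backed-up values before declaring victory. A minimal verification sketch, reusing the table names from the example above (EXCEPT is available in SQL Server 2005 and later):

```sql
-- Verification: returns zero rows if every backed-up row now matches
-- the base table again; any rows returned are still out of sync.
SELECT B.[ProductID], B.[StandardCost], B.[ListPrice]
FROM [dbo].[zBck_Product_20071220] B
EXCEPT
SELECT P.[ProductID], P.[StandardCost], P.[ListPrice]
FROM [Production].[Product] P
WHERE P.[MakeFlag] = 1
GO
```

Running this check before dropping the backup table in Step 5 provides a simple, auditable sign-off that the rollback completed.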
Step 5 - Once the backup data is not needed, drop the backup table
USE [AdventureWorks]
GO

DROP TABLE [dbo].[zBck_Product_20071220]
GO
Although this option offers a great deal of flexibility, depending on the volume of data changes, the available storage and/or the retention period for the data, this approach may not be viable. In addition, building these scripts takes some planning and foresight so they are ready if a rollback scenario occurs. This level of effort could also be considered excessive if the data and code changes are rigorously tested prior to deployment.
Option 3 - Generate Scripts
Although it is possible to generate scripts for objects via SQL Server 2005 Management Studio, the same option is not available for data. The closest alternative is to issue a query in SQL Server 2005 Management Studio and save the results to a file, or to build a SQL Server 2005 Integration Services (SSIS) package which pushes the data to files.
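One lightweight variation on this idea is to have a query generate the rollback statements itself, then save the text output to a file before the change is deployed. The sketch below is illustrative only, reusing the Product columns from the earlier example; the CAST-based string building is an assumption of this sketch, and quoting and NULL handling would need hardening for real use:

```sql
-- Generate one rollback UPDATE statement per row about to change.
-- Run in Management Studio with results-to-text and save the output to a file.
SELECT 'UPDATE [Production].[Product] SET [StandardCost] = '
    + CAST([StandardCost] AS varchar(20))
    + ', [ListPrice] = ' + CAST([ListPrice] AS varchar(20))
    + ' WHERE [ProductID] = ' + CAST([ProductID] AS varchar(10)) + ';'
FROM [Production].[Product]
WHERE [MakeFlag] = 1
GO
```

The resulting file is a self-contained rollback script that captures the pre-change values without requiring a backup table to remain in the database.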
Option 4 - Data Comparison Tools
A final option to consider is third party data and object comparison tools. These tools may work in a pinch if you have multiple systems with the same versions of the data. What they provide is a mechanism to identify the differences between databases (data or objects) and then synchronize the items. In a deployment scenario, the code would be applied to one database and withheld from another. If an issue arises, the data and objects from the system without the new code changes can be used to re-sync all or a portion of the system that was changed.
Another approach is an emerging set of tools that can package and deploy code in parallel. Some of these tools also offer a mechanism to roll back the data and code changes. Depending on your deployment needs, these tools can drastically improve both the implementation and rollback processes, so check them out!
As you roll out data or code changes, a rollback plan can prove even more valuable than the implementation plan if an issue arises. Having to scramble to roll back the code or the data is not a pleasant task, because one step forward has resulted in two steps back.
Depending on the scope of the SQL Server changes in the rollout and the dependencies between them, issuing a full database backup and relying on it as the rollback mechanism is not necessarily a bad plan. The technique outlined in this tip is an alternative that provides finer-grained flexibility to recover the data.
Be sure to test your rollback code as you would your implementation scripts. Keep in mind that if you do face an issue, the rollback scripts could be a life saver.
Another item to keep in mind as you build your implementation scripts is how you would actually perform the rollback. Although only UPDATE statements are outlined in this tip, you may also need to roll back INSERT or DELETE statements.
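The backup-table pattern from Option 2 extends naturally to those cases. A hedged sketch, reusing the example backup table: rolling back an INSERT means deleting rows that were not present at backup time, and rolling back a DELETE means re-inserting rows from the backup. Note that Product has additional NOT NULL columns beyond those captured in the example backup table, so the re-insert below would need those columns added before it could run against the real AdventureWorks schema:

```sql
-- Rollback for an INSERT: remove rows that did not exist at backup time.
DELETE P
FROM [Production].[Product] P
WHERE P.[MakeFlag] = 1
  AND NOT EXISTS (SELECT 1 FROM [dbo].[zBck_Product_20071220] B
                  WHERE B.[ProductID] = P.[ProductID])
GO

-- Rollback for a DELETE: re-insert rows present in the backup but missing now.
-- ProductID is an identity column, so IDENTITY_INSERT must be enabled.
SET IDENTITY_INSERT [Production].[Product] ON
INSERT INTO [Production].[Product]
    ([ProductID], [Name], [ProductNumber], [StandardCost], [ListPrice], [ModifiedDate])
SELECT B.[ProductID], B.[Name], B.[ProductNumber],
    B.[StandardCost], B.[ListPrice], B.[ModifiedDate]
FROM [dbo].[zBck_Product_20071220] B
WHERE NOT EXISTS (SELECT 1 FROM [Production].[Product] P
                  WHERE P.[ProductID] = B.[ProductID])
SET IDENTITY_INSERT [Production].[Product] OFF
GO
```

The practical takeaway is that the backup table should capture every column needed to reconstruct a full row, not just the columns being changed, if DELETEs are in scope for the release.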
Along the same lines, it might make sense to outline the ground rules for a rollback during the testing phase. If an issue arises during the deployment, in some environments it may make more sense to correct the issue rather than roll back the changes; in others, it is absolutely necessary to roll back regardless of the issue. So be the judge of how it should be handled in your SQL Server environment.
Last Update: 12/20/2007
About the author
Jeremy Kadlec is the CTO @ Edgewood Solutions, co-founder of MSSQLTips.com and Baltimore SSUG co-leader.