I have a production database table that has important data. If there are any issues with this table I need a quick way to restore this data without having to restore the entire database. Let's say we have a table called TIPS_TABLE. We need to implement a solution to recover this data as quickly as possible to its previous state if there is an issue with the data in this table. We need to also keep as many changes performed on the other tables intact when we restore this table. What options does SQL Server offer for this problem?
Microsoft SQL Server has a large variety of database backup and restore functions: you can create full, partial, file, differential and transaction log backups. The selected recovery model significantly influences the backup strategy and therefore the available backup types. A backup and restore solution is one option, but there are other techniques as well.
Option 1: Restore to a Different Database
There is a simple solution which can be appealing, quick and easy. Restore your database to a different database and copy the required table from there to your database. You could either do a full restore or a point-in-time recovery restore. Unfortunately it is not always feasible to restore large databases to a different database because of disk space limitations or time, especially if you are working with a really large database.
Once the database has been restored, you can query across databases, use a linked server, use BCP or another option to move the data. Options to import the data are described in this tip. Note that this option will only restore the data; you should also think about triggers and foreign keys when you update the data.
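Once the backup has been restored (say, as a hypothetical database named YourDB_Copy), the repair itself can be a simple cross-database copy. A minimal sketch, assuming the table has no IDENTITY column and no foreign keys referencing it:

```sql
-- YourDB_Copy is an assumed name for the restored copy of the database.
-- Replace the damaged rows with the restored version inside one transaction.
BEGIN TRANSACTION;
DELETE FROM YourDB.dbo.TIPS_TABLE;
INSERT INTO YourDB.dbo.TIPS_TABLE
SELECT * FROM YourDB_Copy.dbo.TIPS_TABLE;
COMMIT TRANSACTION;
```

If the table has an IDENTITY column you would also need SET IDENTITY_INSERT and an explicit column list, and any triggers on the table will fire during the INSERT unless you disable them first.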
Option 2: Restore Log Using STOPAT
You might have in mind restoring the most recent database backup and rolling it forward to a point in time before the issue happened. You can do this using the STOPAT clause if your database is in the full recovery model (it also works under the bulk-logged model, as long as the log backup you stop in contains no bulk-logged operations). The commands below show how to restore the entire database to a specific point in time.
RESTORE DATABASE YourDB FROM YourDBBackups WITH FILE=3, NORECOVERY;
RESTORE LOG YourDB FROM YourDBBackups WITH FILE=4, NORECOVERY, STOPAT = 'Oct 22, 2012 02:00 AM';
RESTORE DATABASE YourDB WITH RECOVERY;
The main drawback to this option is that this will overwrite all of your data modifications for all tables which have taken place after the issue occurred. You should consider the amount of data loss acceptable for other tables and implement a backup strategy which suits all needs.
Option 3: Database Snapshot
Another option is to create a database snapshot. When an issue occurs on the table, you can fix its contents using the snapshot as a data source. The snapshot needs to have been set up before the issue occurs. This can be useful when your important table is not frequently updated, especially when it has planned, predictable weekly or monthly updates. If there are frequent and parallel updates on the table by different users then this might not be the optimal solution. Note that each time you make data updates on the table a new snapshot will need to be created.
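Creating the snapshot ahead of time and repairing from it might look like the following sketch. The file path and the logical file name (YourDB_Data) are assumptions; the logical name must match the one reported by sp_helpfile for YourDB:

```sql
-- Create the snapshot before the planned update (names and path are assumptions):
CREATE DATABASE YourDB_Snapshot
ON (NAME = YourDB_Data, FILENAME = 'C:\Snapshots\YourDB_Data.ss')
AS SNAPSHOT OF YourDB;

-- After an issue, repair the table by copying the rows back from the snapshot:
DELETE FROM YourDB.dbo.TIPS_TABLE;
INSERT INTO YourDB.dbo.TIPS_TABLE
SELECT * FROM YourDB_Snapshot.dbo.TIPS_TABLE;
```

Note that reverting the whole database to the snapshot with RESTORE DATABASE ... FROM DATABASE_SNAPSHOT would also undo the changes in every other table, so the per-table copy is usually preferable here.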
Option 4: Use a View
You can create a second database and move the TIPS_TABLE there. This database would then be backed up separately and when you need to recover the data you can just restore this much smaller database. To access the data from the original database, the easiest solution would be to create a view which selects all the data from all columns in the TIPS_TABLE. Please note that you will have to rename or drop the table in the original database before creating the view.
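Moving the table could be sketched as follows; the database and schema names are placeholders, and SELECT ... INTO will not carry over indexes, constraints or triggers, so those would need to be scripted separately:

```sql
-- Assumed sketch: create the dedicated database and move the table into it.
CREATE DATABASE YourNewDB;
GO
SELECT * INTO YourNewDB.dbo.TIPS_TABLE
FROM YourDB.dbo.TIPS_TABLE;
GO
DROP TABLE YourDB.dbo.TIPS_TABLE;
GO
```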
USE YourDB;
GO
CREATE VIEW TIPS_TABLE AS SELECT * FROM YourNewDB.YourSchema.TIPS_TABLE;
GO
With this option you can run SELECT, INSERT, UPDATE or DELETE statements against the view just as you would against the actual table. In case of an issue, simply restore the database which contains only your very important table using point-in-time recovery as described in Option 2. Keep in mind that the metadata of the view should be updated if the definition of the columns in TIPS_TABLE changes. You can use the sp_refreshview procedure for this task:
USE YourDB;
GO
EXECUTE sp_refreshview N'TIPS_TABLE';
GO
Option 5: Create a Synonym
Similar to Option 4, move the TIPS_TABLE to a second database. Now implement a synonym for this table in the original database:
USE YourDB;
GO
CREATE SYNONYM TIPS_TABLE FOR YourNewDB.YourSchema.TIPS_TABLE;
GO
The advantage over the previous solution is that you do not need to worry about a metadata update. One limitation of this approach is that you cannot reference a synonym in a DDL statement (such as ALTER TABLE), and a synonym cannot be referenced across a linked server using a four-part name.
Option 6: Save the Data Using BCP
You can set up a job to periodically export the data to a flat file using BCP. You can create a scheduled job using SQL Server Agent or use a trigger. The disadvantages are similar to Option 1. When there is an issue, you would import the data back into the table from the flat file.
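The export and re-import could look like the sketch below; the file path, server name and use of native format are assumptions:

```sql
-- Hypothetical export, run from a command prompt (not from T-SQL):
-- bcp YourDB.dbo.TIPS_TABLE out C:\Backup\TIPS_TABLE.dat -n -S YourServer -T

-- Re-import after clearing the damaged table:
TRUNCATE TABLE dbo.TIPS_TABLE;
BULK INSERT dbo.TIPS_TABLE
FROM 'C:\Backup\TIPS_TABLE.dat'
WITH (DATAFILETYPE = 'native');
```

If foreign keys reference the table, TRUNCATE TABLE will fail and a DELETE would have to be used instead.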
Comparison of Each Method

Restore to a Different Database
Pros: Quick and easy solution
Cons: Requires a lot of disk space; issues with triggers, foreign keys, etc.

Restore Log Using STOPAT
Pros: You can precisely define the point in time to roll back to
Cons: All data changes made after that point are lost from the entire database

Database Snapshot
Pros: Nice solution if the table is infrequently updated
Cons: The snapshot must already exist before the issue occurs; not suitable if several users update the table in parallel

Create a View
Pros: Separates the table from the other parts of the database; no data loss
Cons: Metadata should be refreshed when the table definition changes; additional maintenance tasks with the new DB

Create a Synonym
Pros: Separates the table from the other parts of the database; no data loss
Cons: Cannot reference a synonym in DDL statements or across a linked server; additional maintenance tasks with the new DB

Save Data Using BCP
Pros: You always have a dedicated backup of your table
Cons: Requires additional disk space; issues with triggers, foreign keys, etc.
I hope that this overview helped you understand the available options for a table level backup and restore. Every solution has its advantages and its disadvantages. You should decide which factors are most important in your shop and choose the option which best suits your needs.
Over the years, Microsoft has definitely left some holes in the backup and restore arena, and this is one of them. The solution not mentioned is to buy one of the 3rd party vendor backup solutions, which can pull a table out of a backup file.
Friday, November 30, 2012 - 10:22:33 AM - Tibor Nagy
While most of these ideas will work, the problem is that once the users or developers find out you can restore a table, they then believe or want you to restore *any* table. The options in this article are fine for one-off table restores when you only have one table to worry about but become very unwieldy when you are talking about dozens of tables.
The problem is that SQL Server does not have a way to natively restore tables, while most third party tools do so. Pretty much any SQL Backup product out there has the ability to restore tables and/or other objects from a recent full backup. Some are more efficient than others so YMMV.
Friday, November 30, 2012 - 8:56:46 AM - Cezar Mulotto
Wonderful article! One option not listed is to create a separate filegroup for the table and move the data to the new filegroup. Backups can also be done on the specific filegroups and restored very quickly without restoring the entire database.
Obviously this has to be done ahead of time. I believe that there are also some third-party tools, though quite expensive that can perform row level recovery via the transaction logs.
One problem with replication is that it is done in real time and thus the deleted rows in the primary database would also be removed from the replicated database.
These are some great options, but what about Replication? You can create a replication of just the TIPS_TABLE and replicate it to another database and to be really fault tolerant to another server. Since replication is near real-time it would always be updated without the need for log restores.
We use Replication not only for reporting, but for keeping reference tables updated across servers. This works out very well: update/insert into a table and the other servers are updated.