Configuring Snap Backups with SMSQL for NetApps and SQL Server


By: Scott Shaw   |   Read Comments (14)   |   Related Tips: More > Backup



Problem

I've installed SQL Server, SnapDrive, and SnapManager for SQL (SMSQL) and configured my server to use them. How should I configure backups?

 
Solution

This article assumes you've followed the basic steps to set up your server to use NetApp SnapDrive and SMSQL. If you haven't yet, refer to my previous article, which provides a high-level overview of things to consider before configuring backups: Considerations for Implementing SnapManager for SQL Server. If you didn't quite get it right the first time, don't fret: when you go through the steps to configure your backups, SMSQL will let you know whether or not all the puzzle pieces are in place. This article will not go through every step in the process; see the Next Steps section for step-by-step documentation.

Let's start with a couple of quick reminders. SQL Server must be installed prior to installing SMSQL! So, if you're like my company, where installing SQL Server and installing the storage software are handled by different teams, you'll need to coordinate these steps and make sure SMSQL is installed after SQL Server. You'll also need to make sure that SMSQL runs under an account with both Local Administrator and SA privileges. The SMSQL documentation recommends you run SMSQL with the SQL Server service account.
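You can sanity-check the SA requirement from SSMS before installing. A minimal sketch; the login name below is a placeholder for whatever account your SMSQL (and SQL Server) service runs under:

```sql
-- Returns 1 if the service account is in the sysadmin fixed server role,
-- 0 if not, NULL if the login doesn't exist.
-- Replace DOMAIN\svc_sql with your actual service account.
SELECT IS_SRVROLEMEMBER('sysadmin', N'DOMAIN\svc_sql') AS has_sa_privileges;
```

If this returns anything other than 1, add the account to the sysadmin role before running the SMSQL Configuration Wizard.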

Configuration Wizard

Now that that's out of the way, we can get down to the business of configuring our backups. The first thing you'll notice is that all backups are configured through the SMSQL console. The fact that there is no tie-in with SSMS will drive a DBA crazy, but keep in mind this is a storage application, not a SQL Server plug-in. Because of this you will need to install the software on a central management server. If you have more than one DBA in your shop, each DBA will need to log into that server via RDP or VNC to manage backups.

The console allows you to manage multiple instances of SQL Server, and it does a good job of listing the last backup, last restore, and last configuration for each instance. We currently manage about 30 instances in the console, and performance isn't bad, though certainly not stellar. By April we'll be managing hundreds, so stay tuned. Just like SSMS, you will first need to register the server. Once registered, you'll need to run the Configuration Wizard. One purpose of configuration is to make sure the data files are mapped to the correct LUNs. Whether you move the data files yourself or have SMSQL move them for you makes no difference: you will still need to run the Configuration Wizard before setting up backups. Here are some quick points about configuration:

  • You'll want to specify a verification server. If you choose to run DBCC CHECKDB commands or snapshot verification, this selection allows you to off-load the I/O to a dedicated server. Trust me, you do not want DBCC CHECKDB running on the server you're trying to snap.
  • We use a single SnapInfo directory. We've found this reduces complexity. You may use different directories if you have different retention, archiving, or recovery policies for databases on the same server.
  • The directory for your snapshots needs to be on its own LUN. Do not put any other files on this LUN. If you do, then you will potentially corrupt your snapshots and the directory will need to be recreated (we had a vendor put files on our snap directory because they assumed it was a directory for application files)!
  • SnapManager likes to do a lot of things before migrating databases: DBCC checks before and after migration, deleting copies of migrated databases, and updating statistics. If you are not migrating any databases, simply deselect all the options. Even if you are migrating, I tend to deselect them and do the verification and deletion manually. It's your call. Verifications can take a long time to complete for large databases, and we have separate weekly jobs for that task.
  • Click the Advanced button to get to the Advanced Notification Settings and be sure to check "Only send notification when operation fails". If you don't, you'll get an email every time a snapshot completes which, depending on your requirements, could be every half hour. Of course, you can leave it unchecked if you just feel like you don't get enough email during the day.
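The database verification step mentioned above is an ordinary DBCC CHECKDB run against the verification copy. A minimal sketch of what runs, using the SharePointWeb database from the example later in this tip; point it at the verification server's mounted copy, never the production instance being snapped:

```sql
-- Integrity check as performed during snapshot verification.
-- NO_INFOMSGS suppresses informational chatter; ALL_ERRORMSGS reports
-- every error per object rather than only the first 200.
DBCC CHECKDB (N'SharePointWeb') WITH NO_INFOMSGS, ALL_ERRORMSGS;
```

On an 800 GB database this is exactly the kind of long-running, I/O-heavy operation you want off-loaded to the dedicated verification server.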

Backup Configuration

Configuring backups involves another wizard. When selecting your databases, keep in mind a few points:
  • The databases you select to back up will be part of a single SQL job. This means if you have different scheduling requirements for different databases, you'll need to group them separately and run the wizard for each group.
  • Pay attention to which databases are sharing LUNs. If you have a user database on the same LUN as the system databases, you'll need to separate them. If any database shares a LUN with another database, then your restores will be streaming (the words "streaming" and "slow" are interchangeable).

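One way to see which databases share a drive (and therefore, typically, a LUN) is to query sys.master_files. This is just a sketch: the leading drive letter is only a reliable proxy for the LUN when each LUN is presented as its own drive letter rather than a mount point:

```sql
-- List every database file grouped by drive letter.
-- Databases whose files share a drive will restore together (streaming),
-- so they should usually be grouped into the same SMSQL backup job.
SELECT UPPER(LEFT(mf.physical_name, 1)) AS drive,
       DB_NAME(mf.database_id)          AS database_name,
       mf.physical_name
FROM sys.master_files AS mf
ORDER BY drive, database_name, mf.physical_name;
```

Any drive that shows more than one database name is a candidate for separation.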
Take a look at this example:

[Screenshot: SMSQL database selection showing the LUN layout; a user database shares a LUN with the system databases and must be separated.]

There are 3 LUNs and the C drive. Notice that the SharePointWeb database and the DBA database are on the same LUN. The SharePointWeb database is 800 GB and the DBA database is 100 MB. When I restore the SharePointWeb database snapshot, SMSQL will perform a streaming restore, which takes about 3 hours. If I were to move the database to its own LUN, the restore would take about 3 minutes. Moral of the story: do not share LUNs or volumes for large databases!

When configuring the backups, I would first check only the system databases master, model, and msdb (note that tempdb cannot be configured). This creates a job performing a streaming backup of just the system databases. I would then configure the DBA database and the SharePointWeb database in a separate backup job.

You can only back up databases that are on NetApp storage (notice the VDISK designation). If any of the LUNs were local, then they would be grayed out and could not be backed up using SMSQL.

Things to Consider

Here is a quick and dirty brain dump of things to consider when configuring SMSQL backups. Of course everything is based on individual needs and requirements. Your RPO (Recovery Point Objective) and RTO (Recovery Time Objective) may be vastly different from one system to another. Each requirement will mean a different policy.

SMSQL only allows a total of 255 snapshots. You can make the most of this limit with a configuration like the following:

  • Take hourly snapshots and retain them for 7 days.
  • Take daily snapshots and retain them for 14 days.
  • Take weekly snapshots and retain them for 364 days.
This equates to 234 snapshots. Your storage administrator may not love you for it, but this is what you might do if you are required to keep backups for a year. Keep in mind that these are all full backups, so there is still a large window for data loss. At our shop we keep two weeks of weekly backups, 7 days of daily, and 2 days of hourly. Each retention policy (hourly, daily, and weekly) is a separate SQL job; be sure they are scheduled to run at different times. Please note: we only take full backups! Log backups through SMSQL are always streamed, which makes hourly full backups just as efficient and useful as hourly log backups. If you require a recovery point objective of less than one hour, then you will likely need to configure transaction log backups.
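As a quick check, here is the arithmetic behind the 234-snapshot figure, worked as a T-SQL one-liner:

```sql
-- 7 days of hourly snapshots, 14 daily, and 52 weekly (364 / 7),
-- totaled against SMSQL's 255-snapshot limit.
SELECT 7 * 24                 AS hourly_snaps,   -- 168
       14                     AS daily_snaps,    -- 14
       364 / 7                AS weekly_snaps,   -- 52
       7 * 24 + 14 + 364 / 7  AS total_snaps;    -- 234, under the 255 limit
```

That leaves only 21 snapshots of headroom, which is why adding another tier (or a shared SnapInfo volume, as one commenter notes below) can blow past the limit.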
Next Steps





About the author
Scott Shaw is a Lead SQL Server DBA with extensive experience running SQL Server in a virtualized environment.

Comments

Tuesday, May 13, 2014 - 1:58:25 PM - Sandra Back To Top

Is anyone who uses SMSQL creating a separate volume for the SnapInfo directory? We're fighting our snapshot limit because for every full backup we get not just the snapshot copy for the databases but also the snapshot for SnapInfo, as a single volume contains both. The desire was a once-a-day backup retained for 30 days, an every-two-hour backup on the odd hours retained for 1 day, and an every-two-hour backup on the even hours retained for 2 weeks. It's this last one (154 database snaps plus 154 SnapInfo snaps) that blows our count out of the water. The solutions I see right now are: a separate volume for SnapInfo, less retention, or possibly going to a single daily full backup with the rest as log backups. Each has its downsides, although the first not so much for me as a DBA; more so for the SAN admin.


Wednesday, April 09, 2014 - 9:45:29 PM - Eric Back To Top

To Satya's question about restore and verification of databases using TDE: it's the same rules as if SnapManager for SQL didn't exist. As long as the same TDE certificate exists on both the production server and the verification server, you will be fine. We do it every day on our servers.


Wednesday, April 09, 2014 - 9:27:29 PM - Eric Back To Top

 

Great all-around post and comments. We have been a NetApp shop for about 8 years, and I, a SQL DBA, have learned a lot about the NetApp products. We do our backups very similarly to how many have posted. Our RPO is one hour, so we run hourly snapshots we retain for 48 hours and daily snapshots we retain for 7 days. I saw one of the posters mentioned keeping weekly snapshots for 1 year. That would require a tremendous amount of volume space, depending on the rate of data change.

 

One point I would like to mention when dividing up databases across LUNs concerns DR and testing. I am a firm believer in keeping the number of databases per LUN to no more than 30. On our Dev or QA systems I generally use a single LUN for all user databases, since their activity is low and a minor delay is no biggie. Going beyond that on a production server can cause the database freeze during the snapshot to last more than a few seconds, resulting in application timeouts or blocking. However, we are a SharePoint shop too. It's important that all 25+ databases that make up a full-blown SharePoint farm be consistent. For that reason I grouped our SharePoint databases on a single LUN per farm. This allows us to restore any farm's databases together in seconds. We can also FlexClone the snapshots, which are read/write copies, for Dev or DR testing without burning up additional storage or impacting backups or SnapMirror replication. This allows us to mount a LUN snapshot of any size to our Dev or QA servers in a matter of seconds. The big advantage in DR is that we recover the system databases to our SQL Server there, then attach all the production servers' LUNs (SnapMirror destinations or snapshots), and we have recovered and can test our failover.


Tuesday, April 01, 2014 - 12:08:01 PM - Chris Back To Top

Hi Scott,

I have just come across this post and I'm just after some clarification if that's ok.

We run NetApp C-Mode, which has its own errors with SMSQL. We run 15 min T-log, 6 hrly, daily, and weekly snapshots. When you say an hourly T-log backup does PITR, does this mean 15 min backups are a waste of time?

For the above we have 2 SQL jobs in SMSQL, a 15 min and a 6 hrly job; the 6 hrly job also runs as the daily and weekly jobs, just renamed to snapshot_daily and snapshot_weekly and kept for the retention period.
From what I read above you would suggest 4 backup jobs in SMSQL for this.  Why?

I am not a SQL bod so I leave that up to the DBAs, but maybe they are not telling storage how to correctly configure the backups.

Thanks,

Chris
SnapDrive 6.4.1, 7.0.1
SMSQL 6.0 and 7.0 
Windows 2003, 2008R2
SQL 2008, 2012 


Friday, November 01, 2013 - 2:54:45 PM - Scott Back To Top

Satya,

 

I wish I had an answer for you, but I don't. I've heard that there have been some enhancements to the tool since I published this article, but I haven't had an opportunity to check.

Good luck!

Scott


Tuesday, October 29, 2013 - 3:22:20 AM - Satya Back To Top

 

Hi Scott,

 

How do the restore and verification of SQL Server backups taken by SnapManager work if we enable TDE on a database? Could you please explain?

 

Thanks,
Satya 


Monday, July 18, 2011 - 10:26:11 AM - Interesting idea Back To Top

+1 to full backups, however reasonably frequent, not allowing PITR to a fraction of a second (much less to a STOPATMARK point between databases).

Are these guaranteed to be transactionally consistent, all the time?  Do they need to do massive rollback/forwards for long running transactions upon restore?

I would also look into how you would restore a "backup" if your primary LUN spindles go up in flames, the disk trays are dropped on the floor after someone started moving the wrong tray, or they are stolen and no longer on site. I don't know NetApp, but many "snap" setups record only differences and require the original baseline to be useful; that's more an archive than a backup.


Monday, March 14, 2011 - 6:29:23 AM - Robert Kok Back To Top

Scott,

Thanks for the clarification. The full backup verification takes 30 minutes and has a serious performance impact on the server. I can schedule this at night, because (for now) the bulk of our users only use the system during business hours.

I will discuss with our information officer whether verification is a true business need, to determine if remote verification during business hours is required.

Robert


Friday, March 11, 2011 - 11:15:35 AM - Scott Shaw Back To Top

Robert, thanks for the comments. 

In response to your last question, it really is a personal choice. We do not verify our backups (I know, I know, this is probably not what a DBA should be saying), but we do run nightly integrity checks on our databases. We keep 3 days of hourly backups and 5 days of daily backups, so I've been fairly confident that we will capture any integrity problems before we lose backups.

Still, with that being said, if the system is truly critical I still wouldn't necessarily verify every hourly backup, but only the nightly one. I strongly recommend offloading the verification to a dedicated server. If verification is a true business need, then your storage and server teams should be able to quickly present you with a dedicated VM just for the purpose of verifying your backups. I haven't seen any numbers on how badly verification stresses a server; this might be a good topic to look into and test.

I hope this helps and let me know if you have any further questions. As always - good luck!

Scott


Friday, March 11, 2011 - 10:11:29 AM - Robert Kok Back To Top

Great post.

We're a small outfit using two filers (one for our main site and one for our backup site) and two SQL Servers (production and test/accept).

Our line-of-business vendor only supports SQL 2005 and 2008 R2, but not the 2008 we're currently using with SnapDrive 6.1/SMSQL 5.0. I struggled to get this working on SQL 2005 until I found that SnapDrive 6.1/SMSQL 5.0 doesn't support SQL 2005 SP3 or higher. After upgrading to SnapDrive 6.3 and SMSQL 5.1, it now works with SQL 2005 SP4/CU2.

I'll follow your advice to implement hourly full backups and put the database in simple mode. This will save me a lot of time figuring out how to implement and verify transaction log backups.

I have an additional question: how trustworthy are full backups during business hours, when you can't use your production server to verify them? If I use our test/accept server for remote verification, it will be unusable while I perform this action. Or should I use transaction log backups to avoid this?

 


Thursday, February 03, 2011 - 3:30:41 PM - Scott Shaw Back To Top

Thanks for the clarification, and you're absolutely correct. The hourly backups are full backups, and you would want to implement them this way only if you don't need PITR within the hour and you are willing to put your databases in simple mode. The benefit is that you are taking hourly snapshots, which will back up and restore much quicker than streaming log backups. All transaction log backups are streaming in SMSQL.


Thursday, February 03, 2011 - 2:06:39 PM - T-Log Maintenance Back To Top
Note that for FULL or BULK LOGGED recovery mode databases, it is required to take transaction log backups in order to clear out the transaction log; otherwise it will continually grow. Also note that T-log backups can reasonably be taken every 5, 10, or 15 minutes, not just hourly; and even hourly T-log backups allow true PITR (to any point in time within the hour), while hourly full backups only allow restoring to within an hour of when is desired.

Wednesday, February 02, 2011 - 7:55:13 AM - Scott Shaw Back To Top

Jason,

 

I feel your pain. There are lots of issues between versions; it all depends on which version of SnapDrive and which version of SMSQL you are running. Matters are worse if you are using RDM, since older versions don't fully support RDM. Earlier versions also do not appear to support local disks for streaming backups. We keep system databases on C and had to upgrade SnapDrive so it sees them as VMDKs.

We are using SMSQL v5.1 and SnapDrive v6.3. Problems still occur and daily service restarts are not uncommon. I'll be the first to tell you that NetApp has some work to do.  Still, my advice is to keep up with the latest and greatest. The hardware and VM software technology is moving fast and things can quickly get outdated.

Send me your errors or post them on my blog http://blogofshaw.blogspot.com/ and I'll be happy to get you some answers. We also use SME, but that's not my area; I can always talk to our Exchange folks.

Scott


Wednesday, February 02, 2011 - 6:13:59 AM - Jason Back To Top
Another great post. One thing happening now in my company is we're having awful problems with SnapDrive, SMSQL, and VMware 4.1. TEST everything. We upgraded from VMware 4.0 to 4.1 due to issues we were having with a virtual switch, and now this has caused major issues with SMSQL, even with the latest patched versions of SnapDrive and SMSQL. From what I can glean, VMware made some undocumented changes in 4.1 that broke some things in the NetApp tooling. With 11 SQL Servers and 16 instances on VMs, we're getting really close to pulling the plug on SnapDrive, SMSQL, and SME (SnapManager for Exchange).
