Thank you for your article on SnapManager for SQL.
Our company has also centralized storage using NetApp appliances. I definitely agree that there is a learning curve for the DBA, but mostly in the area of backup and recovery. Also, when properly configured, the DBA does not need to be nearly as concerned about how to configure drives and where to place database files, such as separating data, index, log, system, and tempdb files to optimize performance.
One point I would like to clarify is that if you utilize mount points, you do not need separate drive letters for each LUN. We have over 350 user databases in an active-active cluster, where all of our databases are mounted on shared drives (one per server) and use mount points (folders/sub-directories) within each drive to represent each of the LUNs in use. The actual recommendation is no more than 35 databases per LUN (not 25), so we follow this approach for our smaller databases and have separate LUNs for each of the larger databases. There are also separate folders (mount points) for the system databases, tempdb, and SnapInfo (snapshots).
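To illustrate the idea, a database placed under a mount point uses an ordinary folder path on the shared drive rather than a dedicated drive letter. The drive letter, folder names, database name, and sizes below are purely hypothetical, not our actual layout:

```sql
-- Hypothetical layout: G: is the shared cluster drive; each sub-folder is a
-- mount point backed by its own LUN. All names and sizes are illustrative.
CREATE DATABASE SalesDB
ON PRIMARY
    ( NAME = SalesDB_data,
      FILENAME = 'G:\MP_Data01\SalesDB\SalesDB.mdf',  -- data LUN mount point
      SIZE = 500MB )
LOG ON
    ( NAME = SalesDB_log,
      FILENAME = 'G:\MP_Log01\SalesDB\SalesDB.ldf',   -- log LUN mount point
      SIZE = 100MB );
```

The point is simply that SQL Server treats the mount-point path like any other folder, so dozens of LUNs can live under a single drive letter.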
Another correction to the article is that you are not required to have separate LUNs for data and log files if you do not use the full recovery model. Due to the nature of our business, we are able to use the simple recovery model, so we have our data files and log files on the same LUN, and it works perfectly.
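For reference, switching a database to the simple recovery model is a one-line change (the database name here is illustrative):

```sql
-- Switch a database to the simple recovery model (name is illustrative).
ALTER DATABASE SalesDB SET RECOVERY SIMPLE;

-- Verify the current recovery model:
SELECT name, recovery_model_desc
FROM sys.databases
WHERE name = 'SalesDB';
```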
There are two NetApp features that are a tremendous benefit: FlexClone and SnapMirror. We use FlexClone daily to create our test environment, producing an exact duplicate of all of our production databases. This process only takes around 15-20 minutes, remembering that there are over 350 databases (several terabytes) on 2 servers! Our developers and QA team absolutely love this, and it is probably the easiest process I have come across for doing this. As for SnapMirror, we use it to replicate our snapshots to our DR site.
In our environment, one of the few negatives I have found in all this is that SnapManager for SQL does quite a bit of "housekeeping". The actual snapshot of the volume is very fast, as Mr. Shaw has indicated (usually a few seconds, regardless of database size), but SMSQL does other things as well, such as checking licenses prior to each snapshot and cleaning up old snapshots. So if you have a large number of databases like we do, the whole process takes quite a while to complete. At first this bothered me, but once it sank in that the actual snapshots complete so quickly, I felt more comfortable with how it works.
You actually do not need to create a maintenance plan to back up your system databases. SMSQL can back these up as part of the normal process, but instead of taking snapshots it performs a streamed backup, which is just an ordinary native backup. Nothing special has to be done if your snapshot process is configured to snapshot all databases. This is how our servers are set up, and it works flawlessly. The streamed backups produce ordinary backup files, which are located in the same SnapInfo folder (LUN) as all the other snapshots.
One last point I would like to make is that in my configuration, I have two SQL Agent jobs set up to perform snapshots, each with slightly different parameters. One job snapshots databases every 2 hours and cleans up snapshots older than 1 day. The other job snapshots user databases at midnight and retains snapshots for 3 days. Each job has its own naming scheme (called a "management group" in SMSQL) to help differentiate between them at the folder level. With this configuration, we have local snapshots on our servers from every 2 hours for the past day, plus the past 3 nights. The snapshots require a fraction of the space of traditional backups, so you could actually retain them locally much longer if desired. And as business requirements dictate, you may also want to copy them to long-term storage such as tape.
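The two-job setup above can be sketched with SQL Agent's msdb procedures. The job names are made up for the example, and I have omitted the job steps themselves: the actual command in each step would be the SMSQL backup invocation for the matching management group, which is generated through the SnapManager configuration tools rather than written by hand.

```sql
-- Sketch of the two snapshot jobs described above. Job names are
-- illustrative; the SMSQL backup command for each job step is omitted.
USE msdb;

-- Job 1: snapshot every 2 hours (retention handled by SMSQL parameters).
EXEC dbo.sp_add_job @job_name = N'SMSQL - 2 Hour Snapshots';
EXEC dbo.sp_add_jobschedule
    @job_name = N'SMSQL - 2 Hour Snapshots',
    @name = N'Every 2 hours',
    @freq_type = 4,              -- daily
    @freq_interval = 1,
    @freq_subday_type = 8,       -- repeat in units of hours
    @freq_subday_interval = 2;   -- every 2 hours

-- Job 2: snapshot user databases nightly at midnight.
EXEC dbo.sp_add_job @job_name = N'SMSQL - Nightly Snapshots';
EXEC dbo.sp_add_jobschedule
    @job_name = N'SMSQL - Nightly Snapshots',
    @name = N'Midnight',
    @freq_type = 4,              -- daily
    @freq_interval = 1,
    @active_start_time = 000000; -- midnight
```

In practice the jobs are usually created by the SMSQL scheduling wizard; the sketch is only meant to show how the two schedules differ.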