Problem
In my last tip, Enhancing SQL Server Backups for Hundreds of Databases, I created a single, central backup catalog representing all the full, log, and differential backups across our environment. I demonstrated how I use it to build restore statements for any given database for validating the backup, performing DBCC checks, or recovering data at a specific point in time. But can this data help us in other ways, too? Let’s see how we can use it to answer point-in-time restore questions and to enforce our retention policies.
Solution
Certainly. There are two primary questions I had in mind when designing the backup catalog:
- What are all the point-in-time restores currently possible for a given database?
- What files can be moved or deleted from the file shares to adhere to our retention policies?
Let’s dive into each one.
Question 1: What are all the point-in-time restores currently possible for a given database?
The scripts from Part 1 assume we already know a point in time and that we expect it to be possible. Sometimes, though, another team only wants to know what points in time might be possible. They may ask about a specific database, and we want to be able to tell them they can achieve a point-in-time restore anywhere from 13 days ago at 2:05 AM UTC until now.
I don’t know how I’d go about finding that range without the catalog, but the following query can help:
DECLARE @DatabaseName sysname = N'DB0101';
;WITH FullBackups AS
(
SELECT BackupFinish
FROM dbre.BackupCatalog
WHERE DatabaseName = @DatabaseName
AND BackupType = 'D'
GROUP BY BackupFinish
),
BackupPosts AS
(
SELECT BackupFinish, NextBackupFinish = LEAD(BackupFinish, 1)
OVER (ORDER BY BackupFinish)
FROM FullBackups
)
SELECT StartRange = bp.BackupFinish,
EndRange = MAX(rest.BackupFinish)
FROM BackupPosts AS bp
INNER JOIN dbre.BackupCatalog AS rest
ON rest.BackupFinish > bp.BackupFinish
AND rest.BackupFinish < COALESCE(bp.NextBackupFinish, getdate())
WHERE rest.DatabaseName = @DatabaseName
AND rest.BackupType IN ('I', 'L')
GROUP BY bp.BackupFinish
ORDER BY StartRange;

Example output:
| StartRange | EndRange |
|---|---|
| 2025-05-25 08:08:54.000 | 2025-06-01 08:02:03.000 |
| 2025-06-01 10:49:04.000 | 2025-06-02 14:02:06.000 |
| 2025-06-02 17:11:44.000 | 2025-06-09 02:26:10.000 |
| 2025-06-09 02:33:17.000 | 2025-06-11 20:44:06.000 |
If the other team asks for a point-in-time of 2025-06-09 14:37, we can certainly help them; however, if they ask for 2025-06-02 16:14, they’re out of luck, as we had a gap in coverage of about three hours.
Of course, this doesn’t validate that all the necessary files exist or that the log chain is intact, but it gives you a starting point. Note that, depending on your log backup frequency, there will always be some gap between the last log backup and the next full backup, where it will be impossible to achieve point-in-time recovery.
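When another team supplies a specific timestamp rather than asking for the whole range, the same catalog can answer yes or no directly. Here is a sketch of an existence check, assuming the same dbre.BackupCatalog schema as above; @RequestedTime is a hypothetical parameter, and the chain-intact caveat still applies:

```sql
DECLARE @DatabaseName sysname = N'DB0101',
        @RequestedTime datetime = '2025-06-09 14:37';

-- A point in time is reachable if some full backup finished before it,
-- some diff/log backup finished at or after it, and no newer full backup
-- falls between the two (which would start a different restore chain).
IF EXISTS
(
    SELECT 1
    FROM dbre.BackupCatalog AS f
    INNER JOIN dbre.BackupCatalog AS t
        ON t.DatabaseName = f.DatabaseName
       AND t.BackupType IN ('I', 'L')
       AND t.BackupFinish >= @RequestedTime
    WHERE f.DatabaseName = @DatabaseName
      AND f.BackupType = 'D'
      AND f.BackupFinish < @RequestedTime
      AND NOT EXISTS
      (
          SELECT 1
          FROM dbre.BackupCatalog AS g
          WHERE g.DatabaseName = f.DatabaseName
            AND g.BackupType = 'D'
            AND g.BackupFinish > f.BackupFinish
            AND g.BackupFinish < t.BackupFinish
      )
)
    PRINT 'Point-in-time restore appears possible.';
ELSE
    PRINT 'No coverage for that point in time.';
```

With the example data above, 2025-06-09 14:37 prints the first message, while 2025-06-02 16:14 falls into the coverage gap and prints the second.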
Question 2: What files can be moved or deleted from the file shares to adhere to our retention policies?
I hinted in Part 1 that our PowerShell maintenance scripts keep our backup file shares tidy by deleting files we no longer need and moving long-term files to cold storage. I’ll share our rules again:
- Keep full backups for the previous four weeks (after that, cold storage).
- Keep differential backups for the previous three weeks (after that, delete).
- Keep log backups for the previous two weeks (after that, delete).
This way, we can restore any database to:
- Any point in time in the past two weeks.
- Any “whole day” in the past three weeks.
- Any “whole week” in the past four weeks.
- Any “whole week” before that, with additional steps required to pull from cold storage.
Note that the logic must account for the passing of time, not just, for example, “the last two full backups,” since we could have taken ad hoc backups at any time, or changed the schedule so that two full backups fall in the same week. In our business context, “two weeks” means two Sundays prior to the most recent Sunday. For a database configured to run full backups on Fridays, that can mean roughly three weeks of files are kept under the two-week retention rule. Unless you’re running full backups every day, keep that in mind when calculating how much space you’ll need for a full retention cycle.
SQL View for Last Five Sundays of Backups
I created a view called LastFiveSundays so I didn’t have to repeat this logic wherever I was dealing with retention policies:
CREATE VIEW dbre.LastFiveSundays
AS
SELECT StartDate = DATEADD(WEEK, 0-value, d),
EndDate = DATEADD(WEEK, 1-value, d),
WeeksAgo = ROW_NUMBER() OVER (ORDER BY value)
FROM
(
SELECT d = DATEADD(DAY, 1-value, CONVERT(date,getdate()))
FROM GENERATE_SERIES(1,7)
) AS LastSevenDays
CROSS APPLY GENERATE_SERIES(1,5)
/* most recent Sunday regardless of culture/datefirst settings */
WHERE (DATEPART(WEEKDAY, d) + @@DATEFIRST) % 7 = 1;

And I could apply our retention rules against this view as follows:
SELECT StartDate, EndDate, WeeksAgo,
FullBackup = CASE WHEN WeeksAgo > 4 THEN 'move to cold' ELSE 'keep' END,
DiffBackup = CASE WHEN WeeksAgo > 3 THEN 'delete' ELSE 'keep' END,
LogBackup = CASE WHEN WeeksAgo > 2 THEN 'delete' ELSE 'keep' END
FROM dbre.LastFiveSundays;

Here’s the output when run during the week of June 8, 2025:
| StartDate | EndDate | WeeksAgo | FullBackup | DiffBackup | LogBackup |
|---|---|---|---|---|---|
| 2025-06-01 | 2025-06-08 | 1 | keep | keep | keep |
| 2025-05-25 | 2025-06-01 | 2 | keep | keep | keep |
| 2025-05-18 | 2025-05-25 | 3 | keep | keep | delete |
| 2025-05-11 | 2025-05-18 | 4 | keep | delete | delete |
| 2025-05-04 | 2025-05-11 | 5 | move to cold | delete | delete |
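Before acting on anything, it can be worth a quick sanity check of how many cataloged backups fall into each week bucket. This is a sketch against the same view and catalog; if your catalog stores a file size column, adding a SUM over it gives a space estimate as well:

```sql
-- How many cataloged backups fall into each retention bucket
SELECT bc.BackupType,
       r.WeeksAgo,
       FileCount = COUNT(*)
FROM dbre.BackupCatalog AS bc
INNER JOIN dbre.LastFiveSundays AS r
    ON bc.BackupFinish >= r.StartDate
   AND bc.BackupFinish <  r.EndDate
GROUP BY bc.BackupType, r.WeeksAgo
ORDER BY bc.BackupType, r.WeeksAgo;
```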
Then we can use that in a stored procedure that will retrieve the list of files to act on:
CREATE OR ALTER PROCEDURE dbre.Backup_FilesToActOn
@BatchSize int = 10000,
@BackupType char(1) = 'D',
@DatabaseName nvarchar(128) = NULL
AS
BEGIN
SET NOCOUNT ON;
;WITH Rules AS
(
SELECT StartDate, EndDate, WeeksAgo,
FullBackup = CASE WHEN WeeksAgo > 4 THEN 'move to cold' END,
DiffBackup = CASE WHEN WeeksAgo > 3 THEN 'delete' END,
LogBackup = CASE WHEN WeeksAgo > 2 THEN 'delete' END
FROM dbre.LastFiveSundays
),
Files AS
(
SELECT TOP (@BatchSize)
[Path] = bc.PhysicalDeviceName,
[Action] = CASE bc.BackupType
WHEN 'L' THEN r.LogBackup
WHEN 'I' THEN r.DiffBackup
WHEN 'D' THEN r.FullBackup
END
FROM dbre.BackupCatalog AS bc
INNER JOIN Rules AS r
ON bc.BackupFinish >= r.StartDate
AND bc.BackupFinish < r.EndDate
WHERE bc.BackupType = @BackupType
AND bc.DatabaseName = COALESCE(@DatabaseName, bc.DatabaseName)
ORDER BY bc.BackupFinish
)
SELECT [Path],
[Action]
FROM Files
WHERE [Action] IS NOT NULL;

This will produce output like the following (combined here across backup types; each call returns one type), which the PowerShell script can then use to delete the file and remove the catalog row, move the file to cold storage and update the row, or leave the file alone:
| Path | Action |
|---|---|
| \\fileshare1\backups\DB0101.DIFF.20250501.bak | delete |
| \\fileshare1\backups\DB0101.FULL.20250501.bak | move to cold |
| \\fileshare1\backups\DB0101.LOG.20250508.01.trn | delete |
| \\fileshare2\backups\DB0101.LOG.20250508.02.trn | delete |
| \\fileshare2\backups\DB0101.LOG.20250515.01.trn | delete |
I’m not sharing the PowerShell code because it is a simple loop around Remove-Item, and the only proprietary part, obtaining credentials to access a storage account, is private and unique to our environment anyway.
Conclusion
This is just a glimpse into how we manage backups at our scale. Many people have different requirements and want to automate different parts of their solution in different ways, and that’s okay. This is what’s working for us, and, at best, all I can hope is that this sparks some innovation in your process, too.
Next Steps
- Analyze your environment and the tools at your disposal, then ask yourself if you’d be prepared to perform a point-in-time restore of any database.
- Review the following tips and other resources:
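As a starting point for that self-assessment, the catalog itself can flag databases whose log chain has gone quiet. This sketch assumes every database in full recovery should have a log backup within the last 30 minutes; adjust the threshold to match your own RPO and backup schedule:

```sql
-- Databases whose most recent log backup is older than 30 minutes
SELECT DatabaseName,
       LastLogBackup = MAX(BackupFinish)
FROM dbre.BackupCatalog
WHERE BackupType = 'L'
GROUP BY DatabaseName
HAVING MAX(BackupFinish) < DATEADD(MINUTE, -30, getdate());
```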