Dynamically Backing Up and Compressing All User Databases in SQL Server
Managing backups across a SQL Server instance with many user databases is one of those operational challenges that sounds simple until you're doing it manually. The standard approach - creating individual maintenance plan tasks for each database - breaks down quickly when databases are being created and removed regularly. Every new database needs to be added to the plan. Every dropped database leaves a failing job step. It's maintenance overhead that adds up.
The better approach is a dynamic backup procedure that discovers and backs up all user databases automatically, with backup compression enabled. You create a database, it gets backed up. You drop a database, nothing breaks. The procedure handles it without any manual intervention.
This article walks through a stored procedure that does exactly that, explains how it works, and covers the key considerations before you deploy it in production.
Why Use a Dynamic Backup Procedure?
SQL Server maintenance plans are a reasonable starting point, but they have real limitations in environments where the database count changes frequently. A SaaS platform running per-tenant databases, a development server with constantly changing test databases, or any environment where DBAs aren't the only people creating databases - these are all scenarios where static maintenance plans create ongoing administrative work.
A dynamic backup procedure solves this by querying sys.databases at runtime rather than relying on a hardcoded list. Every time the job runs, it gets a fresh list of databases and backs them all up. No plan updates required.
Backup compression is included here because, on any modern SQL Server instance, there's almost no reason not to use it. Compressed backups are typically 60-70% smaller than uncompressed backups, and they often complete faster because the reduced I/O outweighs the CPU overhead. This has been available since SQL Server 2008, and it's enabled by default at the instance level from SQL Server 2022 onwards.
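You can check and, if you like, flip the instance-level default yourself. This is a minimal sketch using the documented sp_configure option name; on SQL Server 2022+ the default is already 1:

```sql
-- Check the current instance-wide setting (run_value: 0 = off, 1 = on)
EXEC sp_configure 'backup compression default';

-- Enable compression as the instance default, so backups compress
-- even when the BACKUP statement omits the COMPRESSION option
EXEC sp_configure 'backup compression default', 1;
RECONFIGURE;
```

An explicit COMPRESSION clause in the backup command, as used in the procedure below, overrides this instance default either way, which is why the procedure specifies it regardless.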
How the Procedure Works
The procedure uses a FAST_FORWARD cursor to iterate through user databases and execute a BACKUP DATABASE command for each one. Dynamic SQL builds the full backup statement, including the file path and compression options.
Here's the core procedure:
CREATE PROCEDURE [dbo].[BackupAllUserDatabases]
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @CompleteCommand nvarchar(1000);
    DECLARE @PartOne nvarchar(20) = N'BACKUP DATABASE [';
    DECLARE @PartTwo nvarchar(20) = N'] TO DISK = ';
    DECLARE @FilePathPrefix nvarchar(50) = N'''D:\SQLBackups\';
    DECLARE @FilePathSuffix nvarchar(30) = N'_FullCompressed.bak''';
    DECLARE @PartThree nvarchar(100) = N' WITH NOFORMAT, INIT, SKIP, NOREWIND, NOUNLOAD, COMPRESSION, STATS = 10;';
    DECLARE @databaseName sysname;

    DECLARE curDatabaseList CURSOR FAST_FORWARD FOR
        SELECT db.[name] AS [DatabaseName]
        FROM sys.databases AS db
        WHERE db.database_id > 4
          AND db.state_desc = 'ONLINE';

    OPEN curDatabaseList;
    FETCH NEXT FROM curDatabaseList INTO @databaseName;

    WHILE @@FETCH_STATUS = 0
    BEGIN
        -- Build the dynamic SQL backup command
        SET @CompleteCommand = @PartOne + @databaseName + @PartTwo + @FilePathPrefix;
        SET @CompleteCommand = @CompleteCommand + @databaseName + @FilePathSuffix + @PartThree;
        PRINT @CompleteCommand;

        -- Execute the backup
        EXECUTE (@CompleteCommand);

        FETCH NEXT FROM curDatabaseList INTO @databaseName;
    END

    CLOSE curDatabaseList;
    DEALLOCATE curDatabaseList;
END
A few things worth noting about the implementation:
The database_id > 4 filter excludes the four system databases: master, model, msdb, and tempdb. System databases should be backed up separately with their own schedules and retention policies. Lumping them in with user databases creates confusion and complicates restores.
The state_desc = 'ONLINE' filter is important. Without it, the procedure will attempt to back up databases in RESTORING, OFFLINE, SUSPECT, or RECOVERY_PENDING states, and those attempts will fail with errors. Adding this filter keeps the procedure clean and avoids noisy job failures that can mask real problems.
The INIT option overwrites any existing backup sets in the file. Each run creates a fresh backup file rather than appending to an existing one. If you need to retain multiple backup sets per file, change this to NOINIT, but be aware that file sizes will grow with each run.
STATS = 10 prints progress messages every 10% of completion. This is useful for monitoring long-running backups in the SQL Server Agent job history.
What to Watch Out For
Before deploying this procedure, there are several practical considerations.
File naming and overwrite behaviour. The current implementation writes each database to a file named <DatabaseName>_FullCompressed.bak in D:\SQLBackups\. Every run overwrites the previous file. That's fine if you're running this as a daily job and your backup retention is handled at the storage level, but it means you have exactly one backup file per database at any given time. For environments that need multiple restore points, you'll want to incorporate a date/time stamp into the filename.
Here's a modified version that adds a timestamp to the backup filename:
DECLARE @Timestamp nvarchar(20) = CONVERT(nvarchar(20), GETDATE(), 112)
    + '_'
    + REPLACE(CONVERT(nvarchar(8), GETDATE(), 108), ':', '');

SET @CompleteCommand = N'BACKUP DATABASE [' + @databaseName
    + N'] TO DISK = ''D:\SQLBackups\'
    + @databaseName + N'_' + @Timestamp
    + N'_Full.bak'' WITH NOFORMAT, INIT, COMPRESSION, STATS = 10;';
Recovery model filtering. The original script was written for simple recovery model databases. If your instance has a mix of recovery models, this procedure will still work - a full backup is valid regardless of recovery model. But if you want to target only simple recovery model databases specifically, add AND recovery_model_desc = 'SIMPLE' to the cursor query.
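With that filter added, the cursor declaration would look like this:

```sql
DECLARE curDatabaseList CURSOR FAST_FORWARD FOR
    SELECT db.[name] AS [DatabaseName]
    FROM sys.databases AS db
    WHERE db.database_id > 4
      AND db.state_desc = 'ONLINE'
      AND db.recovery_model_desc = 'SIMPLE';  -- only simple recovery model databases
```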
Backup destination path. The path D:\SQLBackups\ must exist before the procedure runs. SQL Server will not create the directory automatically, and the backup will fail with an error if the path doesn't exist. Validate this as part of your deployment checklist.
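One way to cover this in a deployment script is the xp_create_subdir extended stored procedure, which SQL Server's own maintenance plans use. It is long-standing but undocumented, so treat this as a sketch and test it in your environment first:

```sql
-- Creates the directory if it doesn't exist; a no-op if it already does.
-- The SQL Server service account needs filesystem rights to the parent folder.
EXEC master.sys.xp_create_subdir N'D:\SQLBackups';
```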
Error handling. The procedure as written doesn't include explicit error handling. If one database backup fails, the cursor continues to the next database. That behaviour is generally acceptable, but it means a failure in the middle of the run won't stop the job. For production environments, consider wrapping the EXECUTE call in a TRY...CATCH block and logging failures to a dedicated table.
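A minimal sketch of that pattern, assuming a hypothetical logging table named dbo.BackupFailureLog in the same utilities database:

```sql
-- Hypothetical failure log; adjust names and schema to your conventions
CREATE TABLE dbo.BackupFailureLog (
    LogId        int IDENTITY(1,1) PRIMARY KEY,
    DatabaseName sysname        NOT NULL,
    ErrorNumber  int            NOT NULL,
    ErrorMessage nvarchar(2048) NOT NULL,
    LoggedAt     datetime2(0)   NOT NULL DEFAULT SYSDATETIME()
);

-- Inside the cursor loop, replace the bare EXECUTE with:
BEGIN TRY
    EXECUTE (@CompleteCommand);
END TRY
BEGIN CATCH
    -- Record the failure and carry on to the next database
    INSERT INTO dbo.BackupFailureLog (DatabaseName, ErrorNumber, ErrorMessage)
    VALUES (@databaseName, ERROR_NUMBER(), ERROR_MESSAGE());
END CATCH;
```

With this in place the job still completes the full run, but failed databases are recorded somewhere you can alert on rather than buried in job history output.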
Permissions. The SQL Server Agent service account or the login executing this procedure needs db_backupoperator membership in each user database, or sysadmin at the server level. Verify this before scheduling the job.
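If you prefer least privilege over sysadmin, the per-database grant looks like this, assuming a hypothetical Agent service login [DOMAIN\SqlAgentSvc]; you would run it in each user database (or add it to the model database so new databases inherit it):

```sql
-- Run in each user database that the procedure should back up
CREATE USER [DOMAIN\SqlAgentSvc] FOR LOGIN [DOMAIN\SqlAgentSvc];
ALTER ROLE db_backupoperator ADD MEMBER [DOMAIN\SqlAgentSvc];
```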
Scheduling the Procedure
Once the procedure is created in a suitable database (a DBA utilities database is the right place for this, not master), create a SQL Server Agent job with a single T-SQL step:
EXEC [DBAUtilities].[dbo].[BackupAllUserDatabases];
Schedule it according to your RPO requirements. For most environments running simple recovery model databases, a nightly full backup is the baseline. If the databases are more critical, consider whether simple recovery model is actually appropriate, or whether you should move to full recovery model with transaction log backups to reduce data loss exposure.
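If you prefer to script the job rather than click through SSMS, a nightly schedule can be sketched with the msdb job procedures. The job name, schedule name, and 02:00 start time here are illustrative choices, not requirements:

```sql
USE msdb;
GO
EXEC dbo.sp_add_job
    @job_name = N'Backup All User Databases';

EXEC dbo.sp_add_jobstep
    @job_name  = N'Backup All User Databases',
    @step_name = N'Run dynamic backup procedure',
    @subsystem = N'TSQL',
    @command   = N'EXEC [DBAUtilities].[dbo].[BackupAllUserDatabases];';

EXEC dbo.sp_add_jobschedule
    @job_name          = N'Backup All User Databases',
    @name              = N'Nightly',
    @freq_type         = 4,        -- daily
    @freq_interval     = 1,        -- every 1 day
    @active_start_time = 020000;   -- 02:00

-- Bind the job to the local server so the Agent will run it
EXEC dbo.sp_add_jobserver
    @job_name = N'Backup All User Databases';
```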
Key Takeaways
- A dynamic backup procedure queries sys.databases at runtime, so it automatically includes new databases and ignores dropped ones without any maintenance plan changes.
- Always filter on state_desc = 'ONLINE' to prevent backup failures against offline or recovering databases.
- Backup compression reduces backup file sizes by 60-70% on average and often reduces backup duration due to lower I/O, making it a default choice on any SQL Server 2008 or later instance.
- The database_id > 4 filter correctly excludes system databases, which should be backed up on their own schedule.
- For production use, add timestamp-based filenames, a TRY...CATCH error handler, and a failure logging table to make the procedure operationally robust.
If you're not confident your backup strategy covers all databases consistently, or you want a second opinion on your current approach, DBA Services offers SQL Server health checks that include a full review of backup coverage, recovery models, and backup job reliability. It's a practical way to find gaps before they become incidents.