What Every Project Manager Must Know When Deploying SQL Server
Deploying SQL Server without the right preparation is one of the most reliable ways to blow a project budget. The technical decisions made during planning directly determine whether your SQL Server deployment runs smoothly for years or becomes an expensive maintenance headache within months. These are the 10 things project managers must get right before go-live.
Most project managers are strong on scope, timeline, and budget. SQL Server deployments, though, have a layer of technical complexity that catches non-specialists off guard. The consequences show up later: performance problems that are expensive to diagnose, recovery failures during outages, and infrastructure that can't scale when the business needs it to. Getting ahead of these issues during the project phase costs far less than fixing them in production.
1. Have You Load Tested the Hardware?
The project spec might include storage and IO estimates based on future requirements, but estimates are not guarantees. Hardware load testing validates that what's been built can actually handle real workloads before users depend on it.
Tools like SQLQueryStress let you fire a query thousands of times concurrently to simulate production load. For storage-specific testing, DiskSpd (Microsoft's own tool) measures IO throughput and latency under pressure. Run these tests before sign-off, not after go-live.
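Alongside dedicated load-testing tools, SQL Server's own file-stats DMV will tell you what latency the storage actually delivered during a test run. A quick check (the query shape here is illustrative; the DMV and its columns are standard):

```sql
-- Per-file average IO latency, as measured by SQL Server itself.
-- Figures are cumulative since the last service restart, so run after a test.
SELECT DB_NAME(vfs.database_id) AS database_name,
       mf.physical_name,
       vfs.io_stall_read_ms  / NULLIF(vfs.num_of_reads, 0)  AS avg_read_latency_ms,
       vfs.io_stall_write_ms / NULLIF(vfs.num_of_writes, 0) AS avg_write_latency_ms
FROM sys.dm_io_virtual_file_stats(NULL, NULL) AS vfs
JOIN sys.master_files AS mf
  ON mf.database_id = vfs.database_id
 AND mf.file_id = vfs.file_id
ORDER BY avg_read_latency_ms DESC;
```

Sustained latency well above the commonly cited ~20 ms guideline under test load is a sign the storage needs attention before sign-off.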
2. Do You Have a Performance Baseline?
At some point, someone will complain that SQL Server is slow. It happens on every deployment eventually. Without a documented performance baseline captured at go-live, you have no objective reference point to compare against. You can't identify what changed if you don't know what normal looked like.
Capture key metrics at go-live: CPU utilisation, memory pressure, disk IO latency, wait statistics, and query execution times. Tools like the Performance Analysis of Logs (PAL) tool help consolidate these metrics into a usable report. This baseline becomes your evidence base for every performance conversation that follows.
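One low-effort way to start is to snapshot the wait statistics DMV into a table at go-live, so there is something concrete to diff against later. A minimal sketch (the baseline table name is illustrative):

```sql
-- Capture a timestamped snapshot of cumulative wait stats at go-live.
-- dbo.WaitStatsBaseline is an illustrative name for the baseline table.
SELECT GETUTCDATE() AS captured_at,
       wait_type,
       waiting_tasks_count,
       wait_time_ms,
       signal_wait_time_ms
INTO dbo.WaitStatsBaseline
FROM sys.dm_os_wait_stats
WHERE wait_time_ms > 0;
```

Repeat the same query on a schedule, and the deltas between snapshots show exactly which waits grew when performance changed.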
3. Have You Planned for Growth?
Straightforward in concept, but consistently overlooked in practice. Review 6 to 12 months of historical server load data to understand usage trends. Design the new environment to accommodate projected growth, not just current requirements.
You don't need to provision all the resources upfront. But you do need to know whether adding CPUs, memory, or storage later is a simple configuration change or a full infrastructure rebuild. If it's the latter, that cost needs to be in the project budget now.
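If the old environment is SQL Server, its backup history is often the easiest growth record you already have. A rough sketch using the standard msdb history tables (backup size is an approximation of data size, but the trend is what matters):

```sql
-- Approximate database growth over time from full-backup sizes in msdb.
-- Assumes full backups (type 'D') have been running throughout the period.
SELECT database_name,
       CONVERT(date, backup_start_date) AS backup_date,
       backup_size / 1048576.0 AS backup_size_mb
FROM msdb.dbo.backupset
WHERE type = 'D'
ORDER BY database_name, backup_start_date;
```

Plot those numbers over 6 to 12 months and the growth trend, and hence the sizing target, falls out directly.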
4. Is This an OLTP or OLAP Workload?
These two workload types have fundamentally different characteristics, and SQL Server needs to be configured differently for each.
OLTP (Online Transaction Processing) handles high volumes of short, concurrent read/write transactions. Think order management systems, ERP platforms, or banking transactions. OLAP (Online Analytical Processing) handles complex queries across large datasets, typically for reporting and business intelligence.
Mixing these workloads on the same instance without proper configuration causes contention and performance problems. Understand the nature of the workload before the server is built, and configure SQL Server accordingly. In some cases, the right answer is separate instances.
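Two instance-level settings that commonly differ between the workload types are the parallelism controls. The values below are illustrative, not recommendations; the right numbers depend on the hardware and workload:

```sql
-- Illustrative parallelism settings; tune to the workload, don't copy blindly.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
-- OLTP often runs best with limited parallelism and a higher cost threshold,
-- so short transactions aren't parallelised unnecessarily.
EXEC sp_configure 'max degree of parallelism', 4;
EXEC sp_configure 'cost threshold for parallelism', 50;
RECONFIGURE;
```

An OLAP instance serving large reporting queries would typically allow more parallelism than an OLTP instance on the same hardware.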
5. Is the Disk Layout Correct?
This is a detail that gets skipped surprisingly often. Best practice for SQL Server disk layout is to separate data files, transaction log files, and TempDB onto separate disk allocations within Windows.
Even if the underlying storage subsystem is shared (a SAN, for example), presenting separate volumes to Windows gives each workload its own IO path and queue at the operating system level. This reduces IO contention and avoids the queue delays that quietly degrade performance under load. TempDB in particular benefits from isolation because SQL Server uses it heavily for sorting, hashing, and temporary object storage.
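As a concrete example, relocating TempDB to its own volume is a two-statement change (the paths are placeholders; the move takes effect after the next service restart):

```sql
-- Point TempDB's default files at a dedicated volume (placeholder paths).
-- SQL Server recreates TempDB on these paths at the next restart.
ALTER DATABASE tempdb
MODIFY FILE (NAME = tempdev, FILENAME = 'T:\TempDB\tempdb.mdf');
ALTER DATABASE tempdb
MODIFY FILE (NAME = templog, FILENAME = 'T:\TempDB\templog.ldf');
```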
6. Are Your Backup Requirements Properly Defined?
Backups are not just about having a copy of the data. Two metrics define what your backup strategy actually needs to deliver.
- RTO (Recovery Time Objective): How quickly must the system be back online after a failure?
- RPO (Recovery Point Objective): How much data loss is acceptable? One hour? Fifteen minutes? Zero?
Smaller RTO and RPO values mean more robust recovery capability, but also higher infrastructure cost. These numbers must be defined by the business before the backup architecture is designed, not after. A backup strategy built around the wrong RTO/RPO is a strategy that will fail when it matters most.
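As a worked example, a 15-minute RPO implies transaction log backups at least every 15 minutes, scheduled through SQL Server Agent. The statement each job runs looks something like this (database name and path are placeholders):

```sql
-- One log backup; scheduling this every 15 minutes caps data loss at ~15 minutes
-- (assumes the database is in the FULL recovery model).
BACKUP LOG SalesDB
TO DISK = 'B:\Backups\SalesDB_log.trn'
WITH COMPRESSION, CHECKSUM;
```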
7. Which High Availability Option Is Right for This Deployment?
SQL Server offers several high availability options, each with different trade-offs:
- Always On Availability Groups - the most capable option for enterprise environments, supporting multiple readable secondaries and automatic failover
- Database Mirroring - deprecated since SQL Server 2012 and should not be used in new deployments
- Log Shipping - simple, low-cost, but with higher RPO and manual failover
- Replication - suited to specific data distribution scenarios, not general HA
- Failover Cluster Instances (FCI) - protects the instance, not individual databases, and requires shared storage
The right choice depends on the RTO/RPO requirements established in the previous step, the budget, and the technical capability of the team maintaining the environment. Don't default to the most complex option if a simpler one meets the requirements.
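For a sense of scale, here is a heavily simplified sketch of what an Availability Group definition looks like. It assumes the Windows failover cluster, mirroring endpoints, and security are already in place, and every name is a placeholder:

```sql
-- Minimal two-replica AG sketch; real deployments need WSFC, endpoints,
-- listener configuration, and security set up first.
CREATE AVAILABILITY GROUP AG_Sales
FOR DATABASE SalesDB
REPLICA ON
  N'SQLNODE1' WITH (ENDPOINT_URL = N'TCP://SQLNODE1.corp.local:5022',
                    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                    FAILOVER_MODE = AUTOMATIC),
  N'SQLNODE2' WITH (ENDPOINT_URL = N'TCP://SQLNODE2.corp.local:5022',
                    AVAILABILITY_MODE = SYNCHRONOUS_COMMIT,
                    FAILOVER_MODE = AUTOMATIC);
```

The point is not the syntax but the operational weight behind it: if the team can't maintain this comfortably, a simpler option may be the right call.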
8. What Is the Disaster Recovery Plan?
High availability protects against component failures. Disaster recovery protects against the entire site going down. These are different problems and they need different solutions.
If the data centre floods, or the primary site loses power for 48 hours, what happens? Is there an offsite backup? Replication to a secondary site or cloud environment? A tested runbook for recovering the environment from scratch? If none of these questions have clear answers, the business is exposed to a risk that could be catastrophic.
Disaster recovery planning is not optional. It needs to be scoped, costed, and tested as part of the project.
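One concrete, cheap element of a DR runbook is routinely verifying that offsite backups are actually restorable (the path is a placeholder):

```sql
-- Checks that the backup media is complete and readable without restoring it.
-- A periodic full test restore is the stronger proof and should also be scheduled.
RESTORE VERIFYONLY
FROM DISK = 'B:\Backups\SalesDB_full.bak'
WITH CHECKSUM;
```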
9. What Is the Migration Strategy?
Moving data from an old environment to a new one carries real risk. The migration strategy needs to account for data integrity validation, acceptable downtime windows, and a rollback plan if something goes wrong.
Common approaches include backup/restore migrations, log shipping-based cutovers (which minimise downtime), and replication-based migrations for near-zero downtime requirements. The right approach depends on the size of the databases, the acceptable outage window, and the complexity of the application dependencies involved.
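A backup/restore cutover that keeps the outage window small typically looks like this sketch (names and paths are placeholders):

```sql
-- Days before cutover: restore the full backup, leaving the database
-- in a restoring state so further log backups can still be applied.
RESTORE DATABASE SalesDB
FROM DISK = 'B:\Migration\SalesDB_full.bak'
WITH NORECOVERY;

-- During the outage window: apply the final log backup and bring it online.
RESTORE LOG SalesDB
FROM DISK = 'B:\Migration\SalesDB_final.trn'
WITH RECOVERY;
```

Only the final log restore happens inside the downtime window, which is what keeps the cutover short.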
10. Is Security Configured Correctly From the Start?
SQL Server security is significantly easier to implement correctly at deployment than to retrofit later. At minimum, the project should address:
- Principle of least privilege for all SQL logins and Windows accounts
- Separation of sa and service account credentials
- Encryption at rest (Transparent Data Encryption) and in transit (TLS)
- Audit logging for privileged access
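As an example of least privilege in practice, an application login that needs to read and write data gets database roles scoped to exactly that, never sysadmin (all names here are illustrative):

```sql
-- Illustrative least-privilege setup for an application service account.
CREATE LOGIN AppSvc WITH PASSWORD = 'Use-A-Strong-Generated-Password-1!';
USE SalesDB;
CREATE USER AppSvc FOR LOGIN AppSvc;
ALTER ROLE db_datareader ADD MEMBER AppSvc;
ALTER ROLE db_datawriter ADD MEMBER AppSvc;
```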
Security configuration is not a post-go-live task. It's a deployment requirement.
Key Takeaways
- Hardware load testing and performance baselining must happen before go-live, not after the first complaint.
- RTO and RPO values drive your backup and high availability architecture. Define them early or the design will be wrong.
- OLTP and OLAP workloads require different SQL Server configurations. Mixing them without planning causes performance problems.
- Disaster recovery is a separate concern from high availability. Both need to be scoped and budgeted in the project.
- Security and disk layout are far cheaper to implement correctly at deployment than to fix in a running production environment.
If you're managing a SQL Server deployment and want independent assurance that the technical foundations are right, DBA Services offers pre-deployment health checks and ongoing managed support for SQL Server environments across Australia. Getting an experienced DBA involved at the project stage is considerably cheaper than resolving avoidable problems in production.
Need help with your SQL Servers?
Find out what's really going on inside your SQL Server environment.
Our health checks uncover critical misconfigurations in 97% of reviews.