SQL Server Performance Tuning Best Practices for Developers

SQL Server performance tuning is the process of identifying and eliminating bottlenecks in your database environment to improve query speed, reduce resource consumption, and keep applications responsive. For developers, the most impactful gains come from writing efficient SQL code, using indexes correctly, and understanding how the query optimiser makes decisions.

Poor SQL Server performance rarely appears overnight. It creeps in as data volumes grow, query complexity increases, and code that worked fine at low load starts struggling under production conditions. By the time users are complaining about slow response times, you're already behind. The practices below are things we apply regularly across client environments, drawn from over 20 years of SQL Server consulting work.

Why Does SQL Server Performance Degrade?

The query optimiser is only as good as the information it has available. Outdated statistics, missing indexes, poorly written queries, and inappropriate use of temporary storage all force the optimiser into suboptimal execution plans. Understanding what drives those decisions is what separates reactive troubleshooting from proactive tuning.

1. Stop Using SELECT * in Production Code

This is the single most common performance mistake we see in developer-written SQL. Using SELECT * forces SQL Server to retrieve every column in the table, even columns the application never uses. On wide tables with large varchar or nvarchar columns, this adds significant I/O overhead and increases network traffic between the database server and the application.

Always specify the columns you actually need:

-- Inefficient: retrieves all columns regardless of what the application needs
SELECT * FROM Customers;

-- Optimised: retrieves only the required columns
SELECT CustomerId, CustomerName, Email FROM Customers;

This is not just a performance issue. It also makes your code more maintainable and less likely to break when table schemas change.

2. Avoid Scalar Functions in WHERE Clauses

Scalar functions in WHERE clauses are a well-known performance killer. When you wrap a column in a scalar function, SQL Server typically cannot use an index on that column. Worse, the function executes once for every row evaluated, not just the rows that end up in the result set, which on large tables can turn a millisecond query into a multi-second one.

-- Problematic: scalar function prevents index use and runs per row
SELECT * FROM Customers WHERE dbo.GetAge(BirthDate) > 18;

Where possible, replace scalar functions with inline table-valued functions or restructure the logic so the column itself is not wrapped:

-- Better: inline table-valued function applied via CROSS APPLY
SELECT c.CustomerId, c.CustomerName, c.Email FROM Customers c
CROSS APPLY dbo.GetAgeInline(c.BirthDate) Age
WHERE Age.Value > 18;
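
Often the simplest fix is to move the computation to the other side of the comparison so the column itself stays bare and any index on it remains usable. A sketch, assuming the age check from the example above:

-- Sargable alternative: compute the cutoff date once instead of wrapping the column
SELECT CustomerId, CustomerName, Email
FROM Customers
WHERE BirthDate <= DATEADD(YEAR, -18, GETDATE());

Because the predicate now compares the raw BirthDate column against a constant expression, an index on BirthDate can be seeked rather than scanned.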

In SQL Server 2019 and later, scalar UDF inlining can automatically transform eligible scalar functions into relational expressions, but you should not rely on this as a substitute for writing efficient code from the start.

3. Choose Temp Tables Over Table Variables for Large Datasets

Both temp tables and table variables provide temporary storage, but they behave differently under the hood. Table variables do not maintain statistics, which means the query optimiser assumes a single row regardless of actual data volume. On large datasets, this produces wildly inaccurate cardinality estimates and poor execution plans.

-- Table variable: no statistics, optimiser assumes 1 row
DECLARE @CustomerOrders TABLE (OrderId INT, CustomerId INT, OrderDate DATETIME);

-- Temp table: supports statistics and indexes (both objects live in tempdb)
CREATE TABLE #CustomerOrders (OrderId INT, CustomerId INT, OrderDate DATETIME);

Use temp tables when you're working with more than a few hundred rows or when the temp table will be joined to other tables. Table variables are fine for small lookup sets or passing data between statements in a batch. In SQL Server 2019, the new table variable deferred compilation feature improves cardinality estimates for table variables, but temp tables remain the safer default for large data operations.
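
One practical consequence: a large intermediate result in a temp table can be indexed before it is joined, something a table variable declaration cannot match after the fact. A sketch, reusing the #CustomerOrders table above (the one-month filter is illustrative):

-- Populate the temp table, then index the join column before heavy use
INSERT INTO #CustomerOrders (OrderId, CustomerId, OrderDate)
SELECT OrderId, CustomerId, OrderDate
FROM Orders
WHERE OrderDate >= DATEADD(MONTH, -1, GETDATE());

CREATE NONCLUSTERED INDEX IX_CustomerOrders_CustomerId
ON #CustomerOrders (CustomerId);

With the index and statistics in place, subsequent joins against #CustomerOrders get realistic cardinality estimates instead of the one-row guess a table variable would produce.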

4. Keep Statistics Current

SQL Server performance tuning depends heavily on accurate statistics. The query optimiser uses statistics to estimate row counts and choose execution plans. When statistics are stale, the optimiser makes poor choices, and you end up with index scans where seeks should occur, or memory grant misestimates that cause spills to disk.

Enable Auto Update Statistics on all user databases. For high-volume OLTP systems where data changes rapidly, consider also enabling Auto Update Statistics Asynchronously so queries do not wait while statistics are being updated.

For critical tables, schedule manual updates during maintenance windows:

-- Full scan for accurate statistics on a high-volume table
UPDATE STATISTICS Orders WITH FULLSCAN;

FULLSCAN is slower than the default sampled update but produces more accurate statistics. On very large tables, a high sample rate is often a reasonable compromise:

-- High sample rate as a compromise on large tables
UPDATE STATISTICS Orders WITH SAMPLE 30 PERCENT;
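
To see how stale a table's statistics actually are before scheduling updates, sys.dm_db_stats_properties reports when each statistics object was last updated and how many modifications have accumulated since. A sketch, using the Orders table from the examples above:

-- Last update time and modifications since, per statistics object on Orders
SELECT s.name AS StatName,
       sp.last_updated,
       sp.rows,
       sp.modification_counter
FROM sys.stats AS s
CROSS APPLY sys.dm_db_stats_properties(s.object_id, s.stats_id) AS sp
WHERE s.object_id = OBJECT_ID('dbo.Orders');

A high modification_counter relative to rows is a strong signal that the statistics object is overdue for an update.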

5. Design Indexes for Your Actual Query Patterns

Indexes are the most powerful tool available for SQL Server performance tuning, and also the most misused. Missing indexes cause full table scans. Too many indexes slow down writes and increase storage and maintenance overhead.

A few practical rules we apply on client systems:

  • Every table should have a clustered index. Heap tables (tables without a clustered index) perform poorly on range scans and can accumulate forwarded records that hurt read performance.
  • Non-clustered indexes should be built around actual query patterns, not guesswork. Use the missing index DMVs (sys.dm_db_missing_index_details, sys.dm_db_missing_index_group_stats) as a starting point, but validate recommendations before implementing them.
  • Include columns in non-clustered indexes to cover frequently executed queries and avoid key lookups.
  • Review and remove unused indexes. The sys.dm_db_index_usage_stats DMV shows which indexes have not been used since the last service restart.

-- Example: covering index with included columns to eliminate key lookups
CREATE NONCLUSTERED INDEX IX_Orders_CustomerId
ON Orders (CustomerId)
INCLUDE (OrderDate, TotalAmount);

Avoid the trap of blindly applying every index recommendation from the Database Engine Tuning Advisor. It optimises for the workload you feed it and will happily recommend dozens of overlapping indexes that collectively hurt write performance.
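
As a starting point for the unused-index review mentioned above, a query along these lines lists non-clustered indexes that are being maintained on writes but have had no reads since the last service restart (a sketch; tune the filters to your own workload, and remember the DMV resets on restart):

-- Indexes written to but never read since the last service restart
SELECT OBJECT_NAME(i.object_id) AS TableName,
       i.name AS IndexName,
       us.user_updates
FROM sys.indexes AS i
JOIN sys.dm_db_index_usage_stats AS us
  ON us.object_id = i.object_id AND us.index_id = i.index_id
WHERE us.database_id = DB_ID()
  AND us.user_seeks = 0 AND us.user_scans = 0 AND us.user_lookups = 0
  AND i.type_desc = 'NONCLUSTERED';

Treat the output as candidates for investigation, not an automatic drop list; an index that looks unused today may serve a month-end or year-end workload.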

6. Use CTEs and Set-Based Logic Instead of Cursors

Cursor-based row-by-row processing is almost always slower than equivalent set-based SQL. SQL Server is designed to operate on sets of data. Cursors fight that design and typically produce significantly higher CPU and I/O costs for the same result.

Where you find cursors in existing code, look at whether the logic can be rewritten using Common Table Expressions (CTEs), window functions, or set-based UPDATE and INSERT statements. In most cases it can, and the performance difference is substantial.
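
As an illustration, a common cursor pattern, looping over customers to find each one's most recent order, collapses into a single set-based statement with a CTE and a window function (a sketch with illustrative column names):

-- Set-based alternative to a per-customer cursor loop: latest order per customer
WITH RankedOrders AS (
    SELECT OrderId, CustomerId, OrderDate,
           ROW_NUMBER() OVER (PARTITION BY CustomerId ORDER BY OrderDate DESC) AS rn
    FROM Orders
)
SELECT OrderId, CustomerId, OrderDate
FROM RankedOrders
WHERE rn = 1;

One pass over the data replaces one query per customer, and the optimiser can choose an efficient plan for the whole operation.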

What Else Should Developers Watch For?

A few additional patterns worth keeping in mind during SQL Server performance tuning:

  • Parameter sniffing issues: Stored procedures can cache execution plans based on atypical parameter values at first execution. Use OPTION (RECOMPILE) selectively on problematic queries, or consider OPTIMIZE FOR hints where the distribution of values is well understood.
  • Implicit conversions: Mismatched data types between a query parameter and a column data type cause implicit conversions that prevent index use. Always match data types explicitly.
  • Transaction scope: Keep transactions as short as possible. Long-running transactions hold locks and block other sessions, which shows up as wait time rather than CPU or I/O pressure.
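
The implicit-conversion point is easy to demonstrate. If Email is a VARCHAR column (an assumption for this sketch), comparing it against an NVARCHAR literal forces SQL Server to convert the column side, defeating any index on it:

-- Problematic: N'...' literal forces a conversion on a varchar Email column
SELECT CustomerId FROM Customers WHERE Email = N'user@example.com';

-- Better: match the column's data type so an index on Email can be seeked
SELECT CustomerId FROM Customers WHERE Email = 'user@example.com';

The same trap appears with typed application parameters; .NET strings, for example, map to NVARCHAR by default, so declare parameter types explicitly to match the column.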

Key Takeaways

  • Specify column names explicitly instead of using SELECT * to reduce I/O and network overhead.
  • Avoid scalar functions in WHERE clauses. They prevent index use and execute once for every row evaluated.
  • Use temp tables instead of table variables for large datasets. Table variables lack statistics and produce poor cardinality estimates.
  • Keep statistics current with Auto Update Statistics enabled, and use FULLSCAN updates for critical high-volume tables.
  • Build indexes around real query patterns, review usage regularly, and remove indexes that are not being used.

SQL Server performance tuning is an ongoing discipline, not a one-time project. Query patterns change, data volumes grow, and what performed well at launch can degrade significantly over time. If you're not regularly reviewing execution plans, wait statistics, and index health, problems will accumulate quietly until they become visible to users.

DBA Services provides SQL Server health checks and managed support for organisations that want expert oversight without the overhead of a full-time DBA. If your environment hasn't had a performance review recently, it's worth finding out what's actually going on under the hood.