What Is SET STATISTICS TIME and Why Does It Matter?
SET STATISTICS TIME is a T-SQL command that measures exactly how long SQL Server spends parsing, compiling, and executing a query. Enable it before running any query and SQL Server will return timing data broken into two distinct phases: parse and compile time, and execution time. Each phase reports both CPU time and elapsed time, giving you a direct, measurable view of query performance that guesswork simply cannot provide.
If you've ever been on the receiving end of a "why is the database so slow?" complaint, you know how frustrating it is to answer without hard data. SET STATISTICS TIME gives you that data. It's not a fix. It's a diagnostic instrument, and understanding how to read its output correctly is a fundamental skill for anyone responsible for SQL Server performance.
How Do You Enable SET STATISTICS TIME?
The syntax is straightforward. Run this before any query you want to measure:
SET STATISTICS TIME ON;
-- Your query here
SELECT *
FROM Sales.Orders
WHERE OrderDate >= '2024-01-01';
SET STATISTICS TIME OFF;
Once enabled, SQL Server outputs timing information to the Messages tab in SQL Server Management Studio (SSMS). You'll see output similar to this:
SQL Server parse and compile time:
CPU time = 0 ms, elapsed time = 1 ms.
SQL Server Execution Times:
CPU time = 16 ms, elapsed time = 23 ms.
Turn it off after your query with SET STATISTICS TIME OFF. Leaving it on indefinitely adds noise to your output and makes it harder to isolate the query you're actually analysing.
What Is the Difference Between CPU Time and Elapsed Time?
This is where most people misread the output, and it's worth being precise about what each metric actually tells you.
CPU time is the amount of processor time SQL Server consumed to perform the operation. It reflects the actual computational load the query places on the server. This is the number you should focus on when comparing query performance, because it's consistent and directly tied to the work being done.
Elapsed time is the wall-clock time from when the operation started to when it finished. It includes everything CPU time measures, plus time spent waiting on I/O, network latency between the server and client, lock waits, memory pressure, and general server load at that moment. Elapsed time will vary from one execution to the next, sometimes significantly. It's useful as context, but it's unreliable as a benchmark.
A practical example: if you run the same query twice and see CPU time of 16 ms both times but elapsed time of 23 ms and then 47 ms, the query itself hasn't changed. Something else on the server was competing for resources during the second run. Basing performance decisions on elapsed time alone leads you in the wrong direction.
What Do Parse and Compile Time Tell You?
Parse and compile time measures how long SQL Server takes to check the syntax of your query, validate object references, and build an execution plan. For a simple query, this is typically under 1 ms. Complex queries with many joins, subqueries, or dynamic SQL can take longer.
Here's the important behaviour to understand: after a query runs for the first time, SQL Server caches the execution plan. On subsequent executions, parse and compile time drops to zero (or very close to it) because the server reuses the cached plan rather than building a new one. This is normal and expected.
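You can observe this directly by running the same query twice in one session (reusing the Sales.Orders example from earlier; substitute a table from your own database):

SET STATISTICS TIME ON;

-- First run: parse and compile time is non-zero while the plan is built
SELECT *
FROM Sales.Orders
WHERE OrderDate >= '2024-01-01';

-- Second run: the cached plan is reused, so parse and compile time reports 0 ms
SELECT *
FROM Sales.Orders
WHERE OrderDate >= '2024-01-01';

SET STATISTICS TIME OFF;

Compare the two "SQL Server parse and compile time" entries in the Messages tab; only the first should show a non-zero value.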
If you're seeing consistently high parse and compile times across repeated executions, a few things could be causing it:
- The query is using WITH RECOMPILE, forcing a new plan on every execution
- Parameter sniffing issues are causing plan invalidation
- The plan cache is under memory pressure and plans are being evicted frequently
- Ad-hoc queries with no parameterisation are generating unique plans that never get reused
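To illustrate the first cause: a procedure created WITH RECOMPILE never caches its plan, so every call pays the compile cost. A minimal hypothetical example (the procedure name and columns are placeholders):

CREATE OR ALTER PROCEDURE dbo.GetRecentOrders
    @StartDate date
WITH RECOMPILE  -- forces SQL Server to build a fresh plan on every execution
AS
BEGIN
    SELECT OrderID, CustomerID, OrderTotal
    FROM Sales.Orders
    WHERE OrderDate >= @StartDate;
END;

Running this with SET STATISTICS TIME ON will show a non-zero parse and compile time on every call, not just the first.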
High recompilation overhead is a real cost at scale. If a query runs thousands of times per hour and spends even 5 ms recompiling each time, that adds up to meaningful CPU overhead across the day.
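One way to check whether recompilation is actually happening at scale is to query the plan cache statistics. The sketch below uses sys.dm_exec_query_stats, whose plan_generation_num column increments each time a statement's plan is recompiled:

SELECT TOP (20)
    qs.plan_generation_num,   -- number of plan instances after recompiles
    qs.execution_count,
    st.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.plan_generation_num DESC;

A high plan_generation_num relative to execution_count on a frequently-run statement is a sign the recompilation causes listed above are worth investigating.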
How Do You Use SET STATISTICS TIME to Compare Queries?
This is where the tool earns its keep. The most practical application is side-by-side comparison of different approaches to the same problem. Say you're evaluating two versions of a stored procedure, or comparing a cursor-based approach against a set-based one. Run each version with SET STATISTICS TIME ON and compare the CPU times.
Here's a structured approach:
- Clear the procedure cache if you want to test cold-plan performance: DBCC FREEPROCCACHE (use with caution on production systems)
- Enable statistics: SET STATISTICS TIME ON
- Run query version A, record CPU time and elapsed time
- Run query version B, record CPU time and elapsed time
- Run each version 3 to 5 times to account for caching effects
- Compare average CPU times across runs
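The steps above can be sketched as a single script. The two versions shown are placeholders; substitute your own query A and query B:

-- Caution: clears all cached plans server-wide; avoid on production
DBCC FREEPROCCACHE;

SET STATISTICS TIME ON;

-- Version A: set-based aggregate
SELECT CustomerID, SUM(OrderTotal) AS Total
FROM Sales.Orders
WHERE OrderDate >= '2024-01-01'
GROUP BY CustomerID;

-- Version B: correlated subquery producing the same result
SELECT DISTINCT o.CustomerID,
       (SELECT SUM(i.OrderTotal)
        FROM Sales.Orders AS i
        WHERE i.CustomerID = o.CustomerID
          AND i.OrderDate >= '2024-01-01') AS Total
FROM Sales.Orders AS o
WHERE o.OrderDate >= '2024-01-01';

SET STATISTICS TIME OFF;

Run the pair of queries three to five times, record the CPU time for each from the Messages tab, and compare the averages.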
On the second and subsequent runs, parse and compile time should be zero for both queries. You're then comparing pure execution cost. A query that consistently shows 8 ms CPU time versus one showing 45 ms CPU time is telling you something definitive, regardless of what elapsed time looks like on any given run.
Pair this with SET STATISTICS IO ON for a complete picture. IO statistics show you logical and physical reads, which often explain why one query uses less CPU than another. High logical reads frequently point to missing indexes or inefficient query plans.
SET STATISTICS TIME ON;
SET STATISTICS IO ON;
-- Query A
SELECT CustomerID, SUM(OrderTotal)
FROM Sales.Orders
WHERE OrderDate >= '2024-01-01'
GROUP BY CustomerID;
SET STATISTICS TIME OFF;
SET STATISTICS IO OFF;
What Are the Limitations of SET STATISTICS TIME?
SET STATISTICS TIME is a query-level tool. It tells you how long a specific query took under the conditions at the time of measurement. It doesn't tell you about aggregate performance patterns across thousands of executions, it doesn't capture wait statistics, and it won't surface problems that only appear under production load.
For ongoing performance monitoring, you need additional tools: Query Store (available from SQL Server 2016 onwards), Dynamic Management Views (DMVs) like sys.dm_exec_query_stats, and Extended Events for capturing execution data at scale. SET STATISTICS TIME is best used during development, query tuning, and investigation of specific reported slowness, not as a substitute for proper monitoring infrastructure.
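As a starting point for that aggregate view, a query against sys.dm_exec_query_stats like the sketch below surfaces the top CPU consumers across all cached executions (note that the DMV reports worker time in microseconds, hence the division):

SELECT TOP (10)
    qs.execution_count,
    qs.total_worker_time / 1000 AS total_cpu_ms,                        -- microseconds to ms
    qs.total_worker_time / qs.execution_count / 1000 AS avg_cpu_ms,
    st.text AS query_text
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;

This complements SET STATISTICS TIME: the DMV tells you which queries deserve attention across thousands of executions; SET STATISTICS TIME then lets you measure candidate rewrites of those queries precisely.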
Also worth noting: the timing output goes to the Messages tab in SSMS, not the Results tab. If you're running queries from an application or script that only captures result sets, you won't see the statistics output. Plan accordingly.
Key Takeaways
- SET STATISTICS TIME returns parse/compile time and execution time for any T-SQL query, broken into CPU time and elapsed time for each phase.
- Always focus on CPU time for performance comparisons. Elapsed time is influenced by external factors and will vary between runs.
- Parse and compile time should drop to zero on repeated executions as SQL Server reuses cached plans. Consistently high values indicate a recompilation problem worth investigating.
- The most practical use of SET STATISTICS TIME is comparing two or more query approaches to identify which places less load on the server.
- Combine SET STATISTICS TIME with SET STATISTICS IO for a complete picture of query cost, covering both CPU and I/O dimensions.
If you're regularly using tools like SET STATISTICS TIME to investigate performance issues, that's a good sign your team takes SQL Server health seriously. But reactive investigation only goes so far. DBA Services provides proactive SQL Server health checks and managed support that surface performance problems before they affect your users, including systematic query analysis, index reviews, and execution plan assessments across your entire environment. Get in touch to find out how we work with Australian organisations to keep SQL Server performing at its best.
Need help with your SQL Servers?
Find out what's really going on inside your SQL Server environment.
Our health checks uncover critical misconfigurations in 97% of reviews.