Finding Your Top CPU-Hungry Queries in SQL Server

High CPU on a SQL Server instance is one of the most common performance complaints DBAs field. Before you can fix the problem, you need to know which queries are actually responsible. The query below uses SQL Server's dynamic management views to return the top 50 queries ranked by average CPU time, giving you a clear starting point for investigation.

This is a first-response diagnostic tool. When a server is running hot, this is often the first script we run.

Why Average CPU Time Matters More Than Total CPU

It's tempting to sort by total CPU consumption, and that has its place. But average CPU time per execution is usually the more useful metric when you're trying to identify problematic queries.

A query that runs 50,000 times a day at 200 milliseconds each burns far more CPU in aggregate than a query that runs once and consumes 10 seconds, so sorting by total CPU will rank the frequent query higher. But the 10-second query is the one with something structurally wrong with it, and it's usually the easier one to fix. Sorting by average CPU time surfaces the queries that are inherently expensive per execution, which often points directly to missing indexes, poor query plans, or inefficient logic.

Total CPU tells you what's burning the most cycles right now. Average CPU tells you what's structurally expensive. Both matter, but when you're triaging a CPU spike, average CPU is where to start.
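If you want the total-CPU view for comparison, the same DMVs support it with a different sort key. A minimal sketch:

```sql
-- Same plan-cache statistics, ranked by total CPU burned rather than
-- average cost per execution. Useful for a "what is hot right now" view.
SELECT TOP 50
    qs.total_worker_time,      -- microseconds, cumulative since plan compile
    qs.execution_count,
    qs.total_worker_time / qs.execution_count AS AverageCPUTime,
    st.text AS BatchText
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
ORDER BY qs.total_worker_time DESC;
```

Running both versions side by side quickly shows whether your CPU spike is driven by a few expensive queries or by sheer execution volume.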

The Script: Top 50 Queries by Average CPU Time

SELECT TOP 50
    qs.total_worker_time / qs.execution_count AS AverageCPUTime,
    qs.execution_count,
    qs.total_worker_time,
    st.dbid,
    st.objectid,
    OBJECT_NAME(st.objectid, st.dbid) AS ObjectName,
    SUBSTRING(
        st.text,
        (qs.statement_start_offset / 2) + 1,
        ((CASE qs.statement_end_offset
             WHEN -1 THEN DATALENGTH(st.text)
             ELSE qs.statement_end_offset
          END - qs.statement_start_offset) / 2) + 1
    ) AS StatementText
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) st
ORDER BY AverageCPUTime DESC;

Run this against any SQL Server 2008 or later instance; it works unmodified on every version through SQL Server 2022.

What This Query Is Actually Doing

The script pulls data from two dynamic management views: sys.dm_exec_query_stats and sys.dm_exec_sql_text.

sys.dm_exec_query_stats holds aggregate performance statistics for cached query plans. Every time a query executes and its plan is cached, SQL Server accumulates metrics including total CPU time (reported in microseconds as total_worker_time), execution count, elapsed time, logical reads, and writes. Dividing total_worker_time by execution_count gives you the average CPU cost per execution.

sys.dm_exec_sql_text is a table-valued function that accepts a SQL handle and returns the actual text of the query. The CROSS APPLY joins each row from dm_exec_query_stats to its corresponding SQL text.

The offset arithmetic handles a subtlety worth understanding. When a stored procedure or batch contains multiple statements, statement_start_offset and statement_end_offset identify the specific statement within the batch. The offsets are byte positions into Unicode text, which is why they are divided by two to convert them into character positions, and SUBSTRING takes a length rather than an end position, which is why the end offset has the start offset subtracted from it. Extracting just the relevant statement rather than the entire batch text makes the results much more readable. When statement_end_offset is -1, the statement runs to the end of the batch, so the remainder of the text is taken.

OBJECT_NAME(st.objectid, st.dbid) resolves the object name where the query originates, whether that's a stored procedure, function, or trigger. This column will be NULL for ad hoc queries.

Important Limitations to Understand

This data comes from the plan cache. That's a critical point. If SQL Server has been restarted recently, or if plans have been evicted from cache due to memory pressure, the statistics will only reflect activity since the last cache population. On a busy server with limited memory, frequently evicted plans may be underrepresented.

The numbers are cumulative since the plan was last compiled and cached. A query that was extremely expensive two weeks ago but has since been optimised may still appear near the top if its plan hasn't been recompiled. Context matters.
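One way to put the numbers in context is to check how long each plan has been accumulating statistics; sys.dm_exec_query_stats exposes creation_time and last_execution_time for exactly this. A quick sketch:

```sql
-- How long has each cached plan been accumulating statistics?
-- A huge total_worker_time means less if the plan was only cached an hour ago.
SELECT TOP 50
    qs.creation_time,          -- when the plan was compiled and cached
    qs.last_execution_time,    -- most recent execution
    qs.execution_count,
    qs.total_worker_time / qs.execution_count AS AverageCPUTime
FROM sys.dm_exec_query_stats AS qs
ORDER BY qs.total_worker_time / qs.execution_count DESC;
```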

This query also reflects instance-wide activity across all databases. The st.dbid column tells you which database the query belongs to. If you're managing a shared SQL Server instance hosting multiple databases, filter by st.dbid = DB_ID('YourDatabaseName') to narrow results to a specific database.
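Applied to the main script, the filter looks like this. Note that st.dbid is documented as NULL for ad hoc and prepared statements, so those rows drop out of a filtered result:

```sql
-- Narrow the results to one database on a shared instance.
-- Replace 'YourDatabaseName' with the database you are investigating.
SELECT TOP 50
    qs.total_worker_time / qs.execution_count AS AverageCPUTime,
    qs.execution_count,
    st.text AS BatchText
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
WHERE st.dbid = DB_ID('YourDatabaseName')
ORDER BY AverageCPUTime DESC;
```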

One more thing: dm_exec_query_stats only captures queries that have a cached plan. Some queries, particularly those using certain query hints or queries that SQL Server determines aren't worth caching, won't appear here.

How to Interpret the Results

When you run this query, focus on the top 10 results first. In most environments, a small number of queries account for the majority of CPU consumption. That's consistent with what we see across client environments.

Look at the execution_count alongside AverageCPUTime. Remember that AverageCPUTime is in microseconds, because total_worker_time is reported in microseconds; divide by 1,000 for milliseconds. A query averaging 5 seconds of CPU that runs twice a day is very different from one averaging 500 milliseconds that runs 10,000 times a day. Both deserve attention, but the remediation approach differs.

For the StatementText column, you'll sometimes see parameterised queries where the actual values are replaced with parameter markers. That's normal. What you want is the query structure, not the specific values.

When ObjectName is populated, you know the query lives inside a stored procedure or other programmable object. That's usually easier to optimise because the code is in one place. Ad hoc queries with a NULL ObjectName can be harder to track down and may indicate an application that's not using parameterisation or stored procedures consistently.

What to Do With the Results

Once you've identified your top CPU consumers, the next steps depend on what you find.

For queries with high average CPU time, start with the execution plan. Use sys.dm_exec_query_plan to retrieve the cached plan, or run the query directly with "Include Actual Execution Plan" in SSMS. Look for index scans on large tables, key lookups, hash joins on large datasets, and high estimated versus actual row count discrepancies. These are the most common culprits.
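Combining the main script with sys.dm_exec_query_plan lets you pull the plan without leaving the DMVs. One possible shape (bear in mind the cached plan contains estimated row counts only; actual counts require running the query with the actual plan enabled):

```sql
-- Pull the cached (estimated) execution plan alongside the statistics.
-- Click the query_plan XML in SSMS to open it as a graphical plan.
SELECT TOP 10
    qs.total_worker_time / qs.execution_count AS AverageCPUTime,
    qs.execution_count,
    st.text AS BatchText,
    qp.query_plan
FROM sys.dm_exec_query_stats AS qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) AS st
CROSS APPLY sys.dm_exec_query_plan(qs.plan_handle) AS qp
ORDER BY AverageCPUTime DESC;
```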

Missing indexes are the most frequent fix. SQL Server's Database Engine Tuning Advisor and the sys.dm_db_missing_index_details DMV can both surface index recommendations, though treat those recommendations as starting points rather than gospel.
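A common way to query the missing-index DMVs is sketched below. The "improvement measure" is a rough ranking heuristic combining the DMV's cost and impact estimates, not a guarantee; validate each suggestion before creating indexes:

```sql
-- Rank missing-index suggestions by a rough impact heuristic.
SELECT TOP 25
    migs.avg_total_user_cost * migs.avg_user_impact
        * (migs.user_seeks + migs.user_scans) AS improvement_measure,
    mid.statement AS table_name,
    mid.equality_columns,
    mid.inequality_columns,
    mid.included_columns
FROM sys.dm_db_missing_index_group_stats AS migs
JOIN sys.dm_db_missing_index_groups AS mig
    ON migs.group_handle = mig.index_group_handle
JOIN sys.dm_db_missing_index_details AS mid
    ON mig.index_handle = mid.index_handle
ORDER BY improvement_measure DESC;
```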

If the query logic itself is inefficient, that requires application-level changes. That conversation is easier when you can show stakeholders concrete data from a query like this one.

Key Takeaways

  • Sorting by average CPU time per execution identifies structurally expensive queries, not just queries that happen to be running frequently right now.
  • The data comes from the plan cache and reflects activity since the plan was last compiled. A recent server restart or memory pressure can limit the data available.
  • The OBJECT_NAME() function in the script identifies whether a query originates from a stored procedure, function, or trigger, which helps narrow down where to make changes.
  • Filter results by st.dbid when working on a multi-database instance to focus on a specific database.
  • High average CPU time most commonly points to missing indexes, poor cardinality estimates, or inefficient query logic, all of which are diagnosable from the execution plan.

Need Help Investigating SQL Server CPU Problems?

A single query can point you in the right direction, but diagnosing and resolving CPU performance issues properly requires understanding the full picture: workload patterns, index coverage, plan stability, and hardware capacity.

DBA Services provides SQL Server health checks and managed DBA support for Australian organisations. If your SQL Server is consistently running hot and you're not sure why, our team can perform a thorough performance review and give you a prioritised remediation plan. Get in touch to find out how we can help.