Or how to efficiently get at old query plans
Performance Tuning Mission
While on a SQL Server 2008 Data Warehouse performance tuning mission, this aspect (or undocumented gotcha as it felt like at the time) of the query plan Dynamic Management Functions caused some head-scratching, until I sussed that:
A query plan can change without its plan_handle changing
Or, putting it in Data Warehouse speak:
query_plan in sys.dm_exec_query_plan(plan_handle) is a Type 1 slowly changing attribute with respect to the plan_handle key, which means query_plan changes overwrite history for the same plan_handle
This mattered since I was tracking down intermittent but significant slow-downs of the Data Warehouse load by sampling query plans for monitoring purposes – I suspected the culprit was intermittent query plan changes which meant that:
- Detecting when and how the query plans had changed was critical
- The intermittent nature of the symptoms meant that actual (as opposed to estimated) query plans were needed and that monitoring had to run for several days at a time
- Due to the storage size for query plans, and the desire to capture ‘all’ query plans, deduplicating the query plans (i.e. only storing changes) during sampling was required
- E.g. tracing ‘all’ query plans and detecting changes during analysis would have taken too much storage space, and made for very slow analysis
I needed to detect query plan changes, but as stated above, using only plan_handle changes to detect query_plan changes does not work. Let's demonstrate this.
T-SQL Script and Results
- Create a test table with an index in tempdb
- SELECT from the empty table
- Display the plan_handle for the SELECT that just executed
- INSERT 100000 records into the table
- Run exactly the same SELECT statement as above from the now populated table
- Display the plan_handle for the second SELECT that just executed
Do enable “Include Actual Execution Plan” before running the script.
```sql
USE tempdb
GO

CREATE TABLE dbo.TestQueryPlan
(
    Id  INT IDENTITY(1,1),
    Pad CHAR(1000)
)
GO

CREATE INDEX IX_TestQueryPlan_Id ON dbo.TestQueryPlan (Id)
GO

SELECT Id, Pad
FROM dbo.TestQueryPlan
WHERE Id <= 1
GO

SELECT plan_handle, text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle)
WHERE text LIKE '%TestQueryPlan%WHERE Id%'
  AND text NOT LIKE '%CROSS APPLY%'
GO

INSERT INTO dbo.TestQueryPlan
SELECT TOP 100000 'Pad me'
FROM master.sys.columns a
CROSS APPLY master.sys.columns b
GO

SELECT Id, Pad
FROM dbo.TestQueryPlan
WHERE Id <= 1
GO

SELECT plan_handle, text
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle)
WHERE text LIKE '%TestQueryPlan%WHERE Id%'
  AND text NOT LIKE '%CROSS APPLY%'
GO

DROP TABLE dbo.TestQueryPlan
GO
```

Here are the query plans for the two identical SELECT statements:
The query plans are quite different due to running SELECT on an empty vs. a populated table. Next up are the plan_handles:
We see that the plan_handles are identical, despite the query plans above being quite different, and therein lies the gotcha – I need a better way of detecting query plan changes.
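One way to see the plan itself change under an unchanged handle is to pull the query_plan XML alongside the plan_handle. A sketch, reusing the filter from the script above; running it before and after the INSERT returns the same plan_handle with different query_plan XML:

```sql
-- Fetch the cached plan XML together with its handle for the test SELECT.
SELECT cp.plan_handle, qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
CROSS APPLY sys.dm_exec_sql_text(cp.plan_handle) AS st
WHERE st.text LIKE '%TestQueryPlan%WHERE Id%'
  AND st.text NOT LIKE '%CROSS APPLY%'
```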
Also note that while the script uses ad-hoc SELECT queries, the same effect also applies to stored procedures.
BOL?
One thing that had led me astray was BOL in several places saying that plan_handle is unique, e.g. sys.dm_exec_query_plan notes that:
Each cached query plan is identified by a unique identifier called a plan handle.
and
plan_handle Uniquely identifies a query plan for a batch that is cached or is currently executing.
I initially took uniqueness to mean that if a query plan changes, its plan_handle will also change, which as shown above is not the case.
One way to look at it is that the BOL definition of plan_handle uniqueness:
- Only applies to currently cached or executing query plans
- Does not apply to plans that have been dropped from the cache
- Does not apply to plans that have finished executing and were never stored in the cache
Pros and Cons
Looking at plan_handle from other perspectives I can certainly see reasons why it is implemented this way:
- When a statement is recompiled and the query plan ends up different, SQL Server
- Has no use for the old plan and throws it away
- Does not have to update anything that pointed to the old plan using the plan_handle, since the plan_handle stays the same
- A plan_handle encodes many things (SET options, objectid…) within its 64 bytes, so it’s not an arbitrary identifier (see sys.dm_exec_plan_attributes)
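The encoded attributes can be inspected directly. A sketch using sys.dm_exec_plan_attributes, here limited to a few of the attributes mentioned above:

```sql
-- Show what each cached plan_handle encodes (SET options, object id, database id).
SELECT cp.plan_handle, pa.attribute, pa.value
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_plan_attributes(cp.plan_handle) AS pa
WHERE pa.attribute IN ('set_options', 'objectid', 'dbid')
```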
On the flip side, it’s a bit of a shame that despite a 64-byte ostensibly unique plan_handle, and the query_plan_hash from sys.dm_exec_query_stats (which is deliberately designed to produce the same value for similar plans, i.e. to generate duplicates), there is no obvious and fast way of knowing whether a query plan has changed. Tracing all query (re)compiles would provide the necessary data, but lightweight it is not.
A second aspect of this design is that there is no way to get hold of an old query plan from DMFs etc. after it has changed, making troubleshooting harder unless of course you were explicitly capturing query plans.
Way Forward
Realising the above, I calculated, stored, and compared my own hash of the query_plan attribute, i.e. hashing the full XML text. This guaranteed detecting even the smallest query plan change, and furthermore I only had to process the often large captured query_plans once, with subsequent comparisons done on the computed hashes.
Hashing was fairly easy since I already had a CLR hashing stored procedure that could handle large inputs.
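The shape of the dedupe-on-capture logic can be sketched in plain T-SQL. The table and column names below are hypothetical, and HASHBYTES stands in for the CLR routine – note that on SQL Server 2008 HASHBYTES only accepts up to 8000 bytes of input, which is precisely why a CLR hashing procedure was needed for large plans:

```sql
-- Hypothetical capture table: one row per distinct plan per plan_handle.
CREATE TABLE dbo.CapturedQueryPlan
(
    plan_handle     VARBINARY(64) NOT NULL,
    query_plan_hash BINARY(20)    NOT NULL,  -- SHA1 of the plan XML text
    query_plan      XML           NOT NULL,
    captured_at     DATETIME      NOT NULL DEFAULT GETDATE()
)
GO

-- Sampling pass: store a plan only if its hash is new for that handle,
-- so unchanged plans cost one hash comparison rather than an XML compare.
INSERT INTO dbo.CapturedQueryPlan (plan_handle, query_plan_hash, query_plan)
SELECT cp.plan_handle,
       HASHBYTES('SHA1', CONVERT(NVARCHAR(MAX), qp.query_plan)),
       qp.query_plan
FROM sys.dm_exec_cached_plans AS cp
CROSS APPLY sys.dm_exec_query_plan(cp.plan_handle) AS qp
WHERE qp.query_plan IS NOT NULL
  AND NOT EXISTS
      (SELECT 1
       FROM dbo.CapturedQueryPlan AS c
       WHERE c.plan_handle = cp.plan_handle
         AND c.query_plan_hash =
             HASHBYTES('SHA1', CONVERT(NVARCHAR(MAX), qp.query_plan)))
```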
Given thousands of query plans though, sometimes megabytes in size, it would still be preferable to have a more direct way of detecting query plan changes than hashing the full text or running a continuous trace – do let me know if you have any ideas on this.
query_plan hashing worked well for my requirements and sampling facility. Other approaches could also work well, especially for ad-hoc investigations (as opposed to longer-term monitoring), where the logged data would not have time to grow prohibitively large and the impact on the server could be kept short, including:
- Periodically dump the full query plans with context into tables, and check for changes (i.e. deduplicate) during analysis instead of during sampling
- Use SQL Profiler or a server-side trace to capture query plans via e.g. the Showplan XML Statistics Profile event, and again check for changes during analysis. Note that SQL Profiler in particular “can have a significant performance overhead”
- Use the new Extended Events in SQL Server Denali that include the actual query plan, although like tracing they carry the same “can have a significant performance overhead” caveat
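The last option can be sketched as an event session; this assumes Denali’s query_post_execution_showplan event and an event_file target, and the session name and file name are placeholders:

```sql
-- Sketch: capture actual (post-execution) query plans via Extended Events.
-- Carries the same performance-overhead caveat as tracing.
CREATE EVENT SESSION CaptureActualPlans ON SERVER
ADD EVENT sqlserver.query_post_execution_showplan
ADD TARGET package0.event_file (SET filename = N'CaptureActualPlans.xel')
GO

ALTER EVENT SESSION CaptureActualPlans ON SERVER STATE = START
```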
In Conclusion
- Long term logging of actual query plans is very useful for troubleshooting intermittent and unexpected query plan changes
- Do take monitoring duration, storage size and impact on the server into account when selecting how to collect query plans (e.g. Management Studio, SQL Profiler, server-side trace, deduplicated sampling as described here…)
- Be aware that query plans can change without their associated plan_handle changing, so use the full query_plan XML text to detect query plan changes, and
- Deduplicating query plans during sampling dramatically reduces storage space and simplifies analysis; do however use your own hash of the query_plan field to reduce the CPU processing required to detect changes
Hope it helps!
Kristian