By: Aaron Bertrand
Problem
For as long as I have worked with SQL Server, and on almost every system I've ever managed, tempdb has been a bottleneck in one way or another. Microsoft has made a lot of improvements over the years to help resolve bottlenecks with access to PFS, GAM, SGAM, and other resources in tempdb. In no particular order, a few of these enhancements include:
- Optimized number of data files during setup
- Eliminated the need for trace flags 1117 and 1118 by making their behavior the default for tempdb, helping with object allocation contention
- Eager writes to reduce impact of bulk operations
- Multiple optimizations to improve metadata contention
- Accelerated Database Recovery, which shifts version store overhead to user databases
While they have removed some of these bottlenecks, and generally this makes tempdb less of a performance drain, certain workloads may introduce bottlenecks elsewhere. One area where contention is still common, even after all of these improvements, is with tempdb system table metadata, where too many sessions are trying to write to system tables (like a heavy workload that creates, alters, and drops a lot of #temp tables).
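To give a sense of the pattern I mean, here is a minimal, hypothetical sketch of the kind of procedure that drives this contention when it is called from many concurrent sessions; every execution has to insert into and delete from several tempdb system tables just to manage the #temp table itself (the procedure name is made up for illustration):

-- hypothetical repro: creates, alters, and drops a #temp table on every call
CREATE OR ALTER PROCEDURE dbo.TempTableChurn
AS
BEGIN
  SET NOCOUNT ON;
  CREATE TABLE #w (id int NOT NULL, payload varchar(100));
  ALTER TABLE #w ADD dt datetime2 NOT NULL DEFAULT SYSDATETIME();
  INSERT #w (id, payload) VALUES (1, 'x');
  DROP TABLE #w;
END
GO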
Solution
Memory-Optimized TempDB Metadata is a new feature in SQL Server 2019, as part of the "In-Memory Database" feature umbrella. Much like "Always On" isn't a specific feature, In-Memory Database is a term describing the set of capabilities that allow you to take advantage of memory to help make databases faster.
Pam Lahoud describes the primary symptom of the tempdb system table metadata contention issue as follows (and you can get really heavy into the low-level details here):
… sessions in Suspended state with a wait type of PAGELATCH_xx and a wait resource of a page that hosts a TempDB system table, such as 2:1:118.
You can check for this contention using the following query, which limits the results to PAGELATCH_xx waits involving a specific set of tempdb system tables:
SELECT r.session_id, r.wait_type, r.wait_resource, r.command,
       [object] = OBJECT_NAME(p.[object_id], p.database_id)
FROM sys.dm_exec_requests AS r
CROSS APPLY sys.fn_PageResCracker(r.page_resource) AS pc
CROSS APPLY sys.dm_db_page_info(pc.[db_id], pc.[file_id], pc.page_id, 'DETAILED') AS p
WHERE UPPER(r.wait_type) LIKE '%PAGELATCH%'
  AND p.database_id = 2 -- tempdb
  AND p.[object_id] IN (3,9,34,40,41,54,55,60,74,75); -- what are these IDs? you'll see shortly
(On older versions of SQL Server, you won't be able to use functions like sys.dm_db_page_info. You will need to manually parse the wait_resource value, and then use either sys.dm_db_database_page_allocations or, if even older than that, DBCC PAGE and DBCC IND, in order to determine if your contention involves this same set of tables.)
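For example, if the wait resource is 2:1:118, the manual approach on an older version might look something like the following. DBCC PAGE is undocumented, so treat this as an illustrative sketch rather than a supported recipe:

DBCC TRACEON(3604);        -- route DBCC PAGE output to the current session
DBCC PAGE (2, 1, 118, 0);  -- database 2 (tempdb), file 1, page 118; note Metadata: ObjectId in the header
-- then translate that object ID into a name (75 here is just an example value):
SELECT OBJECT_NAME(75, 2);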
The feature provides a new instance-level setting that moves ten of the most commonly-used system tables in tempdb (corresponding to those object IDs listed above) to memory-optimized tables. This eliminates latching, substantially improving concurrency and resolving most or all of the waits observed in the query above. Note that you may still see similar types of waits associated with other system tables, or even permanent user tables in tempdb (which, being a system database, currently can't have memory-optimized user tables). Also note that this does not memory-optimize any of your own #temp tables or table variables; this feature affects metadata only.
You turn the feature on with an ALTER SERVER CONFIGURATION command, which requires a restart:
ALTER SERVER CONFIGURATION SET MEMORY_OPTIMIZED TEMPDB_METADATA = ON;
You can also use sp_configure:
EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE WITH OVERRIDE;

EXEC sys.sp_configure N'tempdb metadata memory-optimized', 1;
RECONFIGURE WITH OVERRIDE;
And you can check the current value in sys.configurations:
SELECT CASE WHEN value <> value_in_use THEN 'restart required' END, *
FROM sys.configurations
WHERE name = N'tempdb metadata memory-optimized';
Or with SERVERPROPERTY():
SELECT SERVERPROPERTY('IsTempdbMetadataMemoryOptimized');
You won't notice anything obvious after you restart, but you can see which system tables have been converted to memory-optimized with the following query:
SELECT t.[object_id], t.name
FROM tempdb.sys.all_objects AS t
INNER JOIN tempdb.sys.memory_optimized_tables_internal_attributes AS i
ON t.[object_id] = i.[object_id];
Results (descriptions added by me; some taken from the docs, but not all are documented, and not all were delivered with RTM):
object_id | name | delivered | description |
---|---|---|---|
3 | sysrscols | RTM | Stores column information like offsets, change frequency, types, and max in-row values. |
5 | sysrowsets | CU2 | Contains a row for each partition rowset for an index or a heap. |
7 | sysallocunits | CU2 | Contains a row for each storage allocation unit. |
9 | sysseobjvalues | RTM | Stores column information like default values. |
34 | sysschobjs | RTM | Stores a row for each object. |
40 | sysmultiobjvalues | RTM | Stores information about entities like column encryption keys. |
41 | syscolpars | RTM | Stores a row for each table/view column and procedure/function parameter. |
54 | sysidxstats | RTM | Stores a row for each index or statistic. |
55 | sysiscols | RTM | Stores a row for each persisted index and statistics column. |
60 | sysobjvalues | RTM | Stores a wide range of properties for entities. |
74 | syssingleobjrefs | RTM | Stores a row for N:1 dependencies (think tying partition schemes to tables). |
75 | sysmultiobjrefs | RTM | Stores a row for N:N dependencies (think sys.sql_dependencies). |
There isn't much other evidence that the feature has been enabled, other than the elimination of contention identified above, and overall snappier performance (Niko Neugebauer shows some observations here). I initially checked file and filegroup properties, half expecting tempdb to have started up with an additional, memory-optimized filegroup. This didn't happen because, as part of tempdb, which gets re-created on every service start, these system tables are now – by definition – non-durable tables. So, the durability provided by the memory-optimized filegroup you need to add to user databases is not needed here.
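If you want to see that for yourself, a quick check like this should still show only the regular entries, with no memory-optimized data filegroup appearing in tempdb:

SELECT name, type_desc FROM tempdb.sys.filegroups;     -- no MEMORY_OPTIMIZED_DATA filegroup
SELECT name, type_desc FROM tempdb.sys.database_files; -- just the usual ROWS and LOG files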
What's the catch?
This seems like a major win, and a no-brainer to turn on once you're on SQL Server 2019. So why isn't it enabled by default? Well, there are some limitations in the first implementation that you should be aware of:
- You have to be careful with transactions that involve memory-optimized tables in user databases; certain cross-database patterns can yield the following error (see the sketch after this list):
A user transaction that accesses memory optimized tables or natively compiled modules cannot access more than one user database or databases model and msdb, and it cannot write to master.
- Columnstore indexes are not supported on #temp tables when this feature is enabled. If you try, you will get the following error:
Columnstore index creation is not support in tempdb when memory-optimized metadata mode is enabled.
- A related issue is that sys.sp_estimate_data_compression_savings can't be used to estimate COLUMNSTORE or COLUMNSTORE_ARCHIVE compression (same error: Msg 11442). I think it's possible that we can write our own version of this procedure that doesn't use #temp tables, but I'll save that for another time.
- This shouldn't affect many people, but locking hints against any system views that reference these system tables will be ignored; effectively, they will use READ COMMITTED SNAPSHOT.
- Finally, this feature requires a service restart to take effect; you can't just decide in the middle of a business cycle that you're going to turn it on (or off). To avoid disruption, enabling or disabling needs to be planned to coincide with a failover or maintenance window. And there is no guarantee that this feature will improve performance for your workload, so justifying that disruption might require testing to prove that it will help.
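To illustrate the first limitation above, here is a minimal sketch of a pattern that can raise that cross-database error. It assumes a hypothetical user database containing a memory-optimized table named dbo.InMemoryOrders:

BEGIN TRANSACTION;
  -- touching tempdb system views now involves the memory-optimized metadata tables
  SELECT COUNT(*) FROM tempdb.sys.tables;
  -- accessing a memory-optimized table in a user database in the same
  -- transaction can then fail with the error quoted in the first bullet
  INSERT dbo.InMemoryOrders (OrderID) VALUES (1);
COMMIT TRANSACTION;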
Summary
If you have a tempdb-heavy workload, it is worth trying out this feature, since it should help. While there are some limitations, just remember that this is the very first iteration, and surely more will happen in releases to come (much like the very first iteration of Columnstore was not quite perfect). In Cumulative Update #2, they added two additional system tables that didn't make the initial cut, and they may add more in the future as new bottlenecks rise to the top. I have confidence that they will resolve the incompatibilities mentioned above, and hope that they may even expand this functionality to temp tables and table variables at some point in the future.
Next Steps
Read on for related tips and other resources:
- Tempdb Configuration Best Practices in SQL Server
- SQL Server Tempdb Usage and Bottlenecks tracked with Extended Events
- TEMPDB - Files and Trace Flags and Updates, Oh My!
- Breaking Down TempDB Contention (part 2)
- In-Memory Database
- Memory-Optimized TempDB Metadata (Microsoft Docs)
- Memory-Optimized TempDB Metadata (Niko Neugebauer)
- What's New in the First Public CTP of SQL Server 2019
- New Function in SQL Server 2019 - sys.dm_db_page_info