Memory-Optimized TempDB Metadata in SQL Server 2019

By: Aaron Bertrand   |   Comments (14)   |   Related: SQL Server 2019


Problem

For as long as I have worked with SQL Server, and on almost every system I've ever managed, tempdb has been a bottleneck in one way or another. Microsoft has made a lot of improvements over the years to help resolve bottlenecks with access to PFS, GAM, SGAM, and other resources in tempdb. In no particular order, a few of these enhancements include temp table caching, making trace flag 1117 and 1118 behavior the default for tempdb, and letting setup create multiple data files out of the box.

While they have removed some of these bottlenecks, and generally this makes tempdb less of a performance drain, certain workloads may introduce bottlenecks elsewhere. One area where contention is still common, even after all of these improvements, is with tempdb system table metadata, where too many sessions are trying to write to system tables (like a heavy workload that creates, alters, and drops a lot of #temp tables).
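
To picture the kind of workload that triggers it, here is a minimal sketch (the procedure and table names are hypothetical, purely for illustration) of a pattern that creates, alters, and drops a #temp table on every call. Run it from many concurrent sessions (ostress, SQLQueryStress, or your own harness) and you can watch the PAGELATCH_xx waits described below pile up:

CREATE OR ALTER PROCEDURE dbo.TempDemo_Churn -- hypothetical name
AS
BEGIN
  SET NOCOUNT ON;

  -- Creating the table and then altering it defeats temp table caching,
  -- which increases the metadata churn (and the contention).
  CREATE TABLE #t (id int NOT NULL, payload varchar(100));
  ALTER TABLE #t ADD created_at datetime2 NOT NULL DEFAULT SYSDATETIME();

  INSERT #t (id, payload) VALUES (1, 'x');

  DROP TABLE #t;
END
GO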

Solution

Memory-Optimized TempDB Metadata is a new feature in SQL Server 2019, as part of the "In-Memory Database" feature umbrella. Much like "Always On" isn't a specific feature, In-Memory Database is a term describing the set of capabilities that allow you to take advantage of memory to help make databases faster.

Pam Lahoud describes the primary symptom of the tempdb system table metadata contention issue as follows (and you can get really heavy into the low-level details here):

… sessions in Suspended state with a wait type of PAGELATCH_xx and
a wait resource of a page that hosts a TempDB system table, such as 2:1:118.

You can check for this contention using the following query, which limits the results to PAGELATCH_xx waits involving a specific set of tempdb system tables:

SELECT r.session_id, r.wait_type, r.wait_resource, r.command, 
    [object] = OBJECT_NAME(p.[object_id], p.database_id)
  FROM sys.dm_exec_requests AS r
  CROSS APPLY sys.fn_PageResCracker(r.page_resource) AS pc 
  CROSS APPLY sys.dm_db_page_info(pc.[db_id], pc.[file_id], pc.page_id, 'DETAILED') AS p
  WHERE UPPER(r.wait_type) LIKE '%PAGELATCH%'
    AND p.database_id = 2 -- tempdb
    AND p.[object_id] IN (3,9,34,40,41,54,55,60,74,75); -- what are these IDs? you'll see shortly

(On older versions of SQL Server, you won't be able to use functions like sys.dm_db_page_info. You will need to manually parse the wait_resource value, and then use either sys.dm_db_database_page_allocations or, if even older than that, DBCC PAGE and DBCC IND, in order to determine if your contention involves this same set of tables.)
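
If you're curious what that manual approach looks like, here is a rough sketch, assuming you've captured a wait_resource value such as 2:1:118 (db_id : file_id : page_id). Note that sys.dm_db_database_page_allocations is undocumented and the scan can be expensive, so treat this purely as a diagnostic aid:

DECLARE @wait_resource varchar(100) = '2:1:118'; -- example value from sys.dm_exec_requests

DECLARE @db_id   int = PARSENAME(REPLACE(@wait_resource, ':', '.'), 3),
        @file_id int = PARSENAME(REPLACE(@wait_resource, ':', '.'), 2),
        @page_id int = PARSENAME(REPLACE(@wait_resource, ':', '.'), 1);

-- SQL Server 2012 and up (undocumented): map the page back to an object.
SELECT [object] = OBJECT_NAME(pa.[object_id], @db_id)
  FROM sys.dm_db_database_page_allocations(@db_id, NULL, NULL, NULL, 'DETAILED') AS pa
  WHERE pa.allocated_page_file_id = @file_id
    AND pa.allocated_page_page_id = @page_id;

-- On versions older than that: DBCC TRACEON(3604); DBCC PAGE(2, 1, 118, 0);
-- then look for "Metadata: ObjectId" in the page header output.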

The feature provides a new instance-level setting that moves ten of the most commonly-used system tables in tempdb (corresponding to those object IDs listed above) to memory-optimized tables. This eliminates latching, substantially improving concurrency and resolving most or all of the waits observed in the query above. Note that you may still see similar types of waits associated with other system tables, or even permanent user tables in tempdb (which, being a system database, currently can't have memory-optimized user tables). Also note that this does not memory-optimize any of your own #temp tables or table variables; this feature affects metadata only.

You turn the feature on with an ALTER SERVER CONFIGURATION command, which requires a restart:

ALTER SERVER CONFIGURATION SET MEMORY_OPTIMIZED TEMPDB_METADATA = ON;

You can also use sp_configure:

EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE WITH OVERRIDE;
EXEC sys.sp_configure N'tempdb metadata memory-optimized', 1;
RECONFIGURE WITH OVERRIDE;

And you can check the current value in sys.configurations:

SELECT [status] = CASE WHEN value <> value_in_use THEN 'restart required' END, *
  FROM sys.configurations
  WHERE name = N'tempdb metadata memory-optimized';

Or with SERVERPROPERTY():

SELECT SERVERPROPERTY('IsTempdbMetadataMemoryOptimized');

You won't notice anything obvious after you restart, but you can see which system tables have been converted to memory-optimized with the following query:

SELECT t.[object_id], t.name
  FROM tempdb.sys.all_objects AS t
  INNER JOIN tempdb.sys.memory_optimized_tables_internal_attributes AS i
  ON t.[object_id] = i.[object_id];

Results (descriptions added by me; some taken from the docs, but not all are documented, and not all were delivered with RTM):

object_id  name               delivered  description
3          sysrscols          RTM        Stores column information like offsets, change frequency, types, and max in-row values.
5          sysrowsets         CU2        Contains a row for each partition rowset for an index or a heap.
7          sysallocunits      CU2        Contains a row for each storage allocation unit.
9          sysseobjvalues     RTM        Stores column information like default values.
34         sysschobjs         RTM        Stores a row for each object.
40         sysmultiobjvalues  RTM        Stores information about entities like column encryption keys.
41         syscolpars         RTM        Stores a row for each table/view column and procedure/function parameter.
54         sysidxstats        RTM        Stores a row for each index or statistic.
55         sysiscols          RTM        Stores a row for each persisted index and statistics column.
60         sysobjvalues       RTM        Stores a wide range of properties for entities.
74         syssingleobjrefs   RTM        Stores a row for N:1 dependencies (think tying partition schemes to tables).
75         sysmultiobjrefs    RTM        Stores a row for N:N dependencies (think sys.sql_dependencies).

There isn't much other evidence that the feature has been enabled, other than the elimination of the contention identified above and overall snappier performance (Niko Neugebauer shows some observations here). I initially checked file and filegroup properties, half expecting tempdb to have started up with an additional, memory-optimized filegroup. This didn't happen because tempdb is re-created on every service start, so these system tables are, by definition, non-durable; the durability provided by the memory-optimized filegroup you have to add to user databases simply isn't needed here.
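
If you want to check this for yourself, a quick query against the catalog shows tempdb still has only its ROWS filegroup, with no memory-optimized data (FX) filegroup added:

SELECT name, type, type_desc
  FROM tempdb.sys.filegroups;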

What's the catch?

This seems like a major win, and a no-brainer to turn on once you're on SQL Server 2019. So why isn't it enabled by default? Well, there are some limitations in the first implementation that you should be aware of:

  • You have to be careful with transactions that involve memory-optimized tables in user databases, because certain cross-database patterns can yield the following error:
Msg 41317, Level 16, State 1
A user transaction that accesses memory optimized tables or natively compiled modules cannot access more than one user database or databases model and msdb, and it cannot write to master.
  • Columnstore indexes are not supported on #temp tables when this feature is enabled. If you try (see the sketch after this list), you will get the following error:
Msg 11442, Level 16, State 1
Columnstore index creation is not support in tempdb when memory-optimized metadata mode is enabled.
  • A related issue is that sys.sp_estimate_data_compression_savings can't be used to estimate COLUMNSTORE or COLUMNSTORE_ARCHIVE (same error: Msg 11442). I think it's possible that we can write our own version of this procedure that doesn't use #temp tables, but I'll save that for another time.
  • This shouldn't affect many people, but locking hints against any system views that reference these system tables will be ignored; effectively, they will use READ COMMITTED SNAPSHOT.
  • Finally, this feature requires a service restart to take effect; you can't just decide in the middle of a business cycle that you're going to turn it on (or off). To avoid disruption, enabling or disabling needs to be planned to coincide with a failover or maintenance window. And there is no guarantee that this feature will improve performance for your workload, so justifying that disruption might require testing to prove that it will help.
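
To make the columnstore limitations above a little more concrete, here is a minimal sketch; the #temp table and the dbo.SomeUserTable target for the estimation procedure are hypothetical names used only for illustration:

-- With MEMORY_OPTIMIZED TEMPDB_METADATA = ON, the CREATE INDEX below fails
-- with Msg 11442 (it succeeds when the feature is off).
CREATE TABLE #cci_demo (id int NOT NULL, val varchar(32));
CREATE CLUSTERED COLUMNSTORE INDEX cci_demo ON #cci_demo;
DROP TABLE #cci_demo;

-- The same error surfaces from sys.sp_estimate_data_compression_savings for the
-- columnstore options, since it relies on #temp tables under the covers.
EXEC sys.sp_estimate_data_compression_savings
     @schema_name      = N'dbo',
     @object_name      = N'SomeUserTable',
     @index_id         = NULL,
     @partition_number = NULL,
     @data_compression = N'COLUMNSTORE';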

Summary

If you have a tempdb-heavy workload, it is worth trying out this feature, since it should help. While there are some limitations, just remember that this is the very first iteration, and surely more will happen in releases to come (much like the very first iteration of Columnstore was not quite perfect). In Cumulative Update #2, they added two additional system tables that didn't make the initial cut, and they may add more in the future as new bottlenecks rise to the top. I have confidence that they will resolve the incompatibilities mentioned above, and hope that they may even expand this functionality to temp tables and table variables at some point in the future.

Next Steps

Read on for related tips and other resources:






About the author
Aaron Bertrand (@AaronBertrand) is a passionate technologist with industry experience dating back to Classic ASP and SQL Server 6.5. He is editor-in-chief of the performance-related blog SQLPerformance.com, and also blogs at sqlblog.org.

This author pledges the content of this article is based on professional experience and not AI generated.




Comments For This Article




Thursday, May 20, 2021 - 10:51:27 PM - Marios Philippopoulos Back To Top (88724)
This works - accompanied by a service restart:

EXEC sys.sp_configure N'show advanced options', 1;
RECONFIGURE WITH OVERRIDE;
EXEC sys.sp_configure N'tempdb metadata memory-optimized', 1;
RECONFIGURE WITH OVERRIDE;


Thursday, May 20, 2021 - 7:45:06 PM - Marios Philippopoulos Back To Top (88723)
Hi Aaron,

Thanks for getting back to me. The error comes from executing the command in SSMS Query Text Editor - not from Intellisense.
I am looking into what you mentioned about "using database projects, and extended T-SQL verification? I suspect your error is coming from this, not from SQL Server." Is this on SSMS? I was not even aware of these settings. :-)

Thanks,
Marios

Thursday, May 20, 2021 - 12:40:10 PM - Aaron Bertrand Back To Top (88720)
And finally, just to be sure, are you getting this error when you *execute* the command, or is it just IntelliSense telling you the syntax is wrong in a tooltip? If the latter, just ignore it (or turn IntelliSense off). I see the same error message in a tooltip in Azure Data Studio, but the command *executes* just fine. The IntelliSense support isn't always in sync with language changes.

Thursday, May 20, 2021 - 12:38:04 PM - Aaron Bertrand Back To Top (88719)
Marios, also, are you using database projects, and extended T-SQL verification? I suspect your error is coming from this, not from SQL Server. If you turn this feature off in the project properties, it should bypass this check.

Thursday, May 20, 2021 - 12:04:27 PM - Aaron Bertrand Back To Top (88718)
Marios, this syntax is still working for me. Can you triple-check that you are running the command against the server you expect?

Thursday, May 20, 2021 - 10:31:44 AM - Marios Philippopoulos Back To Top (88717)
Hi, I am having a strange problem - when entering the below command I get error "Incorrect syntax near MEMORY_OPTIMIZED.":

ALTER SERVER CONFIGURATION SET MEMORY_OPTIMIZED TEMPDB_METADATA = ON;

I am on SQL version:
Microsoft SQL Server 2019 (RTM-CU9) (KB5000642) - 15.0.4102.2 (X64) Jan 25 2021 20:16:12 Copyright (C) 2019 Microsoft Corporation Developer Edition (64-bit) on Windows 10 Home 10.0 <X64> (Build 19041: )

SSMS: 18.9.1

Anyone else have this issue?

Thursday, February 4, 2021 - 3:17:59 PM - Dan Carollo Back To Top (88179)
Two other things I found broke when "tempdb metadata memory-optimized" is set to ON.

1) CROSS APPLY using sys.dm_db_stats_properties
2) Running sp_updatestats on a database containing memory-optimized tables


REPRO:

USE <DatabaseName> -- Where database is enabled for memory-optimized
SELECT
s.object_id,
s.stats_id,
s.name as StatsName,
sp.rows,
sp.rows_sampled,
sp.modification_counter,
sp.last_updated,
partition_number = 1,
auto_created,
no_recompute,
is_incremental = 0
FROM
sys.stats s
CROSS APPLY
sys.dm_db_stats_properties(s.[object_id], s.stats_id) sp
where OBJECTPROPERTY(s.object_id, 'IsMSShipped') = 0

RESULTS: The Query actually returns results, but throws this error message:

Msg 41317, Level 16, State 0, Line 7
A user transaction that accesses memory optimized tables or natively compiled modules cannot access more than one user database or databases model and msdb, and it cannot write to master.

The problem happens only with the CROSS APPLY. You can query the function just fine if you pass in an object_id and stats_id directly.


SECOND REPRO:

USE <DatabaseName> -- Where database is enabled for memory-optimized
GO
EXEC sp_updatestats

Updating [dbo].[TestTable_IM_DATA]
Msg 41317, Level 16, State 0, Procedure sp_updatestats, Line 122 [Batch Start Line 93]
A user transaction that accesses memory optimized tables or natively compiled modules cannot access more than one user database or databases model and msdb, and it cannot write to master.

Monday, November 18, 2019 - 4:16:50 PM - ob Back To Top (83127)

@aaron

By the time I see this error in the morning, the page is gone from tempdb, I can't catch it. But guessing by the page number it is not PFS-GAM-SGAM

ob


Monday, November 18, 2019 - 11:13:42 AM - Aaron Bertrand Back To Top (83124)

@ob that doesn't seem to be the specific contention this should help - did you look up what object page 3302 belongs to in your case?


Sunday, November 17, 2019 - 8:43:45 PM - ob Back To Top (83119)

I am not creating/dropping temp objects, I am spilling into tempdb. 2 queries, 100 GB. tempdb is on SSDs

I am getting latch error.

A time-out occurred while waiting for buffer latch -- type 2, bp 000000054C6A0000, page 6:3302, stat 0x40d, database id: 2, allocation unit Id: 71920605853843456, task 0x00000004E901D088 : 0, waittime 300 seconds, flags 0x1a, owning task 0x00000004E901D088. Not continuing to wait.

page #3302 is not a system page (or is it?)

Will my latching errors be fixed by upgrade to SQL 2019?


Monday, November 11, 2019 - 1:11:30 PM - Aaron Bertrand Back To Top (83057)

@Nick yes, I agree, I talked about that a bit here:

https://sqlperformance.com/2019/11/sql-server-2019/thoughts-on-editions


Monday, November 11, 2019 - 10:15:44 AM - Aaron Bertrand Back To Top (83056)

@Zach Note that this isn't anything involving user memory-optimized queries unless you are actively joining against these system tables. This is just metadata tables being pushed in-memory, and they don't require a memory-optimized filegroup or any sort of durability, so I don't think you'll need the trace flag to avoid errors here. (I'm not saying the errors are a good thing, I just don't think turning this feature on is the same as setting up your own in-mem tables and doing who knows what with them from your app.)


Monday, November 11, 2019 - 10:03:32 AM - Nick Colebourn Back To Top (83055)

One significant limitation...Enterprise Edition only... :( 


Monday, November 11, 2019 - 9:23:52 AM - Zach Back To Top (83053)

Gonna have to be a hard pass for me until they figure out the random errors like this, which only go away if I use an undocumented trace flag when a connection that serviced an MOT is reused. It's in 2016 and 2017. Wonder if it has joined us for a third round?

"The Database Engine received a floating point exception from the operating system while processing a user request. Try the transaction again. If the problem persists, contact your system administrator."














