SQL Server In-Memory Tables Impact on Startup and Recovery



Problem

By now we've all read about the performance improvements that can be gained by using the new SQL Server 2014 In-Memory OLTP feature, but are there any drawbacks? One possibility I considered was how using this feature could affect instance startup/recovery duration: assuming we are not using the "DURABILITY=SCHEMA_ONLY" option, SQL Server has to spend some amount of time loading this data back into memory.

Solution

In order to test this theory we'll need to create a table that uses this new feature, but before we can create it we'll need to add a filegroup to the database to store the In-Memory data, as outlined in this tip.
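For reference, a minimal sketch of that filegroup setup might look like the following (the database name and file path are assumptions; adjust them for your environment).

-- Add a memory-optimized filegroup and a container to hold the In-Memory data
ALTER DATABASE test ADD FILEGROUP test_inmem_fg CONTAINS MEMORY_OPTIMIZED_DATA;
GO
ALTER DATABASE test ADD FILE (NAME = 'test_inmem_data', FILENAME = 'C:\SQLData\test_inmem_data')
    TO FILEGROUP test_inmem_fg;
GO

Once that is done we can use the following T-SQL to create our In-Memory table.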

-- Table creation logic for in-memory storage                        
CREATE TABLE testtable_inmemory ([col1] [int] NOT NULL primary key nonclustered,
                        [col2] [int] NULL,
                        [col3] [int] NULL,
                        [col4] [varchar](50) NULL) WITH (MEMORY_OPTIMIZED=ON);

Our test scenario will consist of multiple iterations of loading data into the table and restarting the instance. We'll also perform a manual CHECKPOINT before each restart to ensure all of the data is written to the memory-optimized filegroup for this object, so no extra time is spent on startup reading the transaction log. We'll run another small test at the end of this tip to see if there is a difference without the CHECKPOINT. Our first iteration will just be an instance restart to get our baseline. For each subsequent iteration we will update the code below to add 2,500,000 more records to the table. For example, the second iteration will have "SELECT @val=2500001" and "WHILE @val <= 5000000". Here is the T-SQL to perform these tasks.

-- Load 2,500,000 rows into the In-Memory table (iteration 1)
DECLARE @val INT
SELECT @val=1
WHILE @val <= 2500000
BEGIN  
   INSERT INTO testtable_inmemory (col1, col2, col3, col4) 
       VALUES (@val,@val % 10,@val,'TEST' + CAST(@val AS VARCHAR))
   SELECT @val=@val+1
END
GO

-- Flush the In-Memory data to the checkpoint files so startup doesn't replay the log
CHECKPOINT
GO
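Before restarting, we can confirm that the CHECKPOINT produced durable checkpoint files for the In-Memory data. Here is a quick sketch using the sys.dm_db_xtp_checkpoint_files DMV (column names are from the SQL Server 2014 version of the view):

-- Inspect the data/delta checkpoint files backing the memory-optimized data
SELECT file_type_desc, state_desc, file_size_in_bytes
FROM sys.dm_db_xtp_checkpoint_files;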

After each iteration we can see how long the database took to come online by looking at the SQL Server Error Log. There you should see a line similar to the following.

Recovery completed for database test (database ID 5) in 1 second(s) (analysis 3 ms, redo 2 ms, undo 1833 ms.)
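Rather than scrolling through the log, you can search it with the undocumented but widely used xp_readerrorlog procedure; a minimal sketch, assuming the database is named test:

-- Search the current error log (0) of type SQL Server error log (1)
-- for recovery messages mentioning our test database
EXEC master.dbo.xp_readerrorlog 0, 1, N'Recovery completed', N'test';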

Now that we have our test scenario defined we can execute it and take a look at the results.

Table Record Count | Data Size (KB) | Index Size (KB) | Instance Start Duration (sec)
                 0 |              0 |               0 |  1
         2,500,000 |        156,249 |          31,407 |  6
         5,000,000 |        311,499 |          62,973 | 11
         7,500,000 |        468,749 |          94,382 | 17
        10,000,000 |        624,999 |         125,824 | 21

It's easy to see from these results that as the table gets larger it takes longer for SQL Server to bring the database online. The data shows an almost linear relationship between table size and startup/recovery duration.
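The original tip doesn't show how the data and index sizes were captured; one plausible way (an assumption on my part) is the memory-optimized table DMV, which reports consumption in KB:

-- Memory consumed by the table rows and by its indexes, in KB
SELECT OBJECT_NAME(object_id) AS table_name,
       memory_used_by_table_kb,
       memory_used_by_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats
WHERE object_id = OBJECT_ID('testtable_inmemory');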

As mentioned above, let's perform a similar test, this time without the CHECKPOINT, and see if there is any difference in our results. We'll again start with an empty table, but this time we'll load 2,000,000 records. Here is the T-SQL for these steps.

-- Table creation logic for in-memory storage                        
CREATE TABLE testtable_inmemory ([col1] [int] NOT NULL primary key nonclustered,
                        [col2] [int] NULL,
                        [col3] [int] NULL,
                        [col4] [varchar](50) NULL) WITH (MEMORY_OPTIMIZED=ON);
GO

-- Load 2,000,000 rows; note that no CHECKPOINT is issued this time
DECLARE @val INT
SELECT @val=1
WHILE @val <= 2000000
BEGIN
   INSERT INTO testtable_inmemory (col1, col2, col3, col4)
       VALUES (@val,@val % 10,@val,'TEST' + CAST(@val AS VARCHAR))
   SELECT @val=@val+1
END
GO

We'll now perform our restart and record the same metrics as above. We'll also include the size of the transaction log because, even though this database uses the simple recovery model and our transactions have committed, durable In-Memory tables still write to the transaction log, and without a checkpoint that log must be processed on startup. To check transaction log usage we can use the DBCC SQLPERF(LOGSPACE) command.
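For reference, the command takes no other arguments and reports on every database on the instance:

-- Report transaction log size (MB) and percent used for all databases
DBCC SQLPERF(LOGSPACE);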

Table Record Count | Data Size (KB) | Index Size (KB) | Log Size (MB) | Log Used % | Instance Start Duration (sec)
         2,000,000 |        124,999 |          25,151 |           984 |         99 | 69

Now let's perform a CHECKPOINT and another instance restart and see if our results go back to how they were during our original test. The results below confirm that this is the case.

Table Record Count | Data Size (KB) | Index Size (KB) | Log Size (MB) | Log Used % | Instance Start Duration (sec)
         2,000,000 |        124,999 |          25,151 |           984 |          8 | 5

Summary

My tests were performed on a standard desktop machine; if you are running higher-end, enterprise-level hardware your numbers will most likely differ from those above. Regardless, it's important to note that this new feature does have an effect on startup/recovery duration, and you should take this into account when deciding between In-Memory tables and the original disk-based table structure.


About the author
Ben Snaidero has been a SQL Server and Oracle DBA for over 10 years and focuses on performance tuning.





Comments For This Article




Wednesday, October 8, 2014 - 7:30:40 AM - yunus

Our test:

Table Record Count | Data Size (KB) | Index Size (KB) | Instance Start Duration (sec)
                 0 |              0 |               0 | 1
         2,500,000 |         16,212 |          21,701 | 2
         5,000,000 |        294,199 |          75,512 | 7
         7,500,000 |        391,859 |         102,544 | 9

Yunus Karadağ


Friday, October 3, 2014 - 11:02:53 AM - Tariq

Hey Ben

Nice article. Read your performance impact article too. Good work.

Thanks

Tariq
