Microsoft SQL Server 2014 In-Memory OLTP: How fast is it?



Problem

In-Memory OLTP is a new feature of Microsoft SQL Server 2014 that was born in response to business and hardware trends. But how fast is it? How can we use it to optimize OLTP transactions? Why is it faster than transactions on disk-based tables?

Solution

Microsoft SQL Server 2014 In-Memory OLTP offers high performance. It was built to take full advantage of the memory and processor cores installed on the database server, it is oriented toward OLTP scenarios, and it is fully integrated into SQL Server. Memory-optimized tables, indexes, and natively compiled stored procedures are now available for high-throughput OLTP, and transactions on memory-optimized tables are also introduced with this new feature. In this tip I will walk through a short example that shows the performance difference for INSERT transactions on memory-optimized tables vs. disk-based tables. Note that the tests were conducted on a virtual machine with 4.5GB RAM and 4 cores.

For our example, we will create one In-Memory stored procedure and table versus one disk-based stored procedure and table. We will start by creating an In-Memory database, which needs at least one memory-optimized filegroup declared with the CONTAINS MEMORY_OPTIMIZED_DATA option; that filegroup holds one or more containers for data files, delta files, or both. We can only create one memory-optimized filegroup per database. In our example the memory-optimized filegroup is named "INMEMORYDB_FILEGROUP" and it holds only one container, named "INMEMORYDB_CONTAINER".

CREATE DATABASE [INMEMORYDB]
CONTAINMENT = NONE
ON PRIMARY
(NAME = N'INMEMORYDB', FILENAME = N'D:\SQLDATA\INMEMORYDB_Data01.mdf',
 SIZE = 1GB, MAXSIZE = UNLIMITED, FILEGROWTH = 2GB),
FILEGROUP [INMEMORYDB_FILEGROUP] CONTAINS MEMORY_OPTIMIZED_DATA
(NAME = N'INMEMORYDB_CONTAINER', FILENAME = N'D:\INMEMORYDB_CONTAINER', MAXSIZE = UNLIMITED)
LOG ON
(NAME = N'INMEMORYDB_log', FILENAME = N'E:\SQLLog\INMEMORYDB_Log.ldf',
 SIZE = 1GB, MAXSIZE = 2048GB, FILEGROWTH = 1GB)
GO
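
If you want to verify that the memory-optimized filegroup was created correctly (a quick check that is not part of the original tip), it shows up in sys.filegroups with type 'FX':

USE INMEMORYDB
GO
-- The memory-optimized filegroup is reported as MEMORY_OPTIMIZED_DATA_FILEGROUP
SELECT name, type, type_desc FROM sys.filegroups
GO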

Now we will create the In-Memory table Product_InMemory and then the on-disk table Product_OnDisk as follows:

CREATE TABLE [dbo].[Product_InMemory]
([ProductID] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY NONCLUSTERED HASH WITH (BUCKET_COUNT=3000000),
 [ProductName] [nvarchar](40) NOT NULL,
 [SupplierID] [int] NOT NULL INDEX [IX_SupplierID] HASH WITH (BUCKET_COUNT=3000000),
 [CategoryID] [int] NOT NULL INDEX [IX_CategoryID] HASH WITH (BUCKET_COUNT=3000000),
 [QuantityPerUnit] [nvarchar](20) NULL,
 [UnitPrice] [money] NULL  DEFAULT ((0)),
 [UnitsInStock] [smallint] NULL DEFAULT ((0)),
 [UnitsOnOrder] [smallint] NULL  DEFAULT ((0)),
 [ReorderLevel] [smallint] NULL  DEFAULT ((0)),
 [Discontinued] [bit] NOT NULL  DEFAULT ((0)) 
)
 WITH (MEMORY_OPTIMIZED=ON)
 GO
 
CREATE TABLE [dbo].[Product_OnDisk]
([ProductID] [int] IDENTITY(1,1) NOT NULL PRIMARY KEY CLUSTERED ,
 [ProductName] [nvarchar](40) NOT NULL,
 [SupplierID] [int] NOT NULL INDEX [IX_SupplierID] NONCLUSTERED ,
 [CategoryID] [int] NOT NULL INDEX [IX_CategoryID] NONCLUSTERED,
 [QuantityPerUnit] [nvarchar](20) NULL,
 [UnitPrice] [money] NULL  DEFAULT ((0)),
 [UnitsInStock] [smallint] NULL DEFAULT ((0)),
 [UnitsOnOrder] [smallint] NULL  DEFAULT ((0)),
 [ReorderLevel] [smallint] NULL  DEFAULT ((0)),
 [Discontinued] [bit] NOT NULL  DEFAULT ((0)) 
)

As you can see in the script above, the table Product_InMemory must be marked as In-Memory using the "MEMORY_OPTIMIZED" clause; since no DURABILITY option is specified, it defaults to SCHEMA_AND_DATA, so the data is fully durable. The primary key is declared NONCLUSTERED because clustered indexes are not supported on memory-optimized tables. You can also see we have created three HASH indexes, a new type of In-Memory OLTP index used to optimize the performance of point lookups on memory-optimized tables. To learn more about indexing, check out these two tips: Getting started with Indexes on SQL Server Memory Optimized Tables and Understanding SQL Server Memory-Optimized Tables Hash Indexes.
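
A note on the BUCKET_COUNT values: each hash index pre-allocates its buckets, so the count should be sized close to the expected number of unique key values. Here 3,000,000 buckets is generous for ProductID, but SupplierID and CategoryID only ever hold 10 and 8 distinct values in this test, so those indexes will have long hash chains regardless. After loading data, you can sanity-check the choice with the sys.dm_db_xtp_hash_index_stats DMV; this query is a quick sketch, not part of the original tip:

-- Long average chains mean too few buckets (or few distinct key values);
-- mostly empty buckets mean the BUCKET_COUNT is oversized.
SELECT OBJECT_NAME(s.object_id) AS table_name,
       i.name AS index_name,
       s.total_bucket_count,
       s.empty_bucket_count,
       s.avg_chain_length,
       s.max_chain_length
FROM sys.dm_db_xtp_hash_index_stats AS s
JOIN sys.indexes AS i
  ON i.object_id = s.object_id AND i.index_id = s.index_id;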

Now we will create the In-Memory OLTP stored procedure usp_ProductInsert_InMemory to insert data into the In-Memory table Product_InMemory. This stored procedure uses new syntax: the NATIVE_COMPILATION clause marks the object to be natively compiled and loaded into memory. It must also be bound to the schema of the objects it references using the SCHEMABINDING clause, and it must declare a default execution context ("EXECUTE AS OWNER" in this example; natively compiled procedures do not support EXECUTE AS CALLER). Finally, the body must be an atomic block with two required options: the transaction isolation level and the language. In our example we use SNAPSHOT as the isolation level and "us_english" as the language. You can read more about these options in these two tips - Migrate to Natively Compiled SQL Server Stored Procedures for Hekaton and SQL Server 2014 In Memory OLTP Durability, Natively Compiled Stored Procedures and Transaction Isolation Level.

--In-Memory OLTP Stored Procedure to modify In-Memory Table
CREATE PROCEDURE dbo.usp_ProductInsert_InMemory(@InsertCount int )
WITH NATIVE_COMPILATION, SCHEMABINDING, EXECUTE AS OWNER
AS
BEGIN ATOMIC
WITH (TRANSACTION ISOLATION LEVEL=SNAPSHOT, LANGUAGE=N'us_english')

DECLARE @SupplierID int=1, @CategoryID int=1
DECLARE @Start int=1

WHILE @Start <= @InsertCount
 BEGIN
   INSERT INTO dbo.Product_InMemory(ProductName, SupplierID, CategoryID, 
 QuantityPerUnit, UnitPrice, UnitsInStock, UnitsOnOrder, ReorderLevel, Discontinued ) 
   SELECT 'NewProduct_' + CAST(@Start AS VARCHAR(10)),@SupplierID,@CategoryID, 1,1,1,1,1,1 

   IF @SupplierID=10 SET @SupplierID=0
   IF @CategoryID=8 SET @CategoryID=0

   SET @SupplierID=@SupplierID+1
   SET @CategoryID=@CategoryID+1

   SET @Start=@Start+1
 END
END

Next we create the disk-based stored procedure, named usp_ProductInsert_OnDisk, as follows:

-- Disk-based stored procedure to modify the disk-based table
CREATE PROCEDURE dbo.usp_ProductInsert_OnDisk(@InsertCount int )
AS
BEGIN 

DECLARE @SupplierID int=1, @CategoryID int=1
DECLARE @Start int=1

WHILE @Start <= @InsertCount
 BEGIN
   INSERT INTO dbo.Product_OnDisk(ProductName, SupplierID, CategoryID, 
 QuantityPerUnit, UnitPrice, UnitsInStock, UnitsOnOrder, ReorderLevel, Discontinued ) 
   SELECT 'NewProduct_' + CAST(@Start AS VARCHAR(10)),@SupplierID,@CategoryID, 1,1,1,1,1,1 

   IF @SupplierID=10 SET @SupplierID=0
   IF @CategoryID=8 SET @CategoryID=0

   SET @SupplierID=@SupplierID+1
   SET @CategoryID=@CategoryID+1

   SET @Start=@Start+1
 END
END
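
With both stored procedures in place, we can execute and time them. The exact execution script is not reproduced in this tip, so the following is a minimal harness consistent with the test described below (1,000,000 rows per procedure):

SET STATISTICS TIME ON
SET STATISTICS IO ON
GO
EXEC dbo.usp_ProductInsert_InMemory @InsertCount = 1000000
GO
EXEC dbo.usp_ProductInsert_OnDisk @InsertCount = 1000000
GO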

Now it's time to execute both stored procedures and record the performance metrics. As you can see in the next screenshot, the In-Memory stored procedure (usp_ProductInsert_InMemory) took just five seconds to insert 1,000,000 rows into the In-Memory table Product_InMemory. This is because the objects and the processing live in memory: the inserts never touch data pages on disk, so there are no latches, no locks, minimal context switching and no page contention (log writes still occur for a durable table, but they are much more efficient than for disk-based tables). I used SET STATISTICS IO ON to display disk activity, and you can see there is no disk I/O reported, only CPU activity. The stored procedure and table are loaded entirely in memory and everything works directly in memory.

[Screenshot: execution of usp_ProductInsert_InMemory - 1,000,000 rows inserted in 5 seconds with no disk I/O reported]
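
All of this in-memory speed is paid for in RAM (recall the test VM has only 4.5GB), so it is worth checking how much memory the table and its indexes actually consume. A quick way to do that, again not part of the original test, is the sys.dm_db_xtp_table_memory_stats DMV:

-- Memory consumed by the memory-optimized table and its indexes, in KB
SELECT OBJECT_NAME(object_id) AS table_name,
       memory_used_by_table_kb,
       memory_used_by_indexes_kb
FROM sys.dm_db_xtp_table_memory_stats
WHERE object_id = OBJECT_ID(N'dbo.Product_InMemory')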

You can see in the next screenshot that the execution of the disk-based stored procedure (usp_ProductInsert_OnDisk) took 9 minutes and 13 seconds to insert 1,000,000 rows into the disk-based table Product_OnDisk. Why is it so much slower? Each INSERT operates on disk-based structures, so it has to contend with latches, locking, logging and other contention issues on its way to disk.

[Screenshot: execution of usp_ProductInsert_OnDisk - 1,000,000 rows inserted in 9 minutes and 13 seconds]
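
If you want to see this contention for yourself (this check is not part of the original test), snapshot the cumulative wait statistics before the disk-based run and compare them afterward; the latch and log waits stand out:

-- Run before and after usp_ProductInsert_OnDisk and compare the deltas
SELECT wait_type, waiting_tasks_count, wait_time_ms
FROM sys.dm_os_wait_stats
WHERE wait_type IN (N'PAGELATCH_EX', N'PAGEIOLATCH_EX', N'WRITELOG')
ORDER BY wait_time_ms DESC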

Based on this example, we can see that In-Memory OLTP objects can be far more efficient than disk-based objects: here the difference was roughly 100x (5 seconds versus 9 minutes and 13 seconds). Keep in mind that a tight single-row insert loop is close to a best-case scenario for this feature; for more typical OLTP workloads, improvements in the 10-30x range are a more realistic expectation.


About the author
Percy Reyes is a SQL Server MVP and Sr. Database Administrator focused on SQL Server internals, with more than 10 years of extensive experience managing critical database servers.

This author pledges the content of this article is based on professional experience and not AI generated.




Comments For This Article




Friday, June 12, 2015 - 11:41:30 PM - Percy Reyes

In-memory versions will offer better performance in the vast majority of cases, especially when you work with big tables.


Tuesday, March 3, 2015 - 3:56:07 PM - Mike Bailey

I tried the examples on SQL 2014 RTM, inserted 1 million rows into both tables, and ran the queries below. These are the results on a machine with 4 GB of memory.

--about 750-800 milliseconds consistently (as measured in the SQL Profiler duration column)
SELECT CategoryID, count(*) FROM [Product_InMemory]
GROUP BY CategoryID

--700 milliseconds the first time (or after dbcc dropcleanbuffers)
--about 290 milliseconds thereafter (once data pages got cached)
SELECT CategoryID, count(*) FROM [Product_OnDisk]
GROUP BY CategoryID

So it looks like while the in-memory optimization gives great performance in some OLTP scenarios, aggregations and operations involving partial or full scans of the tables will perform similarly or better with disk-based tables (b-tree index).

Percy, once the disk table got cached in memory, was the disk-based table query performance comparable to, better than, or worse than the in-memory version? Thanks.

Monday, December 22, 2014 - 1:21:46 PM - SAMSON

Very interesting article, Percy, and the best clarification. My doubt is about memory: when we create our objects in memory, won't the size of the objects affect our memory capacity? And what happens to these two tables, the one on disk and the one in memory, during a disaster recovery (restore) process? Last but not least, when I started trying your example I got this error:

Msg 41356, Level 15, State 1, Line 5
Filegroups with MEMORY_OPTIMIZED_DATA can only be created in 64-bit installations of SQL Server.

Thanks


Monday, November 10, 2014 - 5:21:19 PM - AZJim

Percy ... I think you are comparing apples and oranges.

To compare in-memory tables you must compare them to static tables which have been preloaded, because you want to compare what you are already running against the purported gains of in-memory tables. After preloading the static tables, run your production scripts several times and then compare. This is how an OLTP system runs (with consideration for the least-recently-used algorithm for buffer pool management).

This comparison does not convince me to migrate to SS2014. In fact, I am thinking that Microsoft has something to hide.


Friday, November 7, 2014 - 10:35:52 AM - Additional Notes

Thank you for the initial results; however, it's time to go further.

"Note that the tests were conducted on a virtual machine with 4.5GB RAM and 4 cores"

Regrettably, your tests are not rigorous enough to draw any conclusions from other than that it is possible for in-memory tables to be faster in some cases.  There was no information on the disk IO characteristics, much less an effort made to provide disk storage of comparable performance per cost to RAM such as good SSD storage, which is the natural alternative unless there's extra RAM lying around or the tables are very small.

Further, the indexes apart from the primary keys are on extremely low selectivity columns (based on the tiny range in the insert loop), which will lead to excessive page splits on the disk based table, as well as make the indexes of limited value in the first place.

Equally important, a while loop to insert rows one at a time, as other comments have mentioned, is an extremely poor test unless you have some process doing exactly that already that you wanted to do a comparison on; it's not a good general case test.

Thursday, November 6, 2014 - 6:18:09 PM - Percy Reyes

Hi Asif Syed,

The In-Memory OLTP feature is oriented toward, and recommended for, intensive OLTP workloads. It is not recommended for OLAP.

For OLAP scenarios you can opt to use in-memory columnstore indexes; you can start by testing with a clustered columnstore index, which is updatable in SQL Server 2014.

Let me know if you have any more questions or comments.

Thanks for your comment!


Thursday, November 6, 2014 - 2:01:07 AM - Asif Syed

Very nice article. Thank you very much for sharing this information.

Herewith I want to know how this new feature will be helpful for my scenario.

My scenario is that I need to pull thousands of rows into reports. Sometimes I need to pull data as different metrics with multiple dimensions.

Will this feature be helpful for me? Please guide.

Thanks

Asif.S


Wednesday, November 5, 2014 - 11:54:17 AM - Percy Reyes

Hi Slawek Guzek - It is just an example to show how the absence of latches, locks, logging overhead and other contention issues can greatly improve the performance of simple, ordinary modification operations.

In any case, you can insert 13 million (or more) rows into the tables Product_InMemory and Product_OnDisk, and then execute the following:

--- 2 seconds.
SELECT CategoryID, count(*) FROM [Product_InMemory]
GROUP BY CategoryID

-- more than 15 or 20 seconds.
SELECT CategoryID, count(*) FROM [Product_OnDisk]
GROUP BY CategoryID

Thanks for your comment!


Wednesday, November 5, 2014 - 9:52:02 AM - Percy Reyes

Hi mariachi202, you are right, it was a typo. It will be fixed. Thanks!


Wednesday, November 5, 2014 - 7:20:58 AM - Slawek Guzek

This, as well as all the other tests posted so far here, tests and compares performance in a pretty unrealistic scenario - millions of inserts in a loop. Is it really a good and solid basis from which to derive such a big statement - 30 times more efficient?


Wednesday, November 5, 2014 - 4:30:26 AM - mariachi202

I do believe you need a NONCLUSTERED keyword in your primary key creation:

PRIMARY KEY NONCLUSTERED HASH WITH...

Clustered indexes are not supported with HASH.
