Comparing Performance of In Memory table vs Regular Disk Table Part 2



Problem

If you read my last tip, you saw the performance benefit that can be gained by using the new In-Memory OLTP feature available in SQL Server 2014. This tip extends that test with some additional options available with the In-Memory OLTP feature to see if we can realize any further improvement.

Solution

For this performance test we are going to look at a couple of options available when using the In-Memory OLTP feature. The first is the DURABILITY option of the CREATE TABLE statement, which gives us the option of not persisting the data in our table. The second is accessing the table using natively compiled stored procedures which, as the name suggests, access the table using compiled code rather than interpreting the T-SQL statements at run time. A good overview of both of these options can be found here.
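One prerequisite worth noting: memory-optimized tables require the database to have a filegroup that CONTAINS MEMORY_OPTIMIZED_DATA. A minimal sketch, assuming a database named TestDB and a local container path (both placeholders, not part of the original test setup):

```sql
-- Placeholder database name and file path; adjust for your environment
ALTER DATABASE TestDB
    ADD FILEGROUP TestDB_InMemory CONTAINS MEMORY_OPTIMIZED_DATA;
GO
ALTER DATABASE TestDB
    ADD FILE (NAME = 'TestDB_InMemory_Container',
              FILENAME = 'C:\SQLData\TestDB_InMemory')
    TO FILEGROUP TestDB_InMemory;
GO
```

Without this filegroup, the CREATE TABLE statements below will fail.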

Sample Table Setup

Let's use the same table schema I used in my previous tip. This time, though, we'll create two copies of each table so that we can execute the same statements with the same parameters for both the interpreted T-SQL statements and the natively compiled stored procedures. Here is the T-SQL to create these tables.

-- Table creation logic for in-memory storage                        
CREATE TABLE testtable_inmemory ([col1] [int] NOT NULL primary key nonclustered,
                        [col2] [int] NULL,
                        [col3] [int] NULL,
                        [col4] [varchar](50) NULL) WITH (MEMORY_OPTIMIZED=ON);


-- Table creation logic for in-memory storage with data not persisted                     
CREATE TABLE testtable_inmemoryschemaonly ([col1] [int] NOT NULL primary key nonclustered,
                        [col2] [int] NULL,
                        [col3] [int] NULL,
                        [col4] [varchar](50) NULL) WITH (MEMORY_OPTIMIZED=ON, DURABILITY = SCHEMA_ONLY);

-- Table creation logic for second in-memory storage                        
CREATE TABLE testtable_inmemory2 ([col1] [int] NOT NULL primary key nonclustered,
                        [col2] [int] NULL,
                        [col3] [int] NULL,
                        [col4] [varchar](50) NULL) WITH (MEMORY_OPTIMIZED=ON);

-- Table creation logic for second in-memory storage with data not persisted                      
CREATE TABLE testtable_inmemoryschemaonly2 ([col1] [int] NOT NULL primary key nonclustered,
                        [col2] [int] NULL,
                        [col3] [int] NULL,
                        [col4] [varchar](50) NULL) WITH (MEMORY_OPTIMIZED=ON, DURABILITY = SCHEMA_ONLY); 
GO
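Before moving on, we can sanity-check that each table was created with the intended durability setting by querying the catalog views:

```sql
-- durability_desc returns SCHEMA_AND_DATA (the default) or SCHEMA_ONLY
SELECT name, is_memory_optimized, durability_desc
FROM sys.tables
WHERE name LIKE 'testtable_inmemory%';
```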

We'll also need to create the natively compiled stored procedures to access these tables. There are two stored procedures for each operation: one that accesses the In-Memory table with the default DURABILITY and one that accesses the SCHEMA_ONLY copy of the table. Here is the T-SQL code to create these stored procedures.

CREATE PROCEDURE dbo.DeleteColumn(@col1 integer)
with native_compilation, schemabinding, execute as owner
as 
begin atomic with
(transaction isolation level = snapshot,language = N'English')
  DELETE FROM dbo.testtable_inmemory2 WHERE [col1]=@col1
end
go

CREATE PROCEDURE dbo.DeleteColumnSchemaOnly(@col1 integer)
with native_compilation, schemabinding, execute as owner
as 
begin atomic with
(transaction isolation level = snapshot,language = N'English')
  DELETE FROM dbo.testtable_inmemoryschemaonly2 WHERE [col1]=@col1
end
go

CREATE PROCEDURE dbo.UpdateColumn(@col1 integer)
with native_compilation, schemabinding, execute as owner
as 
begin atomic with
(transaction isolation level = snapshot,language = N'English')
  UPDATE dbo.testtable_inmemory2 SET col3=col3+10 WHERE [col1]=@col1
end
go

CREATE PROCEDURE dbo.UpdateColumnSchemaOnly(@col1 integer)
with native_compilation, schemabinding, execute as owner
as 
begin atomic with
(transaction isolation level = snapshot,language = N'English')
  UPDATE dbo.testtable_inmemoryschemaonly2 SET col3=col3+10 WHERE [col1]=@col1
end
go

CREATE PROCEDURE dbo.SelectColumn(@col1 integer)
with native_compilation, schemabinding, execute as owner
as 
begin atomic with
(transaction isolation level = snapshot,language = N'English')
  SELECT col1,col2,col3 FROM dbo.testtable_inmemory2 WHERE [col1]=@col1
end
go

CREATE PROCEDURE dbo.SelectColumnSchemaOnly(@col1 integer)
with native_compilation, schemabinding, execute as owner
as 
begin atomic with
(transaction isolation level = snapshot,language = N'English')
  SELECT col1,col2,col3 FROM dbo.testtable_inmemoryschemaonly2 WHERE [col1]=@col1
end
go

CREATE PROCEDURE dbo.InsertRow(@col1 integer)
with native_compilation, schemabinding, execute as owner
as 
begin atomic with
(transaction isolation level = snapshot,language = N'English')
  INSERT INTO dbo.testtable_inmemory2 (col1, col2, col3, col4) 
       VALUES (@col1,@col1 % 10,@col1,'TEST' + CAST(@col1 AS VARCHAR))
end
go

CREATE PROCEDURE dbo.InsertRowSchemaOnly(@col1 integer)
with native_compilation, schemabinding, execute as owner
as 
begin atomic with
(transaction isolation level = snapshot,language = N'English')
  INSERT INTO dbo.testtable_inmemoryschemaonly2 (col1, col2, col3, col4) 
       VALUES (@col1,@col1 % 10,@col1,'TEST' + CAST(@col1 AS VARCHAR))
end
go
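To verify that all eight procedures really were natively compiled, sys.sql_modules exposes a uses_native_compilation flag:

```sql
-- uses_native_compilation = 1 confirms the procedure compiled to native code
SELECT p.name, m.uses_native_compilation
FROM sys.procedures p
JOIN sys.sql_modules m ON m.object_id = p.object_id
ORDER BY p.name;
```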

Now that we have our schema created, let's run our first test by loading some data into each table and measuring the performance. We'll load 200,000 rows into each table using a WHILE loop. The first two code blocks use straight T-SQL and the last two use the natively compiled stored procedures. Here is the T-SQL code.

DECLARE @val INT
SELECT @val=1
WHILE @val <= 200000
BEGIN  
   INSERT INTO testtable_inmemory (col1, col2, col3, col4) 
       VALUES (@val,@val % 10,@val,'TEST' + CAST(@val AS VARCHAR))
   SELECT @val=@val+1
END
GO

DECLARE @val INT
SELECT @val=1
WHILE @val <= 200000
BEGIN  
   INSERT INTO testtable_inmemoryschemaonly (col1, col2, col3, col4) 
       VALUES (@val,@val % 10,@val,'TEST' + CAST(@val AS VARCHAR))
   SELECT @val=@val+1
END
GO

DECLARE @val INT
SELECT @val=1
WHILE @val <= 200000
BEGIN  
   exec dbo.InsertRow @val;
   SELECT @val=@val+1
END
GO

DECLARE @val INT
SELECT @val=1
WHILE @val <= 200000
BEGIN  
   exec dbo.InsertRowSchemaOnly @val;
   SELECT @val=@val+1
END
GO
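If SQL Profiler is not available, a rough duration for each batch can be captured with SYSDATETIME. This is only a client-visible elapsed time, not the CPU/read/write detail Profiler provides, and it assumes the target table is empty (re-run only after recreating the tables, to avoid primary key violations):

```sql
-- Simple elapsed-time wrapper around one of the load loops
DECLARE @start DATETIME2 = SYSDATETIME();
DECLARE @val INT = 1;
WHILE @val <= 200000
BEGIN
   EXEC dbo.InsertRowSchemaOnly @val;
   SELECT @val = @val + 1;
END
SELECT DATEDIFF(MILLISECOND, @start, SYSDATETIME()) AS duration_ms;
GO
```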

Looking at the SQL Profiler results from these statements, we can see that, as one would expect, the SCHEMA_ONLY option performs no reads or writes and therefore also uses far less CPU. The other thing to notice from these results is that the natively compiled stored procedure outperforms regular T-SQL in every category; combined with the SCHEMA_ONLY option it is more than twice as fast.

Statement/Table Type                 | Duration (ms) | CPU (ms) | Reads | Writes
T-SQL with In-Memory                 | 104313        | 13094    | 44    | 44
T-SQL with In-Memory Schema Only     | 6735          | 3843     | 0     | 0
Native SP with In-Memory             | 91322         | 10672    | 8     | 8
Native SP with In-Memory Schema Only | 3065          | 2938     | 0     | 0

Next let's take a look at retrieving some of this data using these two options. As in the above test the first two code blocks will use straight T-SQL and the last two will use natively compiled stored procedures. Here is the T-SQL code for this test.

SELECT col1,col2,col3 from testtable_inmemory WHERE [col1]=21493
SELECT col1,col2,col3 from testtable_inmemory WHERE [col1]=61943
SELECT col1,col2,col3 from testtable_inmemory WHERE [col1]=111873
SELECT col1,col2,col3 from testtable_inmemory WHERE [col1]=141320
SELECT col1,col2,col3 from testtable_inmemory WHERE [col1]=191623
GO

SELECT col1,col2,col3 from testtable_inmemoryschemaonly WHERE [col1]=21493
SELECT col1,col2,col3 from testtable_inmemoryschemaonly WHERE [col1]=61943
SELECT col1,col2,col3 from testtable_inmemoryschemaonly WHERE [col1]=111873
SELECT col1,col2,col3 from testtable_inmemoryschemaonly WHERE [col1]=141320
SELECT col1,col2,col3 from testtable_inmemoryschemaonly WHERE [col1]=191623
GO

exec dbo.SelectColumn 21493;
exec dbo.SelectColumn 61943;
exec dbo.SelectColumn 111873;
exec dbo.SelectColumn 141320;
exec dbo.SelectColumn 191623;
GO

exec dbo.SelectColumnSchemaOnly 21493;
exec dbo.SelectColumnSchemaOnly 61943;
exec dbo.SelectColumnSchemaOnly 111873;
exec dbo.SelectColumnSchemaOnly 141320;
exec dbo.SelectColumnSchemaOnly 191623;
GO

We can see from the SQL Profiler output below that there is almost no difference when querying data, since all of the table data is in memory in every case. The only slight difference is in duration, where again the natively compiled stored procedures perform slightly better.

Statement/Table Type                 | Duration (ms) | CPU (ms) | Reads | Writes
T-SQL with In-Memory                 | 1             | 0        | 0     | 0
T-SQL with In-Memory Schema Only     | 1             | 0        | 0     | 0
Native SP with In-Memory             | 0             | 0        | 0     | 0
Native SP with In-Memory Schema Only | 0             | 0        | 0     | 0

Finally, let's take a look at an UPDATE and a DELETE example. The following code blocks are broken up, as above, between straight T-SQL and natively compiled stored procedures. Here is the complete T-SQL listing for these statements.

UPDATE testtable_inmemory SET col3=col3+10 WHERE [col1]=21493
UPDATE testtable_inmemory SET col3=col3+10 WHERE [col1]=61943
UPDATE testtable_inmemory SET col3=col3+10 WHERE [col1]=111873
UPDATE testtable_inmemory SET col3=col3+10 WHERE [col1]=141320
UPDATE testtable_inmemory SET col3=col3+10 WHERE [col1]=191623
GO

UPDATE testtable_inmemoryschemaonly SET col3=col3+10 WHERE [col1]=21493
UPDATE testtable_inmemoryschemaonly SET col3=col3+10 WHERE [col1]=61943
UPDATE testtable_inmemoryschemaonly SET col3=col3+10 WHERE [col1]=111873
UPDATE testtable_inmemoryschemaonly SET col3=col3+10 WHERE [col1]=141320
UPDATE testtable_inmemoryschemaonly SET col3=col3+10 WHERE [col1]=191623
GO

exec dbo.UpdateColumn 21493;
exec dbo.UpdateColumn 61943;
exec dbo.UpdateColumn 111873;
exec dbo.UpdateColumn 141320;
exec dbo.UpdateColumn 191623;
GO
exec dbo.UpdateColumnSchemaOnly 21493;
exec dbo.UpdateColumnSchemaOnly 61943;
exec dbo.UpdateColumnSchemaOnly 111873;
exec dbo.UpdateColumnSchemaOnly 141320;
exec dbo.UpdateColumnSchemaOnly 191623;
GO

DELETE FROM testtable_inmemory WHERE [col1]=12491
DELETE FROM testtable_inmemory WHERE [col1]=31941
DELETE FROM testtable_inmemory WHERE [col1]=91871
DELETE FROM testtable_inmemory WHERE [col1]=131320
DELETE FROM testtable_inmemory WHERE [col1]=171621
GO

DELETE FROM testtable_inmemoryschemaonly WHERE [col1]=12494
DELETE FROM testtable_inmemoryschemaonly WHERE [col1]=31944
DELETE FROM testtable_inmemoryschemaonly WHERE [col1]=91874
DELETE FROM testtable_inmemoryschemaonly WHERE [col1]=131321
DELETE FROM testtable_inmemoryschemaonly WHERE [col1]=171624
GO

exec dbo.DeleteColumn 12495;
exec dbo.DeleteColumn 31945;
exec dbo.DeleteColumn 91875;
exec dbo.DeleteColumn 131325;
exec dbo.DeleteColumn 171625;
GO
exec dbo.DeleteColumnSchemaOnly 12495;
exec dbo.DeleteColumnSchemaOnly 31945;
exec dbo.DeleteColumnSchemaOnly 91875;
exec dbo.DeleteColumnSchemaOnly 131325;
exec dbo.DeleteColumnSchemaOnly 171625;
GO

Looking at the SQL Profiler results below from these two tests, we see a pattern similar to the SELECT statements: the natively compiled stored procedures perform slightly better with respect to duration, and there is no difference in the other areas because all of the data is already in memory.

UPDATE
Statement/Table Type                 | Duration (ms) | CPU (ms) | Reads | Writes
T-SQL with In-Memory                 | 504           | 0        | 0     | 0
T-SQL with In-Memory Schema Only     | 2             | 0        | 0     | 0
Native SP with In-Memory             | 0             | 0        | 0     | 0
Native SP with In-Memory Schema Only | 0             | 0        | 0     | 0

DELETE
Statement/Table Type                 | Duration (ms) | CPU (ms) | Reads | Writes
T-SQL with In-Memory                 | 2             | 0        | 0     | 0
T-SQL with In-Memory Schema Only     | 0             | 0        | 0     | 0
Native SP with In-Memory             | 2             | 0        | 0     | 0
Native SP with In-Memory Schema Only | 0             | 0        | 0     | 0

Summary

In many cases, application requirements will make the SCHEMA_ONLY option impractical, since the data is not persisted. However, if your table can use this option, it should net you a big performance gain, especially with respect to loading new data. Natively compiled stored procedures do have a number of limitations (read more here on that), but if you are using the In-Memory OLTP feature within SQL Server they are worth looking into, since you'll most likely get at least some benefit from their increased performance as well. As always, you'll need to test any changes in your own environment to verify whether you will indeed see an improvement in performance.


About the author
Ben Snaidero has been a SQL Server and Oracle DBA for over 10 years and focuses on performance tuning.

This author pledges the content of this article is based on professional experience and not AI generated.
