SQL Server Temp Table vs Table Variable Performance Testing



Problem

If you've been a DBA for any amount of time, I'm sure you've been asked the question: which is better to use, a SQL Server temp table or a table variable? There are technical reasons to choose one over the other, but since I am usually most interested in performance, my main concern is which one will be faster and use fewer resources. In this tip we will compare the performance of a temp table vs. a table variable using a few straightforward scenarios.

Solution

For those of us who aren't familiar with all the technical differences between the two objects, here is a great tip that covers everything you need to know regarding the differences between temporary tables and table variables.

SQL Server Table Setup for Performance Testing Temp Table vs Table Variable

For this test scenario we are going to load data into four tables: two temporary tables and two table variables. Each object group will have one small table with only 2,000 records and one larger table with 1,000,000 records, so we can also see whether the size of the temporary object makes any difference.

-- Table creation logic
CREATE TABLE #temptable ([col1] [int] NOT NULL PRIMARY KEY, [col2] [int] NULL,
[col3] [int] NULL, [col4] [varchar](50) NULL)

DECLARE @tablevariable TABLE ([col1] [int] NOT NULL PRIMARY KEY, [col2] [int] NULL,
[col3] [int] NULL, [col4] [varchar](50) NULL)

CREATE TABLE #bigtemptable ([col1] [int] NOT NULL PRIMARY KEY, [col2] [int] NULL,
[col3] [int] NULL, [col4] [varchar](50) NULL)

DECLARE @bigtablevariable TABLE ([col1] [int] NOT NULL PRIMARY KEY, [col2] [int] NULL,
[col3] [int] NULL, [col4] [varchar](50) NULL)

-- Create index logic
CREATE NONCLUSTERED INDEX [IX_temptable] ON #temptable ([col2] ASC)
CREATE NONCLUSTERED INDEX [IX_bigtemptable] ON #bigtemptable ([col2] ASC)
 
-- Populate tables
DECLARE @val INT
SELECT @val=1
WHILE @val <= 2000
BEGIN  
   INSERT INTO #temptable (col1, col2, col3, col4) VALUES (@val,@val,@val,'TEST')
   INSERT INTO @tablevariable (col1, col2, col3, col4) VALUES (@val,@val,@val,'TEST')
   INSERT INTO #bigtemptable (col1, col2, col3, col4) VALUES (@val,@val,@val,'TEST')
   INSERT INTO @bigtablevariable (col1, col2, col3, col4) VALUES (@val,@val,@val,'TEST')
   SELECT @val=@val+1
END
WHILE @val <= 1000000
BEGIN  
   INSERT INTO #bigtemptable (col1, col2, col3, col4) VALUES (@val,@val,@val,'TEST')
   INSERT INTO @bigtablevariable (col1, col2, col3, col4) VALUES (@val,@val,@val,'TEST')
   SELECT @val=@val+1
END
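As an aside, the WHILE loop above is the simplest way to populate all four objects identically, but it is also the slowest way to load a million rows. A set-based alternative for the big temp table load might look like the sketch below; it assumes the cross join of sys.all_objects with itself yields at least 1,000,000 rows (which holds on most instances), and the table variable would need the same pattern in the same batch.

```sql
-- Set-based load sketch: generate 1..1,000,000 with ROW_NUMBER() over a cross join
-- and insert in a single statement instead of one singleton INSERT per row.
;WITH nums AS
(
   SELECT TOP (1000000)
          ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
   FROM sys.all_objects a
   CROSS JOIN sys.all_objects b
)
INSERT INTO #bigtemptable (col1, col2, col3, col4)
SELECT n, n, n, 'TEST'
FROM nums
```

This replaces a million round trips through the loop with one INSERT...SELECT; the loop was kept above only so both object types are populated by identical statements.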

Test Scenario for SQL Server Table Variables vs. Temp Tables Performance Testing

To test the performance of these objects we are going to run statements for each of the four basic DML operations: SELECT, INSERT, UPDATE and DELETE. For each operation we are going to run statements that affect both single and multiple rows. We will also break up the statements so that each operation tests referencing the primary key column as well as non-key columns that are both indexed and not indexed. This will show us how much benefit, if any, we get from having an index on a column. Only temporary tables can have additional non-clustered indexes defined; table variables can only have primary keys defined at creation, so this may be an important factor to consider when it comes to performance. The inline comments within the script describe what each group of statements is testing. Here is the code for all of our test scenarios.
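One version-related hedge on the indexing point above: the limitation held on the release used for this test, but SQL Server 2014 and later allow inline index definitions when a table variable is declared, for example:

```sql
-- SQL Server 2014+ only: a nonclustered index declared inline on a table variable.
DECLARE @indexedtablevariable TABLE (
   [col1] [int] NOT NULL PRIMARY KEY,
   [col2] [int] NULL INDEX IX_col2 NONCLUSTERED,
   [col3] [int] NULL,
   [col4] [varchar](50) NULL
)
```

Even with an inline index, table variables still lack column statistics, so this does not make the two objects equivalent.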

-- simple insert
INSERT INTO #temptable (col1, col2, col3, col4) VALUES (2005,2005,2005,'TEST')
INSERT INTO @tablevariable (col1, col2, col3, col4) VALUES (2005,2005,2005,'TEST')
INSERT INTO #bigtemptable (col1, col2, col3, col4) VALUES (1000005,1000005,1000005,'TEST')
INSERT INTO @bigtablevariable (col1, col2, col3, col4) VALUES (1000005,1000005,1000005,'TEST')

-- select single row where primary key column equals value
SELECT * FROM #temptable where col1=1000
SELECT * FROM @tablevariable where col1=1000
SELECT * FROM #bigtemptable where col1=600000
SELECT * FROM @bigtablevariable where col1=600000

-- select range of rows where primary key column between values
SELECT * FROM #temptable where col1 between 450 and 500
SELECT * FROM @tablevariable where col1 between 450 and 500
SELECT * FROM #bigtemptable where col1 between 100000 and 100500
SELECT * FROM @bigtablevariable where col1 between 100000 and 100500

-- select single row where other indexed column equals value
SELECT * FROM #temptable where col2=1050
SELECT * FROM @tablevariable where col2=1050
SELECT * FROM #bigtemptable where col2=650000
SELECT * FROM @bigtablevariable where col2=650000

-- select range of rows where other indexed column between values
SELECT * FROM #temptable where col2 between 500 and 550
SELECT * FROM @tablevariable where col2 between 500 and 550
SELECT * FROM #bigtemptable where col2 between 150000 and 150500
SELECT * FROM @bigtablevariable where col2 between 150000 and 150500

-- select single row where other nonindexed column equals value
SELECT * FROM #temptable where col3=1100
SELECT * FROM @tablevariable where col3=1100
SELECT * FROM #bigtemptable where col3=700000
SELECT * FROM @bigtablevariable where col3=700000

-- select range of rows where other nonindexed column between values
SELECT * FROM #temptable where col3 between 550 and 600
SELECT * FROM @tablevariable where col3 between 550 and 600
SELECT * FROM #bigtemptable where col3 between 200000 and 200500
SELECT * FROM @bigtablevariable where col3 between 200000 and 200500

-- update single row where primary key column equals value
UPDATE #temptable set col4='TESTUPDATE' where col1=1150
UPDATE @tablevariable set col4='TESTUPDATE' where col1=1150
UPDATE #bigtemptable set col4='TESTUPDATE' where col1=750000
UPDATE @bigtablevariable set col4='TESTUPDATE' where col1=750000

-- update range of rows where primary key column between values
UPDATE #temptable set col4='TESTUPDATE' where col1 between 600 and 650
UPDATE @tablevariable set col4='TESTUPDATE' where col1 between 600 and 650
UPDATE #bigtemptable set col4='TESTUPDATE' where col1 between 250000 and 250500
UPDATE @bigtablevariable set col4='TESTUPDATE' where col1 between 250000 and 250500

-- update single row where other indexed column equals value
UPDATE #temptable set col4='TESTUPDATE' where col2=1200
UPDATE @tablevariable set col4='TESTUPDATE' where col2=1200
UPDATE #bigtemptable set col4='TESTUPDATE' where col2=800000
UPDATE @bigtablevariable set col4='TESTUPDATE' where col2=800000

-- update range of rows where other indexed column between values
UPDATE #temptable set col4='TESTUPDATE' where col2 between 650 and 700
UPDATE @tablevariable set col4='TESTUPDATE' where col2 between 650 and 700
UPDATE #bigtemptable set col4='TESTUPDATE' where col2 between 300000 and 300500
UPDATE @bigtablevariable set col4='TESTUPDATE' where col2 between 300000 and 300500

-- update single row where other nonindexed column equals value
UPDATE #temptable set col4='TESTUPDATE' where col3=1250
UPDATE @tablevariable set col4='TESTUPDATE' where col3=1250
UPDATE #bigtemptable set col4='TESTUPDATE' where col3=850000
UPDATE @bigtablevariable set col4='TESTUPDATE' where col3=850000

-- update range of rows where other nonindexed column between values
UPDATE #temptable set col4='TESTUPDATE' where col3 between 700 and 750
UPDATE @tablevariable set col4='TESTUPDATE' where col3 between 700 and 750
UPDATE #bigtemptable set col4='TESTUPDATE' where col3 between 350000 and 350500
UPDATE @bigtablevariable set col4='TESTUPDATE' where col3 between 350000 and 350500

-- delete single row where primary key column equals value
DELETE FROM #temptable where col1=1300
DELETE FROM @tablevariable where col1=1300
DELETE FROM #bigtemptable where col1=900000
DELETE FROM @bigtablevariable where col1=900000

-- delete range rows where primary key column between values
DELETE FROM #temptable where col1 between 750 and 800
DELETE FROM @tablevariable where col1 between 750 and 800
DELETE FROM #bigtemptable where col1 between 400000 and 400500
DELETE FROM @bigtablevariable where col1 between 400000 and 400500

-- delete single row where other indexed column equals value
DELETE FROM #temptable where col2=1350
DELETE FROM @tablevariable where col2=1350
DELETE FROM #bigtemptable where col2=950000
DELETE FROM @bigtablevariable where col2=950000

-- delete range of rows where other indexed column between values
DELETE FROM #temptable where col2 between 800 and 850
DELETE FROM @tablevariable where col2 between 800 and 850
DELETE FROM #bigtemptable where col2 between 450000 and 450500
DELETE FROM @bigtablevariable where col2 between 450000 and 450500

-- delete single row where other nonindexed column equals value
DELETE FROM #temptable where col3=1400
DELETE FROM @tablevariable where col3=1400
DELETE FROM #bigtemptable where col3=1000000
DELETE FROM @bigtablevariable where col3=1000000

-- delete range of rows where other nonindexed column between values
DELETE FROM #temptable where col3 between 900 and 950
DELETE FROM @tablevariable where col3 between 900 and 950
DELETE FROM #bigtemptable where col3 between 500000 and 500500
DELETE FROM @bigtablevariable where col3 between 500000 and 500500

Note: This script must be run in a single batch (table setup and test scenario) because the scope of a table variable is limited to the current batch of T-SQL statements.
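If you would rather not capture a Profiler trace, a rough sketch of the same measurements can be taken with SET STATISTICS TIME and SET STATISTICS IO wrapped around any of the test statements (note that the IO output identifies a table variable by a system-generated internal name rather than the variable name):

```sql
-- Report elapsed/CPU time and logical reads for one of the test statements.
SET STATISTICS TIME ON
SET STATISTICS IO ON

SELECT * FROM #bigtemptable WHERE col2 = 650000

SET STATISTICS TIME OFF
SET STATISTICS IO OFF
```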

Test Scenario Results for SQL Server Temp Tables vs. Table Variables

INSERT statement results for SQL Server Performance Testing of Temp Tables vs. Table Variables

SQL Server Temp Tables vs Table Variables INSERT statement results

The SQL Profiler trace from the INSERT statements shows that the table variable outperforms the temporary table whether we are dealing with a small or a large temporary object. This can probably be attributed to the extra index on the temporary table, which needs to be updated with each insert.

SELECT statement results for SQL Server Performance Testing of Temp Tables vs. Table Variables

SQL Server Temp Tables vs Table Variables SELECT statement results

The SQL Profiler trace from the SELECT statements tells a different story. When querying rows based on the primary key column or the non-indexed column we get the same performance from both objects. However, when we query rows using the indexed column of the temporary table (a column that cannot be indexed in the table variable), we see a really big increase in performance across all measures for the temporary table. In fact, using this index to query a specific value from the temporary table yields almost the same performance regardless of the size of the table.
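Part of this gap is about more than the missing index: table variables carry no column statistics, so the optimizer typically estimates a single row for them. A quick way to see both effects (a sketch; enable Include Actual Execution Plan in SSMS and compare the two plans) is:

```sql
-- The temp table query can seek on IX_bigtemptable with an accurate row estimate;
-- the table variable query must scan and typically shows an estimate of 1 row.
SELECT * FROM #bigtemptable WHERE col2 = 650000
SELECT * FROM @bigtablevariable WHERE col2 = 650000
```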

UPDATE statement results for SQL Server Performance Testing of Temp Tables vs. Table Variables

SQL Server Temp Tables vs Table Variables UPDATE statement results

The SQL Profiler trace from the UPDATE statements shows a similar trend to that of the SELECT statements. When updating rows based on either the primary key or non-indexed column the performance is similar. When updating rows based on the indexed column the temporary table performs quite a bit better.

DELETE statement results for SQL Server Performance Testing of Temp Tables vs. Table Variables

SQL Server Temp Tables vs Table Variables DELETE statement results

With the SQL Profiler trace for the DELETE statements we see an interesting result. When deleting rows based on the primary key, the table variable outperforms the temporary table. As with the SQL Profiler results for the INSERT statements, this is probably because an extra index has to be updated. As in the other scenarios, when deleting based on the indexed column the temporary table performs better than the table variable. When deleting rows based on the column with no index we see similar performance between the two objects, with the temporary table only slightly better when dealing with a range of records. One thing I want to add with regard to DELETEs is that I would not be as concerned with their performance, since in most cases if you don't need the data in your temporary object you can exclude it when performing the SELECT/INSERT rather than removing it after the fact.
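That last point can be sketched as follows: instead of deleting unwanted rows after the fact, filter them out in the statement that populates the temporary object (dbo.SourceTable and the predicate here are hypothetical placeholders):

```sql
-- Filter at load time rather than issuing a DELETE afterward.
-- dbo.SourceTable is a hypothetical source used only for illustration.
INSERT INTO #temptable (col1, col2, col3, col4)
SELECT col1, col2, col3, col4
FROM dbo.SourceTable
WHERE col3 NOT BETWEEN 900 AND 950
```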

Summary of Performance Testing for SQL Server Temp Tables vs. Table Variables

As we can see from the results above, a temporary table generally provides better performance than a table variable. The only exceptions are the INSERT statements and a few types of DELETE conditions. The increase in performance is especially evident when dealing with larger data sets, as the ability to create indexes on the temporary table speeds up query execution significantly. When the data set is small, the difference in performance is almost negligible. In my opinion, if you are doing a lot of data manipulation with the temporary object after it has been created and the data set is fairly large, a temporary table is the best option. As always, it's best to test for yourself to confirm which object gives the best performance for your particular application.


About the author
Ben Snaidero has been a SQL Server and Oracle DBA for over 10 years and focuses on performance tuning.



Comments For This Article




Thursday, August 13, 2020 - 11:11:37 PM - Ian
Great article Ben, I try to teach the art of the temporary table to my cohorts and your article provides some more credence for temp table goodness. Thanks for the info!

Monday, March 2, 2020 - 1:21:03 AM - Jeff Stubing

This is an awesome article! I always thought that table variables would perform better but have never taken the time to actually test it. Thank you so much for putting together such a thorough analysis. I've been wrong all along! I really needed this.


Saturday, August 18, 2018 - 11:10:24 AM - Harikumar

Very informative, thank you for laying out different use cases.


Friday, December 9, 2016 - 9:12:02 AM - Sarah H

 What about temp table vs CTE?

 Thank you


Wednesday, January 20, 2016 - 5:32:17 PM - Mark

I have found that if my table is only used for looking up data, it is better to use a table variable than a direct link to the table. I took a process that was taking HOURS to run down to a matter of MINUTES just by copying the data I need from my TABLE into a TABLE VARIABLE. Yes, I am talking 7-8 hours of work compressed into maybe 7 minutes. But these were only lookup tables. I was totally surprised by the increase in performance...oh, and happy too.

 


Friday, January 16, 2015 - 9:16:52 AM - Ryan

Very nice article and very simply presented without overcomplicating the analysis. Thank you for posting this.


Wednesday, September 25, 2013 - 5:51:14 PM - Geoffrey

Ben,

 

Can you provide me with real-world examples of what a SQL developer would do, and the scope of that task, and its main objective?

 

I would greatly appreciate 2 or 3 examples(heck...one would be great)

 

Thanks,

 

Geoffrey 


Wednesday, September 25, 2013 - 11:54:48 AM - Ray

There are a few other considerations here...  but one I constantly deal with is parallelism.

If using a table variable, you are usually limited to a single thread, temp tables don't have this limitation.  (I always used table variables under SQL2000 due to 'questionable' performance when multithreading, but SQL2008+ performs far better when multithreading)

When joining large tables together with a subset of data prepared in a temp table, the multithreading aspect greatly improves the performance of my operations (as long as I've written them properly) when using temp tables.


Monday, September 16, 2013 - 12:33:31 PM - Geof

Thank you for the previous response, Ben.

 

Okay, so let's say we figure out that the best way to test for a table and its queries (accessing data meaningfully) is a temp table. Okay, that is great. However, what do we do after that... do we create a 'regular' table? So, if we figured out that a temp table (and the queries that use that temp table) performs better in the sense of accessing data successfully (in regards to the desired data showing), what do we do next? Create a table? If so, what difference would it make to the 'regular' table's creation if the best way to test was to use table variables?

 

Meaning, if either temp tables or table variables were the 'winner' in the testing 'contest', wouldn't the developer still need to create a regular table?

 

I understand that the developer also needs to look for the best way to query, so I have that down.

 

I would greatly appreciate your inputs.

 

Thanks,

 

Geoffrey 


Monday, September 16, 2013 - 11:18:43 AM - Ben Snaidero

Geoffrey,

As the article and some of the comments mention, there is no clear answer as to which is best to use in all cases; it all depends on your specific situation.

How a user views whether something is of benefit is usually based solely on performance (they usually don't care how something is implemented behind the scenes), so as mentioned I would test which gives the best performance for your particular situation.

Thanks for reading.


Friday, September 13, 2013 - 1:11:04 PM - Geoffrey

Quick question---

 

So after all the dust has settled (from performing testing using temp tables and table variables)... what is the final outcome of all the testing? Do you create an actual table containing all the columns and data types that were contained/tested in the temp table (or table variable)?

 

What is the main objective as far as how the user (not developer or DBA) benefits? 

 

That is what I would like to know. I am new to the development 'game' and would like a conceptual explanation on coding and how it benefits the user.

 

~Geoffrey 


Tuesday, June 11, 2013 - 10:31:46 AM - Robert J. Good

Thanks for the article Ben. Not the results I would have expected.


Tuesday, March 12, 2013 - 10:34:45 AM - AvidaEinav

Another huge deal is statistics...

Table variables don't keep statistics, which causes very poor performance when doing joins, etc.

Temp tables do keep statistics!


Wednesday, March 6, 2013 - 11:55:37 PM - asit kaushik

My 2 cents: the basic difference cited when we ask anybody is that table variables are memory resident whereas temporary tables are tempdb resident, which is very untrue. This has been proven in many blog posts and can be demonstrated with the code snippet below:

USE tempdb
GO
-- this lists all the existing # tables used by other applications; try it in a test environment
SELECT * FROM sys.tables WHERE name LIKE '%#%'
GO
-- a very small table variable
DECLARE @table AS TABLE
(
   id int
)
SELECT * FROM sys.tables WHERE name LIKE '%#%'
-- You will see we have a # table which is for the table variable

But to me the question arises: what is the reason for keeping table variables if we have such a similar temporary table? I can only see the case of functions; otherwise I could not find any reason that justifies the existence of table variables.


Tuesday, December 11, 2012 - 9:14:50 PM - TimothyAWiseman

Thanks for the excellent tip.  This confirms what I have generally seen as well.

Personally, I think table variables often result in slightly cleaner code and they can be passed as parameters when that matters.  But temp tables will almost always perform at least as well and often better, and they give you more options for indexing.  Generally, I use a table variable when I am dealing with very small datasets (less than 100 rows) and temp tables the rest of the time.


Monday, December 10, 2012 - 3:48:40 AM - Martin Smith

@Madhivanan - Yes, in MSDE 2000, running the following

declare @t table(i int)

select *
from tempdb.INFORMATION_SCHEMA.TABLES
WHERE TABLE_NAME LIKE '#%'

returns

TABLE_CATALOG   TABLE_SCHEMA   TABLE_NAME    TABLE_TYPE
--------------- -------------- ------------- ----------
tempdb          dbo            #108B795B     BASE TABLE


Sunday, December 9, 2012 - 11:46:36 PM - Madhivanan

@Martin Smith

Does INFORMATION_SCHEMA.TABLES show table variables in version 2000? I think I said that keeping INFORMATION_SCHEMA.TABLES in mind.


Sunday, December 9, 2012 - 3:45:40 PM - Martin Smith

@Eric

Table variables are subject to logging because if an error occurs during the execution of a statement it needs to be rolled back. For example:

DECLARE @T TABLE
(
   P INT PRIMARY KEY
)
INSERT INTO @T VALUES (1),(2)
INSERT INTO @T VALUES (3),(4),(1)
SELECT * FROM @T

The second statement encounters a PK uniqueness violation, so the insert of values 3 & 4 is rolled back.

In 2008+, "INSERT ... SELECT" can be minimally logged in the same way as "SELECT ... INTO". Temporary tables can still have advantages when it comes to logging, though; e.g. table variables do not support "TRUNCATE". One logging advantage of table variables is that the system transactions on them are independent of the surrounding user transaction, so a long-running transaction that writes to table variables will not prevent the tempdb transaction log from being truncated.


Sunday, December 9, 2012 - 10:49:04 AM - Eric Russell

One consideration when persisting and modifying large temporary result sets is transaction logging. I understand why temp tables are logged: because they are included in batch-level transactions and so technically should be included in a rollback, although at least in my experience there is rarely a practical use for that. However, with temp tables you have the option of creating the table using SELECT INTO, which will not log the inserts the same way it will if you first create the table and then insert into it.

Table variables are not even included in transactions; just like other variables, they aren't reverted by a rollback, but for some reason they are still transaction logged like temp tables. That's unfortunate; if it were not the case, table variables would be clearly favored in a wider array of scenarios.


Thursday, December 6, 2012 - 11:13:40 AM - Try other cases as well.

This was a reasonable first test, and I'm glad to see actual numbers, but I'd have to suggest some other factors to look at, most of which are highly likely to favor #temp tables at many non-trivial data sizes.

1) For any unique column (even more so with the unique, contiguous data you used), table variable nonclustered unique indexes can be created using UNIQUE constraints (http://www.sqlservercentral.com/Forums/Topic535656-1338-1.aspx#bm535734)

DECLARE @MyTabVar TABLE (PKCol int NOT NULL PRIMARY KEY CLUSTERED, UniqueCol int NOT NULL UNIQUE)

2) Non-integer, non-sequential datasets
2a) with and without rebuilding the clustered and/or nonclustered index after data additions
2b) including larger varchar types with duplicate data

3) Without any indexes at all on either - no primary key, no nothing. Particularly useful in the table scan cases Jeff mentioned

4) Multiple load phases - using temporary structures to run what's normally nasty row-by-row exception case logic only once for a given job.
4a) Often this is gathering just the required, relevant and accurate data from several tables to be worked on by many queries later.
4b) Combine with 2a's index rebuilds if required.

5) Multi-column indexes - some cases where temporary tables/table variables

Thursday, December 6, 2012 - 7:14:11 AM - Martin Smith

(Make sure you are in tempdb if running the code above!)


Thursday, December 6, 2012 - 7:10:04 AM - Martin Smith

@Madhivanan

Not sure why you say just 2008. Even on 2000 you can see them in tempdb with:

DECLARE @T TABLE (X INT)

SELECT *
FROM sysobjects
WHERE name LIKE '#%'

I've written a quite detailed piece about the differences between temporary tables and table variables here


Thursday, December 6, 2012 - 5:34:25 AM - Madhivanan

In addition to what Jeff Moden said, you can see table variables in tempdb only from version 2008 onwards. Here is a post about it http://beyondrelational.com/modules/2/blogs/70/posts/13664/storage-of-table-variable.aspx


Tuesday, December 4, 2012 - 8:57:32 AM - Jeff Moden

To answer the question about the conversion of large Table Variables to Temp Tables...

Both Temp Tables and Table Variables build their structure in TempDB.  As with any good database engine, those structures start out in memory and spill to disk if they get too big.  You can typically see Table Variables in TempDB as tables that begin with a # sign and have an all numeric digit name.


Tuesday, December 4, 2012 - 8:53:44 AM - Jeff Moden

Nice article.  Good to see someone quantify these differences with code. 

In many (most?) cases, the temporary structure is made to contain only the data that is actually needed.  The reason why that is important is because that usually means that full table scans will be made anyway which usually negates the need for any indexing.  There are always exceptions, of course.

Whether there are indexes or not, Temp Tables can frequently be batch loaded with minimal logging using SELECT INTO which can give Temp Tables a performance advantage over Table Variables.

There is another "performance" advantage that Temp Tables can have over Table Variables, and you alluded to it. The contents of a Temp Table will persist in SSMS, allowing a programmer to incrementally troubleshoot code without having to include all of the previous code that builds and populates a Table Variable. That sometimes makes both development and troubleshooting go quite a bit faster.

 


Tuesday, December 4, 2012 - 8:53:21 AM - Dan

Nice to see someone else confirm my experience - i.e., I'm not crazy! From my testing, temp tables are as good as or outperform table variables. The clear advantage to me is the ability to pass a table variable as a parameter, but the use case for those seems limited, and the overhead (creating a table variable type) is too much. Easier to just use temp tables, even then.

I have heard that the optimizer / sql engine will convert large table variables to temporary tables anyway, under the covers.  Can anyone confirm?














