More efficient SQL Server UPDATE for VARCHAR(MAX), NVARCHAR(MAX) and VARBINARY(MAX) data types with the WRITE Clause

By:   |   Updated: 2018-10-24   |   Comments (1)   |   Related: More > T-SQL

Problem

You need to update SQL Server data in columns with large value data types like VARCHAR(MAX), NVARCHAR(MAX) and VARBINARY(MAX) that contain values such as JSON or XML documents, and you want to avoid the performance issues of rewriting the entire value.

Solution

SQL Server 2005 introduced new large value data types to replace the deprecated text, ntext and image data types. These data types can store up to 2^31-1 bytes of data. Updates can be made without rewriting the entire column value, although inserting a large value works the same as a regular INSERT.

To update a large value data type, the UPDATE can use the WRITE clause as follows: 

UPDATE dbo.LargeTableObject 
SET col.WRITE (expression, @offset, @length )
WHERE id = 1;
GO

Below you can see the difference between a regular UPDATE and an UPDATE with the WRITE clause.
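To make the parameter semantics concrete, here is a minimal sketch (the dbo.WriteDemo table and its values are hypothetical, not part of this tip's example). @offset is the zero-based starting position of the replacement, and a NULL @offset appends the expression to the end of the value; @length is the number of units to replace, and a NULL @length replaces everything from @offset onward.

```sql
-- Hypothetical demo table to illustrate the .WRITE parameters
CREATE TABLE dbo.WriteDemo (id INT PRIMARY KEY, col1 VARCHAR(MAX));
INSERT INTO dbo.WriteDemo (id, col1) VALUES (1, 'Hello World');

-- Replace 5 characters starting at zero-based offset 6: 'Hello There'
UPDATE dbo.WriteDemo SET col1.WRITE('There', 6, 5) WHERE id = 1;

-- NULL offset appends to the end of the value: 'Hello There!'
UPDATE dbo.WriteDemo SET col1.WRITE('!', NULL, 0) WHERE id = 1;

-- NULL length replaces from the offset through the end: 'Hello!'
UPDATE dbo.WriteDemo SET col1.WRITE('!', 5, NULL) WHERE id = 1;
GO
```

Keep in mind that the WRITE clause cannot be used to set the column to NULL, nor on a row where the column value is currently NULL.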

Example of using SQL Server UPDATE with WRITE Clause

Using this method can be a good choice, and to demonstrate it I'll create a table with a VARCHAR(MAX) column.

DROP TABLE IF EXISTS LargeTableObject
GO
 
CREATE TABLE LargeTableObject (
   id BIGINT IDENTITY
  ,col1 VARCHAR(MAX)
)
GO

Next, a row is inserted. I'm using a JSON file in this example. You can download the JSON file from here.

INSERT INTO dbo.LargeTableObject (col1)
SELECT BulkColumn 
FROM OPENROWSET (BULK 'C:\temp\citylots.json', SINGLE_CLOB) as j
GO

We can see how SQL Server saves this data, by running the query below.

SELECT OBJECT_NAME(I.object_id) 
  , I.name
  , AU.total_pages
  , AU.used_pages
  , AU.data_pages
  , P.rows
  , AU.type_desc
FROM sys.allocation_units AS AU
INNER JOIN sys.partitions AS P ON AU.container_id = P.partition_id
INNER JOIN sys.indexes AS I ON I.index_id = P.index_id AND I.object_id = P.object_id
WHERE P.object_id = OBJECT_ID('LargeTableObject')
GO

The image below shows we are using LOB_DATA pages, and 7513 total pages (each page is 8KB) are needed to store this data, which is about 60MB.

pages used for database table

We can also see this data if we run sp_spaceused on this table.
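The call behind that screenshot would look like this:

```sql
EXEC sp_spaceused 'LargeTableObject';
GO
```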

space used for database table

The next step is to clear the plan cache and run a checkpoint to truncate the transaction log. Also, we will turn on SET STATISTICS IO to see how many pages are needed for the update.

DBCC FREEPROCCACHE 
DBCC DROPCLEANBUFFERS 
 
CHECKPOINT
 
SET STATISTICS IO ON 
 
SELECT * FROM fn_dblog(null,null) 
GO

With the fn_dblog function you can check that the log is clean.

transaction log use

Before using the WRITE clause, let’s update the column by adding text at the end with a regular UPDATE.

UPDATE dbo.LargeTableObject 
SET col1 = col1 + REPLICATE(' regular update ', 2 )
WHERE id = 1
GO

The statistics io shows the following:

Table 'LargeTableObject'. Scan count 1, logical reads 1, physical reads 0, read-ahead reads 0, lob logical reads 66981, lob physical reads 11, lob read-ahead reads 98106.

Table 'Worktable'. Scan count 0, logical reads 7, physical reads 0, read-ahead reads 0, lob logical reads 77360, lob physical reads 0, lob read-ahead reads 25557.

(1 row affected)

And, let’s take a look at the transaction log again.

SELECT * FROM fn_dblog(null,null) 
GO
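If you prefer a number over eyeballing the full fn_dblog output, you can count the log records before and after each update (a simple variation on the query above, not from the original article):

```sql
-- Count the active log records instead of listing them all
SELECT COUNT(*) AS log_record_count
FROM fn_dblog(NULL, NULL);
GO
```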

We can see there are a lot more rows now.

transaction log use

In the next step I will use the WRITE clause for the UPDATE, but before that I’ll clear my plan cache and transaction log again.

DBCC FREEPROCCACHE 
DBCC DROPCLEANBUFFERS 
 
CHECKPOINT
 
SELECT * FROM fn_dblog(null,null) 
GO
transaction log use

We will do an update similar to the one above, but with the WRITE clause.

UPDATE dbo.LargeTableObject 
SET col1.WRITE (REPLICATE(' write update ', 2 ), NULL , NULL )
WHERE id = 1
GO

The statistics io for this update is as follows. This is a big difference compared to the regular update.

Table 'LargeTableObject'. Scan count 1, logical reads 1, physical reads 1, read-ahead reads 0, lob logical reads 3, lob physical reads 2, lob read-ahead reads 0.

(1 row affected)

Now let's look at the transaction log.

SELECT * FROM fn_dblog(null,null)
GO

We can see there are far fewer rows after the update using the WRITE clause.

transaction log use

Compare Execution Plans

If we look at the execution plans for both queries, they look almost the same. But as we saw, there is much less activity when using the WRITE clause.

execution plans

Conclusion

There is a big difference when using the WRITE clause, and this improvement is a good reason to change the way you update large value data types. The execution plans will not show which is the better choice, so you need to use SET STATISTICS IO to get more information.

Next Steps







About the author
MSSQLTips author Douglas Correa Douglas Correa is a database professional, focused on tuning, high-availability and infrastructure.






Wednesday, October 24, 2018 - 4:46:56 PM - jeff_yao Back To Top

 

Learned something new, nice tip, Doug !


