Benchmarking SQL Server IO with SQLIO



Problem

During a recent SQL Server install everything seemed to go fine. SQL Server 2008 R2 went in with no problems. There was a local drive with a RAID 1 set for the OS and system databases, and two direct attached RAID 5 sets: one for transaction log files and one for data files. Was everything okay? Could it be better? How could I tell? In this tip we look at how to measure and maximize your IO throughput.

Solution

There are many factors that make up the performance of SQL Server: CPU, RAM, disk I/O performance and, most importantly, the T-SQL code and queries. The Control Panel Device Manager tells you how fast the CPUs are, how much memory is available and the memory speed. The performance of queries is very important, but that's the subject of numerous other articles. It's easy to tell the size of the disks, but not so easy to understand how many I/O operations they are capable of achieving. The many technologies that can sit between the Windows operating system (OS) and the physical drives (controllers, HBAs, SANs, virtualization, iSCSI) cloud any analysis based on hardware specs alone.

While disk drives are often purchased to provide the right amount of space, the I/O capacity of the drives is often neglected. This leaves SQL Server in the unfortunate position of needing more I/O operations than the disks can provide. The least that we can do is get the most out of the drives that we have and understand their capacity. That's done with benchmarking.

My favorite tool for benchmarking is the free tool SQLIO from Microsoft, which can be downloaded from Microsoft downloads here. There are alternatives, such as IOMeter. IOMeter was originally written by Intel, but is now open source and can be downloaded from www.IOMeter.org. It has the advantage that there are versions for Linux, Solaris and even Netware. Since SQL Server runs on Windows, all we need is SQLIO, and becoming familiar with a single tool has a lot of advantages. There are others from Microsoft, such as SQLIOSim and SQLIOStress, that are tools for validating the robustness rather than the performance of disk systems.

After running setup, find the install directory, which is usually "C:\Program Files\SQLIO" on 32-bit systems or "C:\Program Files (x86)\SQLIO" on 64-bit systems. For Intel Windows, SQLIO is a 32-bit program. There's an advanced version for Alpha that's a 64-bit program.

Configure Param.txt

The first step is to modify the file param.txt to tell SQLIO where to find its test file which is named testfile.dat. By default the file will be in the same directory as the program file, but you probably don't need to benchmark your C: drive. Instead I'm going to benchmark the L: drive, one of the direct attached RAID 5 sets.

Param.txt has four parameters on each line:

| Parameter | Description | Value |
| --- | --- | --- |
| file name | Name of the test file. | L:\testfile.dat |
| number of threads | Size of the thread pool. This will be overridden by the command line later. | 8 |
| mask | Affinity mask to bind operations against this file to particular CPUs. I don't use this and just use 0x0 for all CPUs. | 0x0 |
| file size | The size in MB. This should be roughly the size of your database and always larger than your cache, unless you want to test just the cache. I usually use a 100 GB file, but 20 GB (20480 MB) might be okay if you're short on space. | 102400 |

I generally test only one drive at a time, so I only put one line into this file. You can put several lines, each with its own test file. Lines beginning with # are treated as comments. My typical param.txt file for testing the L: drive would be:

L:\testfile.dat 8 0x0 102400
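If you script your benchmark setup, the one-line file above is easy to generate. This is a small illustrative helper of my own (not part of SQLIO) that builds a param.txt line from the four fields:

```python
def make_param_line(test_file, threads, mask, size_mb):
    """Build one SQLIO param.txt line: <file> <threads> <mask> <size in MB>."""
    return f"{test_file} {threads} {mask} {size_mb}"

# The line used for the L: drive test in this tip:
line = make_param_line(r"L:\testfile.dat", 8, "0x0", 102400)
print(line)  # L:\testfile.dat 8 0x0 102400
```

Writing the returned string to param.txt with a trailing newline gives exactly the file shown above.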

Create Testfile

The next step is to run SQLIO once, so that it can create the testfile.dat. This takes a long time to run and I'd rather keep this run separate from any test runs.

sqlio -kW -s5 -fsequential -o4 -b64 -Fparam.txt

You'll see output like this:

C:\Program Files (x86)\SQLIO>sqlio -kW -s5 -fsequential -o4 -b64 -Fparam.txt
 sqlio v1.5.SG
 parameter file used: param.txt
 file l:\testfile.dat with 2 threads (0-1) using mask 0x0 (0)
 2 threads writing for 5 secs to file l:\testfile.dat
 using 64KB sequential IOs
 enabling multiple I/Os per thread with 4 outstanding
 size of file l:\testfile.dat needs to be: 107374182400 bytes
 current file size: 0 bytes
 need to expand by: 107374182400 bytes
 expanding l:\testfile.dat ...

Expansion on my system took 15 minutes, so be patient while waiting for this to complete.

Let's run down the most important command line switches that we'll be using.

| Switch | Description | Example |
| --- | --- | --- |
| -d | The drive or drives to work on. There should already be a testfile.dat on that drive as specified in param.txt. There can be multiple drive letters, such as in "-dEFG". I test only one drive at a time. | -dL |
| -B | Controls buffering. N for no buffering, H for only hardware controller buffering, S for Windows software buffering, or Y for both hardware and software buffering. To match what SQL Server is doing, use -BH for just hardware buffering. | -BH |
| -k | Operation to perform. W for write, R for read. | -kW |
| -f | Random or sequential I/O. -frandom chooses blocks randomly across the file; -fsequential reads or writes sequentially. There are additional choices for testing specific block sizes. | -frandom |
| -t | Threads to run simultaneously. | -t2 |
| -o | Outstanding requests on each thread. SQL Server threads issue read requests and then wait, so their typical queue depth is 1. However, the Lazy Writer issues many write requests, so a switch of "-o8" would always keep 8 I/O operations queued on each thread. You'll have to balance the number of threads against the number of outstanding requests per thread to keep the disk subsystem busy. I usually increase this number until the disks become saturated. | -o8 |
| -s | Seconds to run. This should be long enough to fill any cache and then build the queue to its maximum before processing levels into a steady state. 90 seconds is usually sufficient to get a good picture of what the disks are capable of. | -s90 |
| -b | Block size of each I/O in kilobytes. SQL Server reads and writes to data files in 64K blocks. | -b64 |

There are additional switches available; a detailed description of each is included in the readme.txt file that serves as the documentation for SQLIO.

Here's the complete output from the initial run of SQLIO used to create the test file:

C:\Program Files (x86)\SQLIO>sqlio -kW -s5 -fsequential -o4 -b64 -Fparam.txt
sqlio v1.5.SG
parameter file used: param.txt
file l:\testfile.dat with 2 threads (0-1) using mask 0x0 (0)
2 threads writing for 5 secs to file l:\testfile.dat
using 64KB sequential IOs
enabling multiple I/Os per thread with 4 outstanding
size of file l:\testfile.dat needs to be: 107374182400 bytes
current file size: 0 bytes
need to expand by: 107374182400 bytes
expanding l:\testfile.dat ... done.
using specified size: 102400 MB for file: l:\testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 2725.80
MBs/sec: 170.3

At the bottom we see the two most important metrics for analyzing the drives: Input/Output operations per second (IOs/sec) and megabytes per second (MBs/sec). These two numbers are related by the block size used for the test. The L: drive in my tests is a RAID 5 set of four 7200 RPM SATA drives. The allocation unit size is 4096 and the partition offset is 32256. In the article Partition offset and allocation unit size of a disk for SQL Server I described how to determine these numbers and how they can be changed.
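The relationship between the two metrics is simple arithmetic: MBs/sec equals IOs/sec times the block size in KB, divided by 1024. A quick sketch checks it against the creation run above (the small difference comes from rounding in SQLIO's output):

```python
def mb_per_sec(ios_per_sec, block_kb):
    """Convert an IOs/sec figure to MBs/sec for a given block size in KB."""
    return ios_per_sec * block_kb / 1024.0

# The creation run above reported 2725.80 IOs/sec with 64KB blocks:
print(round(mb_per_sec(2725.80, 64), 1))  # ~170.4, matching the 170.3 SQLIO reported
```

The same conversion works in reverse: divide a target MB/sec by the block size to see how many operations per second the disks must sustain.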

Testing

Now to test the L: drive I create a series of tests in a command file (.cmd) and let them run. The number of I/O requests outstanding at any one time is the product of two switches: -t for threads and -o for outstanding requests per thread. The tests start with one outstanding request per thread and increase the number of threads until there is one per core in the server, then double the number of outstanding requests per thread after that. Each line doubles the total number of outstanding requests from the previous line. What I'm looking for is the maximum number of I/Os that the L: drive can handle.

Random Writes Test

sqlio -dL -BH -kW -frandom -t1 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kW -frandom -t2 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kW -frandom -t4 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kW -frandom -t8 -o1 -s90 -b64 testfile.dat

Here's the output from the run:

C:\Program Files (x86)\SQLIO>sqlio -dL -BH -kW -frandom -t1 -o1 -s90 -b64 testfile.dat 
sqlio v1.5.SG
1 thread writing for 90 secs to file L:testfile.dat
using 64KB random IOs
enabling multiple I/Os per thread with 1 outstanding
buffering set to use hardware disk cache (but not file cache)
using current size: 106496 MB for file: L:testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 110.06
MBs/sec: 6.87
C:\Program Files (x86)\SQLIO>sqlio -dL -BH -kW -frandom -t2 -o1 -s90 -b64 testfile.dat 
sqlio v1.5.SG
2 threads writing for 90 secs to file L:testfile.dat
using 64KB random IOs
enabling multiple I/Os per thread with 1 outstanding
buffering set to use hardware disk cache (but not file cache)
using current size: 106496 MB for file: L:testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 107.24
MBs/sec: 6.70
C:\Program Files (x86)\SQLIO>sqlio -dL -BH -kW -frandom -t4 -o1 -s90 -b64 testfile.dat 
sqlio v1.5.SG
4 threads writing for 90 secs to file L:testfile.dat
using 64KB random IOs
enabling multiple I/Os per thread with 1 outstanding
buffering set to use hardware disk cache (but not file cache)
using current size: 106496 MB for file: L:testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 112.67
MBs/sec: 7.04
C:\Program Files (x86)\SQLIO>sqlio -dL -BH -kW -frandom -t8 -o1 -s90 -b64 testfile.dat 
sqlio v1.5.SG
8 threads writing for 90 secs to file L:testfile.dat
using 64KB random IOs
enabling multiple I/Os per thread with 1 outstanding
buffering set to use hardware disk cache (but not file cache)
using current size: 106496 MB for file: L:testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec: 107.36
MBs/sec: 6.71

I've extracted the IOs per second and megabytes per second numbers into the table that follows. It looks like one outstanding request is all it takes for the drive to reach its maximum capacity for random writes, because all tests after the first have very similar performance. Random writes are used by the Lazy Writer and by checkpoints to save pages to disk. The L: drive can perform roughly 110 of these each second. Since the block size switch (-b) was set to 64K, that works out to around 7 megabytes per second.

| Outstanding I/Os | IOs per second | MB per second |
| --- | --- | --- |
| 1 | 110.06 | 6.87 |
| 2 | 107.24 | 6.70 |
| 4 | 112.67 | 7.04 |
| 8 | 107.36 | 6.71 |
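The ~110 random-write IOPS figure is consistent with classic RAID arithmetic: each small RAID 5 write costs roughly four backend operations (read data, read parity, write data, write parity), so the array delivers about drives × per-disk IOPS ÷ 4. A back-of-envelope sketch, where the per-disk figure of ~110 random IOPS is my assumption for a typical 7200 RPM SATA drive, not a measurement from this system:

```python
def raid5_random_write_iops(drives, per_disk_iops, penalty=4):
    """Estimate front-end random-write IOPS for a RAID 5 set.

    The classic small-write penalty of 4 reflects the read-data,
    read-parity, write-data, write-parity cycle per logical write.
    """
    return drives * per_disk_iops / penalty

# 4 x 7200 RPM SATA drives at an assumed ~110 random IOPS each:
print(raid5_random_write_iops(4, 110))  # 110.0, close to the measured 110.06
```

The estimate landing on the measured number is partly coincidence of round figures, but it shows why random writes on RAID 5 are so much slower than reads on the same spindles.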

Random Reads Test

The next set of tests is for random reads. SQL Server does random reads when doing bookmark lookups or when reading from fragmented tables. Here are the tests that I ran:

sqlio -dL -BH -kR -frandom -t1 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t2 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t4 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t8 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t8 -o2 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t8 -o4 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t8 -o8 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t8 -o16 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t8 -o32 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t8 -o64 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -frandom -t8 -o128 -s90 -b64 testfile.dat
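Rather than typing the progression by hand, the doubling pattern can be generated. This is a sketch of my own (the loop bounds and helper names are illustrative, not part of SQLIO): grow threads to one per core first, then double the per-thread outstanding requests.

```python
def test_matrix(cores, max_total=1024):
    """Yield (threads, outstanding) pairs whose product doubles each step:
    first scale threads up to one per core, then scale outstanding requests."""
    threads, outstanding = 1, 1
    while threads * outstanding <= max_total:
        yield threads, outstanding
        if threads < cores:
            threads *= 2
        else:
            outstanding *= 2

# Reproduce the random-read command list above for an 8-core server:
cmds = [f"sqlio -dL -BH -kR -frandom -t{t} -o{o} -s90 -b64 testfile.dat"
        for t, o in test_matrix(cores=8, max_total=1024)]
for c in cmds:
    print(c)
```

Writing the printed lines to a .cmd file gives the same eleven tests, from one outstanding request up to 1024 (8 threads × 128 outstanding).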

Here are the results extracted to a table:

| Outstanding I/Os | IOs per second | MB per second |
| --- | --- | --- |
| 1 | 100.83 | 6.30 |
| 2 | 149.90 | 9.36 |
| 4 | 204.07 | 12.75 |
| 8 | 261.24 | 16.32 |
| 16 | 315.76 | 19.73 |
| 32 | 366.78 | 22.92 |
| 64 | 420.82 | 26.30 |
| 128 | 453.46 | 28.34 |
| 256 | 486.76 | 30.42 |
| 512 | 488.14 | 30.50 |
| 1024 | 488.68 | 30.54 |

The L: drive plateaus at around 488 I/Os per second, which works out to 30.50 MB/sec of throughput. That's an okay number, but we must remember that when there are a large number of bookmark lookups, that's all the physical I/Os the L: drive can deliver. If the table is very large, the chances of a page being cached are small, and it may take one physical read for each lookup no matter the available memory or cache.
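Once the drive saturates, pushing the queue deeper buys no throughput and only adds latency. Little's law makes that concrete: in steady state, average latency equals outstanding requests divided by IOPS. A sketch using two rows from the table above:

```python
def avg_latency_ms(outstanding, iops):
    """Little's law: steady-state latency = queue depth / throughput."""
    return outstanding / iops * 1000.0

# Near the start and deep into the plateau of the random-read results:
print(round(avg_latency_ms(1, 100.83), 1))    # ~9.9 ms per read at depth 1
print(round(avg_latency_ms(256, 486.76)))     # ~526 ms per read once saturated
```

This is why I stop increasing -o once the IOs/sec curve flattens: beyond that point each doubling of the queue roughly doubles response time for the same throughput.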

Sequential Reads Test

Performance is best when SQL Server is reading sequentially rather than randomly. This is achieved when SQL Server is able to do seeks or scans on indexes or heaps that are not fragmented. To test sequential reads I'll run these tests:

sqlio -dL -BH -kR -fsequential -t1 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t2 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t4 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t8 -o1 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t8 -o2 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t8 -o4 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t8 -o8 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t8 -o16 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t8 -o32 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t8 -o64 -s90 -b64 testfile.dat
sqlio -dL -BH -kR -fsequential -t8 -o128 -s90 -b64 testfile.dat

Here are the results.

| Outstanding I/Os | IOs per second | MB per second |
| --- | --- | --- |
| 1 | 3646 | 227 |
| 2 | 5497 | 343 |
| 4 | 5469 | 341 |
| 8 | 5550 | 346 |
| 16 | 5503 | 343 |
| 32 | 5577 | 348 |
| 64 | 5574 | 348 |
| 128 | 5637 | 352 |
| 256 | 5611 | 350 |
| 512 | 5673 | 354 |
| 1024 | 5629 | 351 |

Sequential reads are faster than random reads, because the disk heads often don't have to move to satisfy the request. On the L: drive the number of reads climbs to around 5600 and plateaus there.

Sequential Writes Test

There are a variety of times when SQL Server is able to write sequentially. One of them is when it writes to the transaction log file. If there is only one log file on the drive, all writes are to the end of the file and the disk heads don't have to move. I've changed the parameters to use 8K blocks (-b8) to be closer to how log files are used in SQL Server.

sqlio -dL -BH -kW -fsequential -t1 -o1 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t2 -o1 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t4 -o1 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t8 -o1 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t8 -o2 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t8 -o4 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t8 -o8 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t8 -o16 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t8 -o32 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t8 -o64 -s90 -b8 testfile.dat
sqlio -dL -BH -kW -fsequential -t8 -o128 -s90 -b8 testfile.dat

Here are the results:

| Outstanding I/Os | IOs per second | MB per second |
| --- | --- | --- |
| 1 | 11,510 | 90 |
| 2 | 14,888 | 116 |
| 4 | 15,030 | 117 |
| 8 | 14,911 | 116 |
| 16 | 15,006 | 117 |
| 32 | 14,931 | 116 |
| 64 | 15,062 | 117 |
Each of these I/Os is 8K, smaller than the 64K I/Os used in the earlier tests, so the number of operations rises rapidly to around 15,000 per second and plateaus at 117 megabytes per second.
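The same block-size arithmetic used earlier confirms the numbers, and shows how block size trades operations for bandwidth on the same drive:

```python
def mb_per_sec_for(ios_per_sec, block_kb):
    """MBs/sec for a given IOs/sec figure and block size in KB."""
    return ios_per_sec * block_kb / 1024.0

# Sequential 8K writes plateau near 15,000 IOs/sec:
print(round(mb_per_sec_for(15030, 8), 1))  # ~117.4 MB/sec, matching the table

# Hitting the same 117 MB/sec with 64K blocks would need far fewer operations:
print(round(117.4 * 1024 / 64))            # ~1878 IOs/sec
```

For transaction log drives the IOs/sec figure is the one to watch: each small log flush is one of these operations, so the ~15,000 ceiling bounds how many flushes per second the drive can absorb.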

Conclusion

One of the keys to achieving this performance is that the SQLIO test was the only activity on the drive while it ran. Similar results should apply to SQL Server, so long as the drive isn't shared with other activity. The numbers from these tests can be compared against runs with other drive formats, different RAID settings and other drives to build a picture of the performance to expect. I also keep the results on hand so that, if I later suspect something is wrong with a drive, I can re-run the tests and quickly verify whether performance has changed.




About the author
MSSQLTips author Andy Novick Andy Novick is a SQL Server Developer in the Boston area with 25 years of database and application development experience.

This author pledges the content of this article is based on professional experience and not AI generated.




Comments For This Article




Sunday, March 10, 2019 - 10:58:49 PM - Mathew Somers Back To Top (79240)

The SQLIO Disk Subsystem Benchmark Tool has been retired and replaced by DiskSpd.exe. You can download the tool and documentation from https://gallery.technet.microsoft.com/DiskSpd-A-Robust-Storage-6ef84e62. Please download the readme file for more details.


Tuesday, October 28, 2014 - 11:08:12 AM - thierry Back To Top (35096)

tip :)

you can use fsutil.exe file createnew ... and setvaliddata to quickly create a 100GB file


Monday, October 7, 2013 - 3:48:29 PM - Marty Back To Top (27064)

14 drive RAID10 array with 15K spinning disks.

If each drive could supply 200 IOPS (I've seen ratings of 140 - 190 for a 15K RPM drive) , teh max write should be 1400 IOPS and the max read should be 2800 IOPS.

SQLIO is reporting 6000 - 8200 IOs/sec.

How can that be?  I'm not so sure I can trust the tool...

 


Saturday, May 4, 2013 - 12:39:03 PM - Lelala Back To Top (23721)

Any idea, why they came up with this 8kb thingy?
Why not 16kb?
Why not 4kb?
Or is it because on early 32-bit systems, 1st-level-cache of most CPUs was 8kb?

Regards

Flo

Developer / www.lelala.de


Saturday, February 2, 2013 - 5:21:35 PM - Steven Back To Top (21879)

Sorry an example might make it easier to understand what I'm saying

 

So rather than

sqlio -kR -s60 -o1 -frandom -b4 -BH -LS -Fparam.txt

Timeout /T 30

sqlio -kR -s60 -o2 -frandom -b4 -BH -LS -Fparam.txt

Timeout /T 30

sqlio -kR -s60 -o4 -frandom -b4 -BH -LS -Fparam.txt

Timeout /T 30

sqlio -kR -s60 -o8 -frandom -b4 -BH -LS -Fparam.txt

Timeout /T 30

sqlio -kR -s60 -o16 -frandom -b4 -BH -LS -Fparam.txt

Timeout /T 30

sqlio -kR -s60 -o32 -frandom -b4 -BH -LS -Fparam.txt

Timeout /T 30

sqlio -kR -s60 -o64 -frandom -b4 -BH -LS -Fparam.txt

Timeout /T 30

sqlio -kR -s60 -o128 -frandom -b4 -BH -LS -Fparam.txt

Timeout /T 30

 

sqlio -dT -kR -t2 -s60 -o1 -frandom -b4 -BH -LS testfile.dat

Timeout /T 10

sqlio -dT -kR -t2 -s60 -o2 -frandom -b4 -BH -LS testfile.dat

Timeout /T 10

sqlio -dT -kR -t2 -s60 -o4 -frandom -b4 -BH -LS testfile.dat

Timeout /T 10

sqlio -dT -kR -t2 -s60 -o8 -frandom -b4 -BH -LS testfile.dat

Timeout /T 10

sqlio -dT -kR -t2 -s60 -o16 -frandom -b4 -BH -LS testfile.dat

Timeout /T 10

sqlio -dT -kR -t2 -s60 -o32 -frandom -b4 -BH -LS testfile.dat

Timeout /T 10

sqlio -dT -kR -t2 -s60 -o64 -frandom -b4 -BH -LS testfile.dat

Timeout /T 10

sqlio -dT -kR -t2 -s60 -o128 -frandom -b4 -BH -LS testfile.dat

Timeout /T 10


Saturday, February 2, 2013 - 5:17:42 PM - Steven Back To Top (21878)

Whilst SQLIO is a great tool to use I think the param.txt file and some of the switches can be misleading. When I use the params.txt file you specify the number of Threads which seems to over ride anything you might use like -t1, -t2 etc in your command line. So my two penneths worth is that it's sometimes favourable to drop specifying the params.txt and specify filetest.dat simply by itself uding the -d option for the drive where your hosting your filetest.dat file. This then gives you more control on the amount of threads being used. essentially it boils down to potentially specifying multiple commands in the params.txt file or specifying more explicit switches in the command line from my experience. 

Who knows this may help someone whose getting lots of figures but necessarily the ones they thought they were. 


Thursday, January 31, 2013 - 2:55:44 AM - D Jota Back To Top (21823)

Hi Andy!

Let me say that this article was very useful when I was learning about I/O. Thank you!

I noted that if you run the commands this way:
>sqlio -dC -BH -kW -frandom -t1 -o1 -s90 -b64 testfile.dat
... then SQLIO generates and expands a "testfile.dat" in the same directory as the program file (in C: drive) and performs the test on it! Yes, SQLIO ignores the test file pointed in param.txt.

But we are expecting to benchmark the test file in L: drive.

Then I realized that the commands need a backslash before the test file name. In example:
>sqlio -dC -BH -kW -frandom -t1 -o1 -s90 -b64 \testfile.dat
... so SQLIO uses the test file you generated in L: drive (according to "param.txt" file).
What do you think about?


I wrote the commands in one .BAT file for each test:
>test_random_writes.bat
>test_random_reads.bat
>test_sequential_reads.bat
>test_sequential_writes.bat
... then ran it to print on txt.


Thanks again!

Djalma J da Silva


Wednesday, June 20, 2012 - 11:47:04 AM - Suman Debnath Back To Top (18131)

thanks for sharing this….its very helpful….

i have a quick question…..why the IOPS decreases with more Outstanding IO, i understand it would saturates at some point, but I noticed in some test which i ran the IOPS decreases after a certain no. of “Outstanding IO”…..?


Monday, December 12, 2011 - 3:35:24 PM - Ken Powers Back To Top (15341)

Hi Andy,

In the article you say to use a value of "H" for the Buffering option, since SQL Server uses Hardware Buffering:

-B Controls buffering.  N for no buffering, H for only hardware controller buffering, S for Windows software buffering, or Y for both hardware and software buffering.  To match what SQL Serve is doing, use -BH for just hardware buffering.

I ran a series of tests in which I only varied the Buffering parameter.  When I looked at the results for No Buffering, I found this:

buffering set to not use file nor disk caches (as is SQL Server)

I've checked other sites (such as those by Jonathan Kehayias and Kevin Kline) and they are also using the Hardware Buffering parameter.  Why is SQLIO reporting that SQL Server does not use Hardware Buffering?

I also noticed that I seemed to get the best performance when I used the No Buffering option.  My results are included.

sqlio v1.5.SG
using system counter for latency timings, 2213652 counts per second
2 threads writing for 300 secs to file E:testfile.dat
 using 64KB random IOs
 enabling multiple I/Os per thread with 1 outstanding
 buffering set to not use file nor disk caches (as is SQL Server)
using current size: 720896 MB for file: E:testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec:  2158.84
MBs/sec:   134.92
latency metrics:
Min_Latency(ms): 0
Avg_Latency(ms): 0
Max_Latency(ms): 6263
histogram:
ms: 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 98  1  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0

sqlio v1.5.SG
using system counter for latency timings, 2213652 counts per second
2 threads writing for 300 secs to file E:testfile.dat
 using 64KB random IOs
 enabling multiple I/Os per thread with 1 outstanding
 buffering set to use both file and disk caches
using current size: 720896 MB for file: E:testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec:  1726.13
MBs/sec:   107.88
latency metrics:
Min_Latency(ms): 0
Avg_Latency(ms): 0
Max_Latency(ms): 76
histogram:
ms: 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 40 54  6  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0

sqlio v1.5.SG
using system counter for latency timings, 2213652 counts per second
2 threads writing for 300 secs to file E:testfile.dat
 using 64KB random IOs
 enabling multiple I/Os per thread with 1 outstanding
 buffering set to use hardware disk cache (but not file cache)
using current size: 720896 MB for file: E:testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec:  1814.67
MBs/sec:   113.41
latency metrics:
Min_Latency(ms): 0
Avg_Latency(ms): 0
Max_Latency(ms): 29180
histogram:
ms: 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 95  3  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0

sqlio v1.5.SG
using system counter for latency timings, 2213652 counts per second
2 threads writing for 300 secs to file E:testfile.dat
 using 64KB random IOs
 enabling multiple I/Os per thread with 1 outstanding
 buffering set to use software file cache (but not disk cache)
using current size: 720896 MB for file: E:testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec:  1688.02
MBs/sec:   105.50
latency metrics:
Min_Latency(ms): 0
Avg_Latency(ms): 0
Max_Latency(ms): 42253
histogram:
ms: 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 89  9  1  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0  0


Saturday, August 27, 2011 - 3:45:44 PM - Lazuardi Nasution Back To Top (14534)

Why you only do fewer tests of Random Writes Test (4 tests) than the other tests (11 tests)? 

 
Please explain me the reason why you use such values of -t, -o, -s and -b options.
 
Some articles tell that SQL Server will do many 8KB operation. What do you think about this statement.
 

Wednesday, January 26, 2011 - 5:22:58 PM - David F. Back To Top (12721)

Great stuff -

We've been attempting to debug some SQLIIO results randomness for some time now and you have presented some different conclusions and script parameters that I haven't used before.

Here's a sample of the script we're using:

sqlio -kR -s1200 -frandom -o8 -b8 -LS -Fparam.txt >> sqlio1.txt
sqlio -kW -s1200 -frandom -o8 -b8 -LS -Fparam.txt >> sqlio1.txt

ping -n 91 127.0.0.1 >NUL

sqlio -kR -s1200 -fsequential -o8 -b8 -LS -Fparam.txt >> sqlio1.txt
sqlio -kW -s1200 -fsequential -o8 -b8 -LS -Fparam.txt >> sqlio1.txt

ping -n 91 127.0.0.1 >NUL

sqlio -kR -s1200 -fsequential -o8 -b64 -LS -Fparam.txt >> sqlio1.txt
sqlio -kW -s1200 -fsequential -o8 -b64 -LS -Fparam.txt >> sqlio1.txt

ping -n 91 127.0.0.1 >NUL

sqlio -kR -s1200 -fsequential -o8 -b128 -LS -Fparam.txt >> sqlio1.txt
sqlio -kW -s1200 -fsequential -o8 -b128 -LS -Fparam.txt >> sqlio1.txt

ping -n 91 127.0.0.1 >NUL

sqlio -kR -s1200 -fsequential -o8 -b256 -LS -Fparam.txt >> sqlio1.txt
sqlio -kW -s1200 -fsequential -o8 -b256 -LS -Fparam.txt >> sqlio1.txt

ping -n 91 127.0.0.1 >NUL

sqlio -kR -s1200 -fsequential -o8 -b1024 -LS -Fparam.txt >> sqlio1.txt
sqlio -kW -s1200 -fsequential -o8 -b1024 -LS -Fparam.txt >> sqlio1.txt

We're running each of these test runs in sequence 10 times in an attempt to settle on some consistent results. The servers we're running on are only running OS and SQLIO, and we cut them off from the network (NICs disabled and controlled access).

We're seeing some significant "bouncing around" of our results that often times when graphed out it looks like an earthquake display on a seismograph! Some max deviations can be as much as 30%-60%.

Any ideas? Have you experienced this before with SQLIO?


Wednesday, December 8, 2010 - 5:10:37 AM - Marc Back To Top (10424)

Thanks Andy, you were right.

I had created the correct 25 GB testfile.dat at the root of c:\ but for some strange reason when I ran the tests it recreated a new testfile.dat in the folder where I was running the test and it was 8 MB in size. Running the same test from the root of the C:\ drive solved the issue and it is now using the correct 25 GB testfile.dat with performances lower than for my servers which seems more coherent:

C:\>"Program Files (x86)\SQLIO\sqlio" -kW -t2 -s120 -dC -o1 -frandom -b64 -BH -LS Testfile.dat
sqlio v1.5.SG
using system counter for latency timings, 2929384 counts per second
2 threads writing for 120 secs to file C:Testfile.dat
        using 64KB random IOs
        enabling multiple I/Os per thread with 1 outstanding
        buffering set to use hardware disk cache (but not file cache)
using current size: 24576 MB for file: C:Testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec:   120.69
MBs/sec:     7.54
latency metrics:
Min_Latency(ms): 0
Avg_Latency(ms): 16
Max_Latency(ms): 880
histogram:
ms: 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 11  1  0  0  1  2  4  4  5  3  2  4  5  5  5  5  6  6  4  3  3  3  2  2 13

:)


Tuesday, December 7, 2010 - 8:53:50 PM - Andy Novick Back To Top (10423)

Hi,

The "using current size: 8 MB"  is for the size of textfile.dat, which is the file that SQLIO runs it's I/O operations aginst and you've specifed the file name on the command line.  8MB must be the default. The number goes in Param.txt.  It doesn't look like your command lines point to a parms.txt so 8MB might be the default or you might not have a file that it could find.

HTH

Andy

 


Tuesday, December 7, 2010 - 5:11:39 AM - Marc Back To Top (10416)

Hi,

When I run these tests on my server I get these poor results :

C:\Program Files (x86)\SQLIO>sqlio -kW -t4 -s120 -dE -o1 -frandom -b64 -BH -LS Testfile.dat
sqlio v1.5.SG
using system counter for latency timings, 14318180 counts per second
4 threads writing for 120 secs to file E:Testfile.dat
        using 64KB random IOs
        enabling multiple I/Os per thread with 1 outstanding
        buffering set to use hardware disk cache (but not file cache)
using current size: 24576 MB for file: E:Testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec:   377.89
MBs/sec:    23.61
latency metrics:
Min_Latency(ms): 1
Avg_Latency(ms): 10
Max_Latency(ms): 109
histogram:
ms: 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%:  0  2  4  3  4  7  9 10  9  8  8  7  6  4  3  3  3  2  1  1  1  1  1  1  4

Whereas when I run them on my desktop computer I get better ones :

C:\Program Files (x86)\SQLIO>sqlio -kW -t4 -s120 -dC -o1 -frandom -b64 -BH -LS Testfile.dat
sqlio v1.5.SG
using system counter for latency timings, 2929375 counts per second
4 threads writing for 120 secs to file C:Testfile.dat
        using 64KB random IOs
        enabling multiple I/Os per thread with 1 outstanding
        buffering set to use hardware disk cache (but not file cache)
using current size: 8 MB for file: C:Testfile.dat
initialization done
CUMULATIVE DATA:
throughput metrics:
IOs/sec:  1224.20
MBs/sec:    76.51
latency metrics:
Min_Latency(ms): 0
Avg_Latency(ms): 2
Max_Latency(ms): 245
histogram:
ms: 0  1  2  3  4  5  6  7  8  9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24+
%: 11 46 16  5  4  3  3  2  2  2  1  1  1  0  0  0  0  0  0  0  0  0  0  0  0

What I'd like to know is what this settings means : "using current size: 8 MB" and why on the server it's "using current size: 24576 MB" ?

Thanks a lot in advance,


Monday, October 4, 2010 - 11:52:05 AM - Greg Back To Top (10221)
Disregard my comment.  I see you where first running a test utilizing the number of CPUs by increasing the "-t" option, and then increasing the outstanding requests after that by using the "-o". after than.  I just read the post incorrectly, my mistake.

 


Monday, October 4, 2010 - 11:48:22 AM - Greg Back To Top (10220)
Great post, very informative.  In your tip you mentioned this statement "Each line doubles the number of outstanding requests from the previous line", but looked like you only changed the "-t" option, and the "-o" option stated at 1.  I'm sure I don't understand SQLIO, since I'm new to using it, but isn't the "-o" option suppose to be the one that controls the outstanding request, and not the "-t" option?  What am I missing?

Greg


Monday, October 4, 2010 - 8:45:31 AM - Tim Back To Top (10218)
Bent Ozar posted a great procedure that can parse the output of SQLIO and load the values into a table making the process practically painless.

http://www.brentozar.com/archive/2008/11/storage-performance-testing-with-sqlio/














