Background:
I got into a heated discussion about the use of /dev/zero with the Unix utility dd in order to benchmark disk speed.
To me, writing zeros serves little purpose: they are highly compressible to start with. Aside from that, writing data linearly is hardly a method to benchmark your disk speed!
To deal with compression, you must write data that simply does not compress: that means randomisation.
But one of the opposing arguments was that you couldn't, because you would have to use /dev/random as the source of random data, and that's too slow.
/dev/random requires a high level of entropy to provide good random values; it is also very CPU intensive. But one of the main issues with /dev/random is that it blocks if it doesn't have enough entropy, and it stays blocked until it gets some.
So yes, /dev/random isn't a proper source of data for dd.
On Linux, /dev/urandom is a better choice, but it is still too slow (and on FreeBSD, /dev/urandom is just a link to /dev/random).
Well, you don't have to use /dev/random to start with; there are other ways to generate random data at a much faster rate.
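As a rough illustration of the idea (a hypothetical sketch, not dd2's actual source; the file name, block size and block count are just placeholders): seed a cheap in-process generator, refill a buffer with it, and write that buffer over and over, so no bytes ever come from a kernel device.

/* sketch.c - hypothetical illustration: write pseudo-random blocks
   without reading from /dev/random or /dev/urandom */
#include <fcntl.h>
#include <stdint.h>
#include <stdlib.h>
#include <unistd.h>

/* any fast in-process generator will do; a simple xorshift step
   stands in here for the real randomisers discussed below */
static uint32_t rng_state = 2463534242u;
static uint32_t next_random(void)
{
    rng_state ^= rng_state << 13;
    rng_state ^= rng_state >> 17;
    rng_state ^= rng_state << 5;
    return rng_state;
}

int main(void)
{
    const size_t bs = 1048576;                        /* 1MiB blocks */
    uint32_t *buf = malloc(bs);
    int fd = open("testfile", O_WRONLY | O_CREAT | O_TRUNC, 0644);
    if (buf == NULL || fd < 0)
        return 1;

    for (int n = 0; n < 1024; n++) {                  /* 1GiB in total */
        for (size_t i = 0; i < bs / sizeof(uint32_t); i++)
            buf[i] = next_random();                   /* refill every block */
        if (write(fd, buf, bs) != (ssize_t)bs)
            return 1;
    }
    close(fd);
    free(buf);
    return 0;
}

Time the run and divide the bytes written by the elapsed time; with a generator that costs a few instructions per word, it is the disk, not the data source, that you end up measuring.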
And so comes dd2.
dd2 provides features similar to dd's (though, at this stage, it doesn't work with pipes, only with files or block devices).
It takes the following optional arguments:
-i : input file name (equivalent to if=)
-o : output file name (equivalent to of=)
-b : block size for read/write (equivalent to bs=)
-n : number of blocks to write in total (equivalent to count=)
-z : write zeros instead of random data (this is much faster than reading from /dev/zero: 175 times faster on OS X 10.9)
Block size and number of blocks take an integer, so if you want bs=1k, use -b 1024, etc.
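For example (with a placeholder file name), writing 1GiB of random data as 1024 blocks of 1MiB would look like: dd2 -o testfile -b 1048576 -n 1024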
The difference with dd is that, by default, it generates random data, and very quickly: quickly enough to make the generation overhead negligible.
There are multiple randomisation algorithms available.
- INTELSSE: an SSE2-accelerated LCG (Linear Congruential Generator) algorithm. Original code found here; the only modifications I made were to make it compile with gcc and clang.
- ARCRANDOM: the arc4random number generator, available on BSD systems (including OS X). It uses the key stream generator employed by the arc4 cipher.
- RANDOMC: a collection in C of seven fast random generators by George Marsaglia.
INTELSSE is by far the fastest, giving over 7GB/s of random generation on an i7-2600 CPU. But it's far from perfect as far as randomness quality is concerned.
ARCRANDOM is the slowest, and is really only provided for comparison purposes. It is too slow for testing disk speed (97MB/s on my system).
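For reference, on BSD systems and OS X the usual way to pull bytes from that keystream generator without touching /dev/random is arc4random_buf(); filling a block takes a single call (how dd2 wires it in is not shown here, so treat this as an illustrative assumption):

#include <stdlib.h>     /* arc4random_buf() lives here on BSD / OS X */

/* fill a block with arc4 keystream bytes; no /dev/* access involved */
static void fill_block_arc4(void *buf, size_t len)
{
    arc4random_buf(buf, len);
}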
RANDOMC provides seven methods: LFIB4, SWB, KISS, CONG, SHR3, MWC and FIB. You can read all about them there.
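To give a feel for why these generators are so cheap, here are two of the simpler ones sketched in plain C from Marsaglia's published descriptions (constants as commonly circulated; the actual RANDOMC sources may differ in detail):

#include <stdint.h>

static uint32_t jcong = 380116160u;                   /* arbitrary seeds */
static uint32_t z = 362436069u, w = 521288629u;

/* CONG: a plain linear congruential step - one multiply, one add */
static uint32_t cong(void)
{
    return jcong = 69069u * jcong + 1234567u;
}

/* MWC: two 16-bit multiply-with-carry halves glued into a 32-bit word */
static uint32_t mwc(void)
{
    z = 36969u * (z & 65535u) + (z >> 16);
    w = 18000u * (w & 65535u) + (w >> 16);
    return (z << 16) + w;
}

Each call is a handful of register operations with no system call, which is why rates in the GB/s range are possible where /dev/urandom is not; SHR3 is essentially the xorshift-style step shown in the sketch earlier, and KISS combines MWC, CONG and SHR3.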
All methods but INTELSSE and RANDOMC-FIB generate data that is incompressible by either gzip or bzip2.
INTELSSE and RANDOMC-FIB will compress to around 50%, but they are *FAST*.
I recommend using SWB; it's fast enough (close to 2GB/s) for testing any spinning hard disk, including striped RAID.
You can find the source code of dd2 here.
With gcc, compile with:
gcc -O -msse2 -o dd2 dd2.c
With clang:
clang -O -msse2 -o dd2 dd2.c
-msse2 isn't required unless you use the INTELSSE randomiser.
Enjoy.
Update: added the -z option, to write zeros instead of random data.