- BlueArc: 2 Titan heads, Fibre Channel storage mounted over NFS on a
farm of Linux 2.4.21-32.0.1.ELsmp #1 SMP nodes
- stargrid03 to data10: Linux stargrid03.rcf.bnl.gov 2.4.20-30.8.legacysmp #1 SMP Fri Feb 20 17:13:00 PST 2004 i686 i686 i386 GNU/Linux, writing to a reserved NFS-mounted volume.
- PANASAS file system mounted on a Linux rplay15.rcf.bnl.gov 2.4.20-19.8smp #1 SMP Wed Apr 14 10:50:14 EDT 2004 i686 i686 i386 GNU/Linux.
Consult the Panasas web site for more information about their products. This was tested in May 2004 (driver available)
- PANASAS file system mounted on a Linux rplay21 2.4.20-19.8smp #1 SMP Mon Dec 29 16:49:59 EST 2003 i686 i686 i386 GNU/Linux.
Consult the Panasas web site for more information about their products. This was tested on Dec 30th 2003 (driver version
- IDE RAID (SunOS rmine608 5.8 Generic_108528-22 sun4u sparc SUNW,Ultra-4)
- SCSI vs IDE comparison.
SCSI disks are 36 GB, 10k rpm (QUANTUM ATLAS_V_36_SCA).
IDE disks are 40 GB, 5400 rpm (QUANTUM FIREBALLlct20 40),
both on the same dual Pentium III machine, 1 GB of RAM, running
Linux 7.2, kernel 2.4.9-31smp (done on Dec 13 2002)
- Linux Client -> IBM Server tests.
Comments are in the page (done on Jan 7th 2002)
- MTI performances (done on Nov 17 2001)
- LSI performances (before it died)
(done on Nov 6th 2001)
Notes : all to be read keeping in mind that the 8 kByte (2^3) tests may
not represent the best scanning region ...
- Both vendors have very poor write performance; the maximum read/write seems
to be around 16/47 MB/sec. This result is supported by IOzone.
- Individual IO tests show similar MTI/LSI results; the LSI seems
slightly better (its IO profile is flatter, with only small performance
degradations depending on file size and/or block size).
- The MTI tests contain several Ioperf tables made under different
conditions in order to map out the 'best performance' ridge.
This page will contain results for different IO performance tests made on
the RCF hardware. The performance tests are based on 2 programs
- IOzone : this benchmark generates and
measures a variety of file operations. IOzone has been ported to many
machines and runs under many operating systems. We used this program to
get a quick estimate of the IO profile. I used the -a flag (full
automatic mode) but report only a few of the results I found.
Note that the 'buffered' IO also benchmarks the OS's ability to flush
data in/out of cache, as well as the C function implementation (asynchronous
or not). On each plot, the x and y axes are in log2 base kByte; the
initial gap up to 2^2 kBytes (4 kBytes) represents the minimal startup
value; the square hole at large file size is an artifact of this program.
- Left side : the Write operations
- Right side: the Read operations
- From top to bottom : Buffered IO, Random IO, un-buffered IO.
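For reference, the full automatic mode mentioned above is invoked as follows (the output file name is our own choice, not part of IOzone):

```shell
# Full automatic mode: IOzone sweeps record and file sizes in powers of two.
iozone -a > iozone_results.txt
```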
Ioperf is a program I wrote myself out of great frustration
with the Bonnie results (Bonnie reports IO performances
far higher than what the card can do, even in unbuffered mode, so it
is wrong !!). In any case,
- buffered C-IO tests (fwrite, fread) are made, BUT ensuring
that all IO is flushed when the test is done (sync). This was chosen
to be representative of typical IO usage. However, the read operations
may still be affected by buffering in the case of NFS transactions
between client and server. We recommend to test performances on the
- Character-per-character IO tests are displayed as a worst-case scenario
- Random-seek tests are made over the entire file size. This tests
the device's ability to locate and access data at any place on the
partition. Those values also give you an idea of what to expect as
- Each result is tabulated by real time, CPU time (or maximum
performance) and percentage efficiency. Note that this ratio (or percentage)
may be affected by the system's load. However, all tests were done in
no-load mode, so the RealTime/CPUtime ratio is in our case a good measure
of the system's response.
- Ioperf creates a 400000 KB file for its test. That is 18.6 on the
IOzone file-size axis, a region where IOzone does not provide any results.
One has to visually extrapolate ...