ZFS benchmarks
Tuesday, April 24, 2007
Comments
Very interesting!
Since either the spam preventer or my mind's image recognition has a bug, I can't comment on the original blog, so I'm doing it here. Some remarks:

1. For sustained throughput we won't see the same behaviour, I think. At some point ZFS simply has to write the data. If ZFS then doesn't go above 50 MB/s of bandwidth utilization, throughput will drop; if it does go higher, the perceived advantage no longer exists, at least for writes.

2. The Iozone file size used in the benchmark is too small for that kind of system, I think. For Iozone, the file size should at least exceed the RAID controller's cache and also the main memory available to the benchmark; otherwise you mostly measure cache performance instead of getting a comprehensive picture (a minimal example invocation is sketched after this comment). The 3510 has 1 GB of cache on each controller. We run a single-controller 3510 with 12 36 GB 15K disks, we did try Iozone, and the difference between a 512 MB and a 2 GB file size was staggering.

3. As for the slow write speed: the 3510 with dual controllers has an overhead for the active-active configuration. During writes the caches need to be synchronized, and that takes time. In general, we too were a little "underwhelmed" by the 3510's performance. Its write speed is actually slower than that of our dated T3+ units, although the T3+ use 9 disks instead of 12, and slower disks at that.

4. As always, I find it rather difficult to apply findings from a benchmark or performance test to a concrete usage scenario.

A general comment regarding ZFS: we will shortly be installing two new machines, and we still won't use ZFS. Reasons? Still no ZFS boot, and we are still not convinced that ZFS is a good match for providing block devices for virtualisation and databases. So we keep looking at ZFS with an I-wish-we-could attitude.
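To make point 2 concrete, here is a minimal sketch of such an Iozone run; the pool path, file size and record size are made-up illustration values, not the ones from the benchmark. With 1 GB of controller cache plus the host's RAM, the test file should comfortably exceed both, e.g.:

    # Hypothetical run: 8 GB file chosen to exceed controller cache plus RAM
    iozone -i 0 -i 1 -e -r 128k -s 8g -f /testpool/iozone.tmp

Here -s sets the file size, -r the record size, -i 0 and -i 1 select the write/rewrite and read/reread tests, and -e includes flush (fsync) time in the measurement, which matters when caches would otherwise absorb the writes.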
#1 on 2007-04-24 12:49
1. Of course ZFS can go higher than 50 MB/s; in general, ZFS makes better use of the resources by transforming random writes into sequential ones (a quick way to check the sustained rate is sketched below).

2. With Update 4, the performance of a tuned ZFS is within striking distance of a tuned UFS, for example with Oracle, while providing all the advantages of ZFS.

3. I think the advantages of ZFS for providing block devices are really compelling.
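As a sanity check on point 1, one can watch what the pool actually sustains during a long write (the pool name "tank" is hypothetical):

    # Watch per-device read/write bandwidth at 1-second intervals
    zpool iostat -v tank 1

If the write bandwidth column stays well above 50 MB/s for the whole run, the throughput is real and not just cache absorption.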
I wouldn't compare ZFS with UFS for Oracle or DB2 containers.
The competition is raw devices on RAID1 or RAID10 volumes, and I'm not convinced that ZFS can compete with those from a performance point of view (for databases!). The ideal solution: an Oracle and/or DB2 version that hooks into ZFS at a lower level instead of being a regular Joe Vanilla filesystem user. ZFS for raw devices: OK; for provisioning iSCSI targets, sure, that's a very promising track (sketched below). Let's see what happens. Right now it would be very nice if I could have a Solaris version (SPARC and x86) where I can just say "use ZFS as the filesystem".
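A hedged sketch of that iSCSI-target track, using the mechanism Solaris Express offered at the time (pool and volume names are made up):

    # Create a 20 GB emulated volume; it appears as /dev/zvol/rdsk/tank/dbvol
    zfs create -V 20g tank/dbvol
    # Export it as an iSCSI target via the then-current shareiscsi property
    zfs set shareiscsi=on tank/dbvol

The same zvol device can also be handed directly to a database as a quasi-raw device, which is roughly the lower-level hook wished for above.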
#2.1 on 2007-04-24 16:51