Now I needed a test system... So I dug out my older Ultra 60 workstation, hooked up a DVD drive, and a few hours later I was good to go. Thus far, the only real change I noticed from the original Solaris 10 release was a newer, nicer-looking login screen.
Time to hook up a boatload of hard drives! I had an expansion box from my since-decommissioned CLARiiON FC RAID monster, complete with 10x36GB 10krpm FC hard drives. All I needed to do was connect them and reformat them with a normal block size (they were formatted for 520 bytes per sector instead of the normal 512, thanks to the CLARiiON controller). Unfortunately, all I had to connect them to was a QLogic QLA2100 FC HBA. The QLA2100 isn't supported past Solaris 8, or so they'd lead you to believe. Thankfully, you just have to get the Solaris driver, unpack it from the package stream QLogic provides, modify the package so it doesn't complain about your Solaris version, and install it. As expected, it then worked just fine.
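For the curious, the package surgery looks roughly like this. This is a sketch from memory: the stream filename and the QLA2100 package name are illustrative, and the exact install script containing the OS-version check may vary by driver release.

```shell
# Spool the package stream into individual package directories
# (qla2100.pkg is an assumed filename for the QLogic download):
pkgadd -s /var/spool/pkg -d qla2100.pkg

# The Solaris release check typically lives in one of the package's
# install scripts (checkinstall or request) -- edit out the test that
# rejects anything newer than Solaris 8:
vi /var/spool/pkg/QLA2100/install/checkinstall

# Install the modified package from the spool directory:
pkgadd -d /var/spool/pkg QLA2100
```

Since pkgadd verifies the package contents rather than a signature, editing the install scripts in the spool directory works without any repackaging step.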
To fix the block size on the drives, I got the "scu" utility from here, and then followed the instructions on this page. All pretty straightforward, but it did take about an hour per drive. It doesn't really do much I/O to the drive from your system, though, so doing all the drives at once does speed things up.
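The gist of the scu session, from memory, is below. This is illustrative only: the exact scu command names may differ by version (check its built-in help), and the device path is a placeholder.

```shell
# Point scu at the raw device for one of the 520-byte drives:
scu -f /dev/rdsk/c2t0d0s2

# Inside scu, roughly:
#   set pages format     # edit the format mode page: block length 520 -> 512
#   format               # issue a SCSI FORMAT UNIT -- this is the step
#                        # that takes about an hour per drive
```

The FORMAT UNIT command runs entirely inside the drive, which is why kicking off all ten drives in parallel from separate sessions costs almost nothing on the host side.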
Finally, I went through "format" on each drive to fix the annoying "bad magic" messages. Now I had 10 drives at the end of an FC link, all set and good to go!
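The "bad magic" warning just means the drive has no valid Sun disk label yet, so the fix is to write one. A sketch of the interactive session (menu choices are from memory):

```shell
format                    # pick the complaining disk from the list
#   format> type          # choose "0. Auto configure" to probe geometry
#   format> label         # write a fresh Sun label -- clears "bad magic"
#   format> quit
```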
Setting up ZFS was really easy. If you haven't done so yet, I strongly recommend going here to review their documentation and screencasts. The specific commands are really easy to figure out, but that site shows them to you. Essentially, with ZFS, you make a pool out of mirrors, RAID-Z sets, or individual disks. You can then chop up the pool however you see fit.
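To give a flavor of how little typing this takes, here are the pool layouts I benchmarked below, sketched with hypothetical device names (cXtYdZ; substitute your own from "format"):

```shell
# One 10-drive RAID-Z set:
zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
                        c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0

# Two 5-drive RAID-Z sets striped together in one pool:
zpool create tank raidz c2t0d0 c2t1d0 c2t2d0 c2t3d0 c2t4d0 \
                  raidz c2t5d0 c2t6d0 c2t7d0 c2t8d0 c2t9d0

# Then chop up the pool however you see fit:
zfs create tank/bench
```

Between runs, "zpool destroy tank" tears the whole thing down so you can try the next layout.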
In any case, I tried a few configurations and ran some benchmarks. Keep in mind that testing with "dd" and a large block size will ALWAYS yield better results than you'll ever see from a real benchmark program. (I think I got up to 80MB/s with "dd" at some point.) Running multiple benchmark programs or "dd" sessions in parallel may also yield higher throughput. FYI, I was connecting to all 10 drives over a single 100MB/s FC link. So on with the results!
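For reference, the kind of "dd" run I mean is just a big sequential write of zeros (the target path is whatever filesystem you carved out of the pool):

```shell
# 2 GB sequential write in 1 MB blocks -- this flatters the numbers
# because it's pure streaming I/O: no seeks, no small files, no
# metadata churn, unlike bonnie++'s mixed workload below.
dd if=/dev/zero of=/tank/bench/ddtest bs=1024k count=2048
```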
One 10-drive RAID-Z set
$ bonnie++ -d . -s 2G
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
proxima 2G 14685 91 35308 48 23733 49 13301 92 52396 50 512.1 13
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 4798 99 +++++ +++ 7455 99 5230 99 +++++ +++ 7350 97
proxima,2G,14685,91,35308,48,23733,49,13301,92,52396,50,512.1,13,16,4798,99,+++++,+++,7455,99,5230,99,+++++,+++,7350,97
One 5-drive RAID-Z set
$ bonnie++ -d . -s 2G
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
proxima 2G 15241 94 32991 44 24989 45 13676 93 58862 52 550.2 10
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 4821 97 +++++ +++ 7509 99 5190 98 +++++ +++ 7849 99
proxima,2G,15241,94,32991,44,24989,45,13676,93,58862,52,550.2,10,16,4821,97,+++++,+++,7509,99,5190,98,+++++,+++,7849,99
Two 5-drive RAID-Z sets
$ bonnie++ -d . -s 2G
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
proxima 2G 15051 92 30531 41 26045 47 14018 93 57507 56 864.6 12
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 4908 99 +++++ +++ 6800 95 4868 99 +++++ +++ 5847 81
proxima,2G,15051,92,30531,41,26045,47,14018,93,57507,56,864.6,12,16,4908,99,+++++,+++,6800,95,4868,99,+++++,+++,5847,81