Earlier this evening, I was having a little chat with my good friend and development wizard Chris, about a recent frustration with the world of certain "Frameworks".
As I proceed with a software engineering class project involving an application built on JavaServer Faces running on top of GlassFish, I find myself reminded of a rant someone sent me a link to recently. While I ultimately have a far more balanced view, I do find his rambling somewhat amusing.
So anyways, here's my rant:
After a group meeting for my class, it looks like the most sensible way to design our web pages for this web app is essentially a templated composite view, so we can have standard headers, sidebars, etc., without having to code them on each page.
As such, I start looking up how to do them properly in the context of JSF (JavaServer Faces), which conventionally uses JSP (JavaServer Pages) for its page description system.
Then I start reading that you can do it this way, or that way, but that those ways all have problems for this reason or that reason.
In fact, most conventional documents on how to use JSF ignore the problem completely.
But the writeups that do discount all the obvious ways then recommend an alternate page composition system called "Facelets".
So I start reading up on Facelets, and it turns out Facelets uses XHTML instead of JSP, because JSP is bad for this reason or that reason (especially with JSF), and the docs talk all about how Facelets and JSF go wonderfully together.
As such, I look into what it'll take to have Facelets integrate cleanly with my IDE of choice (NetBeans), which I picked because it integrates so well with the other ways I was trying to do things.
I discover a NetBeans plugin that'll let me do Facelets, providing all the tag-completion on XHTML files that I was getting on JSP files. The only problem is that it's at version 0.3.
I install it anyways, and finally figure it all out.
In the end, Facelets has turned out to actually be a very nice solution, and I'm really happy with it.
(and since JSF supports pluggable page composition systems, it integrates cleanly and correctly too)
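To give a flavor of it: a Facelets layout is just an XHTML file with named insertion points, and each page fills them in. A minimal sketch from memory (file names and content hypothetical):

<!-- template.xhtml: the shared layout with header/sidebar/content slots -->
<html xmlns="http://www.w3.org/1999/xhtml"
      xmlns:ui="http://java.sun.com/jsf/facelets">
  <body>
    <div id="header">Standard header goes here</div>
    <div id="sidebar"><ui:insert name="sidebar">Default sidebar</ui:insert></div>
    <div id="content"><ui:insert name="content"/></div>
  </body>
</html>

<!-- page.xhtml: an individual page that fills in the slots -->
<ui:composition xmlns="http://www.w3.org/1999/xhtml"
                xmlns:ui="http://java.sun.com/jsf/facelets"
                template="template.xhtml">
  <ui:define name="content">This page's actual content.</ui:define>
</ui:composition>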
But, given that JSF seemed to start from the basis of "we see all these other frameworks out there designed to fix or improve upon what JSP provides", I wonder why I didn't just start by reading about Facelets.
(and in general, I'm actually quite pleased with the whole JSF approach)
I just wish Facelets got more coverage in my JSF-bible-type book (published last month, co-authored by one of the spec leads). All it got was a few examples in the short chapter on pluggable page rendering libraries.
Monday, November 06, 2006
Monday, October 30, 2006
The Java Preferences API vs. MacOS 10.4
While it may not be obvious from all my sysadmin-style tinkering discussed in this blog, I am also a software developer. Lately I've been working on a Java GUI program that has to save and load a complex tree of settings. While the project is nearing completion, I've decided to occasionally spend some time finding and fixing various issues and performance bottlenecks. This is the story of one of those bottlenecks...
A common way to save and load application preferences in Java is to use the Preferences API. This API uses a different back-end implementation on each operating system, but essentially provides a tree-structured store for preferences. The implementation is also intended to make the backing store mostly transparent: you just set your preferences, and there is no need to perform a "save" operation afterwards.
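For those who haven't used it, basic usage looks something like this (the node path and key names are hypothetical):

import java.util.prefs.Preferences;

public class PrefsDemo {
    public static void main(String[] args) throws Exception {
        // Nodes are addressed by a path, much like a filesystem.
        Preferences node = Preferences.userRoot().node("com/example/myapp/thingie1");

        // Values are written with typed put methods; no explicit
        // "save" call is required, since the backing store syncs on its own.
        node.put("name", "First Thingie");
        node.putInt("retries", 3);

        // Reads take a default value in case the key is absent.
        int retries = node.getInt("retries", 0);
        System.out.println("retries = " + retries);

        // flush() can force pending updates out, but is optional.
        node.flush();
    }
}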
Now first of all, I did want to make saving an explicit operation in my application. So while I maintained my own configuration tree, writing it to nodes in the Java Preferences tree was a user-triggered operation. Originally I also wrote out the Preferences tree to a file on disk, which was loaded in place of Java's Preferences store. However, upon realizing that was a needless waste of resources, I removed the external configuration file.
One major part of my application involves configuring a number of "thingies," where each "thingie" has a potentially large number of individual parameters directly under its configuration node. A problem I noticed was that as I increased the number of "thingies" in the configuration, save and load became dramatically slower. However, here's the really interesting part. It only really became slower on MacOSX! On my Linux test machine, the performance impact wasn't even noticeable!
Thanks to the wonderful profiler in NetBeans, I was able to trace the issue to methods under "java.util.prefs.AbstractPreferences", or more particularly "java.util.prefs.MacOSXPreferences". Apparently the MacOSX implementations of node() and the various put() methods can be quite slow. If you make enough of those calls, it really adds up.
So how did I fix it? Well, with something a bit less elegant than what I was doing before. You see, the save and load methods for "thingie" configurations really just involved converting items between a Map and Preferences nodes. Since the Map really just managed access to an object containing a collection of simple types (String, Integer, etc.), I got an idea. Why not just serialize the Map directly, and store it as a byte array in a single Preferences node? Sure, it may seem like a bit of an inelegant solution, but it worked! Not only did it work, it resulted in a MASSIVE performance increase.
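The gist of the trick, sketched out (class and key names hypothetical, and simplified from what my actual code does):

import java.io.*;
import java.util.HashMap;
import java.util.Map;
import java.util.prefs.Preferences;

public class BlobPrefs {
    // One putByteArray() per "thingie" instead of one put() per parameter.
    static void saveThingie(Preferences node, Map<String, Object> config)
            throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        ObjectOutputStream out = new ObjectOutputStream(bytes);
        out.writeObject(new HashMap<String, Object>(config)); // HashMap is Serializable
        out.close();
        node.putByteArray("config", bytes.toByteArray());
    }

    @SuppressWarnings("unchecked")
    static Map<String, Object> loadThingie(Preferences node)
            throws IOException, ClassNotFoundException {
        byte[] raw = node.getByteArray("config", null);
        if (raw == null) return new HashMap<String, Object>();
        ObjectInputStream in = new ObjectInputStream(new ByteArrayInputStream(raw));
        return (Map<String, Object>) in.readObject();
    }
}

One caveat worth noting: the Preferences API caps the size of any single stored value (byte arrays top out at roughly 6KB per key), so a truly huge map would have to be split across several keys.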
So how much of a performance improvement does this make? Reading the program's configuration used to take 2195ms, according to the profiler. It now only takes 956ms. However, writing used to take 7314ms. Now that number is down to 645ms!
So, what did we learn?
1. Java performance characteristics can vary wildly across operating systems.
2. Object serialization to a byte array node in the Java Preferences system can be significantly faster than creating an elegant structure within the Java Preferences API to store a lengthy configuration, especially on MacOSX.
Friday, September 01, 2006
Yet Another IPv6 Setup
Last weekend I got the bright idea to give IPv6 another attempt on my network. I had previously tried it a while back, tunneling straight from my Cisco router. However, that older router could only do IPv6 on a "testing" build of IOS. Being sick and tired of potential "issues" with that build, I wound up just ditching IPv6 for the time being. I also had only a single static IP at the time, which left me without any other good configuration options.
These days I have a connection with multiple static IPs, so I have more options available to me now. My current network config is also rather interesting, so allow me to illustrate:
{Internet} ---->(Cisco 4500 rtr)---->(FreeBSD firewall)====>{Multiple internal subnets}
Basically I've banished all NAT to that Cisco, which does the common port-translating NAT for most machines on my network. However, it also does 1:1 (bi-directional) NAT for my firewall and server machines. The advantage of 1:1 NAT is that only the IP address gets translated, and nothing else. As such, you can use it for a lot more than the usual restrictive TCP and UDP setup you get with port-translating NAT. Of course, since 1:1 NAT just translates addresses, you need to configure your firewall as if your machines did have public addresses.
So coming out of the Cisco router, I have my private address range (with some public IPs mapped to some of the private IPs). Just behind it, the FreeBSD firewall takes the next step. First, it filters out any traffic I don't want going into my network (obviously). Second, it takes this private address range and subnets it further. (the internal side of the box is a VLAN trunk to my switches) Yes, I have multiple subnets internally. This lets me separate different types of traffic for the purposes of flexibility and/or security.
Basically, I wanted to connect my various internal networks to the IPv6 Internet, by way of this FreeBSD firewall. (FYI, the system is running FreeBSD 6.0-RELEASE at the time of this writing, and is named "Tritanium") To accomplish this, I had two main options at my disposal:
- Use a "tunnel broker" service (e.g. Hurricane Electric, Hexago, or SixXS)
- Use a 6to4 tunnel (RFC 3056)
Of the two, I decided to attempt 6to4 this time around. The 6to4 method works by directly mapping your public IPv4 address into an IPv6 /48 subnet. Your border router then tunnels IPv6 packets directly inside IPv4 packets (as IP protocol 41). What's really cool about this is that you don't need any external services or configuration. If an IPv6 site pings your local 6to4 address, you will see the inbound packets while sniffing your external interface. So, with all that being said, time to get on to an account of my experiences:
Step 1: Figure out your IPv6 address
This is probably the easiest step of the entire adventure. You just take your public IPv4 address (yes, it does have to be a publicly routable address), convert it to hexadecimal, and tack it onto the end of the 6to4 prefix (2002). For the sake of this writeup, let's assume our public address is "12.34.56.78". In hex, that translates to "0C22384E", which gives us the following 6to4 subnet:
2002:c22:384e::/48
(IPv6 lets you omit leading zeros and abbreviate the end of the address, in case you were wondering.)
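If you'd rather not do the hex conversion by hand, a quick shell one-liner computes the prefix (shown with our hypothetical address; note it prints the leading zero that the abbreviated form drops):

$ printf "2002:%02x%02x:%02x%02x::/48\n" 12 34 56 78
2002:0c22:384e::/48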
Step 2: Configure the 6to4 tunnel
This was probably one of the most frustrating steps, despite the fact that it looks like it should be the easiest. I blame my configuration more than anything else, though. You see, while the external interface on Tritanium maps directly to a public IP address, it actually has a private IP address itself.
In short, you have to configure FreeBSD's stf(4) interface with your 6to4 address, and then set up routing. However, I had a bit of a problem. You see, for this to work in both directions, two things had to happen. First, Tritanium had to have something telling it that it did indeed have a relationship with its public IP. Second, certain sanity checks (that prevent you from using stf with private IPs) had to be bypassed.
The first step was easy. I just created an alias on Tritanium's external interface with its public IP address, and a /32 netmask:
# ifconfig fxp1 inet 12.34.56.78 netmask 0xffffffff alias
The second step turned out to be a lot more involved. What's going to happen is that Tritanium will be receiving incoming 6to4 packets where the IPv4 address (1:1 translated to a private IP by the Cisco router) will not match the IPv6 address (based on our public IP) contained within. Let's just say that this does not work out of the box. The stf man page does, however, tell us that the "Ingress filter can be turned off by IFF_LINK2 bit". (this is the "link2" flag you can pass to ifconfig when setting up an interface)
Glossing over what was an entire night of frustration and debugging, let's just say that LINK2 doesn't really do much of anything. The stf interface driver has a lot of sanity checks, some failing with my configuration, and the "ingress filter" block of code that LINK2 disables isn't one of those checks.
The fix I ultimately came up with involved patching the source code (if_stf.c) to make the LINK2 flag disable the sanity checks that were failing on my setup. The result can be summed up in this patch. (yes, it is against 6.0-RELEASE, but it shouldn't be hard to adapt to a newer version)
Once that file was patched, and the kernel module reloaded, the next step was pretty simple:
# ifconfig stf0 create
# ifconfig stf0 inet6 2002:c22:384e::1 prefixlen 16 link2
The third step involves setting up routing. For this, we need to create a route to a public 6to4 router. I took the easy way with this one, as there is a public "anycast" address for your nearest 6to4 router. That address is 192.88.99.1 (in IPv4), or 2002:c058:6301:: (in 6to4 IPv6). So I set my default IPv6 route to that:
# route add -inet6 default 2002:c058:6301::
Step 3: Internal subnets and routing
First I set IPv6 addresses on my internal interfaces, using subnets of the /48 that I got with 6to4:
# ifconfig vlan1 inet6 2002:c22:384e:1::1 prefixlen 64
# ifconfig vlan2 inet6 2002:c22:384e:2::1 prefixlen 64
# ifconfig vlan3 inet6 2002:c22:384e:3::1 prefixlen 64
# ifconfig vlan4 inet6 2002:c22:384e:4::1 prefixlen 64
Then I enabled IPv6 forwarding:
# sysctl net.inet6.ip6.forwarding=1
Finally, I enabled rtadvd(8) in my rc.conf, and also told it which interfaces to run on (a subset of the ones above), and then started it:
# /etc/rc.d/rtadvd start
In case you were wondering, "rtadvd" is the router advertisement daemon. Using it, all my internal IPv6-enabled systems will automatically learn their IPv6 network addresses and routers. Pretty cool, eh?
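For reference, the rc.conf side of that amounts to something like the following (the interface list here is illustrative, not my actual one):

rtadvd_enable="YES"
rtadvd_interfaces="vlan1 vlan2"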
Step 4: The firewall
While the IPv6 Internet is probably not yet anywhere near as hazardous as the IPv4 one, chances are that you still want some level of protection. Having used OpenBSD for my firewalls in the past, I had become accustomed to pf(4). Unfortunately, I discovered that pf has a very annoying problem with my configuration. Just having pf enabled (even with all rules flushed) seemed to inhibit IPv6 packet forwarding! It was actually kinda strange how it behaved. I could talk normally on the IPv6 Internet from Tritanium directly. However, only ICMP worked correctly from my internal machines. Outbound TCP and UDP packets were never forwarded across Tritanium, while inbound ones worked just fine.
What's the solution? Use ipfw(8) instead of pf, and the problem goes away. Just make sure you configure the IPv4 side of ipfw so that IP protocol 41 packets pass through unscathed. (my version wouldn't let me specifically allow proto 41, for some strange reason, so I just permitted all IP packets that I hadn't explicitly blocked with other rules elsewhere in my configuration)
Step 5: And there was much rejoicing!
I'm now connected to the IPv6 internet, after a week's worth of evening tinkering. Yippee!
I may eventually put all my configurations into rc.conf (I had some difficulties when I first tried, and gave up soon afterwards), but right now most of this stuff is just running out of rc.local on the machine.
Monday, August 14, 2006
Solaris Live Upgrade (on an SVM mirror set)
Many of you have probably heard of Sun's Live Upgrade feature by now. Live Upgrade essentially lets you move your system from one Solaris version to another with minimal downtime. If done right, the only downtime you need to suffer is the time required to reboot your server.
Live Upgrade works like this:
- Create a "boot environment" (BE) representing your current system
- Create an "alternate boot environment" (ABE), which is a clone of your BE
- Run a Solaris upgrade against the ABE
- Switch the active boot environment to the ABE
- Reboot
This whole procedure is also easier if you separate your system partitions from your data ones. Yes, I know this is normally a good practice. However, I've grown to just use a huge "/" and smaller "/var" on most of my machines these days. It's just easier, and I still have "/home" on an external file server.
So what was I to do? The Solaris 10 6/06 DVD set was here, and I wanted to upgrade. (my server was running the original Solaris 10 release) I needed something large to make my ABE on, but also needed it to be somewhere I was comfortable using as my long-term boot drive. I also wanted to avoid involving anything beyond that server itself. Then it occurred to me... the "system disk" of my server was actually an SVM mirror set!
In short form, here was my plan of action:
- Make a backup (thankfully this machine has a DDS3 drive installed in it)
- Remove the second disk from the mirror and unconfigure its meta devices
- Run Live Upgrade, using that second disk as the ABE
- Switch the default BE to the one on the second disk
- Boot off the second disk, into the new version of Solaris
- Make sure the server is still working correctly
- Unconfigure the mirror devices in SVM
- Recreate the meta devices on the second disk, the mirrors containing them, run metaroot, etc.
- Reboot again
- Add the first drive back into the mirrors
While I could show a complete walkthrough of what I did, a full post-mortem reconstruction would be rather tedious. Besides, if you're familiar with SVM and can read through Sun's LU docs, following my strategy should be straightforward and simple. (yes, it does work) Just remember to install the recommended patches before using LU, or it'll fail.
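That said, the core LU invocations boil down to a rough sketch like this (BE names and device slices are hypothetical; consult the LU docs for your own layout):

# lucreate -c oldBE -n newBE -m /:/dev/dsk/c0t1d0s0:ufs -m /var:/dev/dsk/c0t1d0s3:ufs
# luupgrade -u -n newBE -s /cdrom/cdrom0
# luactivate newBE
# init 6

Note that the final restart should be done with init or shutdown rather than reboot, so that the boot environment switch completes properly.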
Also, I strongly recommend mounting the upgraded ABE before that first reboot. You should then check the "/var/sadm/system/data/upgrade_cleanup" file for any changes of interest that it made. I failed to do this myself, and wound up having sendmail misconfigured for several hours. On the bright side, it does make backup copies of any configuration files that it changes.
Good luck!
Fun with Solaris 10 6/06 and ZFS
The 6/06 release of Solaris 10 finally incorporated ZFS as part of the operating system. This is quite exciting, because now we can start using ZFS without having to run a Solaris Express or OpenSolaris distribution. As such, I was itching to try it out. I started by ordering the "Solaris Enterprise System" DVD stack from Sun. Sure, I could have downloaded it, but it's nicer to have a whole set of media already there for me.
Now I needed a test system... So I dug out my older Ultra 60 workstation, hooked up a DVD drive, and a few hours later I was good to go. Thus far, the only real change I noticed from the original Solaris 10 release was a newer and nicer looking login screen.
Time to hook up a boatload of hard drives! I had an expansion box from my since-decommissioned CLARiiON FC RAID monster, loaded with 10x36GB 10krpm FC hard drives. All I needed to do was connect them and reformat them with a normal block size (they were formatted for 520-byte sectors instead of the normal 512, thanks to the CLARiiON controller). Unfortunately, all I had to connect them to was a QLogic QLA2100 FC HBA. The QLA2100 isn't supported past Solaris 8, or so they'd lead you to believe. Thankfully you just have to get the Solaris driver, unpack it from the package stream QLogic provides, modify the package to not complain about your Solaris version, and install it. As expected, it then worked just fine.
To fix the block size on the drives, I got the "scu" utility from here, and then followed the instructions on this page. All pretty straightforward, but it did take about an hour per drive. It doesn't really do much I/O to the drive from your system, though, so doing all the drives at once does speed things up.
Finally, I went through "format" on each drive to fix the annoying "bad magic" messages. Now I had 10 drives off the end of an FC link, all set and good to go!
Setting up ZFS was really easy. If you haven't done so yet, I strongly recommend going here to review their documentation and screencasts. The specific commands are really easy to figure out, but that site shows them to you. Essentially, with ZFS, you make a pool out of mirrors, RAID-Z sets, or individual disks. You can then chop up the pool however you see fit.
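As a taste of how simple this is, the "two 5-drive RAID-Z sets" layout from my tests below takes just a couple of commands (pool name and device names hypothetical):

# zpool create tank raidz c1t0d0 c1t1d0 c1t2d0 c1t3d0 c1t4d0
# zpool add tank raidz c1t5d0 c1t6d0 c1t7d0 c1t8d0 c1t9d0
# zfs create tank/bench

The single 10-drive set is the same idea with all ten disks after one "raidz", and the resulting pool can then be carved into filesystems at will.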
In any case, I tried a few configurations and ran some benchmarks. Keep in mind that testing with "dd" and a large block size will ALWAYS yield better results than you'll ever see from a real benchmark program. (I think I got up to 80MB/s with "dd" at some point) Also, running multiple benchmark programs or "dd" sessions in parallel may yield higher aggregate throughput. FYI, I was connecting to all 10 drives over a single 100MB/s FC link. So on with the results!
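A typical large-block "dd" write test looks something like this (file name and sizes hypothetical; the point is a block size far larger than any real application would use):

$ dd if=/dev/zero of=bigfile bs=1024k count=2048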
One 10-drive RAID-Z set
$ bonnie++ -d . -s 2G
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
proxima 2G 14685 91 35308 48 23733 49 13301 92 52396 50 512.1 13
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 4798 99 +++++ +++ 7455 99 5230 99 +++++ +++ 7350 97
proxima,2G,14685,91,35308,48,23733,49,13301,92,52396,50,512.1,13,16,4798,99,+++++,+++,7455,99,5230,99,+++++,+++,7350,97
One 5-drive RAID-Z set
$ bonnie++ -d . -s 2G
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
proxima 2G 15241 94 32991 44 24989 45 13676 93 58862 52 550.2 10
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 4821 97 +++++ +++ 7509 99 5190 98 +++++ +++ 7849 99
proxima,2G,15241,94,32991,44,24989,45,13676,93,58862,52,550.2,10,16,4821,97,+++++,+++,7509,99,5190,98,+++++,+++,7849,99
Two 5-drive RAID-Z sets
$ bonnie++ -d . -s 2G
Version 1.03 ------Sequential Output------ --Sequential Input- --Random-
-Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
proxima 2G 15051 92 30531 41 26045 47 14018 93 57507 56 864.6 12
------Sequential Create------ --------Random Create--------
-Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 4908 99 +++++ +++ 6800 95 4868 99 +++++ +++ 5847 81
proxima,2G,15051,92,30531,41,26045,47,14018,93,57507,56,864.6,12,16,4908,99,+++++,+++,6800,95,4868,99,+++++,+++,5847,81
Friday, February 10, 2006
Air Travel and Mobile Computing
Almost since their inception, laptop computers have been an increasingly popular tool of the air traveler. You see them used in airports and on airplanes by more and more people. However, airports and airplanes today provide no friendlier an environment for laptop users than they did years ago.
Problem #1: Electrical power
Regardless of what the makers of DC-DC converter bricks may tell you, most airplanes are not equipped with DC power outlets for passengers. Occasionally you might find them in first class, and I once saw them in coach on a short Orlando-Atlanta flight, but normally they are simply not available. As such, the only solution is having laptops with good battery life, and topping them off at airports in-between flights.
Airports, however, aren't that great either. Outlets in seating areas tend to be very scarce, and are often squatted on by other laptop users or people who don't even realize they're in the way. I frequently find myself scouring the entire food court, or the entire gate waiting area, only to spot one or two outlets. Even then, I'm lucky to get access to them.
Problem #2: Internet access
Access while on airplanes is something we presently don't expect, and thus can live without. After all, for most domestic trips, the airlines don't want to keep you on the same plane for more than 2 hours anyways. Sure, there is talk about installing access, but you all know how that's going to be done. It'll be prohibitively expensive, and/or only offered in first class, and will wind up being practically unavailable to your average laptop-toting passenger. (Remember the sky phones?)
Airports, however, have been installing Wi-Fi access points all over the place. Except they do it in a way that makes it nearly useless. First, they all insist on charging for access. This is a problem because even though it is usually cheap, it is still hard to justify $5.95-9.95 for a 10-minute e-mail check between flights. (thankfully I can use GPRS on my cell phone instead) If you are a frequent traveler, they do have monthly access plans. Of course every airport's Wi-Fi installation is managed by a different organization, and thus these plans are worthless unless you fly the exact same trip with long layovers on a regular basis. In essence, airport internet access is implemented in such a way that it is practically useless to most travelers on a 1-hour layover. (Well, at least until there are popular programs that can tunnel IP over DNS and ICMP, which are the only things their proxies seem to let out onto the global internet.)
Monday, February 06, 2006
New blog!
My blog on LiveJournal was more like a collage of personal ramblings and reflections, as were the blogs of various friends of mine there. As such, I've decided to separate out the technical content. Below you'll see a bunch of technical posts that I've copied over here. In the future, I hope to put all my technical postings on this site instead.
Why am I doing this? Well, the reasons are two-fold. First, most technical postings on LJ would get lost in the noise of personal-life ramblings from everyone on everyone else's friends pages. Second, I'd rather post these in a forum open to people that really have no need nor desire to know about any of my own personal-life ramblings.
I'd also like to have a personal tech blog to complement my efforts on this website:
Household Enterprise Computing
As well as my tinkerings with this excessive collection of operational computer hardware:
Logicprobe Systems List
(or any of the many less-operational boxes that I didn't bother to list there)
If anyone still wants to know exactly what I consider "Household Enterprise Computing" to be, here is a good writeup that I did a while ago.