Friday, March 07, 2008

C# WTF: Abuse of events

Someone actually noticed my last post, so I'm going to start putting more of that stuff on here.

Getting a property through events:

Common uses:

  • Pretending to decouple classes for no apparent reason.



public delegate int GetSomeValueDelegate();

class Foo
{
    public event GetSomeValueDelegate GetSomeValue;

    public void DoStuff()
    {
        /* . . . */
        if (this.GetSomeValue != null && this.GetSomeValue() > 4)
        {
            OtherStuff();
        }
        /* . . . */
    }
}

class Bar
{
    private Foo foo;
    private int someValue;

    public Bar()
    {
        foo = new Foo();
        foo.GetSomeValue += new GetSomeValueDelegate(this.HandleGetSomeValue);
    }

    public void Run()
    {
        foo.DoStuff();
    }

    private int HandleGetSomeValue()
    {
        return this.someValue;
    }
}



Common variants:

  • Having the event return a reference to the Bar instance that manages Foo.

  • Having the event return a property of a singleton that is accessible from both classes.





Externalizing logic through events:

Common uses:

  • Decoupling a class from its internal logic.



public delegate void ResultsEventHandler(string[] results);

class Foo
{
    public event ResultsEventHandler ResultsEvent;

    private string[] resultsData;

    public string[] Results
    {
        get { return this.resultsData; }
        set { this.resultsData = value; }
    }

    public void DoStuff()
    {
        string[] results = PerformOperation();
        if (ResultsEvent != null)
        {
            ResultsEvent(results);
        }
    }
}

class FooManager
{
    private Foo foo;
    private Bar bar;

    public FooManager()
    {
        foo = new Foo();
        bar = new Bar();
        foo.ResultsEvent += new ResultsEventHandler(this.OnResultsEvent);
    }

    public void DoStuff()
    {
        foo.DoStuff();
        ProcessResults(foo.Results);
    }

    private void OnResultsEvent(string[] results)
    {
        foo.Results = results;
        bar.Data = results;
    }
}

Thursday, January 03, 2008

C# WTF: Abuse of exception handlers

I've started collecting assorted examples of bad code, and will start periodically posting them. Everything seen here was first seen in real code (and fixed, when possible). Just remember, these are examples of what you shouldn't ever do! :-)

Type checking by InvalidCastException:

Common uses:
  • Checking the type of an object in a collection when the developer does not know about the "is" operator or its equivalents.


Selection selItems = UtilServices.GetInstance().GetSelectedItems();

if (selItems != null)
{
    try
    {
        foreach (object item in selItems)
        {
            // Cast to a Foo Item.
            FooItem fooItem = (FooItem)item;
        }
        ProcessSuccess();
    }
    catch (System.Exception ex)
    {
        // If an exception is thrown, the items are not FooItems
        Logger.Error("Exception: " + ex.Message);
        ProcessFailure();
    }
}
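For reference, the idiomatic version of that check uses the `is` (or `as`) operator the bullet above alludes to; a minimal sketch reusing the made-up types from the example:

```csharp
foreach (object item in selItems)
{
    // "is" tests the runtime type without risking an InvalidCastException.
    if (item is FooItem)
    {
        FooItem fooItem = (FooItem)item;
        // ... work with fooItem ...
    }

    // Equivalently, "as" yields null instead of throwing on a type mismatch.
    FooItem maybeFoo = item as FooItem;
    if (maybeFoo != null)
    {
        // ... work with maybeFoo ...
    }
}
```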



Null checking by Exception handler:

Common uses:
  • Not being bothered by the hassle of all those "if" statements.


try
{
    foo = bar.GetSomeData();
    thing = foo.ThingValue;
}
catch
{
    foo = bar.GetOtherData();
    thing = foo.ThingValue;
}
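For reference, the straightforward version with an explicit null check; a minimal sketch using the same made-up names (the exception handler above is worse than it looks, since it also silently swallows any unrelated failure thrown inside the try block):

```csharp
foo = bar.GetSomeData();
if (foo == null)
{
    foo = bar.GetOtherData();
}
thing = foo.ThingValue;
```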

Thursday, April 12, 2007

Converting from a DIB to a System.Drawing.Bitmap

Yes, I even do some .NET programming in C# these days. Don't worry, it's just at my day job. In any case, since everyone and their brother, cousin, and best friend seems to have a blog with random .NET tidbits, I figured I'd add one here too. (Speaking of which... Please, people, your blogs aren't so important that you need to fill 75% of them with links to other people's blog postings, lest your "special" readers never know about them. It makes Googling painful at times.)

In any case, I found myself inside some code that got an image (in native Windows "DIB" format, as an object that was of type "byte[]") and needed to convert it into a System.Drawing.Bitmap. The code actually did this already, by extracting the image size from the DIB header, creating a new Bitmap, then proceeding to iterate over every single pixel to directly set it in the Bitmap. Obviously that was way too slow, and a better solution was needed.

I then searched the web far and wide, not really finding anything useful. Sure, there were a lot of hints, but no real solutions. Bitmap itself actually has no easy way to be created from a byte array. What it does have, strangely enough, is the ability to be created from a pointer to unmanaged memory and some hints:

public Bitmap(
    int width,
    int height,
    int stride,
    PixelFormat format,
    IntPtr scan0
)


Since I had my data in a byte[], not as a chunk referenced by an IntPtr, the first step was to remedy the situation. Here is how you do it, as weird as it may look:

GCHandle handle = GCHandle.Alloc(dib, GCHandleType.Pinned);
IntPtr handlePtr = handle.AddrOfPinnedObject();


(When you are done using the handle, don't forget to call "handle.Free()".)

Now all I had to do was offset the IntPtr by 40 (size of the DIB header), add in the information the existing code already extracted from the header, and create a new bitmap. Here's the final result:

using System.Drawing;
using System.Drawing.Imaging;
using System.Runtime.InteropServices;
...
GCHandle handle = GCHandle.Alloc(dib, GCHandleType.Pinned);
IntPtr handlePtr = handle.AddrOfPinnedObject();
// Skip the 40-byte BITMAPINFOHEADER. (ToInt64 keeps the math safe on 64-bit.)
IntPtr scan0 = new IntPtr(handlePtr.ToInt64() + 40);
// Note: a stride of width * 3 assumes each row already falls on a 4-byte
// boundary; DIB rows are padded to 4 bytes, so adjust if yours are not.
Bitmap bitmap =
    new Bitmap(width, height, width * 3, PixelFormat.Format24bppRgb, scan0);
// (DIBs are usually stored bottom-up, so the result may need a vertical flip.)
// The Bitmap does not copy the pixel data, so strictly speaking the memory
// should stay pinned for the bitmap's lifetime (or Clone() the bitmap first).
handle.Free();
...


It may look ugly, and very icky, but it does work. In fact, this allowed me to take an operation that took two whole seconds of wall-clock time and make it instantaneous.

Monday, November 06, 2006

Frameworks, oh my!

Earlier this evening, I was having a little chat with my good friend and development wizard Chris about a recent frustration with the world of certain "Frameworks".

As I proceed through a software engineering class project involving an application built on JavaServer Faces running on top of GlassFish, I find myself reminded of this rant someone sent me a link to recently. While I ultimately have a far more balanced view, I do find his rambling somewhat amusing.

So anyways, here's my rant:

After a group meeting for my class, it looks like the most sensible way to design our web pages for this web app is essentially a templated composite view, so we can have standard headers, sidebars, etc, without having to code them on each page.

As such, I start looking up how to do them properly in the context of JSF (JavaServer Faces), which conventionally uses JSP (JavaServer Pages) for its page description system.

Then I start reading that you can do it this way, or that way, but that those ways all have problems for this reason or that reason.

In fact, most conventional documents on how to use JSF ignore the problem completely.

But when they discount all the obvious ways, they then recommend this alternate page composition system called "Facelets".

So I start reading up on Facelets, and it says how they use XHTML instead of JSP, because JSP is bad for this reason or that reason, especially with JSF, and talk all about how Facelets and JSF go wonderfully together.

As such, I look into what it'll take to have Facelets integrate cleanly with my chosen IDE (NetBeans), which I chose because it integrates so well with the other ways I was trying to do things.

I discover a NetBeans plugin that'll let me do Facelets, and provide all that tag-completion on the XHTML files that I was getting in the JSP files. Only problem is that it's at version 0.3.

I install it anyways, and finally figure it all out.

In the end, Facelets has turned out to actually be a very nice solution, and I'm really happy with it.

(and since JSF supports pluggable page composition systems, it integrates cleanly and correctly too)

But, given that JSF seemed to start from the basis of "we see all these other frameworks out there designed to fix/improve-upon what JSP provides", I wonder why I didn't start with reading about Facelets.

(and in general, I'm actually quite pleased with the whole JSF approach)

I just wish Facelets got more coverage in my JSF-bible-type book (published last month, co-authored by one of the spec leads). All it got was a few examples in the short chapter on pluggable page rendering libraries.

Monday, October 30, 2006

The Java Preferences API vs. MacOS 10.4

While it may not be obvious from all my sysadmin-style tinkering discussed in this blog, I am also a software developer. Lately I've been working on a Java GUI program that has to save and load a complex tree of settings. While the project is nearing its completion, I've decided to occasionally spend some time trying to find and fix various issues and performance bottlenecks. This is the story of one of those bottlenecks...

A common way to save and load application preferences in Java is to use the Preferences API. This API uses a different back-end implementation on each operating system, but essentially provides a tree-structured view for storing preferences. Its implementation is also intended to make the backing store somewhat transparent: you just set your preferences, and there is no need to perform a "save" operation afterwards.
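A quick illustration of basic usage (the node path and keys here are invented for the example):

```java
import java.util.prefs.Preferences;

public class PrefsDemo {
    public static void main(String[] args) {
        // Grab a node in the per-user preferences tree (path invented here).
        Preferences prefs = Preferences.userRoot().node("com/example/demoapp");

        // Set values; the API persists them itself, no explicit save needed.
        prefs.put("theme", "dark");
        prefs.putInt("windowWidth", 1024);

        // Read them back, supplying defaults in case the keys are missing.
        System.out.println(prefs.get("theme", "light"));      // dark
        System.out.println(prefs.getInt("windowWidth", 800)); // 1024
    }
}
```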

Now first of all, I did want to make saving an explicit operation in my application. So while I maintained my own configuration tree, writing it to nodes in the Java Preferences tree was a user-triggered operation. Originally I also wrote out the Preferences tree to a file on disk, which was loaded in place of Java's Preferences store. However, upon realizing that was a needless waste of resources, I removed the external configuration file.

One major part of my application involves configuring a number of "thingies," where each "thingie" has a potentially large number of individual parameters directly under its configuration node. A problem I noticed was that as I increased the number of "thingies" in the configuration, save and load became dramatically slower. However, here's the really interesting part. It only really became slower on MacOSX! On my Linux test machine, the performance impact wasn't even noticeable!

Thanks to the wonderful profiler in NetBeans, I was able to track down this issue to functions under "java.util.prefs.AbstractPreferences", or more particularly "java.util.prefs.MacOSXPreferences". Apparently the MacOSX implementation of the node() and various put() methods can be quite slow. If you have enough of them, it really adds up.

So how did I fix it? Well, something a bit less elegant than how I was doing things before. You see, the save and load methods for "thingie" configurations really just involved converting items between a Map and Preferences nodes. Since the Map really just managed access to an object which contained a collection of simple types (String, Integer, etc.), I got an idea. Why not just serialize the Map directly, and store it as a byte array in a Preferences node? Sure, it may seem like a bit of an inelegant solution, but it worked! Not only did it work, but it resulted in a MASSIVE performance increase.
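A sketch of that workaround, with invented names: serialize the whole map once and store the result under a single key via putByteArray, instead of creating one Preferences entry per parameter. (Note that Preferences caps the size of each stored value, with byte arrays limited to roughly three-quarters of Preferences.MAX_VALUE_LENGTH, so a truly enormous map would need to be split across keys.)

```java
import java.io.*;
import java.util.HashMap;
import java.util.Map;

public class ThingieStore {
    // Serialize the settings map into a byte array suitable for
    // Preferences.putByteArray(key, bytes).
    static byte[] toBytes(Map<String, Object> settings) throws IOException {
        ByteArrayOutputStream buf = new ByteArrayOutputStream();
        try (ObjectOutputStream out = new ObjectOutputStream(buf)) {
            out.writeObject(settings);
        }
        return buf.toByteArray();
    }

    // The reverse: rebuild the map from Preferences.getByteArray(key, null).
    @SuppressWarnings("unchecked")
    static Map<String, Object> fromBytes(byte[] bytes)
            throws IOException, ClassNotFoundException {
        try (ObjectInputStream in =
                new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return (Map<String, Object>) in.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        Map<String, Object> settings = new HashMap<>();
        settings.put("name", "thingie1");
        settings.put("retries", 3);

        Map<String, Object> restored = fromBytes(toBytes(settings));
        System.out.println(restored.equals(settings)); // prints "true"
    }
}
```

One round trip like this replaces a whole subtree of node() and put() calls, which is exactly where the MacOSX implementation was losing its time.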

So how much of a performance improvement does this make? Reading the program's configuration used to take 2195ms, according to the profiler. It now only takes 956ms. However, writing used to take 7314ms. Now that number is down to 645ms!

So, what did we learn?
1. Java performance characteristics can vary wildly across operating systems.
2. Object serialization to a byte array node in the Java Preferences system can be significantly faster than creating an elegant structure within the Java Preferences API to store a lengthy configuration, especially on MacOSX.

Friday, September 01, 2006

Yet Another IPv6 Setup

Last weekend I got the bright idea to give IPv6 another attempt on my network. I had previously tried it a while back, tunneling straight from my Cisco router. However, I had an older Cisco router that could only do IPv6 on a "testing" build of IOS. Being sick and tired of potential "issues" with this build, I wound up just ditching IPv6 for the time being. At the time I had a single static IP, and I did not have any other good configuration options.

These days I have a connection with multiple static IPs, so I have more options available to myself now. My current network config is also rather interesting, so allow me to illustrate:
{Internet} ---->(Cisco 4500 rtr)---->(FreeBSD firewall)====>{Multiple internal subnets}

Basically I've banished all NAT to that Cisco, which does the common port-translating NAT for most machines on my network. However, it also does 1:1 (bi-directional) NAT for my firewall and server machines. The advantage of 1:1 NAT is that you only translate the network address, and nothing else. As such, you can use it for a lot more than just the usual restrictive TCP and UDP setup you have with port-translating NAT. Of course 1:1 NAT does just translate network addresses, so you need to configure your firewall as if your machines did have public addresses.
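For the curious, the Cisco side of this is just ordinary NAT configuration; a rough sketch with invented addresses and interface names (IOS syntax, worth double-checking against your own version):

```
! Invented addresses/interfaces for illustration only.
! Port-translating ("overload") NAT for the general population:
access-list 1 permit 192.168.0.0 0.0.0.255
ip nat inside source list 1 interface FastEthernet0/0 overload
! 1:1 static NAT for the firewall -- it translates only the address,
! so all IP protocols pass through, not just TCP and UDP:
ip nat inside source static 192.168.0.2 12.34.56.78
```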

So coming out of the Cisco router, I have my private address range (with some public IPs mapped to some of the private IPs). Just behind it, the FreeBSD firewall takes the next step. First, it filters out any traffic I don't want going into my network (obviously). Second, it takes this private address range and subnets it further. (the internal side of the box is a VLAN trunk to my switches) Yes, I have multiple subnets internally. This lets me separate different types of traffic for the purposes of flexibility and/or security.

Basically, I wanted to connect my various internal networks to the IPv6 Internet, by way of this FreeBSD firewall. (FYI, the system is running FreeBSD 6.0-RELEASE at the time of this writing, and is named "Tritanium") To accomplish this, I had two main options at my disposal:
  • The last time around, I used a tunnel broker. However, that method depends on having an active account with an external service. While it does work, it's an annoyance that would be nice to do without.
  • This time around, I decided to attempt 6to4. The 6to4 method works by directly mapping your public IPv4 address into an IPv6 /48 subnet. Your border router then tunnels IPv6 packets directly inside IPv4 packets (as IP protocol 41). What's really cool about this is that you don't need any external services or configuration. If you go to an IPv6 site and ping your local 6to4 address, you will see the inbound packets while sniffing your external interface.

So, with all that being said, time to get on to an account of my experiences:

Step 1: Figure out your IPv6 address
This is probably the easiest step of the entire adventure. You just take your public IPv4 address (yes, it does have to be a publicly routable address), convert it to hexadecimal, and tack it onto the end of the 6to4 prefix (2002). For the sake of this writeup, let's assume our public address is "12.34.56.78". In hex, that translates to "0C22384E", which gives us the following 6to4 subnet:
2002:c22:384e::/48

(IPv6 lets you omit leading zeros and abbreviate the end of the address, in case you were wondering.)
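The conversion is also easy to script; a quick sketch using the made-up address from above (each pair of octets forms one 16-bit group of the IPv6 prefix, and %x drops leading zeros to match the abbreviated notation):

```shell
#!/bin/sh
# Derive the 6to4 /48 prefix from a public IPv4 address.
ip=12.34.56.78
oldIFS=$IFS; IFS=.
set -- $ip
IFS=$oldIFS
printf '2002:%x:%x::/48\n' $(( $1 * 256 + $2 )) $(( $3 * 256 + $4 ))
# Output: 2002:c22:384e::/48
```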

Step 2: Configure the 6to4 tunnel
This was probably one of the most frustrating steps, despite the fact that it looks like it should be the easiest. I blame my configuration more than anything else, though. You see, while the external interface on Tritanium maps directly to a public IP address, it actually has a private IP address itself.

In short, you have to configure FreeBSD's stf(4) interface with your 6to4 address, and then setup routing. However, I had a bit of a problem. You see, for this to work in both directions, two things had to happen. First, Tritanium had to have something telling it that it did indeed have a relationship with its public IP. Second, certain sanity checks (that prevent you from using stf with private IPs) had to be bypassed.

The first step was easy. I just created an alias on Tritanium's external interface with its public IP address, and a /32 netmask:
# ifconfig fxp1 inet 12.34.56.78 netmask 0xffffffff alias

The second step turned out to be a lot more involved. What's going to happen is that Tritanium will be receiving incoming 6to4 packets where the IPv4 address (1:1 translated to a private IP by the Cisco router) will not match the IPv6 address (based on our public IP) contained within. Let's just say that it does not work out of the box. The stf man page does, however, tell us that the "Ingress filter can be turned off by IFF_LINK2 bit". (this is the "link2" flag you can pass to ifconfig when setting up an interface)

Glossing over what was an entire night of frustration and debugging, let's just say that LINK2 doesn't really do much of anything. The stf interface driver has a lot of sanity checks, some failing with my configuration, and the "ingress filter" block of code that LINK2 disables isn't one of those checks.

The fix I ultimately came up with involved fixing the source code (if_stf.c) to make the LINK2 flag disable the sanity checks that were failing on my setup. The result of my fix can be summed up in this patch. (yes, it is against 6.0-RELEASE, but it shouldn't be hard to adapt to a newer version)

Once that file was patched, and the kernel module reloaded, the next step was pretty simple:
# ifconfig stf0 create
# ifconfig stf0 inet6 2002:c22:384e::1 prefixlen 16 link2

The third step involves setting up routing. For this, we need to create a route to a public 6to4 router. I took the easy way with this one, as there is a public "anycast" address for your nearest 6to4 router. That address is 192.88.99.1 (in IPv4), or 2002:c058:6301:: (in 6to4 IPv6). So I set my default IPv6 route to that:
# route add -inet6 default 2002:c058:6301::

Step 3: Internal subnets and routing
First I set IPv6 addresses on my internal interfaces, using subnets of the /48 that I got with 6to4:
# ifconfig vlan1 inet6 2002:c22:384e:1::1 prefixlen 64
# ifconfig vlan2 inet6 2002:c22:384e:2::1 prefixlen 64
# ifconfig vlan3 inet6 2002:c22:384e:3::1 prefixlen 64
# ifconfig vlan4 inet6 2002:c22:384e:4::1 prefixlen 64

Then I enabled IPv6 forwarding:
sysctl net.inet6.ip6.forwarding=1

Finally, I enabled rtadvd(8) in my rc.conf, and also told it which interfaces to run on (a subset of the ones above), and then started it:
# /etc/rc.d/rtadvd start

In case you were wondering, "rtadvd" is the router advertisement daemon. Using it, all my internal IPv6-enabled systems will automatically learn their IPv6 network addresses and routers. Pretty cool, eh?

Step 4: The firewall
While the IPv6 Internet is probably not yet anywhere near as hazardous as the IPv4 internet, chances are that you still want some level of protection. Since I used to use OpenBSD for my firewalls in the past, I had become accustomed to using pf(4). Unfortunately, I discovered that pf has a very annoying problem with my configuration. Just having pf enabled (even with all rules flushed) seemed to inhibit IPv6 packet forwarding! It was actually kinda strange how it behaved. I could talk normally on the IPv6 Internet from Tritanium directly. However, only ICMP worked correctly from my internal machines. Outbound TCP and UDP packets were never forwarded across Tritanium, while inbound ones worked just fine.

What's the solution? Use ipfw(8) instead of pf, and your problem will be solved. Just make sure you configure the IPv4 side of ipfw so that IP protocol 41 packets are permitted unscathed. (my version wouldn't let me specifically allow proto 41, for some strange reason, so I just permitted all IP packets that I hadn't explicitly blocked with some other rules elsewhere in my configuration.)

Step 5: And there was much rejoicing!
I'm now connected to the IPv6 internet, after a week's worth of evening tinkering. Yippee!
I may eventually put all my configurations into rc.conf (I had some difficulties when I first tried, and gave up soon afterwards), but right now most of this stuff is just running out of rc.local on the machine.

Monday, August 14, 2006

Solaris Live Upgrade (on an SVM mirror set)

Many of you have probably heard of Sun's live upgrade feature by now. Live upgrade essentially lets you upgrade your system from one Solaris version to another with minimal downtime. If done right, the only downtime you need to suffer is the time required for rebooting your server.

Live Upgrade works like this:
  • Create a "boot environment" (BE) representing your current system
  • Create an "alternate boot environment" (ABE) which is a clone of your BE
  • Run a Solaris upgrade against the ABE
  • Switch the active "boot environment" to the ABE
  • Reboot
Seems simple enough, right? Well, on the surface it does seem simple. There's just one problem. You need an empty partition to use as the ABE! You know what really annoyed me though? Most of the LU examples and docs I've read seem to involve using some random extra scrap of a partition for the ABE. Well, as you can probably guess, the ABE becomes your system boot partition at the end of this process. Do you really want some random scrap partition to be your system partition for the long term? I certainly didn't.

This whole procedure is also easier if you separate your system partitions from your data ones. Yes, I know this is normally a good practice. However, I've grown to just use a huge "/" and smaller "/var" on most of my machines these days. It's just easier, and I still have "/home" on an external file server.

So what was I to do? The Solaris 10 6/06 DVD set was here, and I wanted to upgrade. (my server was running the original Solaris 10 release) I needed something large to make my ABE on, but it also needed to be somewhere I was comfortable using as my long-term boot drive. I also wanted to avoid involving anything beyond that server itself. Then it occurred to me... the "system disk" of my server was actually an SVM mirror set!

In short form, here was my plan of action:
  • Make a backup (thankfully this machine has a DDS3 drive installed in it)
  • Remove the second disk from the mirror and unconfigure its meta devices
  • Run live upgrade, using that second disk as the ABE
  • Switch the default BE to the one on the second disk
  • Boot off the second disk, into the new version of Solaris
  • Make sure the server is still working correctly
  • Unconfigure the mirror devices in SVM
  • Recreate the meta devices on the second disk, mirrors containing them, run metaroot, etc.
  • Reboot again
  • Add the first drive back into the mirrors
Seems simple enough, right? ;-) When all is said and done, the goal was to have the same drive configuration as before. The only differences would be that my mirror components would be reversed, and I'd be running a newer version of Solaris.
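To give a flavor of the commands involved, here is a rough sketch of the plan above; the metadevice names, slice names, and BE names are all invented, and the exact flags vary between releases, so treat it as an outline rather than a recipe:

```
# Pull the second submirror (d12) out of the root mirror (d10) and free it:
metadetach d10 d12
metaclear d12
# Name the current BE and create its clone (the ABE) on the freed slice:
lucreate -c sol10_fcs -n sol10_u2 -m /:/dev/dsk/c0t1d0s0:ufs
# Upgrade the ABE from the install media:
luupgrade -u -n sol10_u2 -s /cdrom/cdrom0
# Make the upgraded BE the default and reboot into it:
luactivate sol10_u2
init 6
```

After verifying the new environment, the mirrors get rebuilt in the opposite direction with metainit/metattach (plus metaroot for the root device), per the later steps in the plan.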

While I would like to show a complete walkthrough of what I did, a full post-mortem reconstruction would be rather tedious. Besides, if you're familiar with SVM and can read through Sun's LU docs, following my strategy should be straightforward and simple. (yes, it does work) Just remember to install the recommended patches before using LU, or it'll fail.

Also, I strongly recommend mounting the upgraded ABE before that first reboot. You should then check the "/var/sadm/system/data/upgrade_cleanup" file for any changes of interest that it made. I failed to do this myself, and wound up having sendmail misconfigured for several hours. On the bright side, it does make backup copies of any configuration files that it changes.

Good luck!