Load balancing

Almost ten years ago, I was the non-responsible engineer for networking (we had a different engineer for that, but in the end I decided about the networking gear … different story). And there it was: the Alteon 180e. My first load balancer. And it sucked … big time. My girlfriend hated this device. It worked seamlessly in our load tests, but it took some sleepless nights, several teleconferences with the people at Alteon and four or five firmware releases to get this baby halfway stable.

About six or seven years ago I was the responsible director for networking at a German startup. As much of our horsepower drained into SSL encryption, I built an SSL accelerator based on reverse proxying and a hardware crypto card. It worked quite well … Almost three years ago I presented the N2000 switches at CeBIT 2005. It was a really cool device, but … well … I mostly helped other colleagues, as nobody really thought of Sun as a manufacturer of networking equipment, and I didn't have many customers. Nevertheless it was a cool piece of hardware.

I wrote all this to explain my tendency to view the world with a networking focus, and with a focus on the usability of components for networking equipment. A few months ago, shortly after CeBIT 2007, I thought a little bit about the then-upcoming Niagara2 systems. And after a cup of the incredibly bad coffee in our office there was some kind of enlightenment: a Niagara2 system would be a hell of a load balancer. Why? Everything you need is there: many processing engines (like in the early Alteons, which consisted of multiple PowerPC cores adjacent to the interfaces, plus a central control CPU), an integrated crypto unit, and two 10GbE interfaces (sufficient for an LB-on-a-stick configuration). You would be able to virtualize it via LDoms or containers (somewhat similar to the virtualization in our N2000 load balancers). And with Crossbow you get network virtualization and traffic control.
There was one thing missing: the load balancing software. Thus I wrote to our German OS ambassadors that a feature with functionality similar to the Linux Virtual Server would be the key to building "a hell of a" load balancer out of the N2. My idea didn't get very far, partly for reasons I won't discuss in this forum. But the other reason was a different project: a few days later I learned about discussions with a software vendor to implement something similar. And now, some months later, we announced a licensing agreement with Zeus about their ZXTM. Or, as in Techworld's "Sun picks Zeus for traffic control":

The telco market is a big one for Sun - it has blade servers designed specifically for this market, plus a carrier-grade version of Solaris - but it didn't have its own ADC offering before. It will now add ZXTM to its price-book, said Brennan.

It isn't as far-reaching as my idea of integrating it into the Solaris or OpenSolaris kernel, but it is nevertheless a very good thing, and we didn't have to reinvent the wheel. And perhaps this solves one of the big idiosyncrasies of the modern datacenter: load balancing and crypto acceleration are really close to the application. As these functionalities are commonly implemented in networking switches, these devices are often administered and purchased by the networking department. But the networking department has a greater distance to the application than the server department. This leads to a really inefficient process for changing applications (at least when you don't know someone from the networking department ;) ). Thus a general-purpose server as a load balancer could solve this issue quite nicely …
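To make the Linux Virtual Server comparison above a bit more concrete: LVS is a kernel-level layer-4 balancer that you configure from user space with ipvsadm. A minimal sketch of the kind of functionality I had in mind (hypothetical addresses; assumes the ip_vs kernel module is loaded and you have root privileges):

```shell
# Create a virtual HTTP service on the VIP, scheduled round-robin:
ipvsadm -A -t 192.0.2.10:80 -s rr

# Add two real servers behind it, forwarded via NAT (masquerading):
ipvsadm -a -t 192.0.2.10:80 -r 10.0.0.11:80 -m
ipvsadm -a -t 192.0.2.10:80 -r 10.0.0.12:80 -m

# Inspect the resulting virtual server table:
ipvsadm -L -n
```

Nothing more than a sketch, of course — a product like ZXTM works at layer 7 and does far more than this — but it shows how little glue is needed between the kernel and a working balancer.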