It's not exactly accurate to use words like "legacy" when describing systems like IBM's i5 (it will always be the AS/400 to me). Our "legacy" systems are so critical to our ($1B) business that it's no overstatement to say our restaurants could not transact business without them. A simple majority of our development time, energy, and money is spent writing new RPG code, introducing new green-screen applications, and finding new ways to make the 400 work with the rest of our expanding private cloud infrastructure. Calling something "legacy" usually implies that newer systems are taking the place of the "old" way of doing things; I suppose you could say programmers use "legacy" interchangeably with "obsolete".
Our AS/400 is not going away. For that reason, it's silly to call it obsolete.
I've gotten some great feedback from the session on private cloud infrastructures I did at this year's SpringOne 2GX in Chicago. People are very interested in how these traditional systems can work with the new cloud services many are introducing into their enterprise. Plenty of organizations have decades of business knowledge and data tied up in "legacy" systems and they want to know how in the world they can get a fancy new cloud application server like tc Server to talk to their AS/400 (through more than SQL and JDBC).
You've spent a lot of time setting up a private cloud of servers. Everything's virtualized and you have it organized by function: your messaging VMs run on these hosts and your web servers run on those hosts. You've tested it extensively and you're happy with how everything talks to each other. The worst is over, right? Wrong. Now you have to move past the theoretical and actually use this thing in production. It's time to start deploying the applications you're building into this cloud of virtualized resources. It's time to develop some scheme to keep your applications updated when changes are made. Keep in mind, whatever mistakes you inject at this point will be multiplied by the number of machines you deploy to.
Don't be intimidated! It's really not that hard. In this article, I'll introduce you to some concepts I used in developing the fairly simple system of messages and scripts that deploys artifacts into our private cloud. This won't be a technical HOWTO so much as a casual dinner conversation about the pitfalls and rewards. Above all, I want to get across that having a bunch of virtual machines that do the same thing doesn't have to keep you up at night.
In my first article on taking advantage of RabbitMQ's asynchronous messaging to implement a cloud-friendly session manager, I covered the method behind the madness and introduced you to some of the functions a session manager has to perform in the course of doing its job. In this, the second of two articles covering this topic, I'll describe how loading and updating session objects is handled and, in the interest of fairness, give some disclaimers and acknowledge the trade-offs made to get here.
Whenever code wants to interact with a user session, whether to load it into its own memory or to replicate attributes on that session, it sends a message to a queue whose name contains the session ID. The pattern used to create the queue name is configurable, so you can partition user sessions by application, or even by something more fine-grained (without resorting to using multiple, non-clustered RabbitMQ servers).
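As a rough sketch of that idea (the template syntax and class name here are my own invention, not the actual vcloud configuration), a configurable queue-name pattern can be as simple as a format string into which the session ID is substituted, with the application-specific part of the pattern doing the partitioning:

```java
// Hypothetical illustration of per-session queue naming; not the vcloud source.
public class SessionQueueNamer {

    // A template like "vcloud.session.myapp.%s"; "%s" is replaced
    // with the session ID. Changing the prefix per application (or
    // per tenant) partitions sessions without extra brokers.
    private final String pattern;

    public SessionQueueNamer(String pattern) {
        this.pattern = pattern;
    }

    public String queueNameFor(String sessionId) {
        return String.format(pattern, sessionId);
    }

    public static void main(String[] args) {
        SessionQueueNamer namer = new SessionQueueNamer("vcloud.session.myapp.%s");
        System.out.println(namer.queueNameFor("A1B2C3D4"));
    }
}
```

Because every store addresses the session by the same derived queue name, a "load" or "replicate" message lands wherever that session's queue lives, with no central registry of which node owns which session.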
The session store first checks its internal Map to see if the requested session happens to be local to this store. If it is, the store simply hands that session back to the manager. If it's not, the store sends a "load" message to the session's queue. It doesn't just fire off a message, though. It turns out that a user's session can be requested from the store many times during the course of a single request; if the store sent a load message every time that happened, performance would degrade seriously. To get around this, before a load message is sent, the store checks whether it's already trying to load this session. If it is, it simply waits until that load finishes and uses the session being loaded in the other thread.
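The local-check-then-deduplicated-load logic above can be sketched with a pair of concurrent maps, one for local sessions and one for in-flight loads. This is a minimal illustration of the pattern, not the actual vcloud code; `sendLoadMessage` is a placeholder for the real RabbitMQ round trip to the session's queue:

```java
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.ConcurrentMap;
import java.util.concurrent.FutureTask;

public class SessionStore {

    // Sessions that are already local to this store.
    private final ConcurrentMap<String, Object> localSessions = new ConcurrentHashMap<>();

    // Loads currently in flight, so many requests for the same
    // session share a single "load" message instead of each
    // sending their own.
    private final ConcurrentMap<String, FutureTask<Object>> pendingLoads = new ConcurrentHashMap<>();

    public Object getSession(String id) throws Exception {
        Object session = localSessions.get(id);
        if (session != null) {
            return session; // local already: hand it straight back
        }
        FutureTask<Object> loader = new FutureTask<>(() -> sendLoadMessage(id));
        FutureTask<Object> existing = pendingLoads.putIfAbsent(id, loader);
        if (existing != null) {
            // Another thread is already loading this session; just
            // wait on its result rather than sending a second message.
            return existing.get();
        }
        try {
            loader.run();
            Object loaded = loader.get();
            localSessions.put(id, loaded);
            return loaded;
        } finally {
            pendingLoads.remove(id);
        }
    }

    // Placeholder: the real store would publish a "load" message to
    // the session's queue and block on the reply.
    protected Object sendLoadMessage(String id) {
        return new Object();
    }
}
```

The `putIfAbsent` call is what makes this safe: only one thread "wins" the right to send the load message, and everyone else blocks on that thread's `FutureTask` until the session arrives.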
The existing solutions for Tomcat session clustering are viable for moderate clusters and straightforward failover in traditional deployments. Several are well developed with a mature codebase, such as Apache's own multicast clustering. But there's no getting around the fact that today's cloud architectures place new and different expectations on Tomcat that didn't exist several years ago when those solutions were being written. In cloud architectures, where horizontal scalability is king, a dozen or more Tomcat instances are not unusual.
In my hybrid private cloud, first described in my earlier post on keeping track of the availability and states of services across the cloud, I wanted to fully leverage all my running Tomcat instances on every user request. I didn't want to use sticky sessions; I wanted each request (including the AJAX requests on a page) to be routed to a different application server. This is how cloud infrastructure works at a fundamental level: increased throughput is obtained by parallelizing as much of the work as possible and utilizing more of the available resources. I wanted the same thing for my dynamic pages. Not finding a solution in the wild, I resolved to roll my own session manager. The result is the "vcloud" (or virtual cloud) session manager. I've Apache-licensed the source code and put it on GitHub, mostly to try and elicit free help to make it better. You can find the source code here: http://github.com/jbrisbin/vcloud/tree/master/session-manager/
I sometimes pine for the days when I had just one server to worry about. I wax nostalgic, remembering how easy my life was when I didn't have servers and virtual machines growing out of my ears. It's almost the same feeling I get thinking about the days when I had only one child and you could just pop them in a car seat and take off. Now I've got kids driving themselves and their siblings to several activities a day and I can hardly keep track of whether I'm coming or going. I feel the same way about my data center.