Sunday, May 20, 2012

When to use http session affinity in GLASS

Seaside applications deployed in GLASS will almost always run on multiple GemStone VMs to exploit the parallel request processing capability of the platform. The standard GLASS setup of 3 VMs already brings you a long way in scaling your web application. Because of the way Seaside works in GLASS, incoming requests can be freely distributed to any VM, regardless of the Seaside session they belong to. This makes load balancing in GLASS easy, without the need to coordinate request handling or install any kind of session affinity.

Although I think this is a brilliant achievement, the way this works in GLASS can cause some performance problems when you are not careful with the way your application triggers requests. In this post, I will explain when and why this happens and how we solved it (this post's title might already give you a clue ;-).

In short: performance will degrade when your web application triggers multiple concurrent Ajax requests to the server. This is because the standard load balancer's behavior interferes with the way GLASS handles concurrent requests for a single Seaside session. The load balancer distributes the concurrent requests over the different VMs, but since requests for a single Seaside session cannot be processed in parallel, each of these Ajax requests blocks its VM for longer than necessary (and while it is blocked, that VM cannot process requests for other sessions). As a result, the total time for all requests to finish ends up being longer than if a single VM had processed them sequentially.

The reason for this behavior is that GLASS locks the WASession instance while processing a request for a Seaside session. This ensures that no other request for the same Seaside session can be processed by a concurrent thread (VM). In his post GLASS 101: Simple Persistence, Dale explains what happens when such a lock is denied:

So, when an object lock is denied while processing a request, we throw an exception, abort the transaction, delay for a bit and retry the request. Each request is handled in its own thread, so while the request is delayed, other requests can be processed by the server. A nice, clean solution that fits very well within the GemStone/Seaside framework.

This means that the request handling will be delayed for some time and then retried. Depending on the processing time of each request and the number of concurrent requests, this can add up to quite a delay. For example, in Yesplan we noticed that a particular request took more than three times as long to complete when we activated 3 VMs instead of a single one. (More on HTTP request retries can be found in Dale's post GLASS 101: …Fire.)
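
To make the retry behavior concrete, it amounts to something along these lines (a minimal sketch only: LockDenied, #lockSession: and #serviceRequest: are illustrative stand-ins rather than the actual GLASS code, and the 100 ms delay is just a placeholder):

[ self lockSession: aSession.
  self serviceRequest: aRequest ]
    on: LockDenied
    do: [ :ex |
        "lock denied: abort the transaction, back off briefly, then retry the whole block"
        System abortTransaction.
        (Delay forMilliseconds: 100) wait.
        ex retry ]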

With the increased interactivity we expect from web applications, it's not uncommon to end up with multiple Ajax requests being fired to your server in parallel. I would even go further: it's exactly the desired behavior of an asynchronous request that you can send other such requests while 'waiting' for a response. Although it's obviously a good idea to bundle Ajax requests as much as possible, this is not always desirable from an implementation point of view (more on that in another post). And whenever performance-related code changes start interfering with code quality or complexity, we should give them serious thought.

A solution that does not interfere with how we implement the application and its request handling is to configure the HTTP load balancer to perform session affinity on Seaside's session key URL parameter (i.e. '_s'). This ensures that subsequent requests for the same Seaside session are queued to the same VM by the load balancer. Most HTTP servers have some mechanism to configure load balancing with session affinity on a URL parameter. In Nginx, the external HttpUpstreamRequestHashModule provides this ability. The relevant configuration snippet that establishes session affinity is:

upstream seaside {
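        # $arg__s holds the value of the '_s' URL parameter, i.e. Seaside's session key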
        hash $arg__s;
        server localhost:9001;
        server localhost:9002;
        server localhost:9003;
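        # if the chosen backend is down, rehash up to 2 times to select another one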
        hash_again 2;
}
server {
        server_name  myapp.some.domain;
        location / {
                include fastcgi_params;
                fastcgi_pass seaside;
        }
}

Now, there are caveats with this particular implementation of load balancing. First of all, all initial requests (i.e. those without a session key) hash to the same value and are therefore handled by the same VM, making that single VM responsible for the reachability of your application. Next, if the hash function's distribution is bad, you can end up with idle VMs while others are overloaded, and that does not change unless application sessions are stopped. Finally, when a VM dies, the fraction of your users whose sessions are pinned to it will notice.

Saturday, September 3, 2011

Ajaxified Seaside Components

The Seaside sprint after the ESUG conference in Edinburgh was just the perfect moment for implementing some of the ideas that have been floating around in my mind. Here's one I had posted on the Seaside mailing list and which Nick Ager reminded me of at the beginning of the sprint. Thanks for the reminder, Nick! ;-)

What do I mean by Ajaxified Components?
The standard behavior of a Seaside application is to trigger a complete web page rendering after you have performed an action (e.g. clicking a link that triggers a callback). That works just fine for many use cases, but there are plenty of occasions where you do not want to trigger a complete page rendering and merely want to update those components that actually changed (i.e. those components that need to be rerendered on the user's web page).
Using Ajax and jQuery, which are nicely integrated in Seaside, it is already fairly simple to implement such behavior (a sketch follows below). Nevertheless, the abstractions offered for this by the dirty widgets in Iliad and the ajaxified web components in Aida are very useful. Therefore, I decided to have a go at integrating the approach we had implemented for our Yesplan application into Seaside itself, such that an "ajaxified Seaside components" implementation can be shared by different Seaside applications.
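For reference, a partial update written directly with Seaside's standard jQuery binding typically looks something like this (a minimal sketch, not taken from the package; the 'count' id and the count instance variable are placeholders):

renderContentOn: html
    html div
        id: 'count';
        with: count.
    html anchor
        onClick: ((html jQuery id: 'count') load
            html: [ :r | count := count + 1. r render: count ]);
        with: '++'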

How it works
Using such ajaxified Seaside components, triggering Ajax updates in your application becomes as transparent as full page rendering. Let us illustrate that by building an ajaxified counter application (sorry for the lack of an original example). The rendering method of the Counter component (shown below) uses a normal callback for the decrease action and an Ajax callback for the increment action. The #callbackWithAjaxUpdate: selector accepts a Smalltalk block, just like a normal callback. The difference is that, after executing the callback block, Seaside will trigger the rendering of only those components that have been marked as "dirty". Marking a component as "dirty" is done by sending it the #markDirty message, as exemplified in the increment callback.
renderContentOn: html
    html heading
        level: 2;
        with: value.
    html anchor
        callbackWithAjaxUpdate: [ value := value + 1. self markDirty ];
        with: '+'.
    html space.
    html anchor
        callback: [ value := value - 1 ];
        with: '-'
That's it? Almost. Ajaxified Seaside components need to be a subclass of WAAjaxifiedSeasideComponent (which is itself a direct subclass of WAComponent). This is because any ajaxified component needs to hold on to an id that serves as the HTML id of its outermost markup; the Ajax update uses jQuery to replace that markup with the newly rendered one. Furthermore, that class defines the standard Ajax update script, which you sometimes need to override to perform additional behavior when the Ajax update happens.
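For the counter above, that boils down to a class definition along the following lines (a sketch: the class name AjaxifiedCounter and the category are just assumptions for this example):

WAAjaxifiedSeasideComponent subclass: #AjaxifiedCounter
    instanceVariableNames: 'value'
    classVariableNames: ''
    poolDictionaries: ''
    category: 'SeasideAjaxifiedComponents-Examples'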
If subclassing is not an option for you, it suffices to implement the few methods of that class in whatever component you want to ajaxify (or to wait until traits are supported in all Smalltalk dialects).
You are also not limited to anchor callbacks. In fact, it is really easy to embed the Ajax update script in any other JavaScript you generate. For example, consider the following implementation of a reset button for the counter example. Sending the #ajaxifiedUpdateScriptWith: message to the canvas returns JavaScript that triggers the Ajax update in exactly the same way as using the #callbackWithAjaxUpdate: message on an anchor (the unary message #ajaxifiedUpdateScript exists as well).
html button
    onClick: (html ajaxifiedUpdateScriptWith: [ value := 0. self markDirty ]);
    with: 'Reset'
I should add that there is little magic to this implementation. I was even pleasantly surprised by how easy it was to integrate this idea into the Seaside implementation itself. The #callbackWithAjaxUpdate: method is merely a wrapper around a jQuery script callback, and the gathering of the update scripts of all components in the application is done using subclasses of the standard Seaside component traversal visitors.

Try it yourself
The implementation and an expanded counter example are available on SqueakSource3: http://ss3.gemstone.com/ss/SeasideAjaxifiedComponents.html
The current version is a dump of what Andy Kellens and I produced during the sprint. I think some pondering over naming and some cleanups are still to be expected.
I am also aware that similar implementations have been done before and, as I mentioned, this one is based upon how we do it in Yesplan. Three elements are important here: the ability to specialize the update script on a per-component basis, the explicit triggering of Ajax updates, and the ability to embed such an update in your own generated JavaScript. Since this approach seems similar to what other Smalltalk web frameworks (such as Iliad and Aida) provide, I hope it might be useful for others to use, improve and extend.