ONJava.com -- The Independent Source for Enterprise Java

Monitoring Session Replication in J2EE Clusters

by Fermin Castro
09/08/2004

Clustering has become one of the most common and most emphasized features of J2EE application servers, because, along with load balancing, it is the fundamental element on which the scalability and reliability of an application server rely.

Nowadays, J2EE clustering is generally associated with the mechanism used to replicate the information stored in a session across different instances of an application server. Whenever a failure happens in a stateful application, a cluster allows the failover to occur in a way that is transparent to the incoming requests: application servers take care of replicating the session information to the participants in the cluster.

Each application server allows you to customize different aspects of a cluster. Two examples are the replication scope ("How many nodes do I want to share my session across?") and the replication trigger ("When do I want to replicate the information?"). However, once a cluster is up and running, it is usually difficult to analyze its behavior. The reason for this is that application servers use low-level APIs to implement their clusters and do not expose any metrics about what's going on underneath the application's code.

This lack of information becomes particularly relevant when the environment around a deployment changes. Various alterations can jeopardize the correct behavior of a J2EE cluster. A very common example: we may have done a great job configuring and tuning our cluster, only to find that everything stops working correctly as soon as other stateful applications are deployed on the same resources.

This article provides a tool for obtaining metrics for the most-common processes involved in an application server cluster: serialization and IP multicast. It also shows how to interpret those metrics and identify potential issues in a cluster. Finally, it briefly presents some hints for solving those problems.

Session Replication at a Glance

Three main processes are involved in synchronizing a session among the nodes in a cluster:

  1. The objects managed in the session are "flattened" to be sent across the network.
  2. The "flattened" objects are sent through the network.
  3. The "flattened" objects are restored to their code-manageable state.

The "flattening" and restoration are done the same way in all application servers, through a standard API: object serialization. Object serialization is the process of saving an object's state to a sequence of bytes; deserialization is the process of rebuilding those bytes into a live object at some future time. The Java Serialization API provides a standard mechanism for developers to handle object serialization. As required by the Servlet 2.3 specification, within an application marked as distributable, the container (the application server) must be able to handle all objects placed into instances of HttpSession, and must accept all objects that implement the Serializable interface. This sequence of bytes into which objects are transformed is what the application server instances send to each other.
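
As a minimal sketch of that contract (the class and helper names here are illustrative, not part of any server's API), flattening an object into the byte sequence the cluster nodes exchange, and restoring it, looks like this:

```java
import java.io.*;

public class SerializationDemo {
    // Hypothetical session attribute; the only requirement the Servlet spec
    // places on it is that it implements java.io.Serializable.
    static class CartItem implements Serializable {
        private static final long serialVersionUID = 1L;
        final String sku;
        final int quantity;
        CartItem(String sku, int quantity) { this.sku = sku; this.quantity = quantity; }
    }

    // "Flatten" an object into a sequence of bytes.
    static byte[] flatten(Serializable obj) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(obj);
        }
        return bos.toByteArray();
    }

    // Restore the byte sequence into a live object on the receiving side.
    static Object restore(byte[] bytes) throws IOException, ClassNotFoundException {
        try (ObjectInputStream ois = new ObjectInputStream(new ByteArrayInputStream(bytes))) {
            return ois.readObject();
        }
    }

    public static void main(String[] args) throws Exception {
        CartItem item = new CartItem("ISBN-0596007124", 2);
        CartItem copy = (CartItem) restore(flatten(item));
        System.out.println(copy.sku + " x" + copy.quantity);
    }
}
```

In a cluster, the bytes produced by the first step are what travels between instances; the second step is what the receiving instance performs.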

As for the way objects travel across the network, most J2EE application servers (such as JBoss, Oracle, and Borland) use IP multicast communication. Multicast is based on the concept of a network group. An arbitrary group of receivers expresses an interest in receiving a particular data stream. Nodes that are interested in receiving data flowing to a particular group must use a common multicast address. This lets application servers grow and shrink their clusters in a decoupled manner (without a requirement to maintain a list of nodes in their cluster): "If you want to have my state, this is my multicast address." Figure 1 shows several servers participating in such a multicasting arrangement.
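
To make the group concept concrete, here is a minimal sketch of joining and leaving a multicast group with the JDK's MulticastSocket; the group address and port are illustrative (each application server lets you configure its own):

```java
import java.io.IOException;
import java.net.*;

public class MulticastJoinDemo {
    // Illustrative values; 230.0.0.1 falls in the IPv4 multicast range.
    static final String GROUP = "230.0.0.1";
    static final int PORT = 4446;

    public static void main(String[] args) throws Exception {
        InetAddress group = InetAddress.getByName(GROUP);
        System.out.println(GROUP + " is a multicast address: "
                           + group.isMulticastAddress());
        try (MulticastSocket socket = new MulticastSocket(PORT)) {
            socket.joinGroup(group);   // start receiving the group's stream
            // ... receive DatagramPackets here ...
            socket.leaveGroup(group);  // decouple from the group again
        } catch (IOException e) {
            // Some environments restrict multicast; report rather than fail.
            System.out.println("multicast unavailable here: " + e.getMessage());
        }
    }
}
```

Any node that joins the same group address receives the same stream, which is exactly the decoupling described above: senders never need a membership list.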

Figure 1. Session replication process in application servers

Monitoring Session Replication

From the previous explanation, you can see that the replication process affects your environment in two ways:

  1. It affects your network.
  2. It increases the overhead caused by the serialization and deserialization of objects.

The severity of the impact on your network depends on:

  • The amount of information you maintain in the session (the larger the HttpSession object, the more bandwidth that is required to synchronize the nodes).
  • The number of application instances in your cluster (more instances implies that more senders and receivers are adding information to the multicast network).

The serialization and deserialization process is an I/O operation and, as such, consumes resources in every application server instance where it takes place. Fortunately, the application server takes care of making it as efficient as possible. However, depending on the size of the session objects, this I/O operation consumes more or less memory.

To minimize both issues, most application servers do not replicate the entire session on every request; instead, they send only deltas (the changes between one session modification and the next). However, the first replication always involves a complete serialization-deserialization and a complete network trip for the objects in the session. Wouldn't it be nice to have a tool to analyze the possible effects of this overhead?
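
The delta idea can be sketched as follows. This is an illustrative model, not any vendor's actual implementation: a wrapper records which attributes changed since the last synchronization, so only those need to be serialized and multicast.

```java
import java.io.Serializable;
import java.util.*;

public class DeltaSession {
    private final Map<String, Serializable> attributes = new HashMap<>();
    private final Set<String> dirty = new HashSet<>();

    public void setAttribute(String name, Serializable value) {
        attributes.put(name, value);
        dirty.add(name);               // mark for the next replication cycle
    }

    // Returns only the attributes changed since the last call and clears
    // the dirty set, simulating one replication cycle.
    public Map<String, Serializable> drainDelta() {
        Map<String, Serializable> delta = new HashMap<>();
        for (String name : dirty) {
            delta.put(name, attributes.get(name));
        }
        dirty.clear();
        return delta;
    }

    public static void main(String[] args) {
        DeltaSession session = new DeltaSession();
        session.setAttribute("user", "fermin");
        session.setAttribute("cart", new ArrayList<Serializable>());
        System.out.println("first sync sends "
                           + session.drainDelta().size() + " attributes");
        session.setAttribute("cart", new ArrayList<Serializable>());
        System.out.println("second sync sends "
                           + session.drainDelta().size() + " attribute");
    }
}
```

The first synchronization carries everything (here, both attributes); later ones carry only what changed, which is why the first replication is the expensive one.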

The Architecture of the Replication Latency Tester

To monitor the overhead created by session replication, I've written a Java application, the Replication Latency Tester, that measures the latency between two multicast nodes. It can provide you with some basic information (whenever multicast is used) for verifying that things are working properly.

Here is how the application works: it sends and receives serialized objects across the network, using IP multicast, and measures the time spent in this operation (the latency between nodes plus the overhead in the serialization/deserialization). It uses two threads, one for sending information (Node.class) and another for receiving (Client.class). The reason the logic isn't separated into two programs (which might have been clearer) is that by using a single JVM for both processes, we simulate a more realistic scenario. After all, application servers have to take care of the following in a single JVM:

  1. Generating information for their partners in a cluster (for the sessions they maintain).
  2. Receiving information from those partners (for the sessions other application server instances maintain).
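
That single-JVM arrangement can be sketched with two threads exchanging messages. A BlockingQueue stands in for the multicast socket here so the sketch stays self-contained; the tool's actual Node.class and Client.class talk over multicast instead.

```java
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class TwoThreadDemo {
    // Run one sender and one receiver thread in the same JVM and return
    // how many messages the receiver consumed.
    static int exchange(int messages) throws InterruptedException {
        BlockingQueue<String> wire = new LinkedBlockingQueue<>();
        AtomicInteger received = new AtomicInteger();

        Thread sender = new Thread(() -> {
            for (int i = 0; i < messages; i++) {
                wire.add("session-update-" + i);    // the Node.class role
            }
        });
        Thread receiver = new Thread(() -> {
            for (int i = 0; i < messages; i++) {
                try {
                    wire.take();                    // the Client.class role
                    received.incrementAndGet();
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                    return;
                }
            }
        });

        sender.start();
        receiver.start();
        sender.join();
        receiver.join();
        return received.get();
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("received " + exchange(3) + " of 3 messages");
    }
}
```

A single JVM doing both jobs is the realistic case: each application server instance simultaneously produces updates for its own sessions and consumes updates for everyone else's.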

The server thread sends a configurable number of objects of the type NodeInfo (NodeInfo.class). The larger the number of objects being sent by the server, the more meaningful the obtained metrics are. This is not only because, statistically, the average values are more representative, but also because, by adding more iterations, we extend the run over a longer period of time and observe the state of our cluster under different conditions.

The size of each object being sent through the network is randomized by the addition of a random StringBuffer attribute. Here is how the random-size attribute is built:

public StringBuffer createRandomSizeAttribute(){
    Random rnd = new Random(System.currentTimeMillis());
    // nextInt(100) picks a value in [0, 100) directly; it avoids the
    // Math.abs(rnd.nextInt()) % 100 pitfall, where Integer.MIN_VALUE
    // stays negative after Math.abs().
    int randInt = rnd.nextInt(100);
    StringBuffer dev = new StringBuffer("big");
    for(int i = 0; i < randInt; i++){
      dev = dev.append("bigger!");
    }
    return dev;
}

Random-size objects ensure that we cover different session scenarios: from small amounts of information per session (such as in an Internet bookstore where purchases rarely exceed a couple of items) to large amounts (such as in an Internet supermarket, where a whole shopping cart is maintained in memory).

Additionally, each random-size object contains other important attributes:

  • The message string the server wants to share with the client.
  • The server's hostname (the node that originated the message).
  • The delay between each send operation and the next one.
  • A time stamp defining when the object was created.
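
Based on the attributes listed above, the payload class presumably looks something like the following sketch. The class is named NodeInfoSketch to make clear it is a reconstruction; apart from getTime() and getDelay(), which appear in the client code, all field and accessor names are assumptions.

```java
import java.io.Serializable;

// Hypothetical reconstruction of the NodeInfo payload; not the tool's code.
public class NodeInfoSketch implements Serializable {
    private static final long serialVersionUID = 1L;

    private final String message;       // message the server shares
    private final String hostname;      // node that originated the message
    private final long delay;           // ms between this send and the next
    private final long time;            // creation time stamp (ms since epoch)
    private final StringBuffer padding; // the random-size attribute

    public NodeInfoSketch(String message, String hostname,
                          long delay, StringBuffer padding) {
        this.message = message;
        this.hostname = hostname;
        this.delay = delay;
        this.time = System.currentTimeMillis();
        this.padding = padding;
    }

    public long getTime()      { return time; }
    public long getDelay()     { return delay; }
    public String getMessage() { return message; }
    public String getHostname(){ return hostname; }

    public static void main(String[] args) {
        NodeInfoSketch info =
            new NodeInfoSketch("hello", "node1", 250L, new StringBuffer("big"));
        System.out.println(info.getHostname() + " created at " + info.getTime());
    }
}
```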

The creation time stamp is the base upon which the latency metrics are built. The server marks each object with its creation time and then sends it through IP multicast. When the object is rebuilt (on the client thread), this time stamp is used to calculate the delay in the operation: the delay between each send operation and the next is subtracted from the time stamp, and the resulting start time is subtracted from the current time. Here is the code used to read the objects and to calculate the latency (the deserialization is done in a separate method). This code runs as a separate thread, hence the run() signature.

 ...
public void run() {
   try {
       // Create the multicast elements
       MulticastSocket socket = new MulticastSocket(getPORT());
       InetAddress address = InetAddress.getByName(getMCAST());
       byte[] data = new byte[100000];
       DatagramPacket packet = new DatagramPacket(data, data.length);
       // Join the socket to the multicast address group
       socket.joinGroup(address);
       Vector received = new Vector();
       // Loop on the socket and receive data
       while(true){
           // Variable definitions
           ...
           // Receive info (blocks until a packet arrives)
           socket.receive(packet);
           // Deserialize object, cast to NodeInfo and calculate
           byte[] indata = packet.getData();
           NodeInfo receivedNodeInfo = (NodeInfo) deserializer(indata);
           // Effective start time: creation time stamp minus the send delay
           realstart = receivedNodeInfo.getTime()
                      - receivedNodeInfo.getDelay();
           totaldelay = System.currentTimeMillis() - realstart;
           size = getSizeinfo(receivedNodeInfo);
           // Normalize to milliseconds per kilobyte
           ratio = ((double) totaldelay / size) * 1024;
           statistics = new Statistics((double) totaldelay, ratio);
           received.add(statistics);
           ...
       }
   } catch (Exception e) {
       // run() cannot throw checked exceptions, so report and bail out
       e.printStackTrace();
   }
...
}
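
The sending side (the Node.class role) is the mirror image: serialize the payload and address the packet to the multicast group. A sketch, with assumed names:

```java
import java.io.*;
import java.net.*;

public class SenderSketch {
    // Mirror image of the client's deserializer() method: turn the payload
    // into the bytes that will fill the DatagramPacket.
    static byte[] serialize(Serializable payload) throws IOException {
        ByteArrayOutputStream bos = new ByteArrayOutputStream();
        try (ObjectOutputStream oos = new ObjectOutputStream(bos)) {
            oos.writeObject(payload);
        }
        return bos.toByteArray();
    }

    // Send one serialized payload to the multicast group; group address
    // and port are illustrative.
    static void send(Serializable payload, String group, int port)
            throws IOException {
        byte[] data = serialize(payload);
        try (DatagramSocket socket = new DatagramSocket()) {
            DatagramPacket packet = new DatagramPacket(
                data, data.length, InetAddress.getByName(group), port);
            socket.send(packet);  // one replication message onto the wire
        }
    }

    public static void main(String[] args) throws Exception {
        System.out.println("payload is "
                           + serialize("hello cluster").length + " bytes");
    }
}
```

Note that the serialized form of even a short string is noticeably larger than the string itself, because the stream carries class metadata; this overhead is part of what the latency metrics capture.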

Figure 2 describes the architecture of the Replication Latency Tester.

Figure 2. Replication Latency Tester architecture
