

Java Design and Performance Optimization

Catching OutOfMemoryErrors to Preserve Monitoring and Server Processes

08/22/2001

Why would you want to catch an OutOfMemoryError? If an OutOfMemoryError is generated by the JVM, there is not much you can do, so what use is the OutOfMemoryError?

Encountering an OutOfMemoryError means that the garbage collector has already tried its best to free memory by reclaiming space from any objects that are no longer strongly referenced. If it could not reclaim enough space, then it also tried to obtain memory from the underlying operating system, unless heap space is already at the JVM upper memory bound set by the -Xmx parameter (-mx in JVMs prior to Java 2). So encountering the OutOfMemoryError means that there is no more heap space that can currently be reclaimed, and that either the operating system cannot provide any more memory to the JVM or you have reached the JVM upper memory bound. In any case, there is not much you can do, so when would you ever want to catch an OutOfMemoryError?

The following sections describe a few special situations when it can be useful to catch an OutOfMemoryError.

Expanding Memory and Determining Memory Limits

The JVM heap space is the memory area where all objects reside. In addition to objects, the heap can also contain memory reserved for the garbage collector and for some other JVM activities. The overall heap size is normally set by two parameters of the Java executable:

  • -Xms (-ms before Java 2) to specify the initial heap size when the JVM starts up; and
  • -Xmx (-mx before Java 2) to specify the maximum heap size that the JVM is allowed to grow to.

If these parameters are not specified, the JVM uses default values that vary, depending on the JVM version and vendor. The default initial values are usually one or two megabytes, and the default maximum values are typically between 16 and 64 Mbytes.
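For example, a JVM could be started with a 64-megabyte initial heap and a 256-megabyte maximum heap like this (MyServer is just a placeholder class name; use -ms and -mx instead on JVMs before Java 2):

java -Xms64m -Xmx256m MyServer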

Related Reading

Java Performance Tuning
By Jack Shirazi

Clearly, if you control how the JVM is started, you can specify the heap values you want. But what if your application is running in a JVM that was not started under your control? How can you determine what size the heap can reach? There is a method in the Runtime class, Runtime.totalMemory(), which looks like it would give us this information, but the value it returns is the current heap size, not the largest possible size. In Java version 1.4 there will be a new method, Runtime.maxMemory(), which returns the -Xmx value, but that doesn't help with earlier JVM versions. In addition, even if we know the value passed to -Xmx, we are not guaranteed that the JVM can reach the indicated size, since a size may have been specified that is too big for the underlying system.
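For reference, here is a minimal sketch (mine, not from the original listing) of the calls in question; remember that maxMemory() is only available from the 1.4 release onwards:

Runtime runtime = Runtime.getRuntime();
//Current size of the heap -- not the largest size it could reach.
long currentHeap = runtime.totalMemory();
//The -Xmx bound -- only available in Java 1.4 and later.
long maximumHeap = runtime.maxMemory();
System.out.println("Current heap: " + currentHeap + " bytes");
System.out.println("Maximum heap: " + maximumHeap + " bytes");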

One way to reliably determine the maximum possible heap size is to grow the heap until it can no longer expand. Doing this is quite simple: keep creating objects until we hit an OutOfMemoryError. The following testMemory() method repeatedly creates objects with a size of one megabyte until an OutOfMemoryError is generated:

public static final int MEGABYTE = 1048576;
public static long testMemory(int maximumMegabytesToTest)
{
  //Hold on to memory, or it will be garbage collected
  Object[] memoryHolder = new Object[maximumMegabytesToTest];
  int count = 0;
  try
  {
    for (; count < memoryHolder.length; count++)
    {
      memoryHolder[count] = new byte[MEGABYTE];
    }
  }
  catch(OutOfMemoryError bounded){}
  long highWater = Runtime.getRuntime().totalMemory();
  //  System.out.println("High water in bytes: " 
  //  + highWater);
  //  System.out.println("Megabytes allocatable in 
  //  megabytes: " + count);
  memoryHolder = null; //release for GC
  //We know we could allocate "count" megabytes and 
  //have a high water mark of "highWater". Return 
  //whichever you prefer.
  //return count;
  return highWater;
}

The method returns the size that the heap reached when the OutOfMemoryError is generated. There are, however, some consequences to using this method. Firstly, although I use a one megabyte size to incrementally request memory, the actual size allocated will be more than one megabyte since the byte[] array object has some additional size overhead from being an object. Secondly, if the heap is fragmented, and the garbage collector cannot or does not defragment the heap sufficiently, there will actually be further space in the heap for object creation, even though we cannot create further megabyte-sized objects. Finally, the JVM may have grown so much that it will now be paged by the operating system (see the "Operating system paging" sidebar), which will cause a significant decrease in performance. This would occur anyway if the JVM needed to grow large enough during the normal operation of the program, but running the testMemory() method would impose the overhead sooner.

Operating system paging

Operating system paging occurs when a program is too big to fit into the available real memory (RAM) but can fit in virtual memory. Paging moves pages of the program back and forth between RAM and the paging file on disk, allowing the operating system to appear to have more memory than the available RAM, but at the expense of program performance.

None of these points matter if you are only interested in the high-water mark, that is, the maximum size that the heap can be grown to, but can be important in other situations.

Flushing Memory

I often need to run tests on Java code to determine performance or memory costs. Whenever I'm trying to compare two different methods, I need to ensure that the tests start from the same base environment for each method, so that I'm comparing like with like. Part of this environment leveling is to ensure that there are no other processes or threads taking system resources, that the system is running processes at the same priorities, and that any caches that apply are equally filled or emptied before starting each method. One part of environment leveling is to ensure that the runtime system starts with the same memory availability for each method. Without this leveling, it is possible for the first method run to incur the cost of growing the JVM heap size (requesting chunks from the underlying system), or alternatively, for the second method to encounter the costs of garbage collection as it requires memory which was allocated during the first method but has not yet been reclaimed.

Ideally, the JVM would provide a method that would flush memory for me. System.gc() looks like it should be such a call. Unfortunately, System.gc() is only a hint to the JVM that "now would be a good time to run the garbage collector." The JVM can ignore the hint, or can run a partial garbage collection, or a full mark-and-sweep of all spaces, or whatever. Instead of relying on the garbage collector, I adapt the method from the previous section to flush memory. I do this by allocating as much memory as possible, as with the earlier testMemory() method, and then I release all held memory and request a little more. The last request is to trigger the garbage collector to immediately reclaim all the memory I was holding onto. The method is straightforward:

public static void flushMemory()
{
  //Use a vector (java.util.Vector) to hold the memory.
  Vector v = new Vector();
  int count = 0;
  //increment in megabyte chunks initially
  int size = 1048576;
  //Keep going until we would be requesting 
  //chunks of 1 byte
  while(size > 1)
  {
    try
    {
      for (; true ; count++)
      {
        //request and hold onto more memory
        v.addElement( new byte[size] );
      }
    }
    //If we encounter an OutOfMemoryError, keep 
    //trying to get more memory, but asking for 
    //chunks half as big.
    catch (OutOfMemoryError bounded){size = size/2;}
  }
  //Now release everything for GC
  v = null;
  //and ask for a new Vector as a new small object 
  //to make sure garbage collection kicks in before 
  //we exit the method.
  v = new Vector();
}

This is a JVM-independent solution to flushing memory. You could even conceivably use this in an application if you knew that you had several seconds of time at some point when the application could be doing nothing else, but I wouldn't really recommend it. Although flushMemory() should work on any JVM, it is a stressful procedure and may break some JVMs. In particular I know that the Windows JVM from the Sun 1.2.0 release crashed the main thread when I ran flushMemory(), and put the garbage collector thread into a loop if I additionally ran with the -verbosegc option.
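As an illustration of the environment leveling described earlier (methodA() and methodB() are hypothetical placeholders for whatever two pieces of code are being compared), a test harness might call flushMemory() before timing each alternative:

//Sketch only: level the memory environment before timing each test.
//methodA() and methodB() stand in for the code being compared.
public static void compare()
{
  flushMemory();                  //start the first test from a flushed heap
  long start = System.currentTimeMillis();
  methodA();
  long timeA = System.currentTimeMillis() - start;

  flushMemory();                  //level the environment again
  start = System.currentTimeMillis();
  methodB();
  long timeB = System.currentTimeMillis() - start;

  System.out.println("methodA: " + timeA + "ms, methodB: " + timeB + "ms");
}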

Other Situations

Further Resources

• The Java performance tuning website

• The Java Performance Tuning book

As you've seen, there are some situations where catching an OutOfMemoryError is useful. Most runtime situations where you might want to catch an OutOfMemoryError involve trying to carry on processing in some long-lived monitor or server process beyond such a fatal error. It is feasible to catch the OutOfMemoryError if, in doing so, you know that you can restore your application to a known state that can proceed with processing. For example, many applications have threads running in a daemon state. This type of thread is automatically terminated by the JVM when all the threads left running are daemon threads. If you have a daemon thread, it normally provides some simple service to the application. For example, a timer thread could simply set timeout flags on shared variables. In these kinds of cases, it is feasible to catch the OutOfMemoryError, since it is relatively simple to return to a known state, and the thread can continue providing its service. Some other applications are entirely event-driven. These applications can catch an OutOfMemoryError, deregister the event listener that threw the OutOfMemoryError, request all registered cache managers to reduce memory used, and then simply carry on processing.
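To make the daemon-thread case concrete, a timer thread along the following lines (a sketch of my own; timedOut is a hypothetical shared flag) can catch the error, fall back to its trivially simple known state, and keep providing its service:

//Sketch of a daemon timer thread that survives an OutOfMemoryError.
//timedOut is a hypothetical timeout flag shared with the application.
public class TimerDaemon extends Thread
{
  public static volatile boolean timedOut = false;

  public TimerDaemon()
  {
    setDaemon(true); //the JVM exits when only daemon threads are left
  }

  public void run()
  {
    while (true)
    {
      try
      {
        Thread.sleep(1000); //wait for the timeout period
        timedOut = true;    //set the shared timeout flag
      }
      catch (InterruptedException ignored) {}
      catch (OutOfMemoryError e)
      {
        //The known state here is trivial: the thread holds no
        //references, so just carry on with the next cycle.
      }
    }
  }
}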

However, for most threads, there are usually too many things that can go wrong when an OutOfMemoryError is thrown. Rather than catch an OutOfMemoryError and a host of other possible Error objects, you are normally better off monitoring the process from a separate process, or using a looped startup script, and automatically restarting if a fatal error occurs. This way, the restarted application is in a known state. You can do this from Java by using Runtime.exec() to start the process and Process.waitFor() to monitor that it is alive. If you use this technique, you should ensure that you continually read the process output (from Process.getInputStream() and Process.getErrorStream()); otherwise the process could block on its output and appear never to terminate.
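A minimal watchdog along these lines (my own sketch; "java MyServer" is a hypothetical command for the monitored process) shows the shape of the approach:

import java.io.InputStream;

//Sketch of a watchdog that restarts a server process whenever it dies.
//"java MyServer" is a placeholder command; substitute your own.
public class Watchdog
{
  public static void main(String[] args) throws Exception
  {
    while (true)
    {
      Process server = Runtime.getRuntime().exec("java MyServer");
      //Drain the output and error streams in separate threads so the
      //child process cannot block on a full output buffer.
      drain(server.getInputStream());
      drain(server.getErrorStream());
      int exitValue = server.waitFor(); //blocks until the process dies
      System.out.println("Server exited with " + exitValue + "; restarting");
    }
  }

  private static void drain(final InputStream in)
  {
    new Thread()
    {
      public void run()
      {
        try
        {
          byte[] buffer = new byte[1024];
          while (in.read(buffer) != -1) {} //discard everything that is read
        }
        catch (java.io.IOException e) {}
      }
    }.start();
  }
}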

Jack Shirazi is the author of Java Performance Tuning. He was an early adopter of Java, and for the last few years has consulted mainly for the financial sector, focusing on Java performance.

