Embedded Java

by Vincent Perrier
08/15/2001

Java's strong appeal for embedded applications is sometimes offset by concerns about its speed and its memory requirements. However, there are techniques that you can use to boost Java performance and reduce memory needs, and of course the Java virtual machine you choose affects Java performance, too. You can make better-informed decisions about using Java by understanding the factors that affect its performance and selecting meaningful benchmarks for embedded applications.

Techniques for improving application execution and choosing the right Java virtual machine (JVM) address only a few aspects of system architecture that affect overall Java performance. When selecting an embedded Java platform, you must take into account a host of other factors, beyond the scope of this article, that have an impact on performance. Among them are hardware processor selection, Java compatibility and supported APIs, application reliability and scalability, the choice of a real-time operating system (RTOS) with associated native libraries and drivers, the availability of Java development tool kits and middleware, graphics support, and the ability to put the application code into ROM.

Once you've selected a hardware and software development platform, there are a variety of factors to consider that will help you choose the best-performing JVM for your application.

Java Is Big and Slow: Myth or Reality?

Although the average Java bytecode application executes about ten times more slowly than the same program written in C or C++, how well an application is written in Java can have a tremendous impact on performance, as a study by Lutz Prechelt, "Comparing Java vs. C/C++ Efficiency Differences to Interpersonal Differences" (Communications of the ACM, October 1999), has shown. In the study, 38 programmers were asked to write the same application program in either C/C++ or Java. Applying statistical analysis to the performance data for the programs revealed that actual performance differences depended more on the way the programs were written than on the language used. Indeed, the study showed that a well-written Java program could equal or exceed the efficiency of an average-quality C/C++ program.
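As a simplified illustration, not drawn from the study, of how coding style alone can swing Java performance: the two methods below build the same string, but the first allocates a fresh String (and a hidden temporary buffer) on every pass through the loop, while the second reuses a single StringBuffer.

// A simplified illustration of how coding style affects Java performance.
// Both methods produce the same result; the first creates a new String
// (and a hidden temporary buffer) on every iteration, while the second
// reuses a single StringBuffer.
public class ConcatDemo {

    // Naive version: each += copies the entire string built so far.
    static String joinSlow(String[] items) {
        String result = "";
        for (int i = 0; i < items.length; i++) {
            result += items[i] + ",";
        }
        return result;
    }

    // Reusing one StringBuffer avoids the repeated copying.
    static String joinFast(String[] items) {
        StringBuffer buf = new StringBuffer();
        for (int i = 0; i < items.length; i++) {
            buf.append(items[i]).append(',');
        }
        return buf.toString();
    }
}

On a memory-constrained device, differences of this kind add up quickly, which is why two correct programs solving the same problem can show very different execution times.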

Various approaches are available for boosting bytecode execution speed. They include using a just-in-time (JIT) compiler, an ahead-of-time compiler, or a dynamic adaptive compiler; putting the Java application code into ROM ("ROMizing" it); rewriting the JVM's bytecode interpretation loop in assembly language; and using a Java hardware accelerator.

Consider Compilers

Diagram: You can implement graphics above the hardware level with Java's heavyweight graphical tool kit (left) or the lightweight version (right). The lightweight version runs faster and has a smaller memory footprint, but writing an implementation is harder and slower.

JIT compilers, which compile bytecode on the fly during execution, produce excellent performance improvements in desktop Java applications, but they typically require 16 to 32 MB of RAM in addition to the application's requirements. That large memory requirement places JIT compilers out of reach for many categories of embedded applications, so they generally aren't a suitable choice there.

Ahead-of-time compilers rival JIT compilers in increasing Java execution speed. Unlike JIT compilers, they're used before the application is loaded onto the target device, as their name indicates. That eliminates the need for extra RAM, but it increases the need for ROM or flash memory (that is, static storage memory), because compiled machine code requires four to five times the memory of Java bytecode. Compiling ahead of time also tends to undermine one of the great benefits of the Java platform: a measure of dynamic extensibility is lost, because it may not be possible to download new versions of compiled classes. Additionally, any dynamically loaded code, such as an applet, won't benefit from ahead-of-time compilation and will execute more slowly than resident compiled code.
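To make the extensibility point concrete, here is a minimal sketch with invented names (the Filter interface and the class name passed in are hypothetical): a class loaded by name at run time arrives as ordinary bytecode, so even if the resident classes were compiled ahead of time, this newly loaded code can only be interpreted.

// Hypothetical sketch: code loaded by name at run time arrives as bytecode,
// so it cannot have been compiled ahead of time on the target device.
interface Filter {
    int apply(int sample);
}

public class PluginLoader {

    // Loads a Filter implementation whose class name is known only at run
    // time, for example a newly downloaded version of a class.
    public static Filter load(String className) throws Exception {
        Class c = Class.forName(className);   // pulls in the bytecode
        return (Filter) c.newInstance();      // runs interpreted on an AOT-only system
    }
}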

Profiling Java code, although somewhat complex, can help minimize code expansion when you're using an ahead-of-time compiler. A good goal is to compile only that 20 percent of the Java classes in which the application spends 80 percent or more of its time.
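If no full profiler is available on the target, even coarse instrumentation can show where the time goes. The sketch below is only illustrative; work() is a stand-in for whichever routine you suspect dominates execution time, and the granularity of System.currentTimeMillis() limits how small a region you can measure.

// Coarse, illustrative timing of a suspected hot spot. work() is a
// placeholder for the real candidate routine.
public class TimingProbe {

    static void work() {
        double x = 0.0;
        for (int i = 1; i < 1000; i++) {
            x += Math.sqrt(i);
        }
    }

    public static void main(String[] args) {
        long start = System.currentTimeMillis();
        for (int i = 0; i < 1000; i++) {
            work();
        }
        long elapsed = System.currentTimeMillis() - start;
        System.out.println("work() x 1000 took " + elapsed + " ms");
    }
}

Measurements like this, repeated across candidate classes, help identify the 20 percent of classes worth compiling ahead of time.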

Dynamic adaptive compilers offer a good compromise between JIT and ahead-of-time compilers (see Table 1). They're similar to JIT compilers in that they translate bytecode into machine code on the fly. Dynamic adaptive compilers, however, perform statistical analysis on the application code to determine where the code merits compiling and where it's better to let the JVM interpret the bytecode. The memory used by this type of compiler is user-configurable, so you can evaluate the trade-off between memory and speed and decide how much memory to allocate to the compiler.

Table 1.

Placing the bytecode into ROM can also contribute to better application performance. ROMizing doesn't make the code execute faster; rather, it translates the code into a format that the JVM can execute directly from ROM, so the code loads faster because class loading and code verification, tasks normally performed by the JVM, are eliminated.

Another way to speed up bytecode execution without using ahead-of-time or dynamic compilation techniques is to rewrite the JVM's bytecode interpreter. The interpreter is normally a large C program, and hand-coding its interpretation loop in assembly language can make it run faster.
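The sketch below is a toy dispatch loop, written in Java purely for readability (a real JVM's interpreter is C or hand-tuned assembly and covers the full bytecode set); it shows the fetch-decode-dispatch structure that such hand optimization targets.

// Toy fetch-decode-dispatch loop, in Java purely for illustration. A real
// JVM interpreter is a large C routine over the full bytecode set; this is
// the structure that hand-coded assembly speeds up.
public class ToyInterpreter {

    static final byte PUSH = 0, ADD = 1, PRINT = 2, HALT = 3;

    public static void run(byte[] code) {
        int[] stack = new int[16];
        int sp = 0;                            // stack pointer
        int pc = 0;                            // program counter
        while (true) {
            switch (code[pc++]) {              // fetch and decode one opcode
                case PUSH:  stack[sp++] = code[pc++];            break;
                case ADD:   sp--; stack[sp - 1] += stack[sp];    break;
                case PRINT: System.out.println(stack[--sp]);     break;
                case HALT:  return;
                default:    throw new IllegalStateException("bad opcode");
            }
        }
    }

    public static void main(String[] args) {
        run(new byte[] { PUSH, 2, PUSH, 3, ADD, PRINT, HALT });  // prints 5
    }
}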
