Cat Fight in a Pet Store: J2EE vs. .NET

by Dean Wampler
11/28/2001

Java Pet Store is an example application in the "J2EE Blueprints" series. It documents best practices, design patterns, and architectural ideas for J2EE applications. Recently, gotdotnet.com, a Microsoft-sponsored Web site, implemented the same application in C# and .NET. Microsoft claims that, compared to Java Pet Store, the .NET version requires one-third the lines of code (LOCs), delivers 28-times-faster average response times (for 450 concurrent users), uses one-sixth the CPU, and scales much better as the number of users increases. Microsoft also argues that the architecture of .NET is superior. (Sun has provided a partial response.)

I did a project this year that started with the Pet Store as a springboard, so I decided to examine these claims. When done properly, implementing the same application with two different technologies can be a good way to compare the technologies. In this case, however, there are a number of problems with these claims:

  • The experimental flaws render the performance results unreliable.
  • Counting LOCs says little about productivity.
  • While the two implementations support the same feature set, they actually support two different sets of "nonfunctional" requirements, namely an emphasis on portability over performance (J2EE version) or vice versa (.NET version). Microsoft's analysis of the two versions says more about these tradeoffs than about the actual strengths and weaknesses of .NET vs. J2EE.

In this article, I show that experimental flaws invalidate the performance results and that the code comparisons don't address developer productivity or framework superiority. I also discuss how the presentation-tier technology in .NET has some advantages over the J2EE technology, which the Java community should address.

About Those Performance Numbers

Oracle Ran Some Tests

This catfight started when Oracle published a comparison of their Oracle9iAS application server against an unnamed J2EE "Application Server X." Oracle used Java Pet Store as a test application and claimed superior performance for Oracle9iAS.

These results aren't important for our discussion here. We note, however, that Oracle used Sun SPARC servers running Solaris in a three-tier deployment: a client tier running an Oracle proprietary client simulator, a middle tier for the business logic and page construction, and a database tier.

Also, Oracle fixed some performance bugs and modified the Pet Store code and schema to improve scalability for a large number of users. (Microsoft labels the modified application "highly tuned.") Oracle also limited the tests to those pages with better scalability characteristics.

Most importantly, Oracle admitted that the Pet Store is not designed as a performance benchmark. They used it because ECPerf benchmarks were not yet available. In fact, the Pet Store was designed as a learning tool, not a performance tool, and most optimizations were deliberately omitted so they wouldn't obscure the code. Furthermore, Oracle observed that their tests didn't exercise the Enterprise Beans very heavily. Most of the work happens in the database and in the Data Access Objects (DAOs), Java objects that provide a database-agnostic interface to persistence (one of the design patterns emphasized in Java Pet Store). Since the EJBs are the meat of a J2EE application, useful tests must give them appropriate weight.
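
To see why the tests land mostly in the database tier, consider a minimal sketch of the DAO idea below. The class and table names are illustrative, not the actual Pet Store code: the business tier depends only on an interface, and a database-specific implementation hides the SQL behind it.

    import java.sql.Connection;
    import java.sql.PreparedStatement;
    import java.sql.ResultSet;
    import java.sql.SQLException;
    import javax.sql.DataSource;

    // Illustrative stand-ins for the real Pet Store domain and exception types.
    class Item {
        final String id;
        final String name;
        Item(String id, String name) { this.id = id; this.name = name; }
    }

    class DAOException extends Exception {
        DAOException(String message, Throwable cause) { super(message, cause); }
    }

    // The business tier (the EJBs, in the Pet Store) codes against this interface only.
    interface ItemDAO {
        Item findItem(String itemId) throws DAOException;
    }

    // A JDBC-backed implementation; a vendor-specific variant (Oracle, SQL Server, ...)
    // can be substituted behind the same interface without touching the callers.
    class JdbcItemDAO implements ItemDAO {
        private final DataSource dataSource;

        JdbcItemDAO(DataSource dataSource) { this.dataSource = dataSource; }

        public Item findItem(String itemId) throws DAOException {
            String sql = "SELECT itemid, name FROM item WHERE itemid = ?";
            try (Connection con = dataSource.getConnection();
                 PreparedStatement ps = con.prepareStatement(sql)) {
                ps.setString(1, itemId);
                try (ResultSet rs = ps.executeQuery()) {
                    if (!rs.next()) {
                        throw new DAOException("No item with id " + itemId, null);
                    }
                    return new Item(rs.getString("itemid"), rs.getString("name"));
                }
            } catch (SQLException e) {
                throw new DAOException("Lookup failed for item " + itemId, e);
            }
        }
    }

Because the EJBs largely delegate persistence work to objects like this, a test workload dominated by DAO calls and database queries says little about the container itself.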

Microsoft and gotdotnet.com Open a .NET Pet Store

Microsoft implemented the Pet Store in C# and .NET. They did not implement the Administrator or Mail applications that are part of Java Pet Store. They were careful, however, to exclude those parts of Java Pet Store from their comparisons.

Microsoft reported the dramatic performance numbers mentioned in the introduction. Unfortunately, they made a serious mistake: rather than re-run the Oracle tests in their environment, they simply quoted Oracle's results.

They used a three-tier test bed with Intel servers, Windows 2000, and SQL Server, which is very different from the Sun/Oracle test bed used by Oracle. This means that unknown, potentially large systematic differences are present in the numbers:

  • Since the tests used different client drivers, "response time" might mean something very different on each test bed.
  • The background network and OS activities are certainly different.
  • The relative performance of SQL Server vs. Oracle, Solaris vs. Windows, and Intel vs. Sun hardware differs and should be analyzed in its own right.

Hence, meaningful performance comparisons between J2EE and .NET can't be made here. The differences might be real, but we just can't tell. Furthermore, since J2EE and .NET are designed for large-scale, highly concurrent, high-availability applications, Pet Store tests say little about how well these frameworks support those applications. Finally, performance should be put into perspective. The real question isn't "which is faster," but "if approach X satisfies my higher-priority criteria, will it be fast enough?"
