ONJava.com -- The Independent Source for Enterprise Java

Lightweight R/O Mapping

by Norbert Ehreke
12/07/2005

An unwritten consensus in the IT industry is that data shared between object-oriented software and relational databases is best exchanged with object/relational (O/R) mapping frameworks, where the entity relationship (ER) model follows the object-oriented model. This article proposes a reversed, lightweight approach supported by a small framework called Amber. This approach uses Java annotations to manage the CRUD cycle (create, read, update, delete) of JavaBeans. Transaction management is put back into the database, and XML mapping descriptors are replaced by annotations. This article is for intermediate Java developers who are interested in efficient transactions with databases without XML descriptors.

Motivation

Common O/R mapping frameworks are very powerful; however, their design and setup introduce several problems that are rarely discussed. We will address the shortcomings listed below, demonstrating an alternative with a small framework called Amber.

  1. OO-driven data modeling leads to poor entity relationship models.
  2. XML descriptors make maintenance difficult.
  3. Transaction management in the O/R tier is difficult.
  4. The learning curve of existing frameworks is relatively steep.

To exchange data between a model described by entity relationships and an object-oriented model, it is necessary to overcome the so-called impedance mismatch. With most O/R mapping tools, the object model rules over the relational model. In essence, this means that the Java persistence layer is responsible for generating the entity relationship model from an existing object model. The idea is compelling, because the promise is that once the business model is designed, the development team no longer needs to worry about persistence.

Related Reading

Java 5.0 Tiger: A Developer's Notebook
By David Flanagan, Brett McLaughlin

For regular O/R tools, the ER model is a result, a product, at best a container. This clashes with system setups in which the business process is actually designed as an ER model. In that case, tuning the ER model is difficult or even impossible, because the O/R framework might reconstruct it at any time. Also, when the business process changes and the adaptations in the O/R domain are automatically propagated to the ER domain, the ER model becomes convoluted and performance sometimes drops to critical levels.

Another problem exists. The classes to be persisted need to be configured with external XML specification (mapping) files. At first glance this seems harmless, but with living systems it quickly becomes a pain in the neck: whenever a change occurs, there is more than one place to look in order to fix the problem, namely the source code and the mapping files.

Finally, existing O/R frameworks are designed to handle transactions. Following the philosophy of those frameworks, this is absolutely necessary, because the storage container (i.e., the relational database) is treated as just that: a dumb container. Having to deal with transaction management in the middle tier, however, is simply not desirable. It belongs in the database.

Introducing Amber

Amber approaches the problem of data exchange from the opposite angle. It assumes that the ER model is the reference for the resulting OO structures. It also assumes that database access is handled mainly via stored procedures, which provide a unified point of access to the database and are perfectly set up to handle transactions. Put provocatively: the middle tier is implemented as a set of stored procedures. An expert on ER modeling, the DBA, is thus responsible for design and optimization, including the stored procedures, which results in much better structures and in faster, more secure access than automatically generated ones provide. As a consequence, a number of rather difficult problems that normally need to be addressed simply fall away.
  • Transactions can (and should) be encapsulated within stored procedures.
  • Read access deals only with result sets.
  • Write access only requires a stored procedure call, not embedded SQL in Java code.
  • With parameterized stored procedure calls, there are no SQL injection vulnerabilities.
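
As a sketch of what the write path then looks like from Java, the following uses plain JDBC's CallableStatement. The procedure name transfer_funds and its parameters are hypothetical (the article does not name any particular procedure), and the begin/commit/rollback logic is assumed to live inside the procedure itself, as described above.

```java
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.SQLException;

class StoredProcedureCall {

    // Builds the JDBC escape syntax for a procedure call,
    // e.g. "{call transfer_funds(?, ?, ?)}".
    static String callSyntax(String procedure, int paramCount) {
        StringBuilder sb = new StringBuilder("{call ").append(procedure).append("(");
        for (int i = 0; i < paramCount; i++) {
            sb.append(i == 0 ? "?" : ", ?");
        }
        return sb.append(")}").toString();
    }

    // Invokes a hypothetical procedure that moves money between two
    // accounts. The transaction is handled entirely inside the database;
    // the Java side only binds parameters and executes.
    static void transfer(Connection con, int fromId, int toId, double amount)
            throws SQLException {
        CallableStatement cs = con.prepareCall(callSyntax("transfer_funds", 3));
        try {
            cs.setInt(1, fromId);
            cs.setInt(2, toId);
            cs.setDouble(3, amount);
            cs.execute();
        } finally {
            cs.close();
        }
    }
}
```

Note that no SQL text from user input ever reaches the statement: the parameters are bound, which is why the SQL injection point above holds.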

Of course, this means that an entire playing field normally located in the Java domain is taken away. Make no mistake: that is a huge gain for Java developers, not a loss.

Mapping

Central to Amber is the idea that no matter how a query is submitted to a database, the resulting tabular data is, in essence, a list of Java objects. At least, that is how it should be treated from the perspective of the Java developer. The only problem is mapping the columns to the properties of an object. Conversely, when writing to a database, the properties of a Java object need to be mapped to the parameters in the call.

Amber maps the rows of a result set to a JavaBean, and uses the same mechanism to map the bean, or rather the contents of the bean, back to the parameters of an update, insert, or delete call to the database. For more information about JavaBeans and their definition, please check the Resources section.
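
Amber's actual annotation and class names are not shown here, but the mechanism can be sketched as follows: a hypothetical @Column annotation ties a setter to a result-set column, and a small reflective mapper copies one row (simulated below as a Map standing in for a ResultSet row) into a fresh bean. This is an illustration of the technique, not Amber's API.

```java
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import java.lang.reflect.Method;
import java.util.Map;

// Hypothetical column annotation; it records which result-set
// column feeds the annotated setter.
@Retention(RetentionPolicy.RUNTIME)
@Target(ElementType.METHOD)
@interface Column {
    String name();
}

class Customer {
    private int id;
    private String name;

    @Column(name = "CUSTOMER_ID")
    public void setId(int id) { this.id = id; }

    @Column(name = "CUSTOMER_NAME")
    public void setName(String name) { this.name = name; }

    public int getId() { return id; }
    public String getName() { return name; }
}

class RowMapper {
    // Maps one row onto a fresh bean instance by matching
    // @Column names against the row's column names.
    static <T> T map(Map<String, Object> row, Class<T> type) throws Exception {
        T bean = type.getDeclaredConstructor().newInstance();
        for (Method m : type.getMethods()) {
            Column col = m.getAnnotation(Column.class);
            if (col != null && row.containsKey(col.name())) {
                m.invoke(bean, row.get(col.name()));
            }
        }
        return bean;
    }
}
```

The same annotation lookup, run in reverse over the getters, yields the parameter values for an update, insert, or delete call.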

This is done using a new Java language feature called annotations, available since J2SE 5.0.

Annotations, also called "metadata" under JSR 175, are supplementary code elements that provide more information about the purpose of a method, class, or field. The motivation for metadata stems mainly from the Javadoc API, which is used for inline documentation. So, without interfering with the actual code, annotations describe the context of code and how it can or should be used. If you would like to know more about annotations and what can be done with them, see Tiger: A Developer's Notebook, or, for a more playful example, my article "Annotations to the Rescue."
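
As a minimal, self-contained illustration of that idea (the @Author annotation here is invented for the example, not part of any framework), an annotation is declared like an interface, attached to a class, and read back at runtime via reflection:

```java
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;

// Declared like an interface; RUNTIME retention makes it
// visible to reflection after compilation.
@Retention(RetentionPolicy.RUNTIME)
@interface Author {
    String value();
}

// The annotation describes the class without changing its behavior.
@Author("N. Ehreke")
class Report {
}

class AnnotationDemo {
    // Returns the author recorded on a class, or null if none.
    static String authorOf(Class<?> type) {
        Author a = type.getAnnotation(Author.class);
        return a == null ? null : a.value();
    }
}
```

This pattern, with column names in place of an author string, is all the machinery the mapping described above requires.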
