Unlocking the True Power of Entity EJBs
The New Architecture
There are all kinds of approaches to the design and architecture of a system. Some architects design from the data up and others design from the presentation view down. Other approaches have developers focus on transaction and request granularity. If you want to make the most of entity EJBs, I propose a new methodology: architecture by usage pattern.
This process requires an understanding of the business domain, a data model, and some expectations as to the patterns of usage of the data.
Understand your business requirements and model your entity EJBs in such a way that you create in-memory domain objects that encapsulate the data in a format most accessible by the rest of the system.
Implement your entity EJBs to this model, ignoring behavior of the data.
Test the EJB using a standard read-write transactional locking scheme to make sure that you can create, read, update, and destroy instances.
Compile a list of the usage patterns that the system will need to perform on the data, along with their relative weights. For example, I interviewed a couple of companies that have built Web-based systems; this is what we came up with:
Usage pattern and relative weight:
- Read data for display: 85%
- Read-write data, inserting records, and deleting records (all requiring transactional support): 10%
- Batch update data: 5%
Create a separate deployment for each entity EJB for each usage pattern. For example, in WebLogic Server, setting certain deployment-descriptor flags implements different usage patterns. (For more information about WebLogic deployment and capabilities, please see BEA's site. WebLogic is merely used here as an example of how this methodology can work.)
Usage pattern and WebLogic configuration:
- Read data for display: a read-only <concurrency-strategy>, with invalidations from a separate read-write deployment.
- Read-write data, inserting records, and deleting records (all requiring transactional support): aggressive loading (the relevant loading flag set to true).
- Batch update data: Required transaction demarcation, invoking only the setXXX(...) methods of the EJBs. This causes a findXXX(...) method to return a Collection of primary keys (PKs), but no data is loaded into the cache. A client can iterate through the Collection, modifying fields by calling individual setXXX(...) methods. When the transaction is committed, only the modified fields of all of the PKs are written to the database.
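As a rough sketch, the read-only deployment might be expressed in weblogic-ejb-jar.xml along these lines. The bean name, JNDI name, and cache size are illustrative only, and the exact tags and nesting vary by WebLogic version:

```xml
<weblogic-enterprise-bean>
  <!-- Hypothetical entity bean; all names and values are illustrative -->
  <ejb-name>CustomerEJB</ejb-name>
  <entity-descriptor>
    <entity-cache>
      <max-beans-in-cache>85000</max-beans-in-cache>
      <!-- ReadOnly strategy: no locking, suited to display-only access -->
      <concurrency-strategy>ReadOnly</concurrency-strategy>
    </entity-cache>
  </entity-descriptor>
  <!-- Each usage-pattern deployment is bound under its own JNDI name -->
  <jndi-name>ejb/customer/read-only</jndi-name>
</weblogic-enterprise-bean>
```

A separate read-write deployment of the same bean would carry its own JNDI name and its own cache settings.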
Partition your presentation logic along the lines of the usage patterns that you have defined. Different use cases will access different deployments. For the "Read data for display" use case, this might be implemented by a single stateless session bean (SLSB) that directly accesses the read-only deployment. Since each usage pattern is a separate deployment, each is bound to a different JNDI name. Your presentation logic determines which deployment to use by the lookup it performs on the naming server.
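To make the partitioning concrete, here is a minimal sketch of how presentation code might select a deployment by usage pattern. The UsagePattern enum and the JNDI names are hypothetical examples, not WebLogic defaults; a real client would pass the chosen name to InitialContext.lookup(...):

```java
// Sketch: mapping usage patterns to per-deployment JNDI names.
// All names below are hypothetical, chosen for illustration only.
public class DeploymentSelector {

    public enum UsagePattern { READ_DISPLAY, READ_WRITE, BATCH_UPDATE }

    /** Returns the JNDI name of the deployment tuned for this pattern. */
    public static String jndiNameFor(UsagePattern pattern) {
        switch (pattern) {
            case READ_DISPLAY: return "ejb/customer/read-only";
            case READ_WRITE:   return "ejb/customer/read-write";
            case BATCH_UPDATE: return "ejb/customer/batch";
            default: throw new IllegalArgumentException("unknown pattern");
        }
    }

    // In a real client, the chosen name would drive the naming lookup:
    //   Context ctx = new InitialContext();
    //   Object home = ctx.lookup(jndiNameFor(UsagePattern.READ_DISPLAY));
}
```

Centralizing the pattern-to-name mapping keeps the rest of the presentation tier unaware of how many deployments exist.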
The Power and Drawback
Of course, the example above is contrived, but it demonstrates the possibilities. In fact, the number of usage patterns you could derive for your system is quite extensive. For WebLogic Server's EJB container, taking the permutations of available values for each data-access deployment descriptor tag, I counted nearly 40,000 different ways that an entity EJB could be configured to access a persistent store. Most of these combinations would never be used, but the count illustrates the scale of the possibilities.
Of course, there is a penalty for using this methodology: memory consumption. Each deployment of an entity EJB will consume additional memory, based upon your cache settings. If not configured carefully, you could have the same PK in many different caches (one for each usage pattern). If you design a system that has each entity EJB deployed in 10 different formats, you are potentially replicating the same data nine times!
There is, however, a solution: determine the overarching cache size available for a single EJB, and then set the individual deployment limits to be a ratio based upon their expected usage. For example, if you have an entity EJB X that can be allocated 100,000 instances in its cache for all usage patterns, the "Read data for display" deployment would be set to 85,000, since its expected usage pattern was 85% of the activity in the system. The other usage patterns would have their cache sizes set appropriately. This is a simple, yet elegant, way to allocate the appropriate amount of memory to each operation, based upon its activity level in the system.
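The ratio-based allocation can be sketched in a few lines of Java. The 100,000-instance budget and the 85/10/5 weights come from the example above; the deployment labels are illustrative:

```java
// Sketch: split an overall entity-cache budget across per-pattern
// deployments in proportion to their expected usage weights.
public class CacheAllocator {

    /** Cache size for one deployment: total budget times its usage weight. */
    public static long allocate(long totalInstances, double weight) {
        return Math.round(totalInstances * weight);
    }

    public static void main(String[] args) {
        long total = 100_000; // overall budget for entity EJB X
        System.out.println("read-only:  " + allocate(total, 0.85)); // 85000
        System.out.println("read-write: " + allocate(total, 0.10)); // 10000
        System.out.println("batch:      " + allocate(total, 0.05)); //  5000
    }
}
```

The computed values would then be fed into each deployment's cache-size setting (max-beans-in-cache in WebLogic's case).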
EJB 2.0 is just getting warmed up. Vendors are excited about EJB 2.0 not because it is a new version of a popular specification, but because of the possibilities that a solid container implementation can fulfill. Even though EJB 2.1 is currently in development, it will be two to three years before vendors provide container implementations that fully cover every conceivable data-access scenario; the number of value-add features that can be incorporated into a container is nearly limitless.
Future articles will drill down into the specifics of the container and offer some insight into how a container can provide optimizations that would be difficult to achieve with straight SQL.
Tyler Jewell, Director, Technical Evangelism, BEA Systems. Tyler oversees BEA's technology evangelism efforts that are focused on driving early adoption of strategic BEA technologies into the ISV and developer community.