Sample: Chapter 9 of Database Programming with JDBC and Java, Second Edition
ISBN: 1-56592-616-1
Author: George Reese
Publisher: O'Reilly & Associates
352 pages, $34.95

George Reese has taken an unusual path into business software development. After earning a B.A. in philosophy from Bates College in Lewiston, Maine, George went off to Hollywood, where he worked on television shows such as The People's Court and ESPN's Up Close. The L.A. riots convinced him to return to Maine, where he finally became involved with software development and the Internet. George has since specialized in the development of Internet-oriented Java enterprise systems. He is the author of Database Programming with JDBC and Java and the world's first JDBC driver, the mSQL-JDBC driver for mSQL.
Chapter 9
Persistence

Objects contain the possibility of all situations.
--Ludwig Wittgenstein, Tractatus Logico-Philosophicus

If RAM were unlimited and computers never shut down, you would now have all of the tools you need to finish the server side of the banking application. You cannot, however, afford to have all your data in memory all of the time; computers shut down far too often, sometimes by design, sometimes by error. You need to grant your business objects a certain level of immortality, to make them persist beyond the lifecycle of the process in which they are created.

Persistence is the act of making the state of an application stretch from one process instance of the application to the next. In order to make an application persist, its state needs to be recorded in a data store that can survive computer shutdowns and crashes. The most common persistence tool is by far the relational database--and for Java, that means JDBC.
Database Transactions
Transactions appear throughout this book. In the first half of this book, you saw how JDBC manages transaction isolation levels, commits, and rollbacks. Chapter 8, Distributed Component Models, spoke of component-level transactions. You now need to tie the two together at the persistence layer.
The component transaction choreographs a persistence operation. When that transaction is notified that it is complete, it creates a persistence transaction--in your case, a JDBC transaction--and tells each object modified in the business transaction to insert, update, or delete itself in the persistent store. The persistence transaction makes sure all data store accesses take place within a single data store transaction. In the case of JDBC, the persistence subsystem is responsible for making sure all of the objects are saved using the same Connection object and committed at the end.

The component model needs to remain impartial about the kind of persistence you use. To preserve this agnosticism, it uses a generic Transaction object to represent a component-level transaction. Besides sounding really cool, this agnosticism is actually intuitive once you step back and think about what it is to be an account or a customer. There is nothing about the concept of a bank account that says, "I save to a relational database." Instead, within the context of your application, you know that a bank account is something you wish to persist across time. How it persists is a technological detail that should not be melded into the essence of a bank account.

The core of a solid persistence library contains no code specific to any data storage type. This means, of course, that it does not use any of the JDBC API you learned in the first section of the book. Figure 9-1 shows how you can structure this library so that you can write plug-in modules that support different data-storage technologies without committing your applications to any particular technology.
Figure 9-1. The persistence library architecture
A few key behaviors define bank accounts, customers, and any other kind of object as persistent. Specifically, they save to and restore from some kind of data store. Saving is much more complex than it sounds: each save could be creating a new account, updating an existing one, or deleting a closed one. Session components wanting to save an account or groups of accounts, however, should not be responsible for handling the logic that determines what kind of save a specific account requires. They should just begin transactions and end them; separate persistence tools figure out what the begins and ends really mean.
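As a sketch of how a session component might drive these begins and ends, consider the following. The Teller class, the Account business methods, and the transfer( ) operation are hypothetical stand-ins for the banking application; getCurrent( ), begin( ), and end( ) are the Transaction methods shown in Example 9-1 later in this chapter.

// Teller, Account, withdraw( ), and deposit( ) are hypothetical;
// Transaction, Identifier, and PersistenceException come from the
// persistence library described in this chapter.
public class Teller {
    public void transfer(Identifier user, Account from, Account to,
                         double amount)
        throws PersistenceException {
        Transaction trans = Transaction.getCurrent(user);

        trans.begin( );          // stamp the component transaction
        from.withdraw(amount);   // changed entities register themselves
        to.deposit(amount);      // with the transaction as they change
        trans.end( );            // drive the persistence transaction
    }
}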
Figure 9-2. A sequence diagram showing a component transaction and its persistence transaction
To support this level of intelligence, persistent objects note when changes occur to them. You saw this change tracking in Chapter 8 inside the prepareUpdate( ) method in BaseEntity. When a transaction is ended and the transaction tells the component to save itself, the component knows what sort of save to perform based on what kind of change was made to it. Figure 9-2 provides a UML sequence diagram that illustrates this complex behavior.

Persistence operations may of course fail. Earlier in the book, you saw how JDBC defaults all Connection objects to auto-committing database transactions. In order to support component transactions, you turn this feature off and enable a JDBC-specific implementation of the Transaction class to manage the commit logic. Example 9-1 captures the generic transaction logic provided by the Transaction abstract class.

Example 9-1: The Abstract Transaction Class
package com.imaginary.lwp;
import java.util.Date;
import java.util.HashMap;
import java.util.HashSet;
import java.util.Iterator;
/**
* An abstract representation of a data storage transaction. This class
* manages the lifecycle of a data storage transaction. Applications can
* get a transaction instance by calling <CODE>getCurrent</CODE>. The
* transaction does not begin, however, until the <CODE>begin</CODE> method
* is called by an application.
* <BR>
* Last modified $Date: 2000/08/08 15:52:01 $
* @version $Revision: 1.14 $
* @author George Reese (borg@imaginary.com)
*/
public abstract class Transaction {
static private HashMap transactions = new HashMap( );
/**
* Provides access to the transaction currently in
* process for the specified user identifier.
* This method will create a new transaction if none
* currently exists for the identifier in question.
* @param id the user identifier
* @return the current transaction
* @throws com.imaginary.lwp.PersistenceException
* could not create the transaction
*/
static public Transaction getCurrent(Identifier id)
throws PersistenceException {
Transaction trans;
String cname;
if( id == null ) {
// id was null, so create a new transaction
// LWPProperties.XACTION is a property that
// identifies the Transaction implementation class
cname = System.getProperty(LWPProperties.XACTION);
try {
trans = (Transaction)Class.forName(cname).newInstance( );
trans.userID = id;
}
catch( Exception e ) {
e.printStackTrace( );
throw new PersistenceException(e);
}
return trans;
}
synchronized( transactions ) {
// if a transaction is in place, return it
if( transactions.containsKey(id) ) {
trans = (Transaction)transactions.get(id);
return trans;
}
cname = System.getProperty(LWPProperties.XACTION);
try {
trans = (Transaction)Class.forName(cname).newInstance( );
trans.userID = id;
}
catch( Exception e ) {
e.printStackTrace( );
throw new PersistenceException(e);
}
transactions.put(id, trans);
}
return trans;
}
private long timestamp = -1L;
private HashSet toCreate = new HashSet( );
private HashSet toRemove = new HashSet( );
private HashSet toStore = new HashSet( );
private Identifier userID = null;
public Transaction( ) {
super( );
}
/**
* Starts a new transaction.
* This method establishes the transaction timestamp.
*/
public synchronized final void begin( ) throws PersistenceException {
if( timestamp == -1L ) {
timestamp = (new Date()).getTime( );
}
else {
throw new PersistenceException("Duplicate begin( ) call.");
}
}
/**
* Each data store implements this method
* to perform the actual commit.
*/
public abstract void commit( ) throws PersistenceException;
/**
* Ends the transaction at the component level and
* executes the persistence transaction. This method
* moves through each of the modified components and
* tells it to save to the data store. If all saves are
* successful, it sends a commit to the persistence layer.
* Otherwise, it sends an abort.
*/
public synchronized final void end( ) throws PersistenceException {
try {
Iterator obs;
// perform the different operations
// the order here is unimportant (remove vs. create vs. store)
// as long as there are no database constraints
// in place
obs = toRemove.iterator( );
while( obs.hasNext( ) ) {
BaseEntity p = (BaseEntity)obs.next( );
p.remove(this);
}
obs = toCreate.iterator( );
while( obs.hasNext( ) ) {
BaseEntity p = (BaseEntity)obs.next( );
p.create(this);
}
obs = toStore.iterator( );
while( obs.hasNext( ) ) {
BaseEntity p = (BaseEntity)obs.next( );
p.store(this);
}
// commit the changes
commit( );
// let the objects know about the commit
obs = toRemove.iterator( );
while( obs.hasNext( ) ) {
BaseEntity p = (BaseEntity)obs.next( );
p.commit(this);
}
obs = toCreate.iterator( );
while( obs.hasNext( ) ) {
BaseEntity p = (BaseEntity)obs.next( );
p.commit(this);
}
obs = toStore.iterator( );
while( obs.hasNext( ) ) {
BaseEntity p = (BaseEntity)obs.next( );
p.commit(this);
}
toCreate.clear( );
// invalidate all removed objects
obs = toRemove.iterator( );
while( obs.hasNext( ) ) {
BaseEntity p = (BaseEntity)obs.next( );
p.invalidate( );
}
toRemove.clear( );
toStore.clear( );
// remove the transaction from the list of transactions
Transaction.transactions.remove(userID);
}
catch( Throwable t ) {
Transaction trans;
Iterator obs;
// an exception occurred, rollback
rollback( );
Transaction.transactions.remove(userID);
// use a different transaction to reload everyone
trans = Transaction.getCurrent(userID);
// force everyone to reload
obs = toRemove.iterator( );
while( obs.hasNext( ) ) {
BaseEntity ob = (BaseEntity)obs.next( );
try {
ob.reload(trans);
}
catch( Exception disaster ) {
ob.invalidate( );
}
}
obs = toStore.iterator( );
while( obs.hasNext( ) ) {
BaseEntity ob = (BaseEntity)obs.next( );
try {
ob.reload(trans);
}
catch( Exception disaster ) {
ob.invalidate( );
}
}
if( t instanceof PersistenceException ) {
throw (PersistenceException)t;
}
else {
throw new PersistenceException(t);
}
}
finally {
timestamp = -1L;
}
}
// the identifier of the user behind this transaction
public synchronized final Identifier getIdentifier( ) {
return userID;
}
// the timestamp of this transaction
public synchronized final long getTimestamp( ) {
return timestamp;
}
/**
* Called by prepareCreate( ) in the BaseEntity class.
* This method adds the specified object to the list
* of objects to be created in this transaction.
* @param ob the entity to be added to the transaction
*/
synchronized final void prepareCreate(BaseEntity ob) {
if( toCreate.contains(ob) ) {
return;
}
toCreate.add(ob);
}
// identifies whether the transaction has been begun
public synchronized final boolean isInProcess( ) {
return (timestamp != -1L);
}
/**
* Adds the specified entity to the list of entities
* to delete from the data store. If this entity is
* already marked for modification, it is removed from
* that list. If it is marked for creation, it is removed
* from that list and <EM>not</EM> added to the list of
* entities to be removed.
* @param ob the entity to be removed
*/
synchronized final void prepareRemove(BaseEntity ob) {
// It is already in the list, so do nothing.
if( toRemove.contains(ob) ) {
return;
}
// It is supposed to be created, which means
// there is now nothing to delete.
// Remove it from the list of things to create.
if( toCreate.contains(ob) ) {
toCreate.remove(ob);
return;
}
// It was modified before the remove( ) was called
// so remove it from the list of things to save.
if( toStore.contains(ob) ) {
toStore.remove(ob);
}
toRemove.add(ob);
}
/**
* Adds the specified entity to the list of entities
* modified in this transaction. If the entity is already
* marked as created or removed, it is not added.
* @param ob the entity that was modified
*/
synchronized final void prepareStore(BaseEntity ob) {
// if it already is part of this transaction in
// any capacity, leave it be
if( toStore.contains(ob) || toCreate.contains(ob) ) {
return;
}
if( toRemove.contains(ob) ) {
return;
}
toStore.add(ob);
}
/**
* Each data store implements this method to
* perform the actual rollback.
*/
public abstract void rollback( ) throws PersistenceException;
}
The abstract Transaction class manages all component transaction issues. The end( ) method goes through and tells each object to perform the appropriate persistence operation. It assumes the Persistent interface shown in Example 9-2, which BaseEntity implements.

Example 9-2: The Persistent Interface
package com.imaginary.lwp;
public interface Persistent {
String getLastUpdateID( );
long getLastUpdateTime( );
long getObjectID( );
void create(Transaction trans) throws PersistenceException;
void load(Transaction trans, long oid) throws PersistenceException;
void reload(Transaction trans) throws PersistenceException;
void remove(Transaction trans) throws PersistenceException;
void store(Transaction trans) throws PersistenceException;
}
The Transaction base class that makes these calls leaves the commit( ) and rollback( ) methods to the persistence subsystem. For a JDBC transaction, the commit( ) should look like this:

public void commit( ) throws PersistenceException {
if( connection == null ) {
return;
}
if( connection.isClosed( ) ) {
throw new PersistenceException("Connection closed.");
}
try {
connection.commit( );
connection.close( );
connection = null;
}
catch( SQLException e ) {
throw new PersistenceException(e);
}
}
Similarly, the rollback( ) should look like this:

public void rollback( ) throws PersistenceException {
try {
if( connection == null ) {
return;
}
if( connection.isClosed( ) ) {
throw new PersistenceException("Invalid transactional "+
"state.");
}
connection.rollback( );
connection.close( );
connection = null;
}
catch( SQLException e ) {
throw new PersistenceException(e);
}
}
Mementos and Delegates
One of the key features of a solid persistence architecture is a separation of business logic from persistence logic. This separation is critical for these reasons:
- The skill set required for writing business components is very different from that required for database programming. By separating different kinds of behavior in various components, different people can easily "own" the development and maintenance of those components.
- If a business component is independent of the persistence logic, it requires no changes should the persistence logic change, even if that change involves a migration to a new database engine or even a new persistence mechanism.
You will use two key design patterns to support the separation of business logic from persistence logic: the memento pattern and delegation.

BaseEntity specifically delegates its implementation of the Persistent interface in Example 9-2 to a specialized persistence component. This sample code shows how that delegation works:

public final void store(Transaction trans)
throws StoreException {
Memento mem = new Memento(this);
if( !isValid ) {
throw new StoreException("This object is no longer valid.");
}
handler.store(trans, mem);
}
The BaseEntity class references an attribute called handler that is an instance of a class implementing the PersistenceSupport interface. This object is called the delegate. It supports the persistence operations for an entity. Each method delegated to it requires a Transaction object to identify what transaction governs the persistence operation and a memento that captures the entity's current state.

I briefly introduced the classic memento design pattern in Chapter 7, Distributed Application Architecture. The memento pattern enables an object's state to be decoupled from its implementation. In order to perform a persistence operation, the delegate depends only on the Memento class.[1] It gets all of the entity's state information from that memento. As a result, an entity can go through major code changes without any impact on its persistence delegate. Using these two tools, you now have a system in which a business component has no dependencies on the underlying data model, and a persistence delegate has no dependencies on the business component it persists. Example 9-3 shows the generic PersistenceSupport interface.

Example 9-3: The PersistenceSupport Interface for Delegating Persistence Operations
package com.imaginary.lwp;
import java.util.Collection;
public interface PersistenceSupport {
public abstract void create(Transaction trans, Memento mem)
throws CreateException;
public abstract Collection find(Transaction trans, SearchCriteria sc)
throws FindException;
public abstract void load(Transaction trans, Memento mem)
throws LoadException;
public abstract void remove(Transaction trans, Memento mem)
throws RemoveException;
public abstract void store(Transaction trans, Memento mem)
throws StoreException;
}
This interface contains no mention of JDBC or of the entity it is saving. It knows only about its transaction context and the memento.
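The delegate's entire view of an entity is therefore the memento. The real Memento class ships with the book's example code (see the first footnote), but a rough sketch gives a feel for what the delegate relies on. Everything in this sketch is an assumption rather than the actual implementation: it captures an entity's fields by reflection into a map keyed by declaring class and field name, which is one way to support lookups such as mem.get(Account.class, Account.BALANCE) in the delegate code later in this chapter.

package com.imaginary.lwp;
import java.lang.reflect.Field;
import java.lang.reflect.Modifier;
import java.util.HashMap;

// An illustrative sketch only; the real Memento ships with the book's
// example code, and its capture and keying strategy may differ.
public class Memento implements java.io.Serializable {
    private HashMap state = new HashMap( );
    private long objectID;
    private String lastUpdateID;
    private long lastUpdateTime;

    public Memento(BaseEntity entity) {
        objectID = entity.getObjectID( );
        lastUpdateID = entity.getLastUpdateID( );
        lastUpdateTime = entity.getLastUpdateTime( );
        capture(entity, entity.getClass( ));
    }

    // walk up the class hierarchy, copying each declared, non-static
    // field into the state map keyed by class name and field name
    private void capture(BaseEntity entity, Class cls) {
        while( cls != null ) {
            Field[] fields = cls.getDeclaredFields( );
            for(int i=0; i<fields.length; i++) {
                if( Modifier.isStatic(fields[i].getModifiers( )) ) {
                    continue;
                }
                try {
                    fields[i].setAccessible(true);
                    state.put(cls.getName( ) + ":" + fields[i].getName( ),
                              fields[i].get(entity));
                }
                catch( Exception e ) {
                    // skip fields this sketch cannot read
                }
            }
            cls = cls.getSuperclass( );
        }
    }

    public Object get(Class cls, String attribute) {
        return state.get(cls.getName( ) + ":" + attribute);
    }

    public long getObjectID( ) {
        return objectID;
    }

    public String getLastUpdateID( ) {
        return lastUpdateID;
    }

    public long getLastUpdateTime( ) {
        return lastUpdateTime;
    }
}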
JDBC Persistence
Now that you have a general foundation for object persistence, you can use these classes to create a JDBC-based persistence package. The generic library has set aside implementations of the PersistenceSupport interface and the Transaction class as the places where data store-specific persistence operations should occur. To create a database persistence library, you thus need to create database-specific extensions of these two types.

Here you get the chance to put your JDBC skills to use. I already showed how a JDBCTransaction class might implement commit( ) and rollback( ) methods. JDBC support requires still more work. You need to create the JDBC Connection instances used to talk to the database. You also need to write the actual methods that talk to the database. A getConnection( ) method in the JDBCTransaction class takes care of the first problem:

public Connection getConnection( ) throws SQLException {
if( connection == null ) {
try {
// look up the pooled DataSource bound under jdbc/ora via JNDI
Context ctx = new InitialContext( );
DataSource ds = (DataSource)ctx.lookup("jdbc/ora");
connection = ds.getConnection("borg", "pw");
}
catch( NamingException e ) {
throw new SQLException(e.getMessage( ));
}
connection.setAutoCommit(false);
}
return connection;
}
In this code, I use the JDBC 2.0 Optional Package method for connecting to a database. You may not have the JDBC 2.0 Optional Package available to you, in which case you may want to use the old-fashioned DriverManager approach to making a Connection. Either way, you definitely want a pooled connection. Without access to the JDBC 2.0 Optional Package, you have to roll your own connection pooling.
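A DriverManager-based version of getConnection( ) might look something like the following sketch. The driver class name, URL, and credentials are placeholders rather than anything from the book's example code, and the connection attribute is the same one used by the commit( ) and rollback( ) methods above.

public Connection getConnection( ) throws SQLException {
    if( connection == null ) {
        try {
            // placeholder driver class; substitute your own JDBC driver
            Class.forName("oracle.jdbc.driver.OracleDriver");
        }
        catch( ClassNotFoundException e ) {
            throw new SQLException("Unable to load driver: " + e.getMessage( ));
        }
        // placeholder URL and credentials
        connection = DriverManager.getConnection("jdbc:oracle:thin:@dbhost:1521:ora",
                                                 "borg", "pw");
        connection.setAutoCommit(false);
    }
    return connection;
}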
The heart of JDBC persistence rests in the persistence delegate. As you saw before in the PersistenceSupport interface, an implementation is responsible for the SQL that inserts, updates, or deletes the object in question from the database. Each implementation is dependent on the particular entity it is persisting. Example 9-4 provides the store( ) method in the AccountSupport class to save an Account entity to the database.

Example 9-4: The store( ) Method for an Account Persistence Delegate
static private String UPDATE =
"UPDATE Account " +
"SET balance = ?, " +
"lastUpdateID = ?, " +
"lastUpdateTime = ? " +
"WHERE objectID = ? " +
"AND lastUpdateID = ? " +
"AND lastUpdateTime = ?";
public void store(Transaction trans, Memento mem)
throws StoreException {
long oid = mem.getObjectID( );
long lut = mem.getLastUpdateTime( );
String luid = mem.getLastUpdateID( );
Connection conn = null;
try {
PreparedStatement stmt;
Double d;
conn = ((JDBCTransaction)trans).getConnection( );
stmt = conn.prepareStatement(UPDATE);
d = (Double)mem.get(Account.class,
Account.BALANCE);
if( d == null ) {
stmt.setNull(1, Types.REAL);
}
else {
stmt.setDouble(1, d.doubleValue( ));
}
stmt.setString(2, trans.getIdentifier().getUserID( ));
stmt.setLong(3, trans.getTimestamp( ));
stmt.setLong(4, oid);
stmt.setString(5, luid);
stmt.setLong(6, lut);
if( stmt.executeUpdate( ) != 1 ) {
throw new StoreException("No row modified.");
}
stmt.close( );
}
catch( SQLException e ) {
throw new StoreException(e);
}
}
You may have noticed the getLastUpdateID( ) and getLastUpdateTime( ) methods in the Persistent interface earlier in the chapter and wondered what their purpose was. They specifically enable you to work with a database in optimistic concurrency mode. Pessimistic concurrency means that the database will lock data on read and not release that lock without a commit. In other words, if you do a SELECT to find an account, the row--or perhaps more--will be locked until you issue a commit. No one else can read or write to that row.

As you can imagine, pessimistic concurrency is very bad for performance. With optimistic concurrency, however, you risk dirty writes. A dirty write is a situation in which two clients have read the same data simultaneously and then attempt to make different writes. For example, consider when a teller reads customer information to change the customer address, and the bank manager reads information about the same customer to add a comment to the customer file. If they both read the data at the same time, the person to save last risks erasing the changes made by the first person to save. By using the user ID of the last person to make a change, along with a timestamp noting when the change was made, you can get the performance benefit of optimistic concurrency with the protection against dirty writes of pessimistic concurrency.
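On the read side, a delegate's load( ) simply captures these two columns whenever it reads a row, so that a later store( ) can match them in its WHERE clause. The following is a sketch only; the SELECT statement, the LoadException usage, and the way the values get back to the entity are assumptions rather than the book's actual AccountSupport code.

static private String SELECT =
    "SELECT balance, lastUpdateID, lastUpdateTime " +
    "FROM Account " +
    "WHERE objectID = ?";

public void load(Transaction trans, Memento mem)
    throws LoadException {
    try {
        Connection conn = ((JDBCTransaction)trans).getConnection( );
        PreparedStatement stmt = conn.prepareStatement(SELECT);
        ResultSet rs;

        stmt.setLong(1, mem.getObjectID( ));
        rs = stmt.executeQuery( );
        if( !rs.next( ) ) {
            throw new LoadException("No such account.");
        }
        double balance = rs.getDouble(1);
        String luid = rs.getString(2);   // who last changed this row
        long lut = rs.getLong(3);        // and when
        // hand balance, luid, and lut back to the entity here, however
        // your implementation chooses to pass state out of a load
        stmt.close( );
    }
    catch( SQLException e ) {
        throw new LoadException(e);
    }
}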
Under this model, when you query the database, you get the user ID of the last user to make a change and the time the change was made. When you update the database with that data, you use that user ID and timestamp in the WHERE clause. If someone else changed the data before you, your WHERE clause will not match any rows in the database and will thus throw an exception.

Searches
Not only does the persistence delegate support the basic database inserts, updates, and deletes, but it also supports the component model's searches. Writing logic to support arbitrary searches, however, can be very complex. You really do not want to have to repeat the complexity of search logic for every single component in your system if you can avoid it. Fortunately, you can avoid it by capturing search logic in a single place, the persistence delegate.
The final example in this chapter, Example 9-5, is the full source code to the JDBCSupport class, an implementation of the PersistenceSupport interface. It does not, on its own, provide implementations of the persistence operations discussed so far in the chapter. Business components require subclasses of JDBCSupport that map a specific business component to a data model.[2] The base class does have, however, a generalized search engine that accepts the SearchCriteria object, translates it into SQL, and finally returns the results.

Example 9-5: The Abstract JDBCSupport Class with a Generic SQL Search Algorithm
package com.imaginary.lwp.jdbc;
import com.imaginary.lwp.BaseFacade;
import com.imaginary.lwp.FindException;
import com.imaginary.lwp.PersistenceSupport;
import com.imaginary.lwp.SearchBinding;
import com.imaginary.lwp.SearchCriteria;
import com.imaginary.lwp.Transaction;
import com.imaginary.util.DistributedList;
import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.ResultSetMetaData;
import java.sql.SQLException;
import java.util.ArrayList;
import java.util.Collection;
import java.util.HashMap;
import java.util.Iterator;
/**
* Persistence support for JDBC-based persistence.
* <BR>
* Last modified $Date: 2000/08/08 15:52:01 $
* @version $Revision: 1.14 $
* @author George Reese (borg@imaginary.com)
*/
public abstract class JDBCSupport implements PersistenceSupport {
/**
* Provides a generalized mechanism for binding a set
* of values to any possible prepared statement. A calling
* method specifies a statement and the index from which
* binding should begin, as well as the actual bindings.
* This index is the index that gets passed to a
* prepared statement's setXXX( ) method for binding
* the values in the bindings list.
* @param stmt the statement being set up
* @param ind the index to start binding at
* @param bindings the bindings to bind
* @throws com.imaginary.lwp.FindException
* @throws java.sql.SQLException an error occurred binding the bindings
* to the statement
*/
private void bind(PreparedStatement stmt, int ind, Iterator bindings)
throws FindException, SQLException {
while( bindings.hasNext( ) ) {
SearchBinding bdg = (SearchBinding)bindings.next( );
Object val = bdg.getValue( );
if( val instanceof SearchCriteria ) {
SearchCriteria sc = (SearchCriteria)val;
bind(stmt, ind, sc.bindings( ));
}
else if( val instanceof BaseFacade ) {
BaseFacade ref = (BaseFacade)val;
stmt.setLong(ind++, ref.getObjectID( ));
}
else {
stmt.setObject(ind++, val);
}
}
}
/**
* Executes a search for objects meeting the specified criteria
* using the specified transaction.
* @param tr the transaction to use for the find operation
* @param sc the search criteria to base the find on
* @return a collection of matching objects
* @throws com.imaginary.lwp.FindException an error occurred
* searching for objects meeting the search criteria
*/
public Collection find(Transaction tr, SearchCriteria sc)
throws FindException {
Iterator bindings = sc.bindings( );
DistributedList list = new DistributedList( );
String sql = getFindSQL(sc);
try {
JDBCTransaction trans;
Connection conn;
trans = (JDBCTransaction)tr;
try {
conn = trans.getConnection( );
}
catch( Exception e ) {
e.printStackTrace( );
return null;
}
PreparedStatement stmt = conn.prepareStatement(sql);
ResultSetMetaData meta;
ResultSet rs;
int cc;
bind(stmt, 1, bindings);
rs = stmt.executeQuery( );
meta = rs.getMetaData( );
cc = meta.getColumnCount( );
// This loop places result set values into
// a hash map with the column name as the key
// and the column value as the value. This
// map then gets passed to a new facade for
// pre-caching values.
while( rs.next( ) ) {
HashMap map = new HashMap( );
long oid = rs.getLong(1);
String cls = rs.getString(2);
for(int i=3; i<=cc; i++) {
String tbl = meta.getTableName(i).toUpperCase( );
String name = meta.getColumnLabel(i).toUpperCase( );
Object val = rs.getObject(i);
if( tbl.equals("") ) {
tbl = getPrimaryTable().toUpperCase( );
}
name = tbl + "." + name;
if( rs.wasNull( ) ) {
val = null;
}
map.put(name, val);
}
list.add(getFacade(oid, cls, map));
}
return list;
}
catch( SQLException e ) {
throw new FindException(e);
}
}
/**
* Provides the facade object for entities supported by this
* persistence support delegate.
* @param oid the object ID of the desired object
* @param cls the reference class name
* @param vals the initial cache values
* @return an instance of the reference class pointing to the specified
* object
* @throws com.imaginary.lwp.FindException the specified class could not
* be loaded
*/
public final BaseFacade getFacade(long oid, String cls, HashMap vals)
throws FindException {
try {
BaseFacade ref;
ref = (BaseFacade)Class.forName(cls).newInstance( );
ref.assign(oid, vals);
return ref;
}
catch( Exception e ) {
e.printStackTrace( );
throw new FindException(e);
}
}
/**
* Special method for building a <CODE>SELECT</CODE> statement that
* will perform a search using the named search criteria.
* @param sc the search criteria to build SQL from
* @return the SQL that performs the select
* @throws com.imaginary.lwp.FindException the SQL could not be built
*/
protected String getFindSQL(SearchCriteria sc) throws FindException {
StringBuffer sql = new StringBuffer("SELECT ");
ArrayList tables = new ArrayList( );
String where, order;
Iterator it;
sql.append(getPrimaryTable( ) + ".OBJECTID");
sql.append(", " + getPrimaryTable( ) + ".CRT_CLASS");
tables.add(getPrimaryTable( ));
it = sc.preloads( );
while( it.hasNext( ) ) {
String fld = mapField((String)it.next( ));
int i = fld.indexOf(".");
String tbl;
if( i != -1 ) {
tbl = fld.substring(0, i);
if( !tables.contains(tbl) ) {
tables.add(tbl);
}
}
sql.append(", ");
sql.append(fld);
}
where = getWhere(sc.bindings( ), tables);
order = getOrder(sc.sorts( ), tables);
it = tables.iterator( );
sql.append(" FROM ");
while( it.hasNext( ) ) {
sql.append((String)it.next( ));
if( it.hasNext( ) ) {
sql.append(", ");
}
}
if( where.length( ) > 0 ) {
sql.append(" WHERE ");
sql.append("(" + where + ")");
}
else if( tables.size( ) > 1 ) {
// no search bindings, but joins are still needed; seed the WHERE
// clause so the join conditions below can be appended with AND
sql.append(" WHERE (1 = 1)");
}
it = tables.iterator( );
while( it.hasNext( ) ) {
String tbl = (String)it.next( );
JDBCJoin join;
if( tbl.equals(getPrimaryTable( )) ) {
continue;
}
join = getJoin(tbl);
sql.append(" AND " + join.toString( ) + " ");
}
if( order.length( ) > 0 ) {
sql.append(" ORDER BY " + order);
}
return sql.toString( );
}
/**
* Given a table, this method needs to provide a portion of a
* <CODE>WHERE</CODE> clause that supports joining to the specified
* table.
* @param tbl the table to join to
* @return the join object that represents a join for the primary
* table to the specified table
* @throws com.imaginary.lwp.FindException a join could not be constructed
*/
protected abstract JDBCJoin getJoin(String tbl) throws FindException;
/**
* Provides the <CODE>ORDER BY</CODE> clause to support ordering of
* the results.
* @param sorts the sort criteria from the search criteria object
* @param tables a list, passed by reference, to which any new tables
* that need to be joined are added
* @return a string with the <CODE>ORDER BY</CODE> clause
* @throws com.imaginary.lwp.FindException the clause could not be
* built
*/
private String getOrder(Iterator sorts, ArrayList tables)
throws FindException {
StringBuffer order = null;
if( !sorts.hasNext( ) ) {
return "";
}
do {
String col = (String)sorts.next( );
int i;
if( order == null ) {
order = new StringBuffer( );
}
else {
order.append(", ");
}
col = mapField(col);
order.append(col);
i = col.indexOf(".");
if( i != -1 ) {
String tbl = col.substring(0, i);
if( !tables.contains(tbl) ) {
tables.add(tbl);
}
}
} while( sorts.hasNext( ) );
return order.toString( );
}
/**
* Implemented by subclasses to provide the name of the primary
* table for storing objects supported by this class.
* @return the name of the primary table
*/
protected abstract String getPrimaryTable( );
/**
* Provides the <CODE>WHERE</CODE> clause to support a find.
* @param bindings the search bindings from the search criteria object
* @param tables a list, passed by reference, to which any new tables
* that need to be joined are added
* @return a string with the <CODE>WHERE</CODE> clause
* @throws com.imaginary.lwp.FindException the clause could not be
* built
*/
private String getWhere(Iterator bindings, ArrayList tables)
throws FindException {
StringBuffer where = null;
if( !bindings.hasNext( ) ) {
return "";
}
do {
SearchBinding bdg = (SearchBinding)bindings.next( );
Object val = bdg.getValue( );
String fld = bdg.getField( );
if( where == null ) {
where = new StringBuffer( );
}
else {
where.append(" " + bdg.getBoolean().toString( ) + " ");
}
if( val instanceof SearchCriteria ) {
SearchCriteria sc = (SearchCriteria)val;
where.append("(");
where.append(getWhere(sc.bindings( ), tables));
where.append(")");
}
else {
int i;
fld = mapField(fld);
where.append(fld);
i = fld.indexOf(".");
if( i != -1 ) {
String tbl = fld.substring(0, i);
if( !tables.contains(tbl) ) {
tables.add(tbl);
}
}
where.append(" " + bdg.getOperator().toString( ) + " ?");
}
} while( bindings.hasNext( ) );
if( where == null ) {
return "";
}
else {
return where.toString( );
}
}
/**
* Maps a field from the supported object's attributes to a database
* field.
* @param fld the Java object.attribute for the field to map
* @return the database table and column to map the field to
* @throws com.imaginary.lwp.FindException the field could not be mapped
*/
protected abstract String mapField(String fld) throws FindException;
}
The bulk of the work in this class is done by the getFindSQL( ) method. It takes a SearchCriteria instance and builds SQL to support the desired criteria. The SearchCriteria represents a set of criteria on which to perform a search independent of the underlying data store semantics. You can arbitrarily associate attributes with values and the nature of that relationship. For example, you can use the SearchCriteria to specify that an attribute must equal some value and a second attribute be greater than another value. Your client might construct a search in the following way:

String[] precache = { "lastName", "firstName" };
SearchCriteria sc = new SearchCriteria(precache);
// ssn is the social security number being sought
sc.addBinding("taxID", ssn);
sc.addBinding(SearchBoolean.OR, "birthDate",
SearchOperator.EQUALS, bd);
The result is a collection of façades containing customers who either have the specified social security number or the specified birth date. Each façade will be precached with the customer's first and last name.
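A client might then walk those results like this. The customerSupport delegate instance and the CustomerFacade class with its accessors are hypothetical illustrations; find( ) is the JDBCSupport method shown in Example 9-5, and reading the precached first and last names presumably avoids a call back to the entity.

// customerSupport, CustomerFacade, getLastName( ), and getFirstName( )
// are hypothetical; find( ) comes from the chapter's JDBCSupport class
Collection matches = customerSupport.find(trans, sc);
Iterator it = matches.iterator( );

while( it.hasNext( ) ) {
    CustomerFacade cust = (CustomerFacade)it.next( );
    System.out.println(cust.getLastName( ) + ", " + cust.getFirstName( ));
}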
All other methods in the class basically support the SQL building: getWhere( ) provides the WHERE clause, and getOrder( ) supports any potential ORDER BY clause. Once the SQL is built, the find( ) method uses that SQL and help from ResultSetMetaData to execute the SQL and process the results. For each matching row, a façade is instantiated and placed into a Collection specially optimized for distributed searches.
1. The full source code for the Memento class comes with the code supporting this book (ftp://ftp.ora.com/pub/examples/java/jdbc).

2. A mostly automated mapping of any generic component to a data model would be possible, but it is very complex and well beyond the scope of the book. The biggest obstacle to automated mapping is the lack of parameterized types in Java.