ObjectStore Java API User Guide
Chapter 4

Managing Databases

You create databases to store your objects. The Database class provides the API for creating and managing databases.

The Java interface to ObjectStore supports rawfs databases. For information about rawfs databases, see the book ObjectStore Management.

Creating a Database

Creating Segments

Opening and Closing a Database

Moving or Copying a Database

Performing Garbage Collection in a Database

Schema Evolution: Modifying Class Definitions of Objects in a Database

Destroying a Database

Obtaining Information About a Database

Implementing Cross-Segment References for Optimum Performance

Database Operations and Transactions

Upgrading Databases for Use with the JDK 1.2

Creating a Database

The Database class is an abstract class that represents a database. When you create a database, ObjectStore creates and opens it, and returns an instance of a Database subclass that represents it.

Databases are cross-platform compatible. You can create databases on any supported platform and access them from any supported platform.

This section discusses the following topics:

Method Signature for Creating a Database

To create a database, call the static create method on the Database class and specify the database name and an access mode. The method signature is

public static Database create(String name, int fileMode)

ObjectStore throws AccessViolationException if the access mode does not provide owner write access.

Example of Creating a Database

For example:

import COM.odi.*;

class DbTest {
    void test() {
        Database db = Database.create("objectsrus.odb",
                                      ObjectStore.OWNER_WRITE);
        ...
    }
}
This example creates an instance of Database and stores a reference to the instance in the variable named db. The Database.create method is called with two parameters.

The first parameter specifies the pathname of a file. The path can specify a relative name or a fully qualified name. It must always specify a file that is one of the following:

An ObjectStore Server must be available for the directory that contains the specified file.

The second parameter specifies the access mode for the database.

Terminology note

Database is an abstract class, so ObjectStore actually creates an instance of a subclass that extends Database. From your point of view, it does not matter whether ObjectStore creates an instance of Database or an instance of a Database subclass.

Result of Creating a Database

The result is a database named "objectsrus.odb" with an access mode that allows the owner to modify the database. The example stores the reference to the Database object in the db variable. This means that db represents, or is a handle for, the objectsrus.odb database.

For each database you create, ObjectStore creates an instance of Database to represent your database. Each database is associated with one instance of Database. Consequently, you can use the == operator to determine whether two Database objects in the same session represent the same database. For example, the following method returns true:

boolean checkIdentity(String dbname) {
    Database db = Database.create(dbname, ObjectStore.OWNER_WRITE);
    Database dbAgain = Database.open(dbname, ObjectStore.UPDATE);
    return (db == dbAgain);
}

Specifying a Database Name in Creation Method

When you create or open a database, you can specify or pass a database name that is a relative name, an absolute operating system pathname, or a rawfs pathname. ObjectStore takes into account local network mount points when interpreting pathnames. A pathname can refer to a database on a remote host. However, an ObjectStore Server must be available to the local host of the directory that contains an ObjectStore database.

If you want to refer to a database on a remote host for which there is no local mount point, you can use a Server host prefix. This is the name of the remote host followed by a colon (:), as in oak:/foo/bar.odb or jackhammer:c:\bob\mydb.odb. On Windows, you can also use UNC pathnames, as in \\oak\c\foo\bar.odb.

Also, you can use locator files to allow access to additional hosts. See ObjectStore Management for information.

When the Database Already Exists

If you try to create a database that already exists, ObjectStore throws DatabaseAlreadyExistsException. Before you create a database, you might want to check to see if it exists and destroy it if it does. For example, you can insert the following just before you create a database:

try {
    Database.open(dbName, ObjectStore.UPDATE).destroy();
} catch (DatabaseNotFoundException e) {
}

Warning

Do this only if you want to destroy and recreate your database. Otherwise, invoke Database.open().

Discussion of Installing Schema upon Database Creation

The Database.create() method has an overloading that allows you to install the schema in batch mode rather than incrementally. The default is that ObjectStore performs schema installation as needed. The advantage of batch schema installation is that concurrency conflicts due to schema installation are minimized. The disadvantage is that database creation takes a little longer and the initial database size is larger. See Installing Schema Information in Batch Mode.

Creating Segments

ObjectStore creates each database with one segment that you can use to store objects. A segment is a variable-sized region of disk space that ObjectStore uses to cluster objects stored in the database. Initially, the size of the segment is about 3 K. As you store additional objects in the segment, ObjectStore increases the size of the segment automatically.

To create additional segments, use the Database.createSegment() method. To locate a segment by its ID, use the Database.getSegment() method. To retrieve the segment that contains a particular object, call the Segment.of() method on that object.

The initial segment in a database is the default segment. To change the default segment, invoke the Database.setDefaultSegment() method.
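As an illustration of how these calls fit together, here is a hedged sketch. The method names come from this section, but the exact signatures are assumptions that may differ in your release, and the "Root0" root name is hypothetical:

```java
// Sketch only: assumes the COM.odi classes are imported, an open
// database, and an active transaction; signatures may vary by release.
Segment seg = db.createSegment();   // create an additional segment
db.setDefaultSegment(seg);          // newly stored objects now go here

Object root = db.getRoot("Root0");  // hypothetical existing root
Segment home = Segment.of(root);    // segment that contains the object
```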

To lock all objects in a segment, see Locking Objects, Segments, and Databases to Ensure Access.

For additional information about segments, see Managing Databases in Chapter 1 of ObjectStore Management.

Storing Objects in a Particular Segment

You can store objects in different segments by changing which segment is the default segment or by explicitly storing an object in a particular segment. Distributing objects can improve performance, but the following tradeoffs must be considered:

To make a persistent reference from an object in one segment to an object in another segment, the object in the other segment (the referenced object) must be exported. See Implementing Cross-Segment References for Optimum Performance for additional information about distributing objects among segments.

Determining If a Database or Segment Is Transient

Some Java peer objects identify C++ objects that have been transiently allocated. ObjectStore stores these C++ objects in the transient database and transient segment. Java primary objects are never in the transient database or transient segment, even if they are transient. ObjectStore creates transient segments as part of some peer object operations. You cannot use ObjectStore to create or manipulate transient segments.

If you try to retrieve the segment or database of a transient primary object, ObjectStore throws ObjectNotPersistentException. To determine whether or not a database or segment is transient, you can do the following:

CPlusPlus.getTransientSegment() == segment
CPlusPlus.getTransientDatabase() == database

Iterating Through the Segments in a Database

To obtain an enumeration of the segments in a database, call the Database.getSegments() method. The signature is

public DatabaseSegmentEnumeration getSegments()

This method returns a DatabaseSegmentEnumeration object. After you have this object, you can use these methods to iterate over the segments in the enumeration:

If you or another session add a segment to a database after you create an enumeration, the enumeration might or might not include the new segment. If it is important for the enumeration to accurately list all segments, you should recreate the enumeration after you create the segment.

After you create an enumeration, a segment in the enumeration might be destroyed. If you use the enumeration to try to access a destroyed segment, ObjectStore skips the destroyed segment. However, if you retrieve a segment with DatabaseSegmentEnumeration.nextElement() or nextSegment() and then the segment is destroyed, ObjectStore throws SegmentNotFoundException if you try to use the destroyed segment.
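Putting this together, a typical traversal looks like the following sketch, assuming the standard java.util.Enumeration protocol plus the nextSegment() convenience method mentioned above:

```java
// Sketch: assumes db is an open Database and a transaction is active.
DatabaseSegmentEnumeration segments = db.getSegments();
while (segments.hasMoreElements()) {
    Segment s = segments.nextSegment();
    // ... examine s here; recreate the enumeration if segments are
    // added, and be prepared for SegmentNotFoundException if a
    // fetched segment is destroyed ...
}
```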

Opening and Closing a Database

A database can be either open or closed. A database must be open before you can store or access objects in that database. When an application opens a database, it does not matter whether a transaction is in progress or whether the database is already open.

When an application closes a database, a transaction cannot be in progress and the database must be open.

This section discusses the following topics:

Opening a Database

When you open a database, it does not matter whether there is a transaction in progress, nor does it matter whether the database is already open.

When you create a database, ObjectStore creates and opens the database. To open an existing database, call the static Database.open() method. The method signature is

public static Database open(String name, int openMode)

For example:

Database db = Database.open("myDb.odb", ObjectStore.READONLY);
The first parameter specifies the pathname of your database. The second parameter indicates the open mode of the database.

Possible Open Modes

ObjectStore provides constants that you can specify for the openMode parameter to Database.open(). The constants you can specify for openMode are

Incorrect attempts to modify

If you open a database with ObjectStore.READONLY or ObjectStore.MVCC, and attempt to modify an object, ObjectStore throws UpdateReadOnlyException when you try to commit the transaction.

Example

Suppose you previously created and closed a database that is represented by an instance of a Database subclass stored in the db variable. You can call the instance open() method to open your database this way:

db.open(ObjectStore.READONLY);

You can use the static class open() method this way:

db = Database.open("myDb.odb", ObjectStore.READONLY);
Typically, both lines cause the same result. However, they might cause different results if a database has been destroyed and recreated.

To recover the database, open it for update. This automatically recovers the database if necessary. Alternatively, you can run the osjcheckdb utility with the -openUpdateForRecovery option.

To lock all objects in a database, see Locking Objects, Segments, and Databases to Ensure Access.

Opening the Same Database Multiple Times

Each subsequent opening of a database after the initial open operation returns the same database object. For example:

db1 = Database.open("foo", ObjectStore.UPDATE);
db2 = Database.open("foo", ObjectStore.UPDATE);
The expression db1 == db2 returns true. They refer to the same database object. Consequently, a call to db1.close() or db2.close() closes the same database. No matter how many times you open a database, a single call to the close() method closes the database. (This is different in the C++ interface to ObjectStore. In that interface, for example, if you call open() four times and close() three times all on the same database, the database is still open.)

Closing a Database

To close a database, call the close() method on the instance of the Database subclass that represents the database. For example:

db.close();
You cannot close a database when a transaction is in progress. The database you want to close must be open.

Object state after close

When you close a database, all persistent objects that belong to that database become stale or transient. If the last committed transaction that operated on the database retained persistent objects, you can use an overloading of close() that allows you to specify what should happen to the retained objects. (For information about retained objects, see Committing Transactions to Save Modifications.) The method signature is

public void close(boolean retainAsTransient)

Specify true to make retained objects transient. If you specify false, the result is the same as calling the close() method without an argument: all access to retained objects ends.

Suppose you close a database and make retained objects transient. In the next transaction, if you reread an object from the database that you retained as a transient object, you then have two separate copies of the same object. One object is transient and one object is persistent. You do not have two references to a single object. When you close a database, all object identity is gone. After you close a database, the database is still associated with the session in which it was closed.

If you do not close

If you do not close a database, ObjectStore closes it when you shut down ObjectStore.

Database identity

Within a session, ObjectStore maintains database identity even after you close a database. For example, consider the following code:

import COM.odi.*;

public class Goo {
    public static void main(String[] args) {
        Session session = Session.create(null, null);
        session.join();
        try {
            try {
                Database db = Database.create("my.odb", 0664);
                db.close();
            } catch (DatabaseAlreadyExistsException e) {
            }
            Database db1 = Database.open("my.odb",
                ObjectStore.READONLY);
            db1.close();
            Database db2 = Database.open("my.odb",
                ObjectStore.READONLY);
            System.out.println(db1 == db2);
        } finally {
            session.terminate();
        }
    }
}
If you run a program with the previous code, the system displays true.

In general, it is best to leave databases open for the entire session and write your application so that it shuts down ObjectStore before it exits.

However, keeping databases open consumes some ObjectStore Server resources. Also, there is a limit to the number of databases the ObjectStore Server can keep open at one time. This depends on the number of open files that the operating system permits, which varies by platform.

Automatic Opens of a Database

Sometimes an application traverses a reference to a database that has not been explicitly opened. ObjectStore automatically opens the database according to the default open mode. The default open mode is one of the following:

An application can call the ObjectStore.setAutoOpenMode() method to change the default open mode. A call to the ObjectStore.getAutoOpenMode() method returns the current open mode. The default autoopen mode is READONLY. When you set the autoopen mode, it affects only the current session.

If you know the name of a database that has been automatically opened, you can use Database.open() to obtain the already open database. Another way to obtain a handle to an automatically opened database is to call Database.of() on an object from the automatically opened database.

If you do not close a database that ObjectStore automatically opens, ObjectStore closes it when you shut down ObjectStore.

Disabling autoopen

You can disable the ability of ObjectStore to automatically open databases. Call the ObjectStore.setAutoOpenMode() method and specify the ObjectStore.DISABLE_AUTO_OPEN constant. If you do disable automatic opens and your application tries to follow a reference to an unopened database, you receive DatabaseNotOpenException.
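For example, a session might disable and later restore automatic opens as in this sketch, which uses only the methods and constants named above:

```java
// Disable automatic opens for the current session. Following a
// reference into an unopened database now throws
// DatabaseNotOpenException.
ObjectStore.setAutoOpenMode(ObjectStore.DISABLE_AUTO_OPEN);

// Restore the default autoopen mode later:
ObjectStore.setAutoOpenMode(ObjectStore.READONLY);
```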

Objects in Closed Databases

Objects in a closed database are not accessible. However, if you close a database with an argument of true, ObjectStore retains the persistent objects as transient objects.

Moving or Copying a Database

You can use the ObjectStore utilities oscopy and osmv to copy and move a database. See ObjectStore Management for information.

You can move or copy databases among different supported platforms.

Performing Garbage Collection in a Database

The ObjectStore persistent garbage collector (GC) collects unreferenced Java objects and ObjectStore collections in an ObjectStore database. Persistent garbage collection frees storage associated with objects that are unreachable. It does not move remaining objects to make the free space contiguous.

Contents

This section discusses these topics:

Background About the Persistent Garbage Collector

The ObjectStore persistent GC is independent of the Java VM GC. The Java VM GC is strictly a transient object garbage collector. It never operates on objects in the database.

Applications can continue to use a database while persistent GC is in progress. GC locks portions of a segment as needed, as if it were just another application. In this way, the GC minimizes the number of pages that are locked and the duration for which the locks are held. Also, the GC retries operations when it detects lock conflicts.

By default, the GC runs with a transaction priority of zero. Consequently, it is the preferred victim when the Server must terminate a transaction to resolve a deadlock. At a later time, the GC redoes the work that was lost when the transaction was aborted.

The GC uses read and write lock timeouts of short duration. This avoids competition with other processes for locks. If the GC cannot acquire a lock because of a timeout, it retries the operation at a later time.

The GC performs its job in two major phases. In the mark phase, the GC identifies the unreachable objects. In the sweep phase, the GC frees the storage used by the unreachable objects.

A segment is the smallest storage unit that can be garbage collected. You can specify a segment or a database to be garbage collected.

It is usually best to avoid destroying objects explicitly and to let the persistent garbage collector destroy unreachable objects. The persistent garbage collector can typically destroy and reclaim such objects very efficiently, since it can batch the operations and cluster them effectively. If you set up the GC to run when the system is lightly loaded, you can defer the overhead of the destroy operations to a time when your system would otherwise be idle, thus getting greater real throughput from your application when you really need it.

The persistent GC never removes tombstones for exported objects. For unexported objects, the GC treats tombstones the same way that it treats other objects. The GC removes tombstones if they are not referenced. In other words, the GC removes only unreferenced tombstones. This behavior preserves the safe detection of bad references.

API for Collecting Garbage in a Database

To perform garbage collection on a database, call the Database.GC() method. This method invokes the Segment.GC() method on each segment in the database. The method signature is

public java.util.Properties GC(java.util.Properties GCproperties)
For the GCproperties parameter, specify null or a Properties object that controls the garbage collection operation. If GCproperties is null, ObjectStore uses the default properties defined in the documentation for Segment.GC(). The properties you can specify are the same as those for Segment.GC() and are described in the next section.
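For example, a default-property collection might look like the following sketch; the names of the returned result properties are release specific, so the example simply lists whatever comes back:

```java
// Sketch: db represents a database that is not open for this session.
java.util.Properties results = db.GC(null);  // null = default properties
results.list(System.out);                    // dump the GC result properties
```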

API for Collecting Garbage in a Segment

To perform garbage collection on a segment, call the Segment.GC() method. The signature is

public java.util.Properties GC(java.util.Properties GCproperties)
A transaction must not be in progress for the current session, and the database that contains the segment you want to garbage collect must not be open for the current session. However, you must open the database once to create a Database object to represent it; after you close the database you want to garbage collect, you can call Database.GC() on the Database object that represents your closed database. You cannot perform GC on the transient segment; if you try to, ObjectStore throws SegmentException.

The GCproperties parameter specifies a list of GC properties or null. When you specify null, ObjectStore checks the system properties and uses the default properties, which are suitable for most operations. For more control over GC, you can specify one or more of the following properties. ObjectStore uses the default for any property you do not specify.

The GC() method returns a Properties object that contains information about the results of the garbage collection. The properties in this object include

Command Line Utility for Collecting Garbage

The command line utility for collecting garbage in a database is osgc.

Running osgc on C++ Databases or Segments

You can run the osgc utility on C++ databases and segments, but you must observe the following restrictions:

Schema Evolution: Modifying Class Definitions of Objects in a Database

You can modify the class definitions for objects already stored in a database. This process is called schema evolution, since a database schema is a description of the classes whose instances are stored in a database.

There are primarily two ways to evolve schema:

You can always use the schema evolution API, and you should use it when the data you must evolve contains very large object graphs. You must use the API when a database contains instances of COM.odi.coll classes.

Use the serialization technique with the sample code provided only when the database you want to evolve fits into heap space. When you use the serialization technique, the database cannot contain ObjectStore collections because they are not serializable.

The topics discussed in this section are

When Is Schema Evolution Required?

If you change your class in the following way, you must evolve the schema:

ObjectStore initializes each new instance field with its default value as described in section 4.6.4 of the Java Language Specification. The new fields are not initialized with the variable initializer even if the class defines one for the new field.

If you change the type that is associated with a persistent instance field, ObjectStore performs a default initialization, except in cases where both the old and new instance fields are of the following primitive types: byte, short, int, float, or double. In this case, ObjectStore applies the appropriate narrowing or widening conversion to the old value and assigns that value to the new instance field. Conversions that involve the long type cause ObjectStore to perform a default initialization on the new field.
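The conversions involved are Java's ordinary primitive widening and narrowing conversions. This self-contained example (plain Java, not ObjectStore specific) shows what happens to an int value when a field's type changes to float or to byte:

```java
public class ConversionDemo {
    // Widening conversion: int -> float, value preserved.
    static float widen(int v) { return v; }

    // Narrowing conversion: int -> byte, high-order bits discarded.
    static byte narrow(int v) { return (byte) v; }

    public static void main(String[] args) {
        System.out.println(widen(1000));   // 1000.0
        System.out.println(narrow(1000));  // -24 (low-order byte, signed)
    }
}
```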

hashCode()

Also, you might need to perform schema evolution if you add or remove the hashCode() method. If you use the postprocessor, it determines whether or not to add a hashCode() method. If it previously added a hashCode() method and now it does not, or if it previously did not add a hashCode() method and now it does, schema evolution is required.

Inheritance

You cannot use schema evolution to change the inheritance hierarchy of a class by adding, removing, or changing a superclass.

Allowed changes

You can make the following changes to your class and you are not required to evolve the schema:

Indexable fields

When you make a field indexable, the change might require schema evolution. If a class (including its superclasses) does not contain any indexable fields, schema evolution is required if you make a field in the class indexable. The addition of the indexable field changes the representation of the class.

If a class (including its superclasses) has at least one indexable field, schema evolution is not required if you add indexes to other fields.

Preparing to Use the Schema Evolution API

Before you can use the API to perform schema evolution on a database, you must create a PersistentTypeSummary instance that identifies the classes whose definitions have changed. The easiest way to do this is to specify the -summary option when you run the postprocessor on the updated class definitions. Be sure to run the postprocessor on all the classes in your application at once or on a correctly grouped batch of classes.

If the database contains collections that use indexes, you must drop the indexes before you perform schema evolution. After you evolve the schema, you can restore the indexes on the collection.

It is always advisable to make a copy of your database before you evolve its schema. This allows you to restore the database if there are errors.

To perform schema evolution, the following conditions must be true:

Using the Schema Evolution API

When ObjectStore performs schema evolution, it makes the database inaccessible to any other operation. To evolve the schema for a database, call the Database.evolveSchema() method. The signature is

void evolveSchema(String dbName,
                  String workdbName,
                  PersistentTypeSummary summary)
The dbName parameter specifies the database whose schema you want to evolve. During schema evolution, this database is not available to any other operation.

The workdbName parameter specifies the name of a database that ObjectStore uses to hold a checkpoint version of the database while schema evolution progresses. ObjectStore creates this database as part of schema evolution and destroys it when schema evolution is complete. This working database allows ObjectStore to resume schema evolution if it is interrupted. For example, an interruption can be caused by a power failure or a lack of disk space.

The summary parameter specifies a PersistentTypeSummary object that identifies the classes whose definitions have changed. It is permissible for the summary to include types that have not changed. However, if the summary does not include a type that has changed, that type might not be evolved.

Typically, you create the summary object as follows:

  1. When you run the postprocessor on your updated class definitions, specify the -summary option.

    This causes the postprocessor to generate a class file for the summary.

  2. Call the no-argument constructor on this class to create the summary object.

Under unusual circumstances, you might create the PersistentTypeSummary object yourself. In this case, you must use its constructor for specifying the persistent classes and included summaries.

ObjectStore can evolve only one database at a time. This means that if there are cross-database references, only one database at a time is unavailable to other operations.

When you perform schema evolution on a particular class, you must provide definitions for all superclasses that are part of the schema for the database being evolved.
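A typical call therefore looks like the following sketch, where AppSummary stands in for the hypothetical class generated by the postprocessor's -summary option, and the database names are placeholders:

```java
// AppSummary is a hypothetical name for the -summary-generated class.
PersistentTypeSummary summary = new AppSummary();

Database.evolveSchema(
    "app.odb",       // database whose schema is evolved
    "app-work.odb",  // working database for the checkpoint copy
    summary);        // identifies the classes whose definitions changed
```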

Considerations for Using Serialization to Perform Schema Evolution

To evolve a schema, you can

  1. Use serialization to dump the contents of a database.

  2. Modify your class definitions.

  3. Reload the data.

The next two sections provide instructions for using the sample code, followed by the sample code itself. If you use this sample code, you should be aware of the following issues.

java.io.Serializable

To serialize objects into the database, the classes of all the objects stored in the database must implement java.io.Serializable. If you have a database that contains objects that do not implement Serializable, you can modify the class definitions just to implement Serializable, recompile them, and still access the database. This allows you to dump the database to a file before you make the real class modifications.

readObject()

When you modify a class after doing the dump, you must ensure that the readObject() method considers the old and new versions of the class to be compatible. The most straightforward way to do this is to create a static final long field called serialVersionUID in the modified class. This field must have the same value as the serial version UID for the original class. You can obtain the value for the original class with the serialver utility, for example:

serialver DumpReload$LinkedList
DumpReload$LinkedList: static final long serialVersionUID = -5286408624542393371L;
For simplicity, the sample code includes this field.
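As a self-contained illustration (using a hypothetical LinkedListNode class rather than DumpReload$LinkedList itself), pinning serialVersionUID lets a round trip through Java serialization succeed even as the class definition evolves:

```java
import java.io.*;

public class SerialDemo {
    // Hypothetical serializable class with a pinned serial version UID,
    // as the text recommends for the modified class.
    static class LinkedListNode implements Serializable {
        static final long serialVersionUID = -5286408624542393371L;
        int value;
        LinkedListNode(int v) { value = v; }
    }

    // Serialize a node to a byte array and read it back.
    static int roundTrip(int v) {
        try {
            ByteArrayOutputStream bos = new ByteArrayOutputStream();
            ObjectOutputStream out = new ObjectOutputStream(bos);
            out.writeObject(new LinkedListNode(v));
            out.flush();
            ObjectInputStream in = new ObjectInputStream(
                    new ByteArrayInputStream(bos.toByteArray()));
            return ((LinkedListNode) in.readObject()).value;
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    public static void main(String[] args) {
        System.out.println(roundTrip(42));  // 42
    }
}
```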

Database size

The database whose schema you want to evolve must be small enough to fit into heap space. If it is not, you must customize the code that dumps and loads the database. You would have to organize your data so that you do not have to serialize all the data in the database at one time.

Large numbers of connected objects

The use of ObjectStore.deepFetch() is a performance concern for very large object graphs. The current implementation of deepFetch() is not careful about bounding stack space. A consequence of this is that it is sometimes impossible to successfully perform the deepFetch() operation for very large object graphs.

Steps for Using Sample Schema Evolution Serialization Code

The next section provides a program that takes an argument that causes the program to perform one of three actions:

For example, you can use the sample program to add a new field to the LinkedList class. To do so, follow these steps:

  1. Place the code in a file called DumpReload.java.

  2. Set your CLASSPATH environment variable to include the directory that contains the osjcfpout file and the DumpReload.java file.

  3. Compile the program with the command

    javac DumpReload.java

  4. Run the postprocessor to annotate the DumpReload and LinkedList classes:

    osjcfp -dest osjcfpout DumpReload.class \
    DumpReload$LinkedList.class

  5. Create the database:

    java DumpReload create data.odb

  6. Use serialization to dump the data:

    java DumpReload dump data.odb data.out

  7. Change the LinkedList class. Do this by removing the comment flag from the newField field in LinkedList.

  8. Recompile the class:

    javac DumpReload.java

  9. Rerun the postprocessor to annotate the DumpReload and LinkedList classes:

    osjcfp -dest osjcfpout DumpReload.class \
    DumpReload$LinkedList.class

  10. Use serialization to reload the data:

    java DumpReload reload data.odb data.out

Sample Code for Using Serialization to Perform Schema Evolution

Here is the sample program for using serialization to perform schema evolution.

import COM.odi.*;
import COM.odi.util.OSHashtable;
import COM.odi.util.OSVector;
import java.io.FileInputStream;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.util.Enumeration;
public class DumpReload {
      static public void main(String argv[]) throws Exception {
            if (argv.length >= 2) {
                  if (argv[0].equalsIgnoreCase("create")) {
                        createDatabase(argv[1]);
                  } else if (argv[0].equalsIgnoreCase("dump")) {
                        dumpDatabase(argv[1], argv[2]);
                  } else if (argv[0].equalsIgnoreCase("reload")) {
                        reloadDatabase(argv[1], argv[2]);
                  } else {
                        usage();
                  }
            } else {
                  usage();
            }
      }
      static void usage() {
            System.err.println(
                  "Usage: java DumpReload OPERATION ARGS...\n" +
                  "Operations:\n" +
                  "   create DB\n" +
                  "   dump FROMDB TOFILE\n" +
                  "   reload TODB FROMFILE\n");
            System.exit(1);
      }
      /* Create a database with 3 roots. Each root contains an 
            OSHashtable of OSVectors that contain some Strings. */
      static void createDatabase(String dbName) throws Exception {
            ObjectStore.initialize(null, null);
            try {
                  Database.open(dbName, ObjectStore.UPDATE).destroy();
            } catch (DatabaseNotFoundException DNFE) {
            }
            Database db = Database.create(dbName, 0644);
            Transaction t = Transaction.begin(ObjectStore.UPDATE);
            for (int i = 0; i < 3; i++) {
                  OSHashtable ht = new OSHashtable();
                  for (int j = 0; j < 5; j++) {
                        OSVector vec = new OSVector(5);
                        for (int k = 0; k < 5; k++)
                              vec.addElement(new LinkedList(i));
                        ht.put(new Integer(j), vec);
                  }
                  db.createRoot("Root" + Integer.toString(i), ht);
            }
            t.commit();
            db.close();
      }
      static void dumpDatabase(String dbName, String dumpName) 
            throws Exception {
            ObjectStore.initialize(null, null);
            Database db = Database.open(
                              dbName, ObjectStore.READONLY);
            FileOutputStream fos = new FileOutputStream(dumpName);
            ObjectOutputStream out = new ObjectOutputStream(fos);
            Transaction t = Transaction.begin(ObjectStore.READONLY);
            /* Count the roots and write out the count. */
            Enumeration roots = db.getRoots();
            int nRoots = 0;
            while (roots.hasMoreElements()) {
                  String rootName= (String) roots.nextElement();
                  /* Skip internal OSJI header */
                  if (!rootName.equals("_DMA_Database_header")) nRoots++;
            }
            out.writeObject(new Integer(nRoots));
            /* Rescan and write out the data. 
                  The deepFetch() call is necessary because it obtains the
                  contents of all objects that are reachable from root,
                  and makes them available for serialization. */
            roots = db.getRoots();
            while (roots.hasMoreElements()) {
                  String rootName = (String) roots.nextElement();
                  if (!rootName.equals("_DMA_Database_header")) {
                        out.writeObject(rootName);
                        Object root = db.getRoot(rootName);
                        ObjectStore.deepFetch(root);
                        out.writeObject(root);
                  }
            }
            t.commit();
            out.close();
            db.close();
      }
      static void reloadDatabase(String dbName, String dumpName)
            throws Exception {
            ObjectStore.initialize(null, null);
            try {
                  Database.open(
                        dbName, ObjectStore.UPDATE).destroy();
            } catch (DatabaseNotFoundException DNFE) {
            }
            Database db = Database.create(dbName, 0644);
            FileInputStream fis = new FileInputStream(dumpName);
            ObjectInputStream in = new ObjectInputStream(fis);
            Transaction t = Transaction.begin(ObjectStore.UPDATE);
            int nRoots = ((Integer) in.readObject()).intValue();
            while (nRoots-- > 0) {
                  String rootName = (String) in.readObject();
                  Object rootValue = in.readObject();
                  System.out.println("Creating " + rootName + " " + rootValue);
                  db.createRoot(rootName, rootValue);
            }
            t.commit();
            db.close();
      }
      static class LinkedList implements java.io.Serializable {
            private int value;
            private LinkedList next;
            private LinkedList prev;
            //private Object newField;
            static final long serialVersionUID = -5286408624542393371L;
            LinkedList(int value) {
                  this.value = value;
                  this.next = null;
                  this.prev = null;
            }
      }
}
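The dump and reload halves of the sample rely on plain Java serialization: write a count, write the objects, then read them back in the same order. The round trip can be sketched independently of ObjectStore with standard streams (the class and method names below are illustrative, not part of the OSJI API):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.ObjectInputStream;
import java.io.ObjectOutputStream;
import java.io.Serializable;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class RoundTrip {
      /* A Serializable class that pins its serialVersionUID, as the
            sample's LinkedList does, so compatible changes (such as
            adding a field) do not break deserialization of old data. */
      static class Node implements Serializable {
            static final long serialVersionUID = 1L;
            int value;
            Node(int value) { this.value = value; }
      }

      /* Write a count followed by the objects, then read them back --
            the same shape as the dump and reload halves of the sample. */
      static List<Integer> roundTrip(List<Integer> values) {
            try {
                  ByteArrayOutputStream bos = new ByteArrayOutputStream();
                  ObjectOutputStream out = new ObjectOutputStream(bos);
                  out.writeObject(Integer.valueOf(values.size()));
                  for (int v : values)
                        out.writeObject(new Node(v));
                  out.close();
                  ObjectInputStream in = new ObjectInputStream(
                        new ByteArrayInputStream(bos.toByteArray()));
                  int n = ((Integer) in.readObject()).intValue();
                  List<Integer> result = new ArrayList<Integer>();
                  while (n-- > 0)
                        result.add(((Node) in.readObject()).value);
                  in.close();
                  return result;
            } catch (Exception e) {
                  throw new RuntimeException(e);
            }
      }

      public static void main(String[] args) {
            System.out.println(roundTrip(Arrays.asList(1, 2, 3)));  // prints [1, 2, 3]
      }
}
```

Because the dump file contains only serialized Java objects, the reload step can recreate them under a modified class definition, provided the serialVersionUID still matches.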

Destroying a Database

Destroying a database makes all objects in the database permanently inaccessible. You cannot recover a destroyed database except from backups.

To destroy a database, call the destroy() method on the Database subclass instance, for example:

db.destroy();
The database must be open for update, and no transaction can be in progress.

When you destroy a database, all persistent objects that belong to that database become stale.

Obtaining Information About a Database

You can call methods on a database to answer the following questions.

Is a Database Open?

To determine whether or not a database is open, call the isOpen() method on the database, for example:

db.isOpen();
This expression returns true if the database is open. It returns false if the database is closed or if it was destroyed. To determine whether false indicates a closed or destroyed database, try to open the database.

What Kind of Access Is Allowed?

To check what kind of access is allowed for an open database, call the getOpenMode() method on the database. The database must be open or ObjectStore throws DatabaseNotOpenException. The method signature is

public int getOpenMode()
This method returns one of the open-mode constants, such as ObjectStore.UPDATE or ObjectStore.READONLY.

Here is an example of how you can use this method:

void checkUpdate(Database db) {
      if (db.getOpenMode() != ObjectStore.UPDATE)
            throw new Error("The database must be open for update.");
}

What Is the Pathname of a Database?

To find out the pathname of a database, call the getPath() method on the database, for example:

String myString = db.getPath();

What Is the Size of a Database?

To obtain the size of a database, call the getSizeInBytes() method on the database. The database must be open and a transaction must be in progress, for example:

db = Database.open("myDb.odb", ObjectStore.READONLY);
Transaction tr = Transaction.begin(ObjectStore.READONLY);
long dbSize = db.getSizeInBytes();
This method does not necessarily return the exact number of bytes that the database uses. The operating system might round the value up to a multiple of its block size, so treat the result as an approximation.
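The block-size rounding described above can be made concrete with simple arithmetic. The 4096-byte block size below is an assumption for illustration; actual block sizes vary by file system:

```java
public class BlockSize {
      /* Round a byte count up to the next multiple of the file
            system's block size -- the kind of rounding that can make
            a reported database size larger than its logical contents. */
      static long roundUpToBlock(long sizeInBytes, long blockSize) {
            return ((sizeInBytes + blockSize - 1) / blockSize) * blockSize;
      }

      public static void main(String[] args) {
            // A 1000-byte database may be reported as one full 4096-byte block.
            System.out.println(roundUpToBlock(1000, 4096));  // prints 4096
      }
}
```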

Which Session Is the Database or Segment Associated With?

To obtain the session with which a database or segment is associated, call the Placement.getSession() method. The method signature is

public Session Placement.getSession()

Which Objects Are in the Database?

The osjshowdb utility displays information about one or more databases. This utility is useful when you want to know how many and what types of objects are in a database. You can use this utility to verify the general contents of the database.

Information about the osjshowdb utility is in osjshowdb: Displaying Information About a Database.

Are There Invalid References in the Database?

The osjcheckdb utility or the Database.check() method checks the references in a database. This tool scans a database and checks that there are no references to destroyed objects. The most likely cause of dangling references is an incorrectly written program. You can fix dangling references by finding the objects that contain them and overwriting the invalid references with something else, such as a null value. In addition to finding references to destroyed objects, the tool performs various consistency checks on the database.

Information about the osjcheckdb utility is in osjcheckdb: Checking References in a Database.

Implementing Cross-Segment References for Optimum Performance

You can improve application performance by clustering together objects that are expected to be used together. Effective clustering both reduces the number of disk and network transfers the applications require and increases concurrency among applications. You can use a segment to store groups of objects that are accessed as a logical cluster.

ObjectStore allows an object in one segment to refer to an object in another segment in the same database or in a different database. The advantage of cross-segment references is that you can cluster related objects in separate segments and also provide links among the segments. Your database can continue to grow without necessarily slowing down access times.

There is a procedure to follow to implement cross-segment references that results in the best performance. A description of the overhead involved with cross-segment references shows why it is important to follow this procedure.

Procedure for Defining Cross-Segment References

To obtain the greatest benefit from cross-segment references, you must do the following:

  1. Create a plan for organizing your objects into clusters and for assigning the clusters to segments. You might want to put one or several logical clusters into a segment. Base your plan on the size of the objects and the patterns of access.

  2. In each planned segment, designate a few objects to be the points of reference to the other objects in the segment.

  3. Use the ObjectStore.migrate() method to store the designated objects in their respective segments.

    You do not want these designated objects to become persistent through reachability when a transaction commits. If they did, ObjectStore would store them in the same segment as whatever objects pointed to them. The migrate() method allows you to specify the segment in which to store an object. Be sure to specify true for the export argument to migrate(). For an object to be referred to by an object in another segment, the referred-to object must be exported.

  4. Ensure that the migrated objects reference the other objects in their clusters.

    This allows the other objects to be stored in the same segment as the object that references them. When you commit the transaction, ObjectStore automatically stores all referenced objects in the same segment as the migrated object, unless they were explicitly migrated to another segment.

  5. Ensure that no objects outside the cluster's segment try to store a reference to an object that was stored by transitive persistence.

  6. There might be some objects that are not part of a cluster, but that must be referenced by objects in multiple segments. Migrate such objects into convenient segments and be sure to export them.

It is most efficient to export objects when explicitly migrating them. Although you can call ObjectStore.export() to change an object in the database to be an exported object, doing so can be slow. The time it takes to export an object already in the database is proportional to the number of objects in the segment that contains the object.

You want to minimize the number of exported objects in each segment. A segment that contains large numbers of exported objects has more overhead for the persistent system data needed to describe the exported objects. Also, references to exported objects from within their own segments might take longer because of the increased size of the system data.

You only need to invoke ObjectStore.migrate() on objects that are not yet in the database. To change which segment an object is in, see Explicitly Migrating Exported Objects. You can call the migrate() method repeatedly with the same arguments, but there is usually no need to do so.

Exporting Objects

When you export an object, whether you use migrate() or export(), ObjectStore adds an entry for the object to its segment's export table, so that objects in other segments can refer to it through that entry.

When you export an object that is already in the database (export()), ObjectStore also checks every object already in the segment to determine if it refers to the exported object. If it does, ObjectStore updates the reference from a local reference to a reference through the export table.

If the segment is empty or relatively empty, the actual overhead is small because there are few, if any, objects to check. If there are already many objects in the segment, the overhead of checking each one can be quite large.

Consequently, it is preferable to use the ObjectStore.migrate() method and specify true for the export argument when you store the object in the database. This also ensures that you are storing the object in the intended segment.

Suppose you forget to export a particular object when you store it in the segment. You can fix this by calling the ObjectStore.export() method on the object. However, by the time you recognize the need to export an object, there might already be many objects in the segment. The overhead for exporting the object can be quite large.

With the exception of ObjectStore collection objects, you cannot export peer objects. If you try to export a peer object that is not an ObjectStore collection object, ObjectStore throws ObjectException.

How Many Exported Objects Are Needed?

You do not want too many objects in a segment to be referred to by objects in other segments. In other words, you want to minimize the number of exported objects in each segment. However, there is no additional cost if you have many objects that refer to the same exported object.

The overhead of having many exported objects in a segment comes from having many entries in the export table. Access to an exported object is through the export table. Having many exported objects is less efficient than having clustered objects because access to the exported objects requires an indirect lookup through the export table. While the lookup is implemented with a highly-tuned hashing mechanism, the overhead for maintaining the export table increases as it grows. Finally, if the exported objects are very volatile, the database locking mechanism can result in additional wait time for users who are creating or accessing exported objects.
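The indirection described above can be sketched in plain Java, with an ordinary hash table standing in for the export table. The names and structure here are illustrative only, not the OSJI implementation:

```java
import java.util.HashMap;
import java.util.Map;

public class ExportTable {
      /* A cross-segment reference is resolved through the target
            segment's export table rather than pointing at the object
            directly -- an extra hash lookup on every access, and one
            table entry per exported object. */
      static Map<Integer, String> exportTable = new HashMap<Integer, String>();

      static int export(int id, String obj) {
            exportTable.put(id, obj);   // table grows with each exported object
            return id;                  // other segments store this handle
      }

      static String resolve(int handle) {
            return exportTable.get(handle);  // the indirect lookup
      }

      public static void main(String[] args) {
            int handle = export(42, "designated object");
            System.out.println(resolve(handle));  // prints: designated object
      }
}
```

Many references to the same handle cost nothing extra, which matches the text: the overhead scales with the number of exported objects, not with the number of references to them.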

For these reasons, it is desirable to design your application so that only a relatively small number of nonvolatile objects must be exported. The figure below shows the optimum way to set up cross-segment references.

Explicitly Migrating Exported Objects

Objects can become persistent because they are reachable from an already persistent object or a database root. ObjectStore places the newly persistent object in the same segment as the object from which it can be reached. Consequently, you must be careful to explicitly migrate those objects that must be exported.

In a transaction, if you create a situation in which an object is referred to by objects in more than one segment, and the object is not exported, ObjectStore throws ObjectNotExportedException when it detects the nonexported cross-segment reference.

This situation can happen if an application stores a reference to a nonexported object from one segment into an object in another segment. The check for this does not occur when you store the reference.

Database Operations and Transactions

For each database operation, there are rules about whether it can be performed inside or outside a transaction, and about whether the database must be open.

The following table shows which rules apply to which operations. If your application tries to perform a database operation that breaks a rule, you receive a run-time exception.



Database Operation      Inside or Outside a Transaction?   Database Open?
acquireLock()           Inside                             Open
check()                 Inside                             Open
close()                 Outside                            Open
create()                Outside                            Not applicable
createRoot()            Inside                             Open
createSegment()         Inside                             Open
destroy()               Outside                            Open
destroyRoot()           Inside                             Open
GC()                    Outside                            Open
getDefaultSegment()     Inside                             Open
getOpenMode()           Both                               Open
getPath()               Both                               Open
getRoot()               Inside                             Open
getRoots()              Inside                             Open
getSegment()            Inside                             Open
getSegments()           Inside                             Open
getSizeInBytes()        Inside                             Open
installTypes()          Inside                             Open
isOpen()                Both                               Open or closed
of()                    Inside                             Open
open()                  Both                               Open or closed
setDefaultSegment()     Inside                             Open
setRoot()               Inside                             Open
show()                  Inside                             Open

Upgrading Databases for Use with the JDK 1.2

The JDK 1.2, currently in beta testing but expected to be released soon by Sun Microsystems, computes hash codes for String objects differently from the JDK 1.1. As a result, a database that depends on String hash codes can be used from the JDK 1.1 or from the JDK 1.2, but not both.
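The change can be made concrete: from the JDK 1.2 onward, String.hashCode() computes h = 31*h + c over all characters of the string, while the JDK 1.1 used a different function, so a hash code stored under one JDK need not match the other. A minimal sketch of the 1.2 formula:

```java
public class StringHash {
      /* The JDK 1.2 (and later) String hash: h = 31*h + c over all
            characters. A database that indexes on these values is tied
            to whichever algorithm produced them. */
      static int hash(String s) {
            int h = 0;
            for (int i = 0; i < s.length(); i++)
                  h = 31 * h + s.charAt(i);
            return h;
      }

      public static void main(String[] args) {
            // Matches String.hashCode() on any JDK 1.2 or later.
            System.out.println(hash("objectsrus.odb") == "objectsrus.odb".hashCode());  // prints true
      }
}
```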

ObjectStore marks databases to specify whether the JDK 1.1 or JDK 1.2 was used when the database was created. It then ensures that the same JDK version is used when the database is accessed.

Databases created with releases previous to this one are assumed to have been created with the JDK 1.1. With this release of ObjectStore, you can use only the JDK 1.1 to access these databases unless you upgrade them for use with the JDK 1.2. ObjectStore provides a tool and an API for performing this upgrade.

After you upgrade a database, you can no longer use it with the JDK 1.1.

To use the upgrade tool, see osjuphsh: Upgrading String Hash Codes in Databases. To use the API, see COM.odi.Upgrade.upgradeDatabaseStringHash().

The upgrade facility also allows you to mark a database as not containing any objects that depend on String hash codes. If you mark a database in this way, you can use either the JDK 1.1 or the JDK 1.2 to access the database.

After you upgrade your database, Object Design recommends that you run the garbage collector on the database. Upgrading the database creates some garbage.

If you use the upgrade API to perform the upgrade, watch out for a problem that involves cross-database references from the upgraded database to other databases. If the database being upgraded contains references to other databases, those other databases might need to be opened during the upgrade, and if they are opened, the upgrade API leaves them open when it is done. The upgrade allows the other databases to be opened whether or not they have been upgraded, because the upgrade needs them only in a limited way. However, if the application tries to use those databases after performing the upgrade, it receives exceptions if they have not been upgraded.

The workaround is to ensure that you upgrade all databases you intend to use before you try to access any upgraded database.




Copyright © 1998 Object Design, Inc. All rights reserved.

Updated: 10/07/98 08:45:04