ObjectStore Management

Chapter 4

Utilities

This chapter provides information about ObjectStore utilities. Many of these utilities are implemented using the os_dbutil class methods. See the ObjectStore C++ API Reference.

The utilities in this chapter are described in alphabetical order.

Pathnames for utility executables
The pathname of the executable for an ObjectStore utility is
UNIX

$OS_ROOTDIR/bin/utility-name 
Windows and OS/2

%OS_ROOTDIR%\bin\utility-name.exe 

Earlier releases
In previous releases of ObjectStore, utilities were in the /admin and /debugging directories. These directories are obsolete.

FAT names on OS/2
On OS/2, if you install ObjectStore on a FAT file system, ObjectStore uses the FAT name for a utility if the usual name of the utility exceeds eight characters.

os_postlink: Fixing Vtbls and Discriminants



On cfront platforms, the os_postlink utility fixes vtbls and discriminants in your executable.

Although os_postlink does not actually do anything on some platforms, Object Design recommends that you always call it from a makefile so that the call is already in place if you move the application to a platform that requires it.

Syntax

Use the OS_POSTLINK macro in your makefile to call the os_postlink utility.

$(OS_POSTLINK) executable 
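
A minimal sketch of how the call might appear in a makefile, assuming a hypothetical application named myapp built from main.o and schema.o; the compiler and library variables are placeholders, not ObjectStore-supplied names.

# Link the application, then run os_postlink on the result.
# (Recipe lines in a real makefile must begin with a tab.)
myapp: main.o schema.o
        $(CXX) -o myapp main.o schema.o $(LIBS)
        $(OS_POSTLINK) myapp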

Description

When ObjectStore reads in an object with virtual functions, it supplies an appropriate vtbl pointer from the current application. This is called vtbl relocation.

When your application references a persistent object of a class with virtual functions, ObjectStore must fill in the vtbl pointer in the object. Virtual function tables are not stored in databases; they are part of your executable. To fill in the vtbl pointer, ObjectStore needs the address of the vtbl for the class.

During relocation, ObjectStore might need vtbls and discriminant functions. It finds them in tables that map class names to references to vtbls and discriminant functions. The schema generator generates a C++ source file (or object file for Visual C++) containing these tables that relate your schema to your application.

These tables are filled in during application link or postlink or at program start-up time, or some combination of these, depending on the platform. At each of these steps, the referenced vtbls and discriminants are searched for in the executable and, if found, are entered into the tables. At run time, ObjectStore can use these tables to find items for relocation.

On cfront platforms, the os_postlink executable performs this job. On some platforms, the compiler/linker does it. On some platforms, this search might be done at run time based on the currently available DLLs.

API
None.

osarchiv: Logging Transactions Between Backups

The osarchiv utility records all transaction activity for specified databases. You can run this utility interactively or in the background.

Syntax

osarchiv [options] -d directory [pathname...]
-d directory 
Specifies the directory in which to create the archive log files. This is required.

pathname... 
Specifies a database or rawfs directory whose transactions you want to log. You can specify one or more pathnames. Pathnames can be on different Servers.

Databases can be file or rawfs databases.

When you specify a rawfs directory, osarchiv logs transactions for all databases in that directory. It does not operate on databases that are in subdirectories unless you specify the -r option.

The group of databases for which you are performing archive logging is called the archive set.

If you do not specify at least one pathname, you must specify the -I (uppercase I) option with an import file name.

Options
-a archive-record-file 
Specifies the pathname of the file that osarchiv uses to record the segment change IDs for the archive set. The osarchiv utility updates this file each time it successfully records committed changes to the archive set - this is referred to as taking a snapshot. The archive record file is comparable to the incremental record file for osbackup.

-B size 
Specifies the size of the buffer used by each Server that osarchiv contacts. size is a number optionally appended with k, m, or g to indicate kilobytes, megabytes, or gigabytes, respectively. If no letter is specified, m is presumed. For example, -B 1024k, -B 1m, and -B 1 each specify a maximum buffer size of 1 megabyte. The default value is 1 MB.

-C 
Enables the interactive command-loop feature. This feature is disabled by default.

-i interval 
Specifies an integer that osarchiv uses as the interval between snapshots. By default, this interval is in seconds, but you can append m, h, or d to indicate minutes, hours, or days. For example, -i 60 and -i 1m both specify an interval of one minute.

When interval is not 0, osarchiv takes a snapshot immediately after being initiated and then every interval seconds (or minutes, hours, or days) thereafter.

When you do not specify an interval, it defaults to 0, which means that snapshots are not automatically taken. You can take a snapshot at any time that osarchiv is active by issuing the x command. See the command description for x.

-I import-file 
Specifies the name of a file that contains a list of either file or rawfs database pathnames. The osarchiv utility logs transactions for the databases in this list. The osarchiv utility cannot read such a list from stdin.

The list contains one pathname per line. Leading and trailing white space is ignored.

If you specify the -I (uppercase I) option, you can also specify additional pathnames on the command line. After you initiate the osarchiv utility, you can use the a command to add databases to the archive set; see the a command description. You cannot specify -I - to read the list from stdin. (An example import file and invocation appear after this list of options.)

-r 
Instructs osarchiv to descend into any rawfs directories specified on the command line, adding all rawfs databases found to the archive set. By default, only databases in the specified directory are archived.

When archiving file databases, specifying the -r option has no effect. You must explicitly specify each file database.

After archive logging begins, you can add a rawfs directory to the archive set. If you specified -r when you initiated osarchiv, it applies to subsequently added rawfs directories.

You cannot specify the -r option for some directories and not for others. When specified, it applies to the entire archive set.

-s size 
Specifies the maximum amount of data to write to an archive file. By default, this size is in megabytes. You can specify KB, MB, or GB by appending k, m, or g to size. For example, -s 1024k, -s 1m, and -s 1 each specify a maximum archive file size of 1 megabyte.

When an archive file is full, the osarchiv utility automatically starts using the next file in the archive file sequence. A particular snapshot is always in a single archive file; osarchiv never stores it across two files.

The default is 2 MB.
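
As an illustration, a minimal import file and a background invocation that combine several of these options might look like the following. All pathnames, the interval, and the archive file size are invented for the example.

% cat ./archive_set.txt
vancouver::/dbdir/bar.db
/vancouver1/dbdir/foo.db
% osarchiv -a ./archive.rec -d /vancouver1/archives -i 5m -s 10m -I ./archive_set.txt &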

Commands

You can execute the following commands when you use osarchiv in interactive mode. The utility processes the command between snapshots.
a pathname 
Adds the specified file database or rawfs database or directory to the archive set.

h 
Displays on-line help.

i interval 
Interval - changes the interval between snapshots. Specify an integer for interval. You can append the letter m, h, or d to indicate minutes, hours, or days. For example, i 60 and i 1m both specify an interval of one minute. When interval is 0, snapshots are not automatically taken.

You can specify i without an integer to display the current interval.

You cannot take a snapshot of each transaction.

n 
Next - closes the current archive file and starts saving snapshots in the next archive file in the sequence.

q or EOF 
Quit - takes a snapshot immediately and then terminates the osarchiv utility.

r pathname 
Removes the specified file database or rawfs database or directory from the archive set.

t 
Table of contents - displays the pathnames of the databases and rawfs directories in the archive set.

x 
eXplicit - takes a snapshot as soon as you issue the command. This has no effect on snapshot intervals.

Description

When archive logging is active, ObjectStore takes snapshots of modifications to the archive set. An archive snapshot records all data modified by transactions that have committed since the last snapshot was taken.

When you start osarchiv, the first snapshot records data modified by transactions that committed since the last time the osbackup or osarchiv utility was run.

Tape device
You cannot perform archive logging to a tape device.

Archive file format
The osarchiv utility places snapshots in archive files in the directory that you specified when you initiated the osarchiv utility. The utility uses the following naming convention for archive files:

YYMMDDHH.ext 
Variable   Meaning
YY         Year
MM         Month
DD         Day
HH         Hour
ext        Extension of the form aaa, aab, aac, and so on

Switching archive files
The osarchiv utility places consecutive snapshots in the same archive file until one of the following happens:

Ensure sufficient disk space
You must ensure that there is sufficient disk space available to the osarchiv utility by periodically moving archive files to secondary storage. When osarchiv runs out of disk space for archive files, it notifies you and suspends activity. You must move archive files or allocate additional disk space to allow the utility to continue.

Adding to archive set
When you add a database to a directory for which you are performing archive logging, the osarchiv utility does not automatically begin to take snapshots of that database. To enable archive logging for the additional database, you must use the a command to explicitly add the database to the archive set.

Deleting a database
When you are performing archive logging for a database, the Server keeps the database open. This has implications for deleting databases.

OS/2 and Windows
On OS/2 and Windows, you cannot delete a database for which you are performing archive logging until you invoke the osarchiv r command to remove the database from the archive set.

UNIX
On UNIX systems, when you remove a file the operating system removes its directory entry, but does not actually delete the file or free associated disk space until there are no applications that have the database open. Again, you must invoke the osarchiv r command to remove the database from the archive set.

On all systems, the r command does not take effect until the end of a snapshot.

Tradeoffs for Obtaining the Results You Need

Decreasing the time between snapshots decreases the number of transactions recorded in each snapshot. Shorter intervals between snapshots have the effect of keeping the archive more up to date and keeping the amount of data that needs to be archived smaller.

However, each snapshot causes information to be written to the archive file even if no data modifications are being recorded. Taking snapshots too frequently can consume space in the archive file unnecessarily. Longer intervals can reduce the amount of data being logged in cases where the same data is modified by multiple transactions. In such cases, only the most recent copy of the committed data needs to be logged.

Examples

In the following example, the -C option enables the interactive command loop, and ./inc is the pathname of the file that osarchiv uses to record the segment change IDs for the archive set. The osarchiv utility updates this file each time it takes a snapshot. The directory in which to create the archive log files is /vancouver1/archives. The -i option indicates that snapshots should be taken every 30 seconds. The -r option instructs osarchiv to descend into any rawfs directories specified on the command line, adding all rawfs databases found to the archive set. Finally, vancouver::/ specifies a rawfs directory whose transactions you want to log.

% osarchiv -C -a ./inc -d /vancouver1/archives/ -i 30 -r vancouver::/
Writing backup volume #1 (/vancouver1/archives/96011216.aaa)...
Display archive set members
> t
vancouver::/foo.db
vancouver::/dbdir/bar.db
vancouver::/dbdir/foo.db
Take a snapshot now
> x
Archiving 452 sectors in database vancouver::/dbdir/bar.db.
Archiving 452 sectors in database vancouver::/dbdir/foo.db.
Archiving 452 sectors in database vancouver::/foo.db.
Add to archive set
> a /vancouver1/dbdir/foo.db
Display archive set members
> t
vancouver::/foo.db
vancouver::/dbdir/bar.db
vancouver::/dbdir/foo.db
vancouver:/vancouver1/dbdir/foo.db
If you press Enter while the osarchiv utility is taking a snapshot, the utility displays a message such as the following. If it is not taking a snapshot, the utility displays another prompt symbol.

> 
Archiving 452 sectors in database vancouver:/vancouver1/dbdir/foo.db.
Save snapshots in next archive file
> n
Closing volume #1 (/vancouver1/archives/96011216.aaa).
Writing backup volume #2 (/vancouver1/archives/96011216.aab)...
Display snapshot interval
> i
Snapshot interval is 30 seconds.
Change snapshot interval
> i 1m
> i
Snapshot interval is 60 seconds.
Remove member of archive set, display archive set members
> r vancouver::/foo.db
> t
vancouver::/dbdir/bar.db
vancouver::/dbdir/foo.db
vancouver:/vancouver1/dbdir/foo.db
Take a snapshot now
> x
Archiving 68 sectors in database vancouver::/foo.db.
Take a snapshot and terminate osarchiv utility
> q
Closing volume #2 (/vancouver1/archives/96011216.aab).
%
API
None.

osbackup: Backing Up Databases

The osbackup utility copies specified databases to another on-line location or to tape.

Syntax

osbackup [options] -f backup-image-file [-f backup-image-file]... 
pathname ...
-f backup-image-file 
Specifies the location of the backup image.

You can specify a local file or a locally mounted file.

You can specify a tape device that is directly accessible from the host on which you are running osbackup. You cannot specify a remote tape device.

You can repeat the -f option with a new backup-image-file to create a multifile backup. When you do this, specify the -s option to indicate the size of each backup-image-file. For example:

osbackup -s 1m -f back1 -f back2 -f back3 db1 db2  db3 
ObjectStore tries to back up the databases to the back1, back2, and back3 files. The utility prompts for additional file names if 1MB per file is not sufficient.

On UNIX systems, you can specify -f - (hyphen) to indicate stdout. This allows you to pipe osbackup output directly to the osrestore utility.

On Windows NT systems, specify a tape device with the syntax \\.\Tape0, which is the standard Windows NT name for the first tape drive.

pathname... 
Specifies a database or directory to be backed up. You can specify one or more pathnames. Pathnames can be on different Servers.

Options
-a 
Aborts the backup operation if the utility cannot open the backup device. This raises an exception that indicates the problem.

The default is that if the backup utility fails to open the backup device, it displays a message and waits for you to correct the problem.

Examples of failure to open the backup device are having a write-protected tape or no tape loaded.

-b blocking-factor 
Specifies a blocking factor to use for tape input and output. The blocking factor is in units of 512-byte blocks. This parameter is ignored for regular files. The default on UNIX is 126 blocks. The maximum blocking factor is 512 blocks.

-B size 
Specifies the size of the buffer used by the Servers contacted by osbackup. size is a number optionally appended with k, m, or g to indicate kilobytes, megabytes or gigabytes respectively. If no letter is given, m is presumed. For example, -B 1024k, -B 1m, and -B 1 each specify a maximum buffer size of 1 megabyte. The default value is 1 MB.

-i incremental-record-file 
Required. Specifies the incremental record file, a file that contains information about which databases have been backed up, and when they were backed up. The osbackup utility uses this information to determine which segments within a database have been modified since the last backup at a lower level. The utility then backs up only modified segments. The incremental record file is comparable to the archive record file for osarchiv.

Performing a backup at any level for which no previous information exists is equivalent to doing a level 0 backup for that database.

-I import-file 
Specifies the name of a file that contains a list of either file or rawfs database pathnames. The osbackup utility backs up the databases in this list. If you specify "-" as the import file name, osbackup reads from standard input.

The list contains one pathname per line. Leading and trailing white space is ignored.

If you specify the -I option, you can also specify additional pathnames on the command line.

-l level 
Specifies the level of the backup (lowercase L). Specify an integer from 0 to 9. Files that have been modified since the last backup at a lower level are copied to the backup image. For example, suppose you did a level 2 backup on Monday, followed by a level 4 backup on Tuesday. A subsequent level 3 backup on Wednesday would contain all files modified or added since the level 2 (Monday) backup. (A sketch of a weekly backup schedule appears after this list of options.)

Backup is incremental at the segment level, meaning that a segment is only backed up if it has been modified since the last backup at a lower level. A level 0 backup (the default) backs up all segments in all specified databases.

-r 
Instructs osbackup to descend into any rawfs directories specified on the command line, adding all rawfs databases found to the list of databases to be backed up. By default, only databases in the specified directory are backed up. When backing up file databases, specifying the -r option has no effect. You must explicitly specify each file database.

-s size 
Sets the size of the volume being dumped to. The osbackup utility prompts you to insert a new tape or specify a new backup image file after it writes the amount of data specified by size.

You can specify k, m, or g to indicate that size is in units of kilobytes, megabytes (the default), or gigabytes. For example, -s 1024k, -s 1m, and -s 1 each specify a maximum backup image size of 1 MB.

You can use this option with the -f option to perform a multivolume backup.

This option is mainly for use when you are backing up to a tape device, since end-of-media cannot be reliably detected on some systems.

On Solaris 2, the -s option is not required because the end of the tape is reliably signaled to the application without any loss of data. On other systems, if you do not specify -s, the osbackup utility terminates when it reaches the end of the tape.

-S exec_command_name 
Specifies the pathname of a command to be executed when the osbackup utility reaches the end of the media. This command should mount the next volume before returning. The exit status from this command must be 0 or the backup operation aborts. Note that this option is an uppercase S.
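
As a sketch of how the level option is typically used, the following commands outline a weekly cycle with a full backup on Sunday and incrementals on the following days. The pathnames, levels, and schedule are illustrative assumptions only.

% osbackup -i /backups/inc.rec -f /backups/sun_full.img -l 0 -r vancouver::/
% osbackup -i /backups/inc.rec -f /backups/mon.img -l 1 -r vancouver::/
% osbackup -i /backups/inc.rec -f /backups/tue.img -l 2 -r vancouver::/

Because each backup copies only segments modified since the last backup at a lower level, the Sunday image is a full copy and each subsequent image stays comparatively small.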

Description

When backing up databases, ObjectStore takes advantage of any operations already being performed by the Server on behalf of various client applications. This reduces the cost of performing the backup. The osbackup utility gives priority to databases that are already open at the time the backup starts, and, within a database, to those sectors that are being actively used.

When backup starts, osbackup determines which segments require backup, builds a map that describes this data, and sets itself up to intercept read and write requests to and from these sectors. Any time the Server reads a sector of interest to the backup process that has not already been backed up, osbackup allows the read to proceed and makes a copy of the data at that time. Similarly, write requests are intercepted and delayed long enough for osbackup to retrieve the transaction-consistent data first. Otherwise, the backup process operates in the background, retrieving data as efficiently as possible.

Considerations

Incremental backup of file databases
To perform an incremental backup on a file database, you must have created the database using ObjectStore Release 4 or later. You cannot perform an incremental backup on file databases that were upgraded from releases of ObjectStore prior to Release 4.

You can mix file databases and rawfs databases in the set of databases to be backed up.

Backing up a rawfs directory
When you specify a rawfs directory, osbackup backs up all databases in the directory. When you specify the -r option, osbackup also backs up all databases in all subdirectories, subsubdirectories, and so on.

Backing up file databases
When backing up file databases, you must explicitly specify the name of each database with the pathname argument or in an import file, specified with the -I (uppercase I) option.

Specifying an incremental record file
If you do not specify an incremental record file for your backup, osbackup creates one using a default pathname.

If a file of this name already exists, it is written over, and data in it is lost. For this reason, it is recommended that you use the -i option to provide a unique name for the incremental record file.

Compacted databases
When you run the oscompact utility on a database, it has the potential to modify each segment in the database. When you back up a database after compacting it, the osbackup utility copies each modified segment; this might be the entire database. Consequently, you might want to compact databases just before you perform a full backup.

Examples of Backing Up Databases

% osls -l vancouver::/foo.db
-rw-rw-r-- smith odi     231424 Dec 20 16:17 vancouver::/foo.db
%
Full backup of rawfs database to three files
% osbackup -i ./inc -f ./s1 -f ./s2 -f ./s3 -s 80k vancouver::/foo.db
Writing backup volume #1 (./s1)...
Archiving 452 sectors in database vancouver::/foo.db.
Closing volume #1 (./s1).
Auto switching to volume #2 (./s2).
Writing backup volume #2 (./s2)...
Closing volume #2 (./s2).
Auto switching to volume #3 (./s3).
Writing backup volume #3 (./s3)...
Closing volume #3 (./s3).
%
If you do not specify enough files, osbackup prompts you for more, as follows:

% osbackup -i ./inc -f ./s1 -f ./s2 -f ./s3 -s 10k vancouver::/mdltst1.db
Writing backup volume #1 (./s1)...
Closing volume #1 (./s1).
Auto switching to volume #2 (./s2).
Writing backup volume #2 (./s2)...
Closing volume #2 (./s2).
Auto switching to volume #3 (./s3).
Writing backup volume #3 (./s3)...
Archiving 913 sectors in database vancouver::/mdltst1.db.
Closing volume #3 (./s3).
Please enter the pathname of the next file to use for backup.
Full backup of rawfs database to existing image
% osls vancouver::/
dbdir/
foo.db
% osls vancouver::/dbdir
bar.db
foo.db
% touch ./img <-- create file to demonstrate problem 
% osbackup -f ./img -i ./inc vancouver::/foo.db
Error encountered while opening file ./img (File ./img already exists. 
Cannot archive to an existing file.)
Do you wish to try again? (yes/no): yes
Please enter the pathname of the next file to use for backup. ./img2
Writing backup volume #1 (./img2)...
Archiving 452 sectors in database vancouver::/foo.db.
Closing volume #1 (./img2).
Full backup of directory
% osbackup -f ./img -i ./inc -r vancouver::/
Writing backup volume #1 (./img)...
Archiving 452 sectors in database vancouver::/dbdir/bar.db.
Archiving 452 sectors in database vancouver::/dbdir/foo.db.
Archiving 452 sectors in database vancouver::/foo.db.
Closing volume #1 (./img).
Full backup of databases listed in import file
% cat ./import_file
vancouver::/foo.db
/vancouver1/dbdir/foo.db
%
% osbackup -f ./img -i ./inc -I ./import_file
Writing backup volume #1 (./img)...
Archiving 452 sectors in database vancouver:/vancouver1/dbdir/foo.db.
Archiving 452 sectors in database vancouver::/foo.db.
Closing volume #1 (./img).
%
Using an import file and specifying a pathname
% osbackup -f ./img -i ./inc -I ./import_file vancouver::/dbdir/foo.db
Writing backup volume #1 (./img)...
Archiving 452 sectors in database vancouver:/vancouver1/dbdir/foo.db.
Archiving 452 sectors in database vancouver::/dbdir/foo.db.
Archiving 452 sectors in database vancouver::/foo.db.
Closing volume #1 (./img).
%
Incremental backups of a rawfs database
% $OS_ROOTDIR/bin/osbackup -f ./img0 -i ./inc -l 0 
vancouver::/foo.db
Writing backup volume #1 (./img0)...
Archiving 452 sectors in database vancouver::/foo.db.
Closing volume #1 (./img0).
% $OS_ROOTDIR/bin/osbackup -f ./img1 -i ./inc -l 1 
vancouver::/foo.db
Writing backup volume #1 (./img1)...
Closing volume #1 (./img1).
% $OS_ROOTDIR/bin/osbackup -f ./img2 -i ./inc -l 2 
vancouver::/foo.db
Writing backup volume #1 (./img2)...
Closing volume #1 (./img2).
% osrm vancouver::/foo.db
Restoring from incremental backups
% $OS_ROOTDIR/bin/osrestore -f ./img0
Recovering from volume #1 (./img0)...
Restoring 452 sectors to database "vancouver::/foo.db"
Recovered to time Thu Jan 12 15:50:10 1996
Do you wish to restore from any additional incremental backups? 
(yes/no):
yes
Closing volume #1 (./img0).
Please enter the pathname of the next file from which to restore.
./img1
Recovering from volume #2 (./img1)...
Recovered to time Thu Jan 12 15:50:21 1996
Do you wish to restore from any additional incremental backups? 
(yes/no):
yes
Closing volume #2 (./img1).
Please enter the pathname of the next file from which to restore.
./img2
Recovering from volume #3 (./img2)...
Recovered to time Thu Jan 12 15:50:41 1996
Do you wish to restore from any additional incremental backups? 
(yes/no):
no
Closing volume #3 (./img2).
%
API
None.

oschangedbref: Changing External Database References

The oschangedbref utility changes the external database references for the specified database.

Syntax

oschangedbref db {from | -n name1} {to | -n name2}
The FAT name for this utility is oschange.
db 
Specifies the database for which you want to change references.

from 
Specifies the currently referenced database. This must be an absolute pathname that includes a Server host prefix.

-n name1 
Specifies a relative pathname for the currently referenced database. You must use this option for names beginning with a hyphen.

to 
Specifies the database to be referenced. This is a relative pathname or an absolute pathname, depending on how the original reference was defined.

-n name2 
Specifies a relative pathname for the database to be referenced. You must use this option for names beginning with a hyphen.

Description

Before using this utility, run ossize to display the cross-database references in the database whose references you want to change. Carefully examine ossize output to determine how that database defines relative pathnames. Use this information to specify the from and to arguments.

Examples

UNIX
ossize loon:/dbs/db_0
[ ...]
External database pointers:
Relative name db_1, resolves to loon:/dbs/db_1
External references:
Relative name db_3, resolves to loon:/dbs/db_3
To change the reference to db_3 to a reference to db_7, enter

oschangedbref loon:/dbs/db_0 loon:/dbs/db_3 loon:/dbs/db_7
For the to argument, you can use the relative pathname instead of the absolute name. This is an equivalent command:

oschangedbref loon:/dbs/db_0 loon:/dbs/db_3 db_7
Windows, OS/2
ossize me:h:\temp\t_1
[output omitted]

External database pointers:
Relative name t_3, resolves to me:h:\temp\t_3
Relative name t_0, resolves to me:h:\temp\t_0
Relative name t_2, resolves to me:h:\temp\t_2
External references:
Relative name t_3, resolves to me:h:\temp\t_3
To change pointers and references from t_3 to t_3.new, enter

oschange me:h:\temp\t_1 me:h:\temp\t_3 me:h:\temp\t_3.new
or

oschange me:h:\temp\t_1 me:h:\temp\t_3 t_3.new
Example of moving to same directory
/a/b/db1 contains a reference to /a/b/db2.

If you move both db1 and db2 to a different directory, for example, /e/f/g, the reference is still valid because the result is

/e/f/g/db1
/e/f/g/db2
The relative pathname from db1 to db2 is unchanged.

If you move db1 to /e/f and db2 to /e/f/g, the result is

/e/f/db1
/e/f/g/db2
In this case, the relative pathname from db1 to db2 has changed, so you must use the oschangedbref utility.

Example of moving to different directories
/a/b/db1 contains a reference to /a/b/c/db2. In this case, db1 refers to c/db2. So if you move db1 to /e/f and move db2 to /e/f/c the result is

/e/f/db1
/e/f/c/db2
The reference is still valid because db1 still refers to c/db2.

Example of moving to different Server
/a/b/db1 on Server green contains a reference to /a/b/db2 on Server green. You want to move db1 to Server red. If you move them both to the same directory on Server red, the reference is still valid.

If you want to move only db1 to Server red, in directory /x/y, then you must use oschangedbref to change the reference in db1 so that it specifies the full pathname, including the Server name, of db2.
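
A hedged sketch of the corresponding command, assuming db1 stores the relative name db2 (so the from argument uses -n) and db1 now resides in /x/y on Server red:

oschangedbref red:/x/y/db1 -n db2 green:/a/b/db2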

When you store references, no Server name is attached until you use oschangedbref to specify it. The exception to this rule is if you use os_database::set_relative_directory(). See the ObjectStore C++ API Reference.

API
Class: os_database
Method: change_database_reference

oschgrp: Changing Database Group Names

The oschgrp utility changes the group name of the specified databases and directories.

Syntax

oschgrp [-R][-f] group pathname ... 
group 
Specifies a group name or group number in a group ID file.

pathname ...
Specifies the databases and/or directories whose group name you are changing. You can specify either rawfs paths of any kind or file database paths.

Options
-f 
Forces execution. Errors are not reported.

-R 
Indicates that ObjectStore should change the group name recursively for all specified directories. That is, it changes the group name for subdirectories and their contents, subsubdirectories and their contents, and so on.

Description

This utility operates on rawfs databases and file databases.

When you specify a file database, you cannot specify a remote file-server host name in the pathname of the file database. The oschgrp utility passes the operation to a local native utility. If you specify a remote file-server host name, ObjectStore informs you that you specified an illegal pathname.

oschgrp can perform wildcard processing. See "Wildcards".

UNIX
When operating on a rawfs database, you must enclose the wildcard in quotation marks ("") or precede it with a back slash (\) to keep the shell from interpreting wildcards.

oschgrp accepts a combination of rawfs pathnames and file pathnames.

/etc/group is the group ID file. You must be the owner of the database, or be the superuser.
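
For example, to change the group of every database at the top level of a rawfs directory, you might run the following command. The group name and pathname are hypothetical, and the wildcard is quoted so that the shell does not expand it.

% oschgrp develop "elvis::/dbs/*"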

API
Class: os_dbutil
Method: chgrp

oschhost: Changing Rawfs Link Hosts

The oschhost utility changes the host that a link in the rawfs points to.

Syntax

oschhost [-f][-R] newhost pathname ...
newhost 
Specifies the name of the new host for the specified rawfs link.

pathname ...
Specifies one or more rawfs links.

oschhost [server_host] old_link_host new_link_host
server_host 
Specifies the Server on which you are running oschhost. When you do not specify this argument, ObjectStore runs the utility on the local host.

old_link_host 
Specifies the name of the host the link currently points to.

new_link_host 
Specifies the name of the host that the link will point to.

Options
-f 
Forces execution. Errors are not reported.

-R 
Indicates that ObjectStore should change the host recursively for all specified directories.

Description

This utility operates only on rawfs links. It changes only the host component of the specified rawfs symbolic links (or of all links in the rawfs); it does not physically move any databases or directories.

You can use oschhost to update the rawfs after you restore an entire file system from one Server to another.

Use the first form of the utility to change specified links, that is, links with particular pathnames.

Use the second form of the utility to change all links on a particular host (server_host) that point to a specified host (old_link_host) so that they point to a new host (new_link_host).
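
Hedged examples of both forms, with invented host and link names: the first command re-points a single link, and the second re-points every link on Server elvis that currently points to oldhost.

oschhost newhost elvis::/links/parts_link
oschhost elvis oldhost newhost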

UNIX
You must be the superuser to change the host for a rawfs.

API
Class: os_dbutil
Methods: rehost_all_links and rehost_link

oschmod: Changing Database Permissions

The oschmod utility changes the permission mode for the specified databases and directories.

Syntax

oschmod [-R][-f] new_mode pathname ... 
new_mode 
Specifies the new permission mode for the specified databases and directories.

pathname ...
Specifies the databases and directories whose permission you want to change. You can specify both rawfs and file pathnames.

Options
-R 
Indicates that ObjectStore should change the permission recursively for all specified directories.

-f 
Forces execution. Errors are not reported.

Description

To change the permission mode for a database, you must be the owner of the database or, on UNIX, the superuser.

The new_mode argument can be absolute or symbolic.

Absolute mode
An absolute mode is an octal number constructed from the OR of the following modes (note that execute is meaningful only for directories):
400   Read by user.
200   Write by user.
100   Execute (search in directory) by user.
040   Read by group.
020   Write by group.
010   Execute (search) by group.
004   Read by others.
002   Write by others.
001   Execute (search) by others.
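
For example, an absolute mode of 750 is the OR of 400 + 200 + 100 (read, write, and execute for the user) with 040 + 010 (read and execute for the group); the pathname is hypothetical.

oschmod 750 elvis::/dbs/parts.db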

Symbolic mode
A symbolic mode has the form

[ who ] op permission [ op permission ] ...

who is a combination of
u 
User permissions

g 
Group permissions

o 
Others

a 
All, or ugo

If you omit who, the default is a, but the setting of the file creation mask (on UNIX, see umask in sh(1) or csh(1) for more information) is taken into account. When who is omitted, oschmod does not override the restrictions of your user mask.

op is one of
+ 
Add the permission

- 
Remove the permission.

= 
Assign the permission explicitly (all other bits for that category, owner, group, or others, are reset).

permission is any combination of
r 
Read

w 
Write

x 
Execute

Omitting permission is useful only with =, to remove all permissions.
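
A symbolic example, again with a hypothetical pathname, that removes write permission for group and others:

oschmod go-w elvis::/dbs/parts.db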

oschmod can perform wildcard processing. See "Wildcards".

When you specify a file database, you cannot specify a remote file-server host in the pathname of the file database. The oschmod utility passes the operation to a local native utility. If you specify a remote file-server host name, ObjectStore informs you that you specified an illegal pathname.

UNIX
When operating on a rawfs database, you must enclose the wildcard in quotation marks (" ") or precede it with a back slash (\) to keep the shell from interpreting wildcards.

API
Class: os_dbutil
Method: chmod

oschown: Changing Database Owners

The oschown utility changes the ownership of specified databases and directories.

Syntax

oschown [-R][-f] owner[.group] pathname ... 
owner 
Specifies the user name of the new owner of the specified databases and directories.

.group 
Specifies the group name of the specified databases and directories. Be sure to precede it with a period. Optional.

pathname 
Specifies the databases and directories whose owner you want to change. You can specify both file and rawfs pathnames.

Options
-R 
Indicates that ObjectStore should change the owner recursively for all specified directories.

-f 
Forces execution. Errors are not reported.

Description

This utility operates on rawfs databases and directories and file databases and directories.

When you specify a file database, you cannot specify a remote file-server host in the pathname of the file database. The oschown utility passes the operation to a local native utility. If you specify a remote file-server host name, ObjectStore informs you that you specified an illegal pathname.

oschown can perform wildcard processing. See "Wildcards".

UNIX
You must be the superuser to run this utility. The owner must be a user name in the password file, /etc/passwd. Only the superuser can change the owner of a directory or database. The group is a group name found in the GID file, /etc/group.

When operating on a rawfs database, you must enclose the wildcard with quotation marks ("") or precede it with a back slash (\) to keep the shell from interpreting wildcards. The -f and -R options are identical to the shell chown command's force and recursive options, respectively. The oschown utility accepts a combination of rawfs pathnames and file pathnames.
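
For instance, a superuser might transfer a rawfs directory tree to a new owner and group as follows; the user name, group name, and pathname are invented for the example.

oschown -R alice.develop elvis::/projects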

API
Class: os_dbutil
Method: chown

oscmrf: Deleting Cache and Commseg Files

The oscmrf utility instructs the Cache Manager on the specified host to delete the cache files and commseg files in its free pool.

Syntax

oscmrf [hostname] 
hostname 
Specifies the host of the Cache Manager that you want to instruct to delete cache and commseg files. Defaults to the local host.

Description

It is always safe to run this utility. The Cache Manager deletes only files that are not in use by any client.

After oscmrf runs, if an additional client appears, the Cache Manager must create new cache and commseg files. This is slightly slower than if it did not have to create these files.

Windows and OS/2
The Cache Manager does not use cache files or commseg files on OS/2 or Windows systems. However, you can use the oscmrf utility on these operating systems and specify hosts that do use cache files and commseg files.

Example

% oscmrf
Deleted 2 cache files and 2 commseg files.
% 
API
Class: os_dbutil
Method: cmgr_remove_file

oscmshtd: Shutting Down the Cache Manager

The oscmshtd utility shuts down the Cache Manager on the specified host.

Syntax

oscmshtd [hostname] [version]
hostname 
Specifies the host of the Cache Manager that you want to shut down. The default is the local host.

version 
Specifies the version of the Cache Manager that you want to shut down. The default is 4.

Description

Be sure to notify users before you shut down the Cache Manager.

Example

% oscmshtd
Shutting down Cache Manager process
% 
API
Class: os_dbutil
Method: cmgr_shutdown

oscmstat: Displaying Cache Manager Status

The oscmstat utility displays status information about the Cache Manager process running on the specified host.

Syntax

oscmstat [hostname] [version-number]
hostname 
Specifies the name of the host of the Cache Manager for which you want information. The default is the local host.

version-number 
Specifies the version of the Cache Manager for which you want information. The default is 4.

Description

The information provided by the oscmstat utility is useful for debugging the storage system.

If you do not specify a host name, the default is the local host.

The oscmstat utility prints one line for every Server to which the Cache Manager is connected. For each Server, it displays

UNIX
If the Cache Manager is running on a UNIX system, oscmstat also displays the names of cache and commseg files known to the Cache Manager. This is useful if you are trying to determine if files are in active use by ObjectStore, or are ObjectStore files no longer in use that can be deleted.

In oscmstat output, the second word of an ObjectStore file name is the name of the host that created and owns or owned the file. For example, for files named objectstore_doolittle_commseg_8 and objectstore_doolittle_cache_3, the host name is doolittle.

The command oscmstat doolittle displays the files that the Cache Manager daemon on host doolittle currently knows about. If your file is not on the list, it is no longer in use, and can be removed with oscmrf.
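
A hedged sketch of that check, assuming the file in question is objectstore_doolittle_cache_3:

% oscmstat doolittle   <-- confirm that the file no longer appears in the output
% oscmrf doolittle     <-- delete cache and commseg files that are no longer in use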

If oscmstat reports that there is no Cache Manager running, it is safe to delete the file, as long as you are certain that oscmstat did not fail due to temporary network failure or something similar.

Example

UNIX
Output on a UNIX workstation typically looks like the following:

kellen% oscmstat
ObjectStore Release 5.0 Cache Manager, Version 9.0.1
Process ID 6444. Executable is path.exe. 2
Host "kellen". Started at Sat May 20 14:54:05 1995
Soft Allocation Limit 0, Hard Allocation Limit 0. 3
Allocated: free 80568320, used 5775360. 4
Server host:   Client process ID:   Status for this host:
kellen         0                    Initializing: constructor finished
There is 1 client currently running on this host:
Free files (cache):
/tmp/ostore/objectstore_5_kellen_cache_1 (16777216) 5
/tmp/ostore/objectstore_5_kellen_cache_5 (8388608)
/tmp/ostore/objectstore_5_kellen_cache_7 (8388608)
/tmp/ostore/objectstore_5_kellen_cache_9 (8388608)
/tmp/ostore/objectstore_5_kellen_cache_11 (8388608)
/tmp/ostore/objectstore_5_kellen_cache_13 (8388608)
/tmp/ostore/objectstore_5_kellen_cache_15 (8388608)
/tmp/ostore/objectstore_5_kellen_cache_17 (8388608)
/tmp/ostore/objectstore_5_kellen_cache_19 (8388608)
In-use files (cache):
/tmp/ostore/objectstore_5_kellen_cache_3 (8388608)
Free files (commseg):
/tmp/ostore/objectstore_5_kellen_commseg_18 (278528)
/tmp/ostore/objectstore_5_kellen_commseg_16 (278528)
/tmp/ostore/objectstore_5_kellen_commseg_14 (262144)
/tmp/ostore/objectstore_5_kellen_commseg_12 (262144)
/tmp/ostore/objectstore_5_kellen_commseg_10 (983040)
/tmp/ostore/objectstore_5_kellen_commseg_8 (483328)
/tmp/ostore/objectstore_5_kellen_commseg_6 (344064)
/tmp/ostore/objectstore_5_kellen_commseg_4 (557056)
/tmp/ostore/objectstore_5_kellen_commseg_2 (1622016)
In-use files (commseg):
/tmp/ostore/objectstore_5_kellen_commseg_20 (262144)
Call Back Queue: Empty
Notifications 6
Client PID   QueueSize   Received From Server   Received By Client   Pending   Overflows   Notifier State
13149        100         1814                   1796                 0         18          waiting_for_notification
13145        43          904                    876                  0         5           waiting_for_notification
kellen%

1 Internal version number unrelated to ObjectStore release numbers.

2 Operating system process ID of the Cache Manager process. The executable that you are running oscmstat from is identified by path.exe.

3 The allocation limit parameters are as described in the parameter file.

4 Total sizes of the used pool and the free pool.

5 One line for each Server connection to the Cache Manager. This information is sometimes useful in debugging.

Cache file names end in odd numbers and commseg file names end in even numbers. The cache file whose name ends in 1 and the commseg file whose name ends in 2 go together. Likewise, the cache file whose name ends in 3 and the commseg file whose name ends in 4 go together, and so on.

One line for each client (ObjectStore application process) currently running on this host. For each client it gives the operating system process ID and user ID, the name of the client (assuming the client has called objectstore::set_client_name()), an internal version number that also has nothing to do with ObjectStore release numbers, and a virtual address within the Cache Manager that is useful in debugging the Cache Manager.

6 For each client, the oscmstat output displays the following notification information:

OS/2 or Windows
On Windows and OS/2 systems, there are no cache or commseg files so there is no mention of them in oscmstat output.

API
Class: os_dbutil
Method: cmgr_stat

oscompact: Compacting Databases

The oscompact utility removes deleted space in specified databases or segments.

Syntax

oscompact {-dbs_to_compact pathname ... | -segments_to_compact 
pathname segment_number [pathname segment_number] ... }
[-db_references pathname ...]
[-segment_references pathname segment_number
[pathname segment_number] ... ]
[-compaction_threshold percent_of_deleted_space]

Options
-dbs_to_compact 
pathname ...
Specifies one or more databases to compact. You must specify one of -dbs_to_compact or -segments_to_compact. You can specify both.

-segments_to_compact 
pathname segment_number 
Specifies one or more segments to compact. Identify each segment with its database pathname and segment number. You must specify one of -dbs_to_compact or -segments_to_compact. You can specify both.

-db_references pathname...
Specifies one or more databases that contain pointers or ObjectStore references to the databases and segments being compacted.

-segment_references 
pathname segment_number 
Specifies one or more segments that contain pointers or ObjectStore references to the databases and segments being compacted. Identify each segment with its database pathname and segment number.

-compaction_threshold 
percent_of_deleted_space 
Specifies the minimum percent of deleted space that a segment must have to be compacted. Segments with less than the specified percent of deleted space are not compacted. When you do not specify this option, oscompact compacts any segment that has internal deleted space.
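
As an illustration, the following hedged invocation compacts two databases, relocates pointers and references held in a third database, and skips any segment with less than 20 percent deleted space. The pathnames are hypothetical.

oscompact -dbs_to_compact elvis::/dbs/parts.db elvis::/dbs/orders.db -db_references elvis::/dbs/catalog.db -compaction_threshold 20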

Description

The oscompact utility runs as an ObjectStore client process. After you compact a database,

You can obtain segment numbers by running the ossize utility or calling the API function os_segment::get_number().

You can use the oscompact utility on both file databases and rawfs databases.

Compacting file databases
The segments in file databases are made up of extents, all of which are allocated in the space provided by the host operating system for the single host file. When there are no free extents left in the host file, and growth of an ObjectStore segment is required, the ObjectStore Server extends the host file to provide the additional space. The compactor permits holes contained in segments to be compacted for return to the allocation pool for the host file. This frees that space for use by other segments in the same database. However, since operating systems provide no mechanism to free disk space allocated to regions internal to the host file, any such free space remains inaccessible to other databases stored in other host files. In other words, compacting a file database does not reclaim space for use by other databases. See also oscp in oscp: Copying Databases.

Database size
Compacting a file database does not decrease its size, and might increase it to a small degree.

Compacting rawfs databases
The ObjectStore rawfs stores all databases in a single region, on either one or more host files or raw partitions. Any space in a rawfs that is freed by the compaction operation can be reused by any segment in any database stored in the rawfs.

What the compactor does
The compactor compacts all C and C++ persistent data, including ObjectStore collections, indexes, and bound queries, and correctly relocates pointers and all forms of ObjectStore references to compacted data. ObjectStore os_reference_local references are relocated assuming they are relative to the database containing them. The compactor respects ObjectStore clusters, in that compaction ensures that objects allocated in a particular cluster remain in the cluster, although the cluster itself may move as a result of compaction.

Caution
When you have cross-database references, be sure to compact the databases together or use protected references. Not doing so can destroy references. If you run the oscompact utility on, for example, databaseA, then os_references from databaseB to databaseA are no longer valid. Alternatively, if you use protected references from databaseB to databaseA, then compacting databaseA does not cause a problem.

Backing up compacted databases
When you run the oscompact utility on a database, it has the potential to modify each segment in the database. When you back up a database after compacting it, the osbackup utility copies each modified segment; this might be the entire database. Consequently, you might want to compact databases just before you perform a full backup. However, as a safeguard against unexpected results, it is a good idea to back up databases just before you compact them.

Restrictions
You must observe the following data restrictions when using the compactor:

Schema protection
When developing an application, if you are running this utility on a protected schema database, ensure that the correct key is specified for the environment variables OS_SCHEMA_KEY_LOW and OS_SCHEMA_KEY_HIGH. If the correct key is not specified for these variables, the utility fails. ObjectStore signals

err_schema_key _CT_invalid_schema_key,
"<err-0025-0151> The schema is protected and the key provided did not 
match the one in the schema."
When deploying an application, if your end users need to use the oscompact utility on protected schema databases, you must wrap the utility in an application. This application must use the API to provide the key before using the os_dbutil class to call the utility. End users need not know anything about the key. For information about wrapping your application around an ObjectStore utility, see the class os_dbutil in the ObjectStore C++ API Reference.

API
Class: objectstore
Method: compact

oscopy: Copying Databases

The oscopy utility makes a copy of an ObjectStore database. A key benefit of oscopy is that it performs transaction-consistent database copying without incurring locking conflicts.

You cannot use oscopy to copy databases to file directories, it cannot copy segment-level permissions, and it does not work with ObjectStore/Single. In these cases, use the oscp utility instead.

Syntax

oscopy source target
source 
Specifies the ObjectStore file or rawfs database to be copied.

target 
Specifies the pathname for the copy. ObjectStore either creates this database or overwrites it. The target directory must be a rawfs file system.

oscopy -R source_dir target_dir
source_dir 
Specifies the pathname of the rawfs directory to be copied.

target_dir 
Specifies the target rawfs pathname. If this directory does not exist, ObjectStore creates it, provided that its parent directory exists. ObjectStore recursively copies the source directory into the target directory.

oscopy source ... target
source 
Specifies the rawfs or file databases to be copied.

target 
Specifies the rawfs directory to contain the copies.

Options
-R 
Instructs oscopy to copy a directory recursively. You must specify a rawfs directory for both the source and destination pathnames. The top-level name of the destination pathname must exist before you issue oscopy.

Description

This command has three forms. The first copies a file or rawfs database to a rawfs file system. The second recursively copies a rawfs directory and its contents to another location. The third copies a database or databases to a rawfs file system. You can specify either file or rawfs databases as sources. Copies must always be made to rawfs file systems.
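
Hedged examples of the three forms, using invented pathnames and assuming the target rawfs directories already exist: copy a file database into a rawfs, copy a rawfs directory recursively, and copy several databases into a rawfs directory.

oscopy atlas:/dbs/parts.db elvis::/copies/parts.db
oscopy -R elvis::/projects elvis::/archive/projects
oscopy atlas:/dbs/parts.db elvis::/dbs/orders.db elvis::/copies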

Restrictions
You cannot specify wildcards in database pathnames.

Transaction consistency
When you specify more than one database as a copy source, oscopy ensures transaction consistency among the specified databases for a particular moment in time.

Database IDs
Native copy commands and the oscopy utility create copies with the same database ID as the original. This is important only if you have applications that rely on the uniqueness of these IDs. You can assign the copy a new, unique ID with os_database::set_new_id().

Schema protected databases
When you copy a schema-protected database without specifying the schema key, the copy has the same db_id as the original. If you supply the correct schema key, the copy has a new db_id. In both cases, the copy has the same schema key as the original, and the key is frozen in the copy if it is frozen in the original.

Copying a rawfs database to a file database
Copying a rawfs database to a file database results in the loss of segment-level access control information.

Database size might change
Your database might appear to have a different size after you use oscopy to copy it. This is because the Server might allocate the copy in a way that is different from the way it allocated the original database. Also, when you perform oscopy, the size of the database is known in advance, so the Server can make just the right amount of space available for the copy.

Variables that affect pathname interpretation
There are many conditions that can affect pathname interpretation:

When you copy a file and the result is not what you expect, be sure to consider these conditions.

oscp: Copying Databases

The oscp utility makes a copy of an ObjectStore database.

Unlike oscopy, you can use oscp to copy ObjectStore/Single databases and to copy databases to file directories. Note, however, that oscp can produce an inconsistent database if other clients are updating the database while oscp is running. In these cases, use the oscopy command.

Syntax

oscp [-L server_log] source target
source 
Specifies the ObjectStore file or rawfs database to be copied.

target 
Specifies the pathname for the copy. ObjectStore either creates this database or overwrites an existing database of this name.

oscp -R [-i] source_dir target_dir
source_dir 
Specifies the pathname of the rawfs directory to be copied.

target_dir 
Specifies the target rawfs pathname for the copy of the rawfs directory. If this directory does not exist, ObjectStore creates it, provided that its parent directory exists. ObjectStore recursively copies the source directory into the target directory.

oscp [-Ri] source ... target
source 
Specifies the rawfs databases to be copied.

target 
Specifies the rawfs directory to contain the copies.

Options
-i
Instructs oscp to prompt you to confirm whether or not to overwrite databases or directories at existing pathnames. If target does not exist, you do not see this prompt.

-L server_log 
When specified, the named file is used for the Server log file. When unspecified, a temporary file is used.

This option is only applicable when you are running the utility as an ObjectStore/Single application. If the file already exists, it must be a properly formed Server log.

-R 
Instructs oscp to copy a directory recursively. You must specify a rawfs directory for both the source and destination pathnames. The top-level name of the destination pathname must exist before you issue oscp.

Description

Restrictions
Using native copy commands
The oscp utility contacts the Server to ensure that the database being copied is transaction-consistent and fully up-to-date. The native copy commands do not do this. Therefore, you should only use native copy commands if the Server for the file database you want to copy has been shut down. (You cannot, of course, use native copy commands to copy a database to or from a rawfs.)

Using native copy commands sometimes produces a database copy that is not transaction consistent.

Attempting to operate on an inconsistent copy fails, signaling err_inconsistent_db.

Database IDs
Native copy commands and the oscp utility create copies with the same database ID as the original. This is important only if you have applications that rely on the uniqueness of these IDs. You can assign the copy a new, unique ID with os_database::set_new_id().

Schema-protected databases
When you copy a schema-protected database without specifying the schema key, the copy has the same db_id as the original. If you supply the correct schema key, the copy has a new db_id. In both cases, the copy has the same schema key as the original, and the key is frozen in the copy if it is frozen in the original.

Copying a rawfs database to a file database
Copying a rawfs database to a file database can result in the loss of segment-level access control information. This can happen because file databases do not maintain this information. The oscp utility issues a warning after copying a database if the source database had segment-level protections that could not be copied. If the source database is not using segment-level access control, nothing is lost and a warning is not displayed.

Database size might change
Your database might appear to have a different size after you use oscp to copy it. This is because the Server might allocate the copy in a way that is different from the way it allocated the original database. Also, when you perform oscp, the size of the database is known in advance, so the Server can make just the right amount of space available for the copy.

Variables that affect pathname interpretation
There are many conditions that can affect pathname interpretation.

When you copy a file and the result is not what you expect, be sure to consider these conditions.

Examples

These examples take two ObjectStore environment variables into account.

When OS_DIRMAN_HOST is set and OS_DIRMAN_USE_SERVER_PREFIX is set to Yes, ObjectStore applies the setting of OS_DIRMAN_HOST first, and then applies OS_DIRMAN_USE_SERVER_PREFIX.

Simple copy
Suppose neither variable is set and you invoke oscp as follows:

oscp /source/db1 /target/db2
If you do this on a host named atlas, then the full pathname of the copy is atlas:/target/db2, which is a Server-relative pathname. A Server-relative pathname is the operating system pathname as opposed to an ObjectStore rawfs pathname.

OS_DIRMAN_HOST
Now suppose that you set OS_DIRMAN_HOST to mars. If you execute the two commands below on atlas, each command produces the same result.

oscp /source/db1 /target/db2
oscp atlas:/source/db1 /target/db2
The first command line is the same as the simple copy example above, but the result is different from the previous example. ObjectStore interprets /source/db1 as mars::/source/db1. The full pathname of the copy is mars::/target/db2.

In the second command, ObjectStore interprets atlas:/source/db1 as mars::atlas:/source/db1 and then as mars::/source/db1. This is the default interpretation when OS_DIRMAN_USE_SERVER_PREFIX is not set.

OS_DIRMAN_USE_SERVER_PREFIX
In the next example, OS_DIRMAN_HOST is not set, but OS_DIRMAN_USE_SERVER_PREFIX is set to Yes. Invoke the following command on atlas:

oscp mars::atlas:/source/db1 /target/db2
ObjectStore interprets mars::atlas:/source/db1 as atlas:/source/db1. The full pathname of the copy is atlas:/target/db2.

Both variables set
In the last example, OS_DIRMAN_HOST is set to mars and OS_DIRMAN_USE_SERVER_PREFIX is set to Yes. Invoke the following oscp command on atlas:

oscp atlas:/source/db1 /target/db2
ObjectStore interprets atlas:/source/db1 in two steps.

  1. ObjectStore applies the setting of OS_DIRMAN_HOST, so the result is mars::atlas:/source/db1.

  2. ObjectStore applies the setting of OS_DIRMAN_USE_SERVER_PREFIX, so it interprets mars::atlas:/source/db1 as atlas:/source/db1.

The full pathname of the copy is mars:/target/db2.

API
Class: os_dbutil
Method: copy_database

osdf: Displaying Rawfs Disk Space Information

The osdf utility shows the amount of used and available disk space for the rawfs on the specified Server.

Syntax

osdf hostname
hostname 
The name of the host for which you want to display rawfs disk space information.

Description

If one or more of the partitions that make up the rawfs are expandable, and there is free disk space in the file system holding such a partition, the rawfs grows as needed. In this situation, the osdf utility does not show how much growth room is available.

Example

osdf elvis
Filesystem     kbytes    use    avail    capacity
elvis           95749    533    95215          0%
API
Class: os_dbutil
Method: disk_free

osdump: Dumping Databases

The osdump utility dumps a database or group of databases to ASCII and generates the source for a loader capable of creating equivalent databases.

Syntax

osdump [-pseudo] [-emit] pathname ...
pathname ...

One or more pathnames, separated by spaces, specifying the database or databases to be dumped, or (if -emit is supplied) specifying the database or databases for which loader source is to be emitted.

The file name at the end of each path must have the form filename.db, that is, it must have the extension .db.

Options
-emit
Tells osdump to generate the source code for a loader executable for the specified databases. To generate an ASCII dump of the specified databases, do not specify -emit.

-pseudo
When used with -emit, tells osdump to generate the files ldrcls00.h and ldrcls00.cpp, which contain pseudo declarations of the classes in the dumped databases. When used without -emit, this switch has no effect.

When you build the loader executable, you can specify the schema of the databases with either of the following:

If you want to use the original C++ code, do not supply -pseudo, and change #include "ldrcls00.h" in generated code to include the original .h files instead.

If you want to use ldrcls00.h and ldrcls00.cpp, supply -pseudo, and do not change the include lines.

Description

Each execution of osdump does one of the following: it either dumps the specified databases to ASCII (when -emit is not specified) or generates the loader source for them (when -emit is specified).

For each database or group of databases you want to dump, execute osdump twice: once without -emit to produce the ASCII dump, and once with -emit to generate the loader source.
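For example, assuming a hypothetical database named parts.db in the current directory, the two runs might look like this:

osdump parts.db
osdump -emit -pseudo parts.db

The first run produces the ASCII dump files; the second generates the loader source, including the pseudo class declarations.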

The dumped ASCII has a compact, human-readable format. It is editable with tools such as perl, awk, and sed. You can use edited or unedited ASCII as input to the loader.

The schema for the dumped databases is stored in a remote schema database associated with the dump.

See also osload: Loading Databases.

Default vs. Customized Dump and Load
You can use the default dump and load processes, or customize the dump and load of particular types of objects. Customization is appropriate for certain location-dependent structures, such as hash tables. To determine when to customize, see When is Customization Required? in this section. To learn how to customize, see Dump/Load Facility in the ObjectStore C++ Advanced API User Guide.

Databases containing unions, pointers to members, or multidimensional arrays cannot be dumped by this utility.

Generated ASCII Files
When invoked without -emit, osdump generates the following files in the current directory: a database table file, db_table.dmp, and an ASCII dump file, filename.dmp, for each dumped database filename.db.

See Default Dumper ASCII Format for a description of the layout of the generated ASCII files.

Generated Source and Makefiles
If osdump is invoked with -emit, it generates the following files in the current directory: the loader source files (including ldrcls00.h and ldrcls00.cpp when -pseudo is specified) and the generated makefiles, makefile.unx and makefile.w32.

If files with these names exist in the working directory, osdump overwrites them.

To build the loader from these files, use one of the generated makefiles. On UNIX, use the make utility and the osdump-generated makefile makefile.unx. On Windows, use nmake and the osdump-generated makefile makefile.w32.

Default Equivalence

This section defines equivalence for databases dumped and loaded by the default dump and load processes.

Roughly speaking, two databases are equivalent if every object in one database has a corresponding, equivalent object in the other database, where two objects are equivalent if

More precisely, two databases, db1 and db2, are equivalent if and only if there is a 1-1 mapping, map(), between objects in db1 and objects in db2 such that for every object, o1, in db1, o1 is equivalent to map(o1).

Two objects, o1 and o2, are equivalent (according to map()) if and only if all of the following hold:

When is Customization Required?

In most cases customization is not required. If you have a database with objects whose structure depends on the locations of other objects, you might have to customize the dumping and loading of those objects.

A dumped object and its equivalent loaded object do not necessarily have the same location, that is, the same offsets in their segment. Among the implications of this are the following: pointers to a dumped object are not valid for the corresponding loaded object, and values computed from an object's location, such as hash values, can differ between the dumped and loaded objects.

The default dumper and loader take into account the first implication, and the loader automatically adjusts all pointers in loaded databases to use the new locations.

The default dumper and loader also take into account the second implication for ObjectStore collections with hash-table representations. Since a dumped collection element hashes to a different value than the corresponding loaded element, their hash-table slots are different. So the facility does not simply dump and load the array of slots based on fundamental values (which would result in using the same slot for the dumped and loaded objects).

Instead, it dumps the collection in terms of sequences of high-level API operations (that is, string representations of create and insert arguments) that the loader can use to recreate the collection with the appropriate membership.

The default dumper and loader do not take into account the second implication for non-ObjectStore classes. If you have your own classes that use hash-table representations, you must customize their dumping and loading. Any other location-dependent details of data structures (such as encoded offsets) should also be dealt with through customization.

See Dump/Load Facility in the ObjectStore C++ Advanced API User Guide.

Performance

To enhance efficiency during a dump, database traversal is performed in address order whenever possible. To enhance efficiency during loads, loaders are generated by the dumper and tailored to the schema involved. This allows the elimination of most run-time schema lookups during the load.

Default Dumper ASCII Format

Each db_table.dmp file has the format for database_table described below.

For each dumped database, filename.db, filename.dmp has the format shown for database, below.

database_table ::= 
      databases [ number_databases ] 
            { database_entry [ database_entry ]* } 

number_databases ::= 
      the integer number of dumped databases

database_entry ::= 
      < 
            pathname 
            database_size 
            number_segments 
            odi_release 
            architecture (date) 
      > 

database ::= 
      database [database_index] pathname roots segments 

pathname ::= 
      the pathname of the database being dumped 

database_size ::= 
      the size of the database in an integral number of bytes 

number_segments ::= 
      the integral number of segments contained in the database 

odi_release ::= 
      the ObjectStore release information 

architecture ::= 
      the host architecture set for the database 

date ::= 
      the date the database was last modified 

database_index ::= 
      the index of this database within the list of databases being
      dumped (0-based)

roots ::= 
      roots [ number_roots ] { root [ , root ]* } 

root ::= 
      name ( Type ) id 

segments ::= 
      segments [ segment ]* 

segment ::= 
      segment segment_number [segment_size] 
            (pathname) data 

segment_number ::= 
      integral segment number of this segment within its database 

data ::= 
      ( objects | cluster )* 

cluster ::= 
      cluster [cluster_size] { objects } 

objects ::= 
      object* 

object ::= 
      id ( type ) value 

id ::= 
      <database_index,segment_number,offset> 

offset ::= 
      integral value denoting the byte offset of an object 
            within its segment 

type ::= 
      integral | real | pointer | reference | array | class 

value ::= 
      character | integral | floating_point | pointer_value | reference | 
            collection | string | array_elements | class_members 

integral ::= 
      char | signed char | unsigned char | signed short | 
            unsigned short | int | unsigned int | signed long | 
                  unsigned long 

real ::= 
      float | double 

pointer ::= 
      type* 

reference ::= 
      type& 

array ::= 
      array type [ size ] 

class ::= 
      ( class | struct | union ) name 

character ::= 'c' where c is any printable ascii character 

integral ::= any non-floating point decimal number 

floating_point ::= 
      any floating point decimal number 

pointer_value ::= 
      any hex unsigned integral number 

string ::= 
      "s" where s is any sequence of printable ascii characters 
            with '"' escaped as "\"" and '\' escaped as "\\". 

array_elements ::= 
      { value [ , value ]* } 

class_members ::= 
      { value [ , value ]* } 
Each object is emitted as a single line of text.

The special storage types cluster, segment, and database denote underlying ObjectStore storage structures. When a storage type appears, each object following is contained within that storage structure.

Other types denote C++ type constructs. Values appear as single values or as a bracketed comma separated list of values. Base class instances and other embedded subobjects are flattened into a class_members list.
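For illustration only, an object of a hypothetical class Part, with an integer member and a string member, stored at offset 1024 in segment 2 of the first dumped database, might be emitted as:

<0,2,1024> (class Part) { 42, "bolt" }

The id, class name, and member values here are invented; the line simply follows the object, id, and class_members productions above.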

ObjectStore references, collections, cursors, indexes, and queries are instances of ObjectStore types that require special treatment. The following special dump formats are used for them:

reference ::= 
      id

collection ::=        
      simple_collection | collection_with_representation_policy |
            dummy_cursor | cursor | cursor_with_index | 
                  cursor_with_range | collection_index | 
                        collection_element_load | collection_query 

simple_collection::= 
      [ behavior cardinality collection_type representation_enum ] 

behavior::= 
      integral

cardinality::= 
      integral

collection_type::= 
      string

representation_enum::= 
      integral

collection_with_representation_policy::= 
      [ behavior cardinality collection_type representation_enum ] 
            { [representation_enum]* } 

dummy_cursor::= 
      [ D ]

cursor::= 
      [ C collection_reference safe_flag ]

collection_reference::= 
      reference

safe_flag::= 
      integral

cursor_with_index::= 
      [  C collection_reference safe_flag
            { I path_name_length path_name element_name_length
                  element_name } ]

path_name_length::= 
      integral

path_name ::= 
      string

element_name_length::= 
      integral

element_name::= 
      string

cursor_with_range::= 
      [  C collection_reference safe_flag
            { R range_type key_type low_condition low_value H 
                  high_condition high_value } ]

range_type::= 
      integral

key_type::= 
      integral

low_condition::= 
      boolean

low_value::= 
      integral

high_condition::= 
      boolean

high_value::= 
      integral

boolean::= 
      0 | 1

collection_index::=  
      [ [{ path_name_length path_name element_type_length 
            element_type_name }]* ]

collection_element_load::= 
      [ [element_reference]* ]

element_reference::= 
      reference

element_type_length::= 
      integral

element_type_name::= 
      string

collection_query::= 
      [ element_type_length element_type  < query_string_length
            query_string > < file_name_length file_name > 
                  < line_number > ]

query_string_length::= 
      integral

query_string::= 
      string

file_name_length::=
      integral

file_name::= 
      string

line_number::= 
      integral

osexschm: Displaying Class Names in a Schema

The osexschm utility lists the names of all classes in the schema referenced by the specified database.

Syntax

osexschm [-detail] pathname
-detail 
Describes the structure of every class in detail.

pathname 
Specifies a file or rawfs database.

Description

For each class, osexschm indicates whether an object of the class type can be persistently allocated.
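For example, to list the classes in the schema of a hypothetical file database and describe the structure of each class in detail, you might run:

osexschm -detail /work/parts.db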

Schema protection
When developing an application, if you run this utility on a protected schema database, ensure that the correct key is specified in the environment variables OS_SCHEMA_KEY_LOW and OS_SCHEMA_KEY_HIGH. If the correct key is not specified in these variables, the utility fails and ObjectStore signals

err_schema_key _CT_invalid_schema_key,
"<err-0025-0151> The schema is protected and the key provided did not 
match the one in the schema."
API
None.

osgc: Garbage Collection Utility

Garbage collection frees storage associated with persistent objects that are unreachable. Applications can continue to use a database while garbage collection is in progress.

The command line utility for collecting garbage is osgc. Invoke this tool with the following format:

osgc [ options ] database_name 
You can specify the following options:
-seg segment_id 
Collects garbage from only the specified segment. By default, the osgc utility operates on the entire database.

-retries number 
Indicates the number of times the tool tries to resume the sweep phase of garbage collection after it waits for a lock. The default is 10.

-retryInterval interval 
Indicates the number of milliseconds the sweep operation waits between sweep attempts, giving a concurrency conflict time to be resolved before the sweep is retried. The default is 1000.

-lockTimeOut interval 
Indicates the number of milliseconds the sweep operation waits for a lock conflict to be resolved. If it is not resolved in the specified length of time, the tool aborts the current transaction and starts a new transaction. ObjectStore rounds this value up to the nearest second. The default is 1000.

-transactionPriority n 
Specifies the transaction priority associated with transactions started by the tool. The Server uses this value when it must choose a victim to resolve a deadlock. The default, 0, is intentionally low so that the garbage collection transaction is the preferred deadlock victim.

-displayGarbage level 
Displays information about the candidates for garbage collection instead of actually destroying the candidates. The level you specify determines the amount of information the tool displays. 1 lists the number of objects per segment that would be destroyed. 2 is not currently supported. 3 lists the location of each GC candidate. 4 lists the roots of garbage graphs. Level 4 can require intensive computations.

-statistics 
Displays statistics for the garbage collection operation. This includes the total number of reachable objects and the total number of garbage objects.

Performing Garbage Collection in a Database

The ObjectStore persistent garbage collector (GC) collects unreferenced objects and ObjectStore collections in an ObjectStore database.

Persistent garbage collection frees storage associated with objects that are unreachable. It does not move remaining objects to coalesce the free space. (See oscompact: Compacting Databases)

The GC performs its job in two major phases. In the mark phase, the GC identifies the unreachable objects. In the sweep phase, the GC frees the storage used by the unreachable objects.

A segment is the smallest storage unit that can be garbage collected. You can specify a segment or a database to be garbage collected.
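For example, the following commands (using a hypothetical database pathname) collect garbage from an entire database while displaying statistics, and from segment 4 only:

osgc -statistics /work/parts.db
osgc -seg 4 /work/parts.db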

C++ Usage note
Normally, databases created by ObjectStore applications written in C++ do not require garbage collection, since all storage allocation and deallocation is handled explicitly.

osgc can be useful as a debugging tool. For example, if unreferenced objects are being harvested, it's an indication of a persistent memory leak. The identity of these objects can be a clue to the root of the problem.

Restriction
Do not use osgc with applications that rely on cross-database pointers. The garbage collector operates on one database at a time. References from one database to another are not detected, so objects that are pointed to only by references from other databases are seen as unreferenced and are therefore removed.

Applications can continue to use a database while persistent GC is in progress. The GC locks portions of a segment as needed, just as any other application would. In this way, the GC minimizes the number of pages that are locked and the duration for which the locks are held. Also, the GC retries operations when it detects lock conflicts.

By default, the GC runs with a transaction priority of zero. Consequently, it is the preferred victim when the Server must terminate a transaction to resolve a deadlock. At a later time, the GC redoes the work that was lost when the transaction was aborted.

The GC uses read and write lock timeouts of short duration. This avoids competition with other processes for locks. If the GC cannot acquire a lock because of a timeout, it retries the operation at a later time.

osglob: Expanding File Names

The osglob utility performs ObjectStore file name expansion.

Syntax

osglob wordlist
wordlist 
Specifies strings, such as rawfs pathnames, containing wildcards that you want to expand into all matching pathnames.

Description

The osglob utility performs wildcard processing using the regular expression wildcards *, ?, {}, and [].

UNIX
When operating on a rawfs database, you must enclose the wildcard in quotation marks ("") or precede it with a back slash (\) to keep the shell from interpreting wildcards.
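For example, to expand a wildcard over a hypothetical rawfs directory, quoting the argument so that the shell does not interpret it:

osglob "mars::/proj/*.db"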

API
Class: os_dbutil
Method: expand_global

oshostof: Displaying Database Host Name

The oshostof utility displays the host of the specified database to standard output.

Syntax

oshostof pathname
pathname 
Specifies the database for which you want to display the host name.

Description

The oshostof utility can operate on file or rawfs databases.

Normal pathname syntax is supported, including the OS_DIRMAN_HOST compatibility feature.

When you specify a pathname that is a symbolic link, oshostof displays the host of the database that the link points to.

When you specify the pathname of a Server-remote database, the oshostof utility returns the name of the host where the database resides.

Examples

A typical use is as follows:

ossvrchkpt `oshostof a/b/c`
API
None.

osload: Loading Databases

To load a database or group of databases from an osdump-generated ASCII file, build the executable osload from the corresponding osdump-generated source.

On UNIX, use the make utility and the osdump-generated makefile makefile.unx. On Windows, use nmake and the osdump-generated makefile makefile.w32.

The utility osload creates a database or group of databases given osdump-generated ASCII as input. The resulting databases are equivalent to the ones from which the ASCII was produced.

Syntax

osload [ -cwd ] db_table.dmp pathname ...
db_table.dmp
Database table dump file generated by osdump. Records information about the dumped databases.

pathname ...
One or more pathnames, separated by spaces, specifying the ASCII dump files to be loaded.

Options
-cwd 
Tells osload to recreate databases in the current working directory.

Description

For given ASCII input, the databases created by osload have the same filenames as the databases from which the ASCII was generated (as stored in db_table.dmp).

If switch -cwd is not set, then the databases have the same pathnames (as stored in db_table.dmp). If files with the given paths already exist (for example, because the dumped databases are still in their original locations), osload aborts.

-cwd forces osload to ignore paths from db_table.dmp and create the databases in the current working directory.
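For example, assuming dump files produced by osdump for a hypothetical database parts.db, the following command recreates the database in the current working directory:

osload -cwd db_table.dmp parts.dmp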

osln: Creating Links in the Rawfs

The osln utility creates a symbolic link in the rawfs hierarchy.

Syntax

osln pathname linkname
pathname 
The pathname of the rawfs directory or database that you want to point to.

linkname 
The pathname of the rawfs directory or database that is the new link. It points to pathname.

Description

Different links can point to the same rawfs pathname.

To indicate hosts, specify pathnames in the form

host::/pathname

Limitation
To access a particular database or directory, a client can follow as many as 15 cross-Server links. For example, a client traverses a link to Server Q. Server Q sends the client to Server P. Server P sends the client to another Server, or even back to Server Q. Each connection to a Server counts as one link, regardless of whether the client already connected to that Server earlier in the chain. When the client reaches the sixteenth link, ObjectStore displays the error message err_too_many_cross_svr_links.

To access a particular database or directory in its rawfs, the Server can traverse as many as ten same-Server links. When the Server reaches the eleventh link, ObjectStore displays the error message err_too_many_links.

In a chain of links, a client can return to a Server that it contacted earlier in the chain. In this situation, the Server's count of links within its rawfs begins with one. It does not continue the count from where it left off during the previous connection. Each time a link sends the client to a Server, the Server can follow as many as ten links within its rawfs.

These limits allow ObjectStore to catch circular links. For example, A is a link to B, and B is either directly or indirectly a link to A.

When needed
Links within the rawfs are useful in many situations, including the following:

Removing a link
To remove a link, use the osrm utility. The syntax is osrm linkname.

See osrm: Removing Databases and Rawfs Links.

Examples

In the following example, link_to_db in canard's rawfs points to real_db in web-foot's rawfs.

osln web-foot::/real_db canard::/link_to_db
In the next example, link_to_db points to real_db and both databases are in the same rawfs.

osln web-foot::/real_db web-foot::/link_to_db
API
Class: os_dbutil
Method: make_link

osls: Displaying Directory Content

The osls utility lists the contents of the specified directory.

Syntax

osls [-dlRsu] pathname ...
pathname ...
Specifies one or more rawfs or native file directories for which you want to list the contents.

Options
-d 
Lists the information about the directory itself, rather than the contents. This option operates on rawfs directories only.

-l 
Displays information about directory contents in long format, including the size in bytes.

-R 
Recursively lists the contents of the specified directory.

-s 
Causes the size to be displayed in 1 KB blocks. This option operates on rawfs directories only.

-u 
Lists the user name of the owner of the contained databases. This option operates on rawfs directories only.

Description

When a pathname includes links, osls reports the pathname that the symbolic link chain points to, even if a different name was specified when the link was created.

The osls utility ignores trailing and multiple slashes in pathnames. It accepts a combination of rawfs pathnames and file pathnames.

When you specify a local directory, you cannot specify a remote file-Server host in the pathname of the local directory. The osls utility passes the operation to a local native utility. If you specify a remote file-Server host name, ObjectStore informs you that you specified an illegal pathname.

This utility can perform wildcard processing using regular expression wildcards *, ?, {}, and [].

UNIX
When operating on a rawfs database, you must enclose the wildcard in quotation marks ("") or precede it with a back slash (\) to keep the shell from interpreting wildcards.
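For example, to list a hypothetical rawfs directory recursively in long format:

osls -lR mars::/proj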

API
Class: os_dbutil
Method: list_directory

osmkdir: Creating a Rawfs Directory

The osmkdir utility creates a directory in the rawfs.

Syntax

osmkdir [-p] [-m octal-mode] directory

Options
-p 
Indicates that ObjectStore should create any missing directories that are needed to make the specified directory path exist.

-m octal-mode 
Specifies the permission mode (as an octal value) for the new directory. The default mode is 0700.

directory 
Specifies a rawfs directory pathname.

Description

You can also use osmkdir to create a nonrawfs directory. When you create a nonrawfs directory, you cannot specify a remote file-server host in the pathname of the nonrawfs directory. The osmkdir utility passes the operation to a local native utility. If you specify a remote file-server host name, ObjectStore informs you that you specified an illegal pathname. If you specify the -p option, it works if the native utility supports that feature.
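For example, to create a hypothetical rawfs directory with mode 0750, creating any missing parent directories:

osmkdir -p -m 0750 mars::/proj/dbdir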

API
Class: os_dbutil
Method: mkdir

osmv: Moving Directories and Databases

The osmv utility moves a database, directory, or link.

Syntax

Rawfs
osmv [-fi] p1 p2
osmv [-fi] p1... pn dir
File databases
osmv [-fi] p1 p2 
p1 
p1 ... pn
Specifies the pathname of a file database or a rawfs database, link, or directory that you want to move.

p2 
Specifies the new pathname for the file database or rawfs database, link, or directory. If p2 is a link to a directory, ObjectStore places p1 in the pointed-to directory.

dir 
Specifies a rawfs directory into which you want to move the specified rawfs databases, links, or directories.

Options
-f 
Forces execution. Errors are not reported.

-i 
Specifies interactive mode. ObjectStore prompts you to confirm for each specified database that you really want to move it.

Description

The osmv utility moves rawfs databases, directories, and links within a rawfs or from one rawfs to another. It also moves file databases within the file system. Moving an item to a new name in the same directory effectively renames it.

As shown in the Syntax section, there are three forms of the command line for the osmv utility.

In the first form, if p1 and p2 are rawfs databases or links, osmv moves (changes the name of) p1 to p2. If p2 already exists, the utility removes it and then moves p1 to p2. If p1 is a rawfs directory, then p2 must not already exist. osmv moves (changes the name of) the p1 directory to the p2 directory.

In the second form, osmv moves one or more databases, links, or directories into the last directory in the list. The utility maintains the original names of the moved entities. The directory into which you are moving items must already exist and you must have write permission for that directory.

In the third form, osmv moves (changes the name of) file database p1 to file database p2.

Procedure
When moving rawfs databases to another Server, the osmv utility moves an item by doing the following:

  1. Remove the destination, if it exists.

  2. Copy the source to the destination.

  3. Remove the source.

This allows for consistent databases in the event of a Server crash. If the Server crashes during the osmv operation, there might not be a destination database, but there would always be a source database. When moving rawfs paths on the same Server, ObjectStore directly renames the item.

Fix external pointers and references
After you move a database, you need to use the oschangedbref utility to fix external pointers and references. See oschangedbref: Changing External Database References.

Native move commands
While you can use native move commands to move ObjectStore file databases, you forfeit the database consistency protection that osmv provides. If the Server crashes before propagating all changes to the database, then the Server cannot find the changes at recovery time and the database is corrupted.

When you specify a file database, you can specify a host in the pathname of the file database.

osmv can perform wildcard processing using regular expression wildcards *, ?, {}, and [].

UNIX
When operating on a rawfs database, you must enclose the wildcard in quotation marks ("") or precede it with a back slash (\) to keep the shell from interpreting wildcards.
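For example, with hypothetical rawfs pathnames, the first command below renames a database and the second moves two databases into an existing directory:

osmv mars::/proj/old.db mars::/proj/new.db
osmv mars::/proj/a.db mars::/proj/b.db mars::/archive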

API
Class: os_dbutil
Method: rename

osprmgc: Trimming Persistent Relocation Maps

The osprmgc utility removes unneeded persistent relocation map entries (PRMEs), preventing persistent relocation maps (PRMs) from growing without bound (PRM bloat).

Syntax

osprmgc [-q] [-r] [-n N] [-t keyword] database-name 

Options
-q 
Quiet mode does not print results after every segment, but provides a report of total ranges found and total pages collected.

-r 
Read-only mode calculates how many ranges can be collected, but does not do the collection. It reads the database using MVCC.

The utility also produces a read-only report if you run it on a database for which you have only read permission. In that case, however, it does not use MVCC unless you specify -r.

-n N 
Specifies that osprmgc examine segment N only. This allows you to take advantage of the PRM reduction in one segment without subjecting the entire database to garbage collection.

-t keyword 
This option accepts the following keywords as values.

remove_whole_ranges - Default. Removes whole unused PRMEs. This is the only possible setting for use with immediate address-space assignment.

shrink_ranges - Removes whole unused PRMEs and shrinks any remaining non-huge PRMEs.

This setting minimizes address space as much as possible without increasing the number of PRMEs.

split_ranges - Removes whole unused PRMEs. Any remaining PRMEs are split and the unused space is removed. This setting best minimizes address-space usage, but can increase the number of PRMEs in existence.

coalesce_ranges - This setting can increase address-space usage, but it is the best setting for reducing the number of PRMEs.

Description

The osprmgc utility reduces the size of persistent relocation maps (PRMs) by removing unnecessary persistent relocation map entries (PRMEs). The PRM governs the translation of pseudoaddresses for persistent pointers into process addresses. Although PRMs are stored in a compact form, the maps an application actually uses, transient relocation maps (TRMs), are larger and consume transient heap memory. Large TRMs also have a significant impact on persistent address space in the process.

The PRM grows whenever you add a pointer that is not yet translated by an existing PRME; outbound relocation adds the necessary entry. The problem is that although pointers come and go (and once they are gone, the corresponding PRME might no longer be needed), PRMEs are not normally removed, so the PRM does not normally shrink.

If you are using relocation optimization, unnecessary PRM expansion can also occur because, in the interest of performance, the entire set of translations is carried over; as a result, the PRM can grow dramatically.

The osprmgc command-line utility is a PRME garbage collector that shrinks the size of the PRM so that it translates only the pointers currently existing in the segment. To decide when an entry should be removed, the utility looks at every data page in the segment to ensure that the entry is no longer needed.

Running the osprmgc utility reduces the size of the PRM, which in turn reduces the transient heap memory consumed by TRMs and the address space the application requires.

The utility uses one transaction per segment. It reports by segment and also provides a total number of ranges found and collected per database.
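For example, to trim the PRM of segment 2 only in a hypothetical database, shrinking any remaining non-huge PRMEs:

osprmgc -n 2 -t shrink_ranges /work/parts.db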

Additionally, an embedded form of the utility exists in an os_dbutil version. This is particularly useful for databases with discriminant unions. The format is as follows:

struct os_prmgc_options {
      os_boolean flag_quiet; // -q, default is false
      os_boolean flag_read_only; // -r, default is false
      os_boolean flag_one_segment; // -n, default is false
      os_unsigned_int32 one_segment_number; // the N in -n N
      os_prmgc_type prmgc_type; //-t default is remove_whole_ranges
};
Discriminant union considerations
If you have a database with discriminant unions, you must perform PRM garbage collection using the os_dbutil form, and link in the necessary discriminant functions.

Both the command-line and embedded versions of the utility use a streaming fetch policy. The embedded version ensures that the policy is restored to its original state (if different) to minimize impact on the application.

Environment Variables

By default, when a segment is put in use, address space for that segment is assigned immediately when inbound relocation optimization is possible. The default becomes deferred assignment when either inbound relocation optimization is not possible or immediate assignment would raise the amount of assigned address space above half of OS_AS_SIZE.

The defaults are effective for the large majority of conditions, but for extreme cases, there are override mechanisms. The default behavior can be overridden by the following environment variables:

See ObjectStore Management for a description of the use of these environment variables. Note that the existing environment variable OS_RELOPT_THRESH does not affect this choice. It is only used to decide if outbound relocation optimization is allowed.

API
Class: os_dbutil
Method: osprmgc

osprop: Propagating Server Logs

The osprop utility is an ObjectStore/Single utility that propagates committed data from Server logs to the affected databases.

Syntax

osprop [-f] server-log-name ... 

Options
-f 
Instructs osprop to ignore errors.

Description

ObjectStore/Single
The osprop utility ensures that committed data in the Server logs is propagated to the affected databases. osprop is meaningful only when run as an ObjectStore/Single application.

osprop performs a function similar to ossvrchkpt. The difference is that ossvrchkpt propagates what it can, immediately. osprop propagates everything, guaranteed, and deletes the log when done.

It is not usually necessary to run this utility because an ObjectStore/Single application that terminates normally always conducts propagation and removes the log.

After osprop successfully propagates data in a log, it removes the log. Running osprop twice on the same Server log is permissible.

Examples
osprop log1 log2 ...

Iteratively propagates committed data in the specified log files to the actual databases. Note that this is only meaningful if osprop is executed in an environment where the ObjectStore/Single version of libos is used.

osprop -f log3
Propagates committed data in the specified log file if the file exists and is a valid log. Otherwise, the file is ignored.

API
Class: objectstore
Method: propagate_log

osrecovr: Restoring Databases from Archive Logs

The osrecovr utility copies (rolls forward) database modifications from archive log files to the affected databases.

Syntax

osrecovr [options] [-f backup/log-file...] [pathname_translation...]
-f backup/log-file...
Specifies an archive log file from which to recover committed database changes made since the last backup.

You can specify the -f option zero, one, or more times. The osrecovr utility processes the files in the order in which you specify them.

If you do not specify the -f option, you must specify the -F option.

You can mix specifications of -f and -F. The osrecovr utility processes them in the order in which you specify them.

Specifying a directory signals an error.

pathname_translation... 
Specifies a pair of pathnames. The first pathname in the pair indicates the source of the database as recorded in the archive log or backup image. The second pathname indicates the target, that is, the pathname for the database after it is recovered.

You can specify zero, one, or more pathname translations. Each pathname can be a directory or a single database. However, you cannot specify a directory as the source and a database as the target.

If you do not specify at least one pathname_translation, all databases in the archive logs or backup images you specified are restored in their original locations.

Options
-c 
Directs the osrecovr utility to apply each archive log snapshot and each backup image in its own transaction. The default is for all changes to be applied in a single transaction.

-D date 
Specifies a date in the MM/DD/YY format. The osrecovr utility rolls forward all database changes committed before or on this date. The default is to roll forward to the last snapshot taken.

-F recover-file

Specifies the name of a file that contains a list of archive files or backup images from which to recover specified databases. If you specify "-" as the recover file name, osrecovr reads from standard input.

The list contains one file pathname per line. Leading and trailing white space is ignored.

If you specify the -F option, you can also specify -f with additional file names on the command line. You can mix specifications of -f and -F. osrecovr processes them in the order in which you specify them.

-n 
Normally, if a directory is specified as the source of a recovery operation, all databases in the directory and its subdirectories are recovered. Including the
-n option limits the recovery operation to databases contained in the named directory.

-r time 
Specifies a recover-to time in the HH:MM:SS format. The osrecovr utility rolls forward all database changes committed before or at this time. The default is to roll forward to the last snapshot taken.

-t 
Displays a list of databases contained in specified archive files.

Description

The osrecovr utility can apply changes up to the time of the last snapshot in the archive log, or to some earlier time that you specify.

The osrecovr utility can restore backups as well as recover data from archive logs, both in the same invocation.

When you run the osrestore or osrecovr utility, the operation is transaction-protected. This means that if the operation fails, ObjectStore rolls databases back to the state they were in before the operation started.

ObjectStore applications cannot access databases that are being restored until the entire restoration process has finished.

Specify a pathname_translation when you want to restore a database to a location different from its original location, or when you are recovering data on an architecture different from the one on which it was backed up.

When restoring data from tape, you must use the osrestore utility.

Run osrestore and then osrecovr
You must run the osrestore utility before the osrecovr utility if you performed both of the following steps:

  1. You used the osbackup utility to back up a database.

  2. You ran the osarchiv utility and used the same incremental backup record that you used for the osbackup utility.

Tradeoffs When Recovering in Several Transactions

You can specify the -c option to recover data in several transactions instead of one transaction. While this gives you flexibility, there is a tradeoff between the ability to roll back databases and the space needed in the log to record all modifications to databases being recovered.

For example, if you specify -c when you initiate osrecovr, ObjectStore recovers each snapshot in its own transaction. If the operation fails because of media failure while applying the last snapshot, ObjectStore rolls the databases back to the state they were in as of the last successfully applied snapshot.

However, suppose that each snapshot is 100 MB. This requires 100 MB of log space. If you ensure that the database does not exist when the recover operation starts, and if you apply all snapshots in a single transaction, then all recovered data bypasses the log and goes directly to the database. Now, if the operation fails, ObjectStore rolls all changes back, including the database creation.

The fundamental tradeoff is between the ability to roll back to a previous state and the resources needed to log the changes that make rollback possible. In cases where the size of the databases being recovered exceeds the space available (or desirable) for logging, it is preferable to omit -c and apply all changes in a single transaction so that the recovered data can bypass the log.

Examples

Listing archive contents
% osrecovr -f /vancouver1/archives/96011216.aaa -t
Recovering from volume #1 (/vancouver1/archives/96011216.aaa)...
vancouver::/foo.db
vancouver::/dbdir/bar.db
vancouver::/dbdir/foo.db
Closing volume #1 (/vancouver1/archives/96011216.aaa).
%
Recovering from
a single archive
% osrecovr -f /vancouver1/archives/96011216.aaa
Recovering from volume #1 (/vancouver1/archives/96011216.aaa)...
Target time: Thu Jan 12 17:28:22 1996
Recovered to time Thu Jan 12 16:25:27 1996
Recovered to time Thu Jan 12 16:25:57 1996
Recovered to time Thu Jan 12 16:26:11 1996
Restoring 452 sectors to database "vancouver:/vancouver1/dbdir/foo.db"
Recovered to time Thu Jan 12 16:26:41 1996
Recovered to time Thu Jan 12 16:27:13 1996
Recovered to time Thu Jan 12 16:27:43 1996
Recovered to time Thu Jan 12 16:28:14 1996
Closing volume #1 (/vancouver1/archives/96011216.aaa).
%
Recovering back to place
The next example restores all databases to their location and state as of 16:25:27 on January 12, 1996.

% osrecovr -f /vancouver1/archives/96011216.aaa -r 16:25:27
Recovering from volume #1 (/vancouver1/archives/96011216.aaa)...
Target time: Thu Jan 12 16:25:27 1996
Recovered to time Thu Jan 12 16:25:27 1996
Closing volume #1 (/vancouver1/archives/96011216.aaa).
Recovering from multiple archive files
% cat ./archive_list
/vancouver1/archives/96011216.aaa
/vancouver1/archives/96011216.aab
/vancouver1/archives/96011216.aac
% osrecovr -t -F ./archive_list
Recovering from volume #1 (/vancouver1/archives/96011216.aaa)...
vancouver::/foo.db
vancouver::/dbdir/bar.db
vancouver::/dbdir/foo.db
Closing volume #1 (/vancouver1/archives/96011216.aaa).
% osrecovr -F ./archive_list
Recovering from volume #1 (/vancouver1/archives/96011216.aaa)...
Target time: Thu Jan 12 17:27:01 1996
Recovered to time Thu Jan 12 16:25:27 1996
Recovered to time Thu Jan 12 16:25:57 1996
Recovered to time Thu Jan 12 16:26:11 1996
Restoring 452 sectors to database "vancouver:/vancouver1/dbdir/foo.db"
Recovered to time Thu Jan 12 16:26:41 1996
Recovered to time Thu Jan 12 16:27:13 1996
Recovered to time Thu Jan 12 16:27:43 1996
Recovered to time Thu Jan 12 16:28:14 1996
Closing volume #1 (/vancouver1/archives/96011216.aaa).
Auto switching to volume #2 (/vancouver1/archives/96011216.aab).
Recovering from volume #2 (/vancouver1/archives/96011216.aab)...
Recovered to time Thu Jan 12 16:28:21 1996
Recovered to time Thu Jan 12 16:28:35 1996
Recovered to time Thu Jan 12 16:28:37 1996
Recovered to time Thu Jan 12 16:28:38 1996
Recovered to time Thu Jan 12 16:28:40 1996
Recovered to time Thu Jan 12 16:28:41 1996
Recovered to time Thu Jan 12 16:28:49 1996
Recovered to time Thu Jan 12 16:28:55 1996
Recovered to time Thu Jan 12 16:29:01 1996
Recovered to time Thu Jan 12 16:29:06 1996
Recovered to time Thu Jan 12 16:29:12 1996
Recovered to time Thu Jan 12 16:29:17 1996
Recovered to time Thu Jan 12 16:29:23 1996
Recovered to time Thu Jan 12 16:29:28 1996
Recovered to time Thu Jan 12 16:29:34 1996
Recovered to time Thu Jan 12 16:29:39 1996
Recovered to time Thu Jan 12 16:29:43 1996
Recovered to time Thu Jan 12 16:29:44 1996
Recovered to time Thu Jan 12 16:29:49 1996
Recovered to time Thu Jan 12 16:29:55 1996
Recovered to time Thu Jan 12 16:30:01 1996
Closing volume #2 (/vancouver1/archives/96011216.aab).
Auto switching to volume #3 (/vancouver1/archives/96011216.aac).
Recovering from volume #3 (/vancouver1/archives/96011216.aac)...
Recovered to time Thu Jan 12 16:31:04 1996
Recovered to time Thu Jan 12 16:31:06 1996
Closing volume #3 (/vancouver1/archives/96011216.aac).
%
Recovering to a date and time
% osrecovr -F ./archive_list -D 1/12/96 -r 16:27:43
Recovering from volume #1 (/vancouver1/archives/96011216.aaa)...
Target time: Thu Jan 12 16:27:43 1996
Recovered to time Thu Jan 12 16:25:27 1996
Recovered to time Thu Jan 12 16:25:57 1996
Recovered to time Thu Jan 12 16:26:11 1996
Restoring 452 sectors to database "vancouver:/vancouver1/dbdir/foo.db"
Recovered to time Thu Jan 12 16:26:41 1996
Recovered to time Thu Jan 12 16:27:13 1996
Recovered to time Thu Jan 12 16:27:43 1996
Closing volume #1 (/vancouver1/archives/96011216.aaa).
%
Recovering to a time today
% osrecovr -F ./archive_list -r 16:27:43
Recovering from volume #1 (/vancouver1/archives/96011216.aaa)...
Target time: Thu Jan 12 16:27:43 1996
Recovered to time Thu Jan 12 16:25:27 1996
Recovered to time Thu Jan 12 16:25:57 1996
Recovered to time Thu Jan 12 16:26:11 1996
Restoring 452 sectors to database "vancouver:/vancouver1/dbdir/foo.db"
Recovered to time Thu Jan 12 16:26:41 1996
Recovered to time Thu Jan 12 16:27:13 1996
Recovered to time Thu Jan 12 16:27:43 1996
Closing volume #1 (/vancouver1/archives/96011216.aaa).
%
Recovering a single database
The next example makes vancouver::/bar.db equal to vancouver::/foo.db as of 16:27:43 today.

% osrecovr -F ./archive_list -r 16:27:43 vancouver::/foo.db \
vancouver::/bar.db
Recovering from volume #1 (/vancouver1/archives/96011216.aaa)...
Target time: Thu Jan 12 16:27:43 1996
Recovered to time Thu Jan 12 16:25:27 1996
Recovered to time Thu Jan 12 16:25:57 1996
Recovered to time Thu Jan 12 16:26:11 1996
Recovered to time Thu Jan 12 16:26:41 1996
Recovered to time Thu Jan 12 16:27:13 1996
Recovered to time Thu Jan 12 16:27:43 1996
Closing volume #1 (/vancouver1/archives/96011216.aaa).
% osls vancouver::/
bar.db
dbdir/
foo.db
%

Examples of Recovery Failures

Nonexistent database
% osrecovr -f /vancouver1/archives/96011216.aaa -r 16:25:27 \
vancouver::/asdla.db vancouver::/as.db
Recovering from volume #1 (/vancouver1/archives/96011216.aaa)...
Closing volume #1 (/vancouver1/archives/96011216.aaa).
Recover failed: Database vancouver::/asdla.db does not exist in this 
backup image
%
Day not in
the archives
% osrecovr -F ./archive_list -D 1/11
Recovering from volume #1 (/vancouver1/archives/96011216.aaa)...
Target time: Wed Jan 11 17:29:08 1996
Closing volume #1 (/vancouver1/archives/96011216.aaa).
%
Day/year
not in archives
% osrecovr -F ./archive_list -D 1/11/95
Recovering from volume #1 (/vancouver1/archives/96011216.aaa)...
Target time: Tue Jan 11 17:29:51 1995
Closing volume #1 (/vancouver1/archives/96011216.aaa).
%
API
None.

osreplic: Replicating Databases

The osreplic utility replicates and maintains multiple copies of a database.

Syntax

osreplic [-r] [-v] [-x] [-i interval] [-p] [-y] [-B size] [-I import_file] -a archive_record_file src_path1 dest_path1 [src_path2 dest_path2 ...]

Options
-a archive_record_file 
Archive. Required. Specifies the archive record file. If the file does not exist, it is created.

-B size 
Buffer. Controls the amount of transient workspace available to the source server.

size is a number optionally appended with k, m, or g to indicate kilobytes, megabytes, or gigabytes, respectively. If no unit is specified, m is presumed. For example, -B 1024k, -B 1m, and -B 1 each specify a maximum buffer size of 1 megabyte. The default value is 1 MB.

-i interval 
Interval. Sets the interval between snapshots. The default is 600 seconds. A copy is made immediately after osreplic is initiated and then every interval thereafter.


Intervals are specified with integer values. Append m, h, or d to a value to indicate minutes, hours, or days, respectively. By default, values are interpreted as seconds. For example, -i 60 and -i 1m both specify an interval of one minute.

-I import_file 
Import (uppercase I). Source and destination databases and directories can be specified with a separate input file as well as on the command line. Each line in the input file should consist of a source path followed by a target path. If a directory is specified, its contents are added to the source set.

-p 
Permissions. Sets database ACLs on the replica to match those of the master (rawfs only).

-r 
Recursive. Enables recursive processing of rawfs directories.

-v 
Verbose. Enables verbose output.

-x 
Exclude. Prohibits clients from using the replica until osreplic terminates.

-y 
Yes. Confirms the restart of the replicator for an existing master/replica pair and bypasses the usual prompt after osreplic starts. When using this option, be sure beforehand that the replica is unchanged since the last update.

Description

The ObjectStore replicator produces a continuously updated copy (or replica) of one or more user databases. The utility works by coordinating the actions of a source ObjectStore Server running an archive logger and a target ObjectStore Server, providing a read-only (MVCC) copy of a database that is dynamically updated from the master database.

ObjectStore Release 4 and later rawfs databases and rawfs directories can be replicated, as can Release 4 and later file databases. Native file system directories cannot be replicated.

When you start the replicator, you specify a set of sources and destinations for replicated databases on the command line or with a separate input file using the -I (I) option. The master/replica pair is specified by pathname with the src_path and dest_path arguments, including the target host if needed. The list of databases cannot be changed once the replicator is started.

src_path and dest_path can be a directory path or a database name. You can also use UNC pathnames (on Windows platforms), Server relative pathnames, or local pathnames. However, you cannot use UNC pathnames as destinations.

You must also specify an archive_record_file with the -a option. The osreplic utility uses this information to determine which segments within a database have been modified since the last replication. This file is identical to the archive record file for osarchiv.

At specified intervals, the replicator takes a snapshot of the databases and sends the changed data to the target host so the data is applied to the replica. All committed user data is replicated.

Database ACLs (Access Control Lists, including owner, group, mode) can also be copied for rawfs databases. However, neither rawfs directory ACLs nor segment-level permissions are copied. Additionally, no file database ACLs are copied.

Operations such as osrm are not propagated to the replica.
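For example, with hypothetical pathnames, the following command replicates a master rawfs database to a replica on another Server every ten minutes, with verbose output:

osreplic -v -i 10m -a /archives/replica.arc master::/prod/parts.db target::/replica/parts.db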

API
None.

osrestore: Restoring Databases from Backups

The osrestore utility copies databases from backup storage locations to your disk or rawfs. Backups must have been created with the osbackup utility.

Syntax

osrestore [options] -f backup-file [-f backup-file]... [pathname_translation]...
-f backup-file 
Specifies a file or tape device that contains a backup image from which to restore databases. You can specify the -f option one or more times. Required.

On UNIX systems, you can specify -f - (hyphen) to indicate stdin.

pathname_translation... 
Specifies a pair of pathnames separated by a space. The first pathname in the pair indicates the source of the database as recorded in the backup image. The second pathname indicates the target, that is, the pathname for the database after it is restored.

You can specify zero, one, or more pathname translations. Each pathname can be a directory or a single database. However, you cannot specify a directory as the source and a database as the target.

If you do not specify at least one pathname_translation, all databases in the backup image are restored in their original locations.

Options
-a 
Aborts the restore operation if the utility cannot open the restore device. This raises an exception that indicates the problem.

The default is that if the restore utility fails to open the device, it displays a message and waits for you to correct the problem.

Examples of failure to open the device are having a write-protected tape or no tape loaded.

-b blocking-factor 
Specifies a blocking factor to use for tape input and output. This parameter applies only when you are restoring data from a tape. The blocking factor is in units of 512-byte blocks. The default on UNIX is 126 blocks. The maximum blocking factor is 512 blocks.

-n 
Normally, if a directory is specified as a source for osrestore, all databases in the directory and its subdirectories are restored. Including the -n option limits the operation to databases in the named directory.

-O 
Restores the database image specified with the -f flag and then exits. There is no prompt for additional volumes.

-p 
The -p (permissions) option causes osrestore to restore database ACLs for the rawfs stored in the archive log file for the database being restored.

-S exec_command_name 
Specifies the pathname of a command to be executed when the osrestore utility reaches the end of the media. This command should mount the next volume before returning. The exit status from this command must be 0 or the restore operation aborts. Note that this option is an uppercase S.

-t 
Displays a list of databases in the backup image.

Description

ObjectStore applications cannot access databases that are being restored until the entire restoration process has finished.

Specify a pathname_translation when you want to restore a database to a location different from its original location, or when you are restoring data on an architecture different from the one on which the backup was made.

Procedure
To restore databases, begin with a level 0 backup image. The osrestore utility then prompts for incremental backup images you might want to apply. Not all incremental backups need to be applied. To determine which incremental backups to apply, list the backup levels in chronological order, starting with the level 0 backup. For example, suppose you performed a level 0 backup on Monday, level 5 on Tuesday, level 6 on Wednesday, level 2 on Thursday, and level 4 on Friday.

Your list would look like this: 0, 5, 6, 2, 4.

Scanning the list from right to left, find the lowest incremental backup level greater than 0, in this case, the level 2 backup made on Thursday. To restore databases to their state as of the backup on Friday, apply the level 0 backup and the incremental backups made at levels 2 and 4, in that order.

Block size
The block size must be 512 bytes or less. The osrestore utility cannot work when the block size is greater than 512 bytes.

Comparing databases
You might want to have two copies of the same database for verification purposes - a restored version and the original version. Here is a sample command line for doing this. In this example, backup.img contains foo::/db. The pathname translation does the job in one step.

osrestore -f backup.img foo::/db foo::/restore.db
Windows to UNIX pathname translation example
You must specify a pathname translation when you restore or recover data on an architecture that is different from the architecture on which the backup was made. For example, here is a Windows NT to UNIX pathname translation. The backup image being restored is /tmp/my.img. The interaction is on a UNIX system. You do not need to do anything special when you make the backup on the Windows NT system.

In the first interaction, the command line specifies the -t option, which instructs the osrestore utility to list the databases in the specified backup image. Nothing is actually restored. The only database in the backup image is mckinley:e:\r4tsd_data\arch.0. This is a Windows NT database, and the following example shows that the osrestore utility on a UNIX system translates it to mckinley:e:/r4tsd_data/arch.0. The utility automatically translates back slashes (\) to slashes (/).

% osrestore -f /tmp/my.img -t
Recovering from volume #1 (/tmp/my.img)...
mckinley:e:/r4tsd_data/arch.0
Closing volume #1 (/tmp/my.img).
%
In the second interaction, the command line specifies the pathname translation mckinley:e:/r4tsd_data/ /recovery. This instructs the osrestore utility to copy all files in the backup image in the mckinley:e:/r4tsd_data/ directory to the /recovery directory on the local machine. In this example, this is only arch.0.

% osrestore -f /tmp/my.img mckinley:e:/r4tsd_data/ /recovery
Recovering from volume #1 (/tmp/my.img)...
Restoring 3175 sectors to database "vancouver:/recovery/arch.0"
Recovered to time Fri Mar 3 14:07:24 1995
Do you wish to restore from any additional incremental backups? 
(yes/no):
no
Closing volume #1 (/tmp/my.img).
%

Examples

The following examples illustrate some uses of osrestore. Although it is not shown, osrestore prompts you to indicate whether you want to restore from incremental backups.

The examples are UNIX examples; however, they would be the same on any platform except for the file name format.

Listing databases in backup image
This example displays a list of databases in the backup.img backup image.

% osrestore -t -f /backup.img 
::eudyp:/test/ 
::eudyp:/test:       data1.odb       data2.odb       data3.odb
::cleopat:/results/
::cleopat:/results:       r1.odb       r2.odb       r3.odb
This indicates that the backup image contains six file databases. Three are in the /test directory; they were backed up on host eudyp. Three are in the /results directory; they were backed up on host cleopat.

Copying backups to new Servers
Restore all databases on Server eudyp to Server kellen, and all databases on Server cleopat to Server eudyp:

% osrestore -f backup.img   eudyp:/ kellen:/   cleopat:/ eudyp:/
restoring "::eudyp:/test/data1.odb" to "::kellen:/test/data1.odb"   
restoring "::eudyp:/test/data2.odb" to "::kellen:/test/data2.odb"
restoring "::eudyp:/test/data3.odb" to "::kellen:/test/data3.odb"
restoring "::cleopat:/results/r1.odb" to "::eudyp:/results/r1.odb"
restoring "::cleopat:/results/r2.odb" to "::eudyp:/results/r2.odb"
restoring "::cleopat:/results/r3.odb" to "::eudyp:/results/r3.odb"
Changing Servers and directories
Restore all databases in the /test directory on Server eudyp into the /test-copy directory on Server kellen:

% osrestore -f backup.img   eudyp:/test   kellen:/test-copy
restoring "::eudyp:/test/data1.odb" to "::kellen:/test-copy/data1.odb"   
restoring "::eudyp:/test/data2.odb" to "::kellen:/test-copy/data2.odb"
restoring "::eudyp:/test/data3.odb" to "::kellen:/test-copy/data3.odb"
Restoring a single database
Restore the database eudyp:/test/data1.odb to /tmp:

% osrestore -f backup.img   eudyp:/test/data1.odb   eudyp:/tmp
restoring "::eudyp:/test/data1.odb" to "::eudyp:/tmp/data1.odb" 
Restoring to source with one exception
Restore everything in the /test directory on Server eudyp to its original location, except data1.odb, which is restored to the /example directory on Server cleopat.

% osrestore -f backup.img   eudyp:/test/data1.odb cleopat:/example   eudyp:/test eudyp:/test
restoring "::eudyp:/test/data1.odb" to "::cleopat:/example/data1.odb"   
restoring "::eudyp:/test/data2.odb" to "::eudyp:/test/data2.odb"
restoring "::eudyp:/test/data3.odb" to "::eudyp:/test/data3.odb"
In this example, the order of the pathname translations is important. Specify specific pathnames before you specify directories that include those pathnames.

Restoring to source
Restore the entire backup image to its original location.

% osrestore -f backup.img
restoring "::eudyp:/test/data1.odb" to "::eudyp:/test/data1.odb" 
restoring "::eudyp:/test/data2.odb" to "::eudyp:/test/data2.odb" 
restoring "::eudyp:/test/data3.odb" to "::eudyp:/test/data3.odb" 
restoring "::cleopat:/results/r1.odb" to "::cleopat:/results/r1.odb" 
restoring "::cleopat:/results/r2.odb" to "::cleopat:/results/r2.odb" 
restoring "::cleopat:/results/r3.odb" to "::cleopat:/results/r3.odb" 
Restoring all to a local directory
Restore the entire backup image into the /examples directory on the local host (twinkie).

% osrestore -f back.img   eudyp:/test /examples   cleopat:/results /examples
restoring "::eudyp:/test/data1.odb" to "::twinkie:/examples/data1.odb" 
restoring "::eudyp:/test/data2.odb" to "::twinkie:/examples/data2.odb" 
restoring "::eudyp:/test/data3.odb" to "::twinkie:/examples/data3.odb" 
restoring "::cleopat:/results/r1.odb" to "::twinkie:/examples/r1.odb" 
restoring "::cleopat:/results/r2.odb" to "::twinkie:/examples/r2.odb" 
restoring "::cleopat:/results/r3.odb" to "::twinkie:/examples/r3.odb" 
API
None.

osrm: Removing Databases and Rawfs Links

The osrm utility removes databases and rawfs links from Servers.

Syntax

osrm [-f] [-i] [-r] pathname... 
pathname...
Specifies the file or rawfs database or directory, or rawfs link, that you want to remove. You can specify one or more. You can specify both file and rawfs databases and directories and rawfs links in the same operation. You must specify the -r option if you want to remove a rawfs directory.

Options
-f 
Forces execution of the utility and does not display an error message if the specified database is not found or cannot be removed. This option is required when you want to remove nondatabase files from the native file system.

-i 
Specifies interactive mode. ObjectStore prompts you to confirm for each specified database that you really want to remove it.

-r 
Recursively removes all databases in the specified directory. On OS/2, this option works only on rawfs directories.

Description

To remove a database, you must have write permission to its directory, but you do not need write access to the database itself.

If you specify more than one database to be removed and for some reason ObjectStore cannot remove at least one of the databases, then ObjectStore does not remove any of the databases.

If a database is open when you remove it with the osrm utility, ObjectStore does not actually remove it until it is closed. Transactions can update the removed database until the database is closed.

The osrm utility can perform wildcard processing using regular expression wildcards *, ?, {}, and [].

For file databases, the osrm utility calls the native remove utility.

UNIX
When operating on a rawfs database, you must enclose the wildcard in quotation marks ("") or precede it with a backslash (\) to keep the shell from interpreting the wildcards itself.

API
Class: os_dbutil
Method: remove
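
A deployed administration tool can remove databases through the API rather than by invoking the osrm executable. The following is a minimal sketch only; the header location and the single-pathname argument shown for os_dbutil::remove are assumptions, so verify them against the ObjectStore C++ API Reference.

#include <ostore/ostore.hh>
#include <ostore/dbutil.hh>   // assumed location of the os_dbutil declarations; verify for your release

// Remove one database through the API instead of the osrm executable.
// Assumes standard ObjectStore client initialization has already been
// performed and that os_dbutil::remove() takes the database pathname.
void remove_database(const char* pathname)
{
    os_dbutil::remove(pathname);
}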

osrmdir: Removing a Rawfs Directory

The osrmdir utility removes a directory from the rawfs.

Syntax

osrmdir directory
directory 
Specifies the pathname of the directory that you want to remove from the rawfs.

Description

To remove a directory from the rawfs, the directory must be empty. Also, you must have write permission for the parent directory. You do not need write permission for the directory you are removing.

You can also use osrmdir to remove a local directory. When you specify a local directory, you cannot specify a remote file-Server host in the pathname of the local directory. The osrmdir utility passes the operation to a local native utility. If you specify a remote file-Server host name, ObjectStore informs you that you specified an illegal pathname.

Wildcards
The osrmdir utility can perform wildcard processing using regular expression wildcards *, ?, {}, and [].

UNIX
On UNIX, you must enclose the wildcard in quotation marks ("") or precede it with a backslash (\) to keep the shell from interpreting the wildcard itself.

API
Class: os_dbutil
Method: rmdir

osscheq: Comparing Schemas

The osscheq utility compares two schemas.

Syntax

osscheq [-quiet] db1 db2
db1
db2
Specifies the pathnames of the two databases you want to compare. Each database can contain an application schema, a compilation schema, or a database schema. If the database contains a database schema, it can be local or remote.

-quiet 
Suppresses explanatory output. The utility returns a value of 0 if the schemas are compatible and a nonzero value if they are not; with -quiet, the exit value is the only output. When you do not specify this option, the utility also displays messages explaining why the schemas are different.

Description

The osscheq utility is useful when you suspect that a change to a schema has made it incompatible with the other schemas in an application. It is best to detect an incompatibility as early as possible. When schemas are not compatible, execution of the application fails with a schema validation error.

Example

For example, suppose the database test1 contains the following definition for C:

class C {
      int del ;
      int mod ;
} ;
Database test2 defines C as follows:

class C {
      int add ;
      char* mod ;
} ;
Invoke the osscheq utility as follows:

osscheq test1 test2
The result is the following output:

The following class definitions in test1 and test2 were inconsistent:
C ( C.del was deleted, the type of C.mod changed (from int to char*), and 
C.add was added)
Comparison technique
The comparison technique depends on the types of schemas being compared. When comparing compilation or application schemas, ObjectStore uses the technique used by the schema generator when building compilation or application schemas. When one of the schemas being compared is a database schema, the comparison technique is the same as that used to validate an application when it accesses a database.

Schema checking done by the schema generator is a stricter form of checking than that used to validate an application against a database. The latter form of checking is the minimal checking required to ensure that the application and the database use the same layout for all shared classes.

API
None.

osserver: Starting the Server

The osserver utility starts the Server. The procedure for starting the Server varies from platform to platform; see the chapter for your platform for details.

Syntax

osserver options 

Options

Ordinarily, you use Server parameters to control how the Server functions. However, you can also specify options when you execute the osserver utility.
-c 
Checkpoint. Forces all data to be propagated from the log to the database. The Server does not start after this checkpoint.

-d int 
Starts the Server in debug mode. Specify an integer from 1 through 50. The larger the number, the more information ObjectStore provides. You can also specify the -F option so that ObjectStore displays the information on the screen.

ObjectStore copies debug output to the standard Server output file, unless you redirect it to another file.

-F 
Foreground. Runs the Server process in the foreground. This reverses the normal behavior, where the Server runs as a background process. This option is not available on Windows.



-i 
Initializes the Server log file and the rawfs, if you have one, with a confirmation prompt. Use with caution.

-I 
(Uppercase I) Initializes the Server log file, and the rawfs if you have one, without a confirmation prompt. Use with extreme caution.

-p pathname 
Specifies a file containing Server parameter settings that override the default settings. If you do not specify this option, ObjectStore uses the default parameter file. This option is not available on Windows.

-v 
Displays Server parameter values at start-up.

API
None.

ossetasp: Patching Executable with Application Schema Pathname



The ossetasp utility patches an executable so that it looks for its application schema in a database that you specify.

Syntax

ossetasp -p executable
ossetasp executable database
-p 
Instructs ossetasp to display the pathname of the specified executable's application schema database. Do not specify database in the command line when you include -p.

executable 
Specifies the pathname of an executable. On Windows systems, this can also be the pathname of a DLL.

database 
Specifies the pathname of an application schema database. ObjectStore patches the specified executable so it uses this application schema.

Description

When the schema generator generates an application schema, ObjectStore stores the actual string given as the -asdb argument to ossg (or the -final_asdb argument, if specified). When the application starts, it uses that string to find the application schema database.

When you move or copy an ObjectStore application to a machine that is not running a Server, leave the application schema database on the Server host. Normally, the application schema database must be local to the Server.

After you copy or move an application to another machine, you must patch the executable so that it can find the application schema database. Run the ossetasp utility with the absolute pathname of the application schema database. Be sure to specify the name of the Server host.

A locator file allows a database and its application schema to be on a machine other than the Server host. See Chapter 5, Using Locator Files to Set Up Server-Remote Databases.

Windows
On Windows systems, you can perform the ossetasp utility on any EXE or DLL that contains schema (that is, that has a schema object file produced by ossg linked into it).

Restrictions
This utility is available on all platforms except OS/2. On OS/2, use the API instead; the API is also available on all other platforms.

API
Class: objectstore
Methods: get_application_schema_pathname
and set_application_schema_pathname
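
An installer or launcher can do the same repointing programmatically at client start-up. The sketch below is illustrative only; the argument and return types shown for the two methods are assumptions, so check the ObjectStore C++ API Reference for the actual declarations.

#include <stdio.h>
#include <ostore/ostore.hh>   // ObjectStore client API

// Report the current application schema pathname and repoint it.
// Assumed signatures: get_application_schema_pathname() returns a
// C string and set_application_schema_pathname() takes the new
// pathname; verify both against the API Reference.
void repoint_application_schema(const char* new_schema_db)
{
    printf("current application schema: %s\n",
           objectstore::get_application_schema_pathname());
    objectstore::set_application_schema_pathname(new_schema_db);
}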

ossetrsp: Setting a Remote Schema Pathname

The ossetrsp utility specifies a new pathname for the schema associated with a particular database.

Syntax

ossetrsp {-p | schema_db_path} db_path 
-p 
Displays the pathname of the schema database used by the database specified by db_path, if db_path is a database that stores its schema remotely. If it is not, ObjectStore displays a message informing you that db_path is not a database whose schema is stored in some other database.

schema_db_path
Specifies a new pathname for the schema database used by the database specified by db_path.

db_path 
Specifies the pathname of a database whose schema database pathname you want to either change or display.

Description

A database can store its schema in a separate schema database. The schema database contains all schema and relocation metadata. The main database contains everything else.

When needed
If you move the schema database, you must execute ossetrsp or use os_database::set_schema_database() to inform ObjectStore of the schema database's new pathname.

If you copy the schema database with an operating system command or an ObjectStore utility, you can execute ossetrsp or use os_database::set_schema_database() to inform ObjectStore of the schema database's new pathname.

You cannot associate an entirely new schema database with the main database. You can only change the pathname of the original schema database by moving or copying the original schema database.

API
Class: os_database
Methods: get_schema_database
and set_schema_database
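
An application that moves its remote schema database can make the equivalent call itself. The sketch below is illustrative only: it assumes os_database::open() takes a pathname and that set_schema_database() accepts the schema database's new pathname; verify both signatures in the ObjectStore C++ API Reference.

#include <ostore/ostore.hh>   // ObjectStore client API

// After moving the schema database, record its new pathname in the
// main database. The open() and set_schema_database() signatures
// shown here are assumptions for illustration.
void repoint_remote_schema(const char* db_path, const char* new_schema_db_path)
{
    os_database* db = os_database::open(db_path);
    db->set_schema_database(new_schema_db_path);
    db->close();
}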

ossevol: Evolving Schemas

The ossevol utility modifies a database and its schema so that it matches a revised application schema. It handles many common cases of schema evolution. For more complicated evolutions, including the cases ruled out in this section, see ObjectStore Advanced C++ API User Guide, Chapter 9, Advanced Schema Evolution.

Use osbackup first
Running the ossevol utility changes the physical structure of your database. Consequently, you should back up your database before you run the ossevol utility.

Syntax

ossevol workdb schemadb evolvedb ... [keyword-options]

Options
-task_list filename 
Specifies that the ossevol utility should produce a task list and place it in the file specified by filename. Use "-" for stdout. When you specify this option, ObjectStore does not perform schema evolution.

The task list consists of pseudofunction definitions that indicate how the migrated instances of each modified class would be initialized. This allows you to verify the results of a schema change before you migrate the data.

-classes_to_be_removed class-name(s) 
Specifies the names of the classes to be removed.

-classes_to_be_recycled class-name(s) 
Specifies the names of the classes whose storage space can be reused. By default, the storage associated with all classes is recycled.

-drop_obsolete_indexes { yes | no }
Specifies whether or not obsolete indexes encountered in the course of the evolution should be dropped. The default is no, which means that they are not dropped.

-local_references_are_db_relative 
Specifies that all local references are relative to the database in which they are encountered. The default is no.

-resolve_ambiguous_void_pointers 
Resolves ambiguous void pointers to the outermost enclosing collocated object. The default is no.

-upgrade_vector_headers
Upgrades the representation of vector objects in the evolved database to a format that allows them to be accessed by clients built by different types of compilers.

You do not need to convert vector objects if the database will be accessed only by applications that were compiled with the same type of compiler. In other words, this option is for databases being used in an environment that includes multiple types of compilers. It is also useful if you are switching from OSCC (cfront), no longer supported in ObjectStore Release 5.0, to a native compiler that uses vector headers, such as SGI C++.

Use this option only with databases that meet at least one of these conditions:

-explanation_level n 
A number from 1 to 3; primarily an internal debugging aid.

OS/2 and AIX
It is possible to run the ossevol utility on

Description

When you specify two or more classes for an option, separate the class names with a space.

Changes can include
You can use the ossevol utility to evolve the following changes:

Changes cannot include
You cannot use the ossevol utility to evolve changes that include

Changes might include
You might be able to use the ossevol utility to evolve the following changes. In each item, the information after the first sentence indicates reasons why the ossevol utility might not be able to perform the evolution.

Evolution not required
These changes do not require schema evolution:

Except on Windows NT, the following two changes do not require schema evolution. On Windows NT, these two changes require schema evolution in some cases; you receive a schema validation message when you run the schema generator if schema evolution is required.

Transformer function required
These changes require application-specific transformer functions:

Schema protection
When developing an application, if you are running this utility on a protected schema database, ensure that the correct key is specified for the environment variables OS_SCHEMA_KEY_LOW and OS_SCHEMA_KEY_HIGH. If the correct key is not specified for these variables, the utility fails. ObjectStore signals

err_schema_key _CT_invalid_schema_key,
"<err-0025-0151> The schema is protected and the key provided did not 
match the one in the schema."
When deploying an application, if your end users need to use the ossevol utility on protected schema databases, you must wrap the utility in an application. This application must use the API to provide the key before using the os_dbutil class to call the utility. End users need not know anything about the key. For information about wrapping your application around an ObjectStore utility, see the class os_dbutil in the ObjectStore C++ API Reference.

API
For complete information about schema evolution, see ObjectStore Advanced C++ API User Guide.

ossg: Generating Schemas

The ossg utility is the ObjectStore schema generator. See ObjectStore Building C++ Interface Applications for complete information about how to use ossg. You must have a client development license and a Server development license to use this utility.

Syntax
Kind of Schema      Syntax for ossg Command
Application

ossg [compilation_options] [neutralizer_options] [-cpp_fixup] [-E]
[-final_asdb final_app_schema_db] [{ -mrlcp | -mrscp }]
[-no_default_includes] [-no_weak_symbols] [-rtdp {minimal | derived | full | maximal}] [{ -sfbp | -pfb }] [-store_member_functions]
[-weak_symbols]
{-assf app_schema_source_file or -asof app_schema_object_file.obj}
-asdb app_schema_database schema_source_file [lib_schema.ldb ...]

On OS/2, you must also specify -cd class_definition_file.

Library

ossg [compilation_options] [neutralizer_options] [-cpp_fixup] [-mrscp]
[-no_default_includes] [{ -sfbp | -pfb }] [-store_member_functions]
[-use_cf20_name_mangling | -use_cf30_name_mangling]
-lsdb lib_schema.ldb schema_source_file

Compilation

ossg [compilation_options] [neutralizer_options] [-cpp_fixup] [-mrscp]
[-no_default_includes] [{ -sfbp | -pfb }] [-store_member_functions]
-csdb comp_schema.cdb schema_source_file

Examples

See ObjectStore Building C++ Interface Applications.

Options
compilation_options 
Specifies any options that would be passed to the compiler if you were compiling a schema source file instead of generating schema from it. You should include any preprocessor options, such as include file paths and macro definitions, as well as compiler options that might affect object layout, such as packing options (for example, /Zp4 for Visual C++).

If you specify the /I, -I, /D, or -D option, do not include a space between the option and the argument. For example, on OS/2 the following is correct:

ossg /I$(OS_ROOTDIR)\include...
On UNIX, do not specify the -o option on the ossg command line.

On Windows, do not specify /Tp on the ossg command line.

On OS/2, when you specify an option that takes an argument, do not put a space between the option and the argument.

Optional.

neutralizer_options 
Include any of the options described under Neutralization Options, beginning with -arch setn. These options allow you to neutralize a schema for a heterogeneous application. You can include them in any order.

Optional. The default is that neutralization is not done.

-cpp_fixup 
Allows preprocessor output to contain spaces inside C++ tokens. Specify this option if your preprocessor inserts a space between consecutive characters that form C++ tokens. For example, if your preprocessor changes :: to : :, you can specify this option so that the schema generator allows the inserted space and correctly reads the preprocessor output.

It is possible to generate an application schema from a compilation schema and library schemas. In this case, you do not need this option because there is no source code input to the schema generator, which means that the preprocessor is not involved.

Optional. The default is that ossg does not allow a space in a C++ token such as :: or .*.

-E 
Causes ossg to preprocess the schema source file and send the preprocessed output to standard output. This option is useful for debugging ossg parsing problems because it allows you to see the results of any preprocessing. It is also useful when reporting ossg problems to Object Design support representatives because it allows the problem to be reproduced by Object Design without the need to package your application's include files.

When you specify the -E option, you cannot specify schema databases on the same command line. You can only specify the schema source file and preprocessor switches.

-final_asdb final_app_schema_db 
Specifies a location for the application schema database that is different from the location you specify with the -asdb option. The schema generator writes the location you specify with the -final_asdb option into the application schema source file (application schema object file for Visual C++). Use this option when you cannot specify the desired location with the -asdb option. The -asdb option is still required and that is where the schema generator places the application schema.

This option is useful when you plan to store the application schema database as a derived object in a ClearCase versioned object base (VOB). The schema generator cannot place the application schema database directly in a ClearCase VOB. If you specify the -final_asdb option with the desired location, you avoid the need to run the ossetasp utility, which patches an executable so that it looks for its application schema in a database that you specify.

After you run ossg with the -final_asdb option, remember to move the application schema to the database you specify with -final_asdb.

You must specify an absolute pathname with final_asdb.

Optional. The default is that the schema generator writes the pathname that you specify with -asdb in the application schema source file (object file for Visual C++).

-mrlcp or -make_reachable_library_classes_persistent

Causes every class in the application schema that is reachable from a persistently marked class to be persistently allocatable and accessible.

This option is supplied for compatibility purposes only. The use of the -mrlcp option is discouraged. Specify -mrscp instead.

When you specify this option, you cannot neutralize the schema for use with a heterogeneous application. If you are building a heterogeneous application, you must either mark every persistent class in the schema source file or specify the -mrscp option.

If you do not mark any types in the schema source file and you specify -mrlcp when you run ossg, then the application schema does not include any types. You must mark at least one type for there to be any reachable types.

Optional. The default is that only marked classes are persistently allocatable and accessible.

See also ObjectStore Building C++ Interface Applications, Determining the Types in a Schema.

-mrscp or -make_reachable_source_classes_persistent

Causes every class that is both declared in the schema source file and reachable from a persistently marked class to be persistently allocatable and accessible.

-no_default_includes or -I-

When you specify this option, ossg does not automatically pass any include directories to the C++ preprocessor. However, the preprocessor can have default include directories built into it, and those built-in directories are still searched. Typically, the preprocessor uses built-in include paths to find standard include files such as stdio.h. Except for these built-in directories, when you specify this option, you must explicitly specify the directories that contain included files.

For example, on some UNIX systems, when you do not specify this option, the C++ preprocessor looks for include files in the /usr/include directory.

Note that if you want the schema generator to pass the ObjectStore include directory to the preprocessor as a directory for finding included files, you must specify it explicitly. For example:

UNIX: -I$OS_ROOTDIR/include

Windows and OS/2: /I$(OS_ROOTDIR)\include

The -I- option is the letter I as in Include. Specifying -I- is the same as specifying -no_default_includes.

Optional. The default is that the preprocessor checks default directories for included files.

-no_weak_symbols 
Disables mechanisms that suppress notification about missing vftbls and discriminants. This option allows you to check whether any vtbl or discriminant function symbol referenced is undefined.

If you specify -rtdp maximal -no_weak_symbols, the linker provides messages about what is missing. You can use this information to determine which additional classes you need to mark. These missing symbols are only a hint about what you might consider marking. They might also be the result of a link line error.

If, in releases prior to 5, you used os_do_link with the -link_resolve_vtbls_and_disc option, you can now specify -no_weak_symbols to perform the same function.

This is the default behavior: the schema generator notifies you about missing vftbls and discriminants. To change this behavior, specify the -weak_symbols option.

-pfb or -parse_function_body

Causes ossg to parse the code in function bodies.

This option ensures that any types that are marked inside a function are parsed by ossg. If you do not explicitly use this option and you have any types marked inside functions, an error is reported. See ossg Troubleshooting Tips for further information.

Optional. The default is that the -sfbp option is in effect.

-rtdp or -runtime_dispatch {minimal | derived | full | maximal}

Specifies the set of classes for which the schema generator makes vftbls and discriminant functions available.

minimal specifies marked classes, classes embedded in marked classes, and base classes of marked classes.

derived specifies the minimal set plus classes that derive from marked classes and classes embedded in the derived classes.

full specifies the derived set plus the transitive closure over base classes, derived classes, and classes that are the targets of pointers or references. The full specification does not include nested classes or enclosing classes unless they meet one of the previous criteria.

maximal specifies the full set plus nested types. In previous ObjectStore releases, this was the default. If your application used an earlier release of ObjectStore and you do not specify this option, you might need to mark classes that you did not previously mark.

Optional. The default is derived.

-sfbp or -skip_function_body_parsing

Default. Specifies that code within function bodies is not parsed.

-store_member_functions 
Causes ossg to create an instance of os_member_function for each member function in each class in the schema source file. It then puts these instances in the list of class members, which includes member types and member variables.

This is useful when you intend to use the MOP to inspect the member functions. If you are not planning to inspect member functions, you should not specify this option because it wastes disk space.

When you generate an application schema, you might specify a library or compilation schema. If you want to capture the member functions from the library or compilation schema you must have specified the -store_member_functions option when you generated the library or compilation schema. You must also specify the -store_member_functions option when you generate the application schema.

Optional. The default is that ossg generates a schema that includes member types and member variables, but not member functions.

-weak_symbols
Enables mechanisms that suppress notification about missing vftbls and discriminants. This option overrides the default behavior described at -no_weak_symbols.

-assf app_schema_source_file or
-asof app_schema_object_file.obj

Specifies the name of the application schema source file or application schema object file to be produced by ossg. For all compilers except Visual C++, the schema generator produces a source file that you must compile. When you use Visual C++, the schema generator directly produces the object file.

Required when generating an application schema. No default.

-asdb app_schema_database 
Specifies the name of the application schema database to be produced by ossg. If the schema database exists and is compatible with the type information in the input files, the database is not modified.

This pathname must be local to a host running an ObjectStore Server.

The pathname should have the extension .adb. If you want to specify an existing application schema database with ossg, the application schema must have .adb as its extension.

Required when generating an application schema. No default.

-csdb comp_schema.cdb 
Specifies the pathname of the compilation schema database to be generated by ossg. Object Design recommends, but does not require, that the pathname end in .cdb.

This pathname must be local to a host running an ObjectStore Server.

Required when generating a compilation schema. No default.

-lsdb lib_schema.ldb 
Specifies the pathname of the library schema database to be generated by ossg. The pathname must end in .ldb.

This pathname must be local to a host running an ObjectStore Server.

Required when generating a library schema. No default.

schema_source_file 
Specifies the C++ source file that designates all the types you want to include in the schema. It should include all classes that the application uses in a persistent context.

Almost always required. No default. The schema source file is not required when you use a compilation schema to generate an application schema.

Also, you can omit the schema source file if you are generating an application schema and you specify one or more library schemas that contain all the persistent types that your application uses. A minimal example of a schema source file appears after the -cd option, at the end of this options list.

lib_schema.ldb ...
Specifies the pathname of a library schema database. The name must end in .ldb. This can be an ObjectStore-provided library schema or a library schema that you created with ossg.

The schema generator reads schema information from the library schema database specified and modifies the application schema database to include the library schema information. You can specify zero or more library schema databases.

Optional. The default is that library schemas are not included.

-cd
OS/2 only

On OS/2 platforms, when you invoke ossg, you must specify the -cd option with the name of the class definition file for the application. The class definition file, also called the schema header file, contains the definitions for all classes that you want in the schema.
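
To illustrate the schema_source_file argument described earlier in this list, the following is a minimal, generic example of a schema source file. It is a sketch only: MyClass and my_class.hh are placeholders, and the marking macro and header names shown here should be confirmed in ObjectStore Building C++ Interface Applications for your release.

// schema.cc -- example schema source file passed to ossg (illustration only)
#include <ostore/ostore.hh>
#include <ostore/manschem.hh>   // assumed header for the schema-marking macros
#include "my_class.hh"          // placeholder: the application's class definitions

// Mark the classes that the application uses in a persistent context.
// With the default -rtdp derived setting, base classes, embedded
// classes, and derived classes of marked classes are also covered.
OS_MARK_SCHEMA_TYPE(MyClass);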

Neutralization Options
-arch setn 
The schema that is generated or updated is neutralized to be compatible with the architectures in the specified set. Applications running on these architectures can access a database that has the schema.

Required when you are neutralizing schema. No default. You can specify one of the following sets.

set1

Some 32-bit architectures:

HP HP-UX HP C++
IBM VisualAge C++ for OS/2
Intel Solaris 2 Sun C++
Intel Windows NT Visual C++
Intel Windows 95 Visual C++
RS/6000 AIX C Set ++
SGI IRIX SGI C++
SPARC Solaris 2 Sun C++

set2

set1 without some cfront architectures:

IBM VisualAge C++ for OS/2
Intel Solaris 2 Sun C++
Intel Windows NT Visual C++
Intel Windows 95 Visual C++
RS/6000 AIX C Set ++
SPARC Solaris 2 Sun C++

set3

Some cfront architectures:

HP HP-UX HP C++

set4

Some IBM architectures:

IBM VisualAge C++ for OS/2

set5

set1 plus AXP Digital UNIX DEC C++ 5.0, with the restriction that your schema cannot contain a data member of type long:

AXP Digital UNIX DEC C++ 5.0
HP HP-UX HP C++
IBM VisualAge C++ for OS/2
Intel Solaris 2 Sun C++
Intel Windows NT Visual C++
Intel Windows 95 Visual C++
RS/6000 AIX C Set ++
SGI IRIX SGI C++
SPARC Solaris 2 Sun C++

set6

set5 without some cfront architectures, and also with the restriction that your schema cannot contain a data member of type long:

AXP Digital UNIX DEC C++
IBM VisualAge C++ for OS/2
Intel Solaris 2 Sun C++
Intel Windows NT Visual C++
Intel Windows 95 Visual C++
RS/6000 AIX C Set ++
SPARC Solaris 2 Sun C++

-neutral_info_output filename
or -nout filename

Indicates the name of the file to which neutralization instructions are directed.

Optional. The default is that the schema generator sends output to stderr.

-noreorg or -nor

Prevents the schema generator from instructing you to reorganize your code as part of neutralization.

This is useful for minimizing changes outside your header file, working with unfamiliar classes, or simply padding formats.

When you include -noreorg, your application might not make the best use of its space. In fact, it is seldom possible to neutralize a schema without reorganizing classes.

When you use virtual base classes, it is very unlikely that you can neutralize your schema when you include this option.

Optional. The default is that the schema generator provides reorganization instructions.

-pad_maximal or -padm
-pad_consistent or -padc

Indicates the type of padding requested.

-pad_maximal or -padm indicates that maximal padding should be done for any ObjectStore-supported architecture. This means all padding, even padding that the various compilers would add implicitly.

-pad_consistent or -padc indicates that padding should be done only if required to generate a consistent layout for the specified architectures.

Optional. The default is -padc.

-schema_options option_file
or -sopt option_file

Specifies a file in which you list compiler options being used on platforms other than the current platform. The options in this file usually override the default layout of objects, so it is important for the schema generator to take them into account. See ObjectStore Building C++ Interface Applications, Listing Nondefault Object Layout Compiler Options in Chapter 5, for details about the content of the option file.

Optional. No default.

-show_difference or -showd
-show_whole or -showw

Indicates the description level of the schema neutralization instructions.

Optional. The default is -show_whole.

API
None.

ossize: Displaying Database Size

The ossize utility displays the size of the specified database and the sizes of its segments.

Syntax

ossize [options] pathname 
pathname 
Specifies the file or rawfs database whose size you want to display.

Options
-a 
Displays the total length of the information segment immediately after the length of the data segment.

-A 
Displays access control information.

-c 
Displays the type contents for each segment.

-C 
Displays the type contents for the entire database.

-f 
Displays information about the location of all free blocks of storage in a segment.

-n segment-number 
Displays information only about the segment specified as segment-number. segment-number is a data segment number such as those displayed by the -a (/INFO) option. This option is useful with the -o (/DEBUG) and -c (/SEGMENT) options because it reduces the amount of output.

-o 
Displays a complete table of every object in the segment, showing its offset and size. The data in this table can be useful in debugging. Do not confuse this with the -0 option, described below.

-sn 
Displays the type summaries by the number of instances of each type.

-ss 
Displays the type summaries by the space used by the instances of each type. (This is the default.)

-st 
Displays the type summaries alphabetically by type name.

-w workspace-name 
Runs ossize with the current workspace set to workspace-name, which must be the name of a workspace stored in the specified database. This allows you to examine the size (and contents, with -c (/SEGMENT) and -f (/FREE)) of a particular version of the database. If you do not provide this argument, the transient workspace is used as the current workspace (that is, the usual default). If there is a segment that is not known by the current workspace, ossize displays Error: there is no version of this segment in this work space.

-W 
Displays a list of all named workspaces that are stored in the specified database. When specified without other arguments, this option displays only workspace names, with no information about database size.

-0 (zero, not uppercase O)
Causes ossize to include the internal segment 0 in type summaries. On UNIX and OS/2, this implies -c if neither -c nor -C is set.

Description

The ossize utility does not distinguish persistently allocated pointers (that is, pointers to pointers, such as new(db) thing* or new(db) thing*[100]) as separate types. They are displayed together.

The ossize utility displays the comment for each segment that has a comment with a nonzero length. See os_segment::set_comment() in the ObjectStore C++ API Reference.

PRM format
The ossize utility notes the type of PRM entries the database contains. The type can be standard or enhanced. For example:

ossize <database path> 
Name: /h/kellen/ctdb_1
Size: 74752 bytes (70 Kbytes)
Created: Fri Aug 23 14:59:40 1996
Created by: a SPARC-architecture CPU with 4K pages with Sun C++ 4.x
PRMs are in enhanced format
...
ossize <database path> 
Name: /h/kellen/ctdb_1
Size: 74752 bytes (70 Kbytes)
Created: Fri Aug 23 15:00:53 1996
Created by: a SPARC-architecture CPU with 4K pages with Sun C++ 4.x
PRMs are in standard format
Schema protection
When developing an application, if you are running this utility on a protected schema database, ensure that the correct key is specified for the environment variables OS_SCHEMA_KEY_LOW and OS_SCHEMA_KEY_HIGH. If the correct key is not specified for these variables, the utility fails. ObjectStore signals

err_schema_key _CT_invalid_schema_key,
"<err-0025-0151> The schema is protected and the key provided did not 
match the one in the schema."

Examples

Rawfs database
> ossize lame::/db1
Name: lame::/db1
Size: 44544 bytes (42 Kbytes)
Created: Thu Jan 26 17:19:11 1996
Created by: a SPARC-architecture CPU with 4K pages
There is 1 root:
Name: head   Type: note
There are no external database pointers.
There are no external references.
The schema is local.
There is 1 segment:
Data segment 2:
Size: 512 bytes (1 Kbytes)
>
-a
Specifying the -a option displays space use information for the information segment. For example:

Info segment usage:
Header/Ovrflw:      512 bytes   ( 1 * 512)
Tag btree:          512 bytes   ( 1 * 512)
Tag leaves:        3584 bytes   ( 7 * 512)
Relocation map:     512 bytes   ( 1 * 512)
Free Tree:         3584 bytes   ( 7 * 512)
Hugespace:         2048 bytes   ( 4 * 512)
Fixed Cluster:      512 bytes   ( 1 * 512)
String Pool:       2048 bytes   ( 4 * 512)
Unused:            1024 bytes   ( 2 * 512)
Total Size:       14336 bytes   (28 * 512)

-o
Specifying the -o option displays fixed cluster locations for each segment. For example:
Fixed Cluster
Offset        Size
0             4096
0x1000        8192
0x3000        16384
0x7000        32768
0xf000        4096
0x10000       65536
0x20000       8192
0x22000       16384
0x26000       32768
0x30000       65536

API
Class: os_dbutil
Method: ossize

ossvrchkpt: Moving Data Out of the Server Transaction Log

The ossvrchkpt utility performs a checkpoint for a specified Server host. It ensures that all data is copied from the transaction log of the Server host to the database or databases that were changed.

Syntax

ossvrchkpt hostname 
FAT name: ossvrchk
hostname 
Specifies the name of the host of the Server whose log you want to propagate.

Description

This command does not return until the propagation is complete. It can return the following values:
0 
Success.

1 
An error occurred while passing the command to the Server.

2 
The Server is unable to complete the checkpoint.

When needed
Run this utility when you want to

Example

ossvrchkpt hostess
Data in the Server transaction log on the host called hostess is copied to the databases that were modified.

API
Class: os_dbutil
Method: svr_checkpoint
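
An application or administration tool can request the same checkpoint through the API, for example before copying database files with operating system tools. The sketch below assumes svr_checkpoint() takes the Server host name; verify the actual signature and header in the ObjectStore C++ API Reference.

#include <ostore/ostore.hh>
#include <ostore/dbutil.hh>   // assumed location of the os_dbutil declarations

// Ask the Server on server_host to propagate its transaction log.
// Assumed signature: os_dbutil::svr_checkpoint(const char* host).
// Like the utility, the call is expected not to return until
// propagation is complete.
void checkpoint_server(const char* server_host)
{
    os_dbutil::svr_checkpoint(server_host);
}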

ossvrclntkill: Disconnecting a Client Thread on a Server

The ossvrclntkill utility disconnects a client thread on the Server running on the specified host. This disconnects the client from the Server and releases the client's locks.

Syntax

ossvrclntkill hostname -h client-host | -p client-pid | -n client-name 
[ -a ]
ossvrclntkill hostname client-pid
(The second form is supported for compatibility with earlier releases.)

Options
hostname 
Specifies the name of the host of the Server that is connected to the client process being disconnected.

-h client-host 
Specifies the name of the host of the client being disconnected, as determined with ossvrstat.

-p client-pid 
Specifies an unsigned number that is the process ID of the client process being disconnected.

-n client-name 
Specifies the name of the client process being disconnected. This name is set by objectstore::set_client_name().

-a 
Specifies that all clients matching the specified criteria should be disconnected.

Description

Run the ossvrclntkill utility on the Servers connected to the client that you want to kill.

You can use ossvrstat to determine the client-hostname and client-pid.

When needed
Use the ossvrclntkill utility when a client that no longer exists is still attached to the Server. This can happen because of network failure or when the client process terminates abnormally. In most cases, the operating system disconnects the client from the Servers gracefully, but some operating systems are not completely dependable in this regard.

UNIX
You must specify -h, -p, or -n. The -a option disconnects all matching clients; otherwise, a unique match is required.

If the Server's authentication is set to something other than NONE (authentication is SYS by default), the following rule applies: any user can disconnect clients that the user owns, but if the -a option is used (disconnect all clients matching the given search pattern), the user must own all matching processes; otherwise, authentication fails and no clients are disconnected.

If the Server's authentication is set to NONE, no authentication is required.

Example

ossvrclntkill hostess -h cupcake -a
This disconnects all clients on cupcake that are attached to the Server on hostess.

API
Class: os_dbutil
Method: svr_client_kill

ossvrdebug: Setting a Server Debug Trace Level

Syntax

ossvrdebug hostname n

Options
hostname 
Server host to be debugged.

n 
Number that specifies the trace level of the Server.

Description

Sets the debug trace level of the Server running on the specified host. Using this command is equivalent to starting the Server with the -d n command-line option. On UNIX, the requested trace output is written to the /tmp/ostore/oss_out file once the Server receives the message.

Example

ossvrdebug kellen 5 
Sets the debug trace level of the Server on host kellen to 5.

API
Class: os_dbutil
Method: ossvrdebug

ossvrmtr: Displaying Server Resource Information

The ossvrmtr utility provides information about resource use for the Server process running on the specified host.

Obsolete
This utility actually calls the ossvrstat utility. Any information you can obtain by running ossvrmtr, you can also obtain by running ossvrstat. The ossvrmtr utility will not be supported in future releases.

Syntax

ossvrmtr hostname 
hostname 
Specifies the name of the host of the Server for which you want to display information.

Description

The ossvrmtr utility summarizes metering information for total clients and for logs for these intervals:

You can use the ossvrstat command to see the Server information and per-client information.

Example

See the example in the ossvrstat section of this chapter.

API
Class: os_dbutil
Method: svr_stat

ossvrping: Determining If a Server Is Running

The ossvrping utility reports whether or not a Server is running on the specified host.

Syntax

ossvrping [ -v ] [ hostname ]

Options
hostname 
Specifies the name of the host on which you want to know whether or not a Server is running.

-v 
Indicates that you want more information when a Server is not running on the specified host.

Description

If you are having any problems with a Server, the first thing to do is run ossvrping to see if the Server is running.

If you do not specify a host, the default is the local host.

Examples

ossvrping elvis
The ObjectStore Server on host elvis is alive.
API
Class: os_dbutil
Method: svr_ping
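
A monitoring tool written against the API can perform the same check. The sketch below assumes that svr_ping() takes the host name and indicates by its return value whether a Server responded; treat both the signature and the return convention as assumptions to be verified in the ObjectStore C++ API Reference.

#include <stdio.h>
#include <ostore/ostore.hh>
#include <ostore/dbutil.hh>   // assumed location of the os_dbutil declarations

// Report whether an ObjectStore Server is running on the given host.
// Assumed signature: os_dbutil::svr_ping(const char* host), returning
// a nonzero value when the Server responds.
void check_server(const char* host)
{
    if (os_dbutil::svr_ping(host))
        printf("The ObjectStore Server on host %s is alive.\n", host);
    else
        printf("No ObjectStore Server is responding on host %s.\n", host);
}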

ossvrshtd: Shutting Down the Server

The ossvrshtd utility immediately shuts down the Server running on the specified host, regardless of whether clients are connected to the Server.

Syntax

ossvrshtd [-f] hostname 

Options
-f 
Specifies that shutdown should be immediate. When you do not include this option, ObjectStore prompts you to confirm that you really want to shut down the Server. When you include this option, there is no confirmation prompt.

hostname 
Specifies the name of the host of the Server that you want to shut down.

Description

Before shutting down the Server, run the ossvrstat utility to determine whether any clients are using the Server. If there are, notify them so they can exit.

Clearing the log
Shutting down the Server automatically propagates everything in the transaction log.

If any clients are connected to the Server when you shut it down, the next time those clients try to contact the Server they receive the message err_broken_server_connection. The client can call os_server::reconnect to try to reconnect.

When needed
ObjectStore needs to be shut down when you

UNIX
If the Server's authentication is set to NONE (authentication is SYS by default), you must be the user ID that owns the running Server process, or the superuser, to run this utility.

If the Server's authentication is set to something other than NONE, ossvrshtd must be run as root.

Windows NT
On Windows NT, you can shut down the Server using the Service Control Manager. Click on the Services icon in the Control Panel or issue the command net stop "ObjectStore Server R5.0".

OS/2
You must be the Administrator to run this utility.

Starting a Server
For instructions for starting a Server, see the chapter in this book for your platform.

Example

ossvrshtd hostess
Are you sure that you wish to shut down the server
on host hostess (yes/no) [no]: yes
API
Class: os_dbutil
Method: svr_shutdown

ossvrstat: Displaying Server and Client Information

The ossvrstat utility displays settings of Server parameters, Server use meters, and information for each client connected to the Server running on the specified host.

Syntax

ossvrstat hostname [options]

Options
hostname 
Specifies the name of the host of the Server for which you want information.

-meters 
Displays performance meters for the specified Server.

-clients 
Displays the state of each client connected to the specified Server, and shows which clients, if any, are contending for locks.

-parameters 
Displays Server parameter values. The following parameters are not displayed if they are not enabled: Allow NFS Locks, Allow Remote Database Access, and Host Access List.

-rusage 
UNIX only: displays Server process information.

Description

Specifying both -meters and -clients displays the use meters for all clients, as well as the information described above.

ObjectStore identifies each client by host name and then displays the program name (if there is one) with the process ID on the client host. Program names are set with objectstore::set_client_name. When there is no program name, ObjectStore displays default_client_name.
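
To make a client easy to identify in ossvrstat and ossvrclntkill output, set its name early in client start-up. A minimal sketch, assuming set_client_name() takes a single string argument:

#include <ostore/ostore.hh>

// Give this client a descriptive name so that Server-side utilities
// such as ossvrstat and ossvrclntkill can identify it by name rather
// than by process ID alone. "inventory_loader" is an example name.
void name_this_client()
{
    objectstore::set_client_name("inventory_loader");
}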

Numbers are relative
The following table describes the information that ossvrstat supplies in terms of high and low numbers. High and low are entirely relative to your application; for them to have meaning, run ossvrstat periodically to establish a baseline.

Display all
To display all information, do not include any options. The following table describes the meters that ossvrstat provides.
Meter      Description
Current log size 
Number of sectors in the transaction log. This meter appears in the middle of the list of Server parameters because it is most useful when you are determining how to adjust other Server parameters.

Messages received 
Number of messages the Server has received from clients. A message can be a request for an action such as opening a database, sending data, updating a database, committing a transaction, aborting a transaction, or closing a database. This indicates how often clients are communicating with the Server. When the number is low, the demand on the Server is less.

Callback messages sent 
Number of callback messages the Server sent to clients. A callback message is the message a Server sends to clientA when clientB requests data on a page that is locked by clientA. When this number is high, it means that an application is often modifying data that other clients also want to modify. This might mean that the program is poorly designed.

Callback sectors 
Number of sectors that have been called back in callback messages. This is not necessarily the same as the number of sectors for which locks have actually been shared or released. Also, the Server might send many callback messages but they might not be for a large number of sectors. Usually, callbacks are for pages (4 KB on most machines). Sometimes the Server calls back larger chunks.

Succeeded sectors 
Number of sectors for which the Server sent a callback message and the client with the lock (clientA) either shared the lock with clientB or relinquished the lock to clientB. If Callback sectors is comparable to Succeeded sectors, you know that clients are not waiting too long. If Succeeded sectors is much smaller, then more clients are being locked out of data they need.

KB read 
Number of kilobytes of data that the Server sent to clients to read. Monitor this statistic to help determine whether or not you need to enlarge client cache files. Compare kilobytes read for a given client with the number of commits and the size of the client cache file.

KB written 
Number of kilobytes of modified data that the clients have sent to the Server. Written data is data involved in a commit. It must be buffered and it is logged if it cannot go directly to a database because it is being written past the current end of a segment. When analysis is concerned with the number of transactions per second, the number of kilobytes written is an important factor.

Commits 
Number of committed transactions that the Server knows about. If a client does not modify data during a transaction, the client might not inform the Server that a transaction was committed. You can use this number to estimate the number of transactions per second.

Readonly commits 
Number of committed transactions that the Server determined did not involve any data changes. Typically, the client does not inform the Server about such commits, so this number should be low. An example of this is when the client releases ownership needed by another client. In this case, the client sometimes performs a commit even on a read-only transaction. Read-only commits are like simple aborts; the cost is near zero.

Aborts 
Number of aborted transactions that the Server knows about. If the client has not sent any changes to the Server, the client can abort a transaction without informing the Server. Most applications abort transactions only because of lock conflicts. In this case, you can use this number to determine the number of conflicts.

Two phase transactions 
Number of committed transactions that involved changes to databases on more than one Server. Typically, one Server is involved in a commit. A two-phase commit requires additional overhead, so it is useful to know how often it is happening.

Lock timeouts 
Number of times all clients fail to obtain a lock because a lock timeout time is set on the client and the lock needed was not released before the lock timeout elapsed.

Lock waits 
Number of times all clients had to wait to obtain a lock on a page because a lock by another client was already in place. The utility also provides the average time that a client waits for a lock. This appears in parentheses next to Lock waits and is in microseconds. ObjectStore divides the total time waiting for locks by the number of lock waits.

Deadlocks 
Number of times the Server chose a client to be a deadlock victim and notified it that it had to abort a particular transaction so that other clients could complete their transactions. If you specify -clients when you run ossvrstat, the utility displays information about which clients are waiting for locks and which clients currently have those locks.

Message buffer waits 
Number of times a message from a client to the Server must wait to use a message buffer. The Message Buffers Server parameter specifies how many message buffers the Server uses to communicate with clients. If the number of Message buffer waits is high, consider increasing the value specified for Message Buffers. See Message Buffers.

Notifies sent 
Number of notification messages the Server sent to all Cache Managers for delivery to clients on that Cache Manager's host. When the values for Notifies sent and Notifies received are both zero, ObjectStore does not print information for these two meters.

Notifies received 
Number of notification messages the Server received from all clients. The Server then sends these messages to the Cache Manager on the host of the client that the message is for. When the values for Notifies sent and Notifies received are both zero, ObjectStore does not print information for these two meters.

Log records 
Number of records written to a log record segment of the transaction log. Each committed transaction writes a record to the log. This is a throughput number. Space in the log is continually reused.

Record segment switches 
Number of times the Server switches from writing commit records in one log record segment to writing commit records in the other log record segment. For descriptions of the segments in the transaction log, see Log File Terms.

To switch segments, the Server must ensure that all changes that are recorded in the log record segment being switched to have been propagated to the databases. When the Server needs to switch log record segments, if not everything is propagated then the Server forces the propagations to happen quickly.

Too large a number here indicates that the log record segments are not big enough. You can improve performance by increasing the log record segment initial size. See Log Record Segment Initial Size.

Flush data 
Number of flushes to disk of data that was in the data segment of the transaction log. A flush ensures that the data is on the disk. It does not free the space the data occupies in the log. ObjectStore determines when to flush data.

Flush records 
Number of flushes to disk of records that were in a log record segment of the transaction log. A flush ensures that the changes are on the disk. It does not free the space the records occupy in the log. ObjectStore determines when to flush records.

KB data 
Number of kilobytes of data that the Server wrote to the data segment of the transaction log. When this number is high, it means one of the following:

KB records 
Number of kilobytes of records that the Server wrote to a log record segment of the transaction log. Each committed transaction writes a record to the log. This is a throughput number. Space in the log is continually reused.

KB propagated 
Number of kilobytes of committed data that was propagated from the transaction log to the database it belongs in. The Server performs propagation in small chunks that do not interfere with client activity. Propagation can include writing to databases, flushing data and records that are in the log, and, sometimes, reading from the log data segment. After propagation, the space that the propagated data and records occupied in the log becomes available for new log entries.

The number of kilobytes propagated can be smaller than the number of kilobytes written if the same data is written multiple times. ObjectStore propagates the last modification and discards earlier modifications.

KB direct 
Number of kilobytes of uncommitted data that the Server stored directly in databases. A high number here is good because it means that the data did not have the overhead of going through the log.

The Server stores uncommitted data in the database when an application tries to write data past the end of the database segment in which it needs to be stored.

The Direct To Segment Threshold Server parameter controls how far the Server can write past the current end of the database segment before the Server writes the data directly to the database. See Direct to Segment Threshold.

Propagations 
Number of times the Server propagated data from the log to databases. The Server moves small chunks of data each time it performs propagation.

If the number of propagations per second is high, the Server is probably forced to propagate for one or more of the following reasons:
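
The Lock timeouts, Lock waits, and Deadlocks meters described above reflect client-side locking behavior. The following is a minimal client-side sketch, not a complete application; the objectstore::set_readlock_timeout() and objectstore::set_writelock_timeout() entry points and their millisecond argument are assumptions here, so verify them against the ObjectStore C++ API Reference before relying on them.

#include <ostore/ostore.hh>

// Minimal sketch, not a complete application.  The entry-point names and
// the millisecond units are assumptions; check the C++ API Reference.
void configure_client_lock_timeouts()
{
    // Without a timeout, a blocked client simply accumulates Lock waits on
    // the Server.  With a timeout, a lock that cannot be granted in time
    // raises an exception in the client and is counted under Lock timeouts.
    objectstore::set_readlock_timeout(5000);    // assumed: milliseconds
    objectstore::set_writelock_timeout(5000);   // assumed: milliseconds
}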

Example

ossvrstat kellen
ObjectStore Release 5.0 Database Server
Client/Server protocol version 1.8
Compiled by staff at 97-02-18 17:40:42 in 
/h/kellen/1/r5core/obj/sun4/opt/nserver
Allow Shared Communications:       Yes
Authentication Required:       SYS, DES, Name Password
Cache Manager Ping Time:      300
Cache Manager Ping Time In Transaction:      300
DB Expiration Time:       5 seconds
Deadlock Victim:       Work
Direct To Segment Threshold:       128 sectors (64KB)
Log File:                                    /kellen/log_file_DB
Current Log Size      43024 sectors (21512KB)
Log Data Segment Growth Increment:       2048 sectors (1MB)
Log Data Segment Initial Size:       2048 sectors (1MB)
Log Record Segment Buffer Size:       1024 sectors (512KB)
Log Record Segment Growth Increment:       512 sectors (256KB)
Log Record Segment Initial Size:       1024 sectors (512KB)
Max AIO Threads      3
Max Connect Memory Usage      unlimited
Max Data Propagation Per Propagate:       512 sectors (256KB)
Max Data Propagation Threshold:       8192 sectors (4MB)
Max Memory Usage      unlimited
Max Two Phase Delay      30
Message Buffer Size:       512 sectors (256KB)
Message Buffers:       4
Notification Retry Time:       60 seconds
Preferred Network Receive Buffer Size      16384 bytes
Preferred Network Send Buffer Size      16384 bytes
Propagation Sleep Time:       60 seconds
Propagation Buffer Size:       8192 sectors (4MB)
Server Machine Usage:
      User time:      58123.6 secs
      System time:      3151.1 secs
      Max. Res. Set Size:      6639
      Page Reclaims:      1400444
      Page Faults:      54510
      Swaps:      0
      Block Input Operations:      20339
      Block Output Operations:      387732
      Signals Received:      1
      Voluntary Context Switches:      775502
      Involuntary Context Switches:      645611
Server Meters:
Total since server start up:
      Client Meters:
            1314496       messages received       23575       callback messages sent
            211960       callback sectors           94240       succeeded sectors
            3192253      KB read                   3691926       KB written
            211351       commits       89572       readonly commits
            19749       aborts       0       two phase transactions
            0      lock timeouts      341      lock waits (average 7555 us)
            74       deadlocks       0       message buffer waits
            14896      notifies sent      14938      notifies received
      Log Meters:
            219314       log records       1514       record segment switches
            52115       flush data                 225902       flush records
            0       KB data                         0       KB records
            576954       KB propagated       201924       KB direct
            28302       propagations
Total over past 60 minute(s): 
      Client Meters:
            2135       messages received       0       callback messages sent
            0       callback sectors                0       succeeded sectors
            7416       KB read       2759       KB written
            40       commits       20       readonly commits
            91       aborts       0       two phase transactions
            0      lock timeouts      0      lock waits
            0       deadlocks       0       message buffer waits
            1843      notifies sent      1772      notifies received
      Log Meters:
            116       log records       10       record segment switches
            14       flush data       187       flush records
            0       KB data       0       KB records
            1799       KB propagated       787       KB direct
            102       propagations
Total over past 10 minute(s):
      Client Meters:
            1056       messages received       0       callback messages sent
            0       callback sectors       0       succeeded sectors
            3708       KB read       1383       KB written
            20       commits       10       readonly commits
            44       aborts       0       two phase transactions
            0      lock timeouts      0      lock waits
            0       deadlocks       0       message buffer waits
            1843      notifies sent      1772      notifies received
      Log Meters:
            57       log records       5       record segment switches
            7       flush data       73       flush records
            0       KB data       0       KB records
            901       KB propagated       395       KB direct
            51       propagations
Total over past 1 minute(s):
      Client Meters:
            0       messages received       0       callback messages sent
            0       callback sectors       0       succeeded sectors
            0       KB read       0       KB written
            0       commits       0       readonly commits
            0       aborts       0       two phase transactions
            0      lock timeouts      0      lock waits
            0       deadlocks       0       message buffer waits
            0      notifies sent      0      notifies received
      Log Meters:
            0       log records       0       record segment switches
            0       flush data       1       flush records
            0       KB data       0       KB records
            0       KB propagated       0       KB direct
            0       propagations
No active clients
Server machine usage
On UNIX systems, the ossvrstat output under the Server Machine Usage heading is obtained from the getrusage() system call. The output varies according to the platform on which the Server is running. For information about what the output categories mean, see the man page for getrusage on the Server machine.

On non-UNIX platforms, the Server fills in zeros for these output categories, which indicates that the measurement is not available on that platform.
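
The categories under Server Machine Usage map directly onto the fields of the structure that getrusage() fills in. The following fragment is a minimal sketch, independent of ObjectStore, that shows how a UNIX program reports the same measurements for itself:

#include <stdio.h>
#include <sys/time.h>
#include <sys/resource.h>

/* Prints this process's resource usage using the same categories that
   ossvrstat reports under Server Machine Usage. */
int main()
{
    struct rusage ru;
    if (getrusage(RUSAGE_SELF, &ru) != 0)
        return 1;
    printf("User time:                    %ld.%06ld secs\n",
           (long) ru.ru_utime.tv_sec, (long) ru.ru_utime.tv_usec);
    printf("System time:                  %ld.%06ld secs\n",
           (long) ru.ru_stime.tv_sec, (long) ru.ru_stime.tv_usec);
    printf("Max. Res. Set Size:           %ld\n", ru.ru_maxrss);
    printf("Page Reclaims:                %ld\n", ru.ru_minflt);
    printf("Page Faults:                  %ld\n", ru.ru_majflt);
    printf("Swaps:                        %ld\n", ru.ru_nswap);
    printf("Block Input Operations:       %ld\n", ru.ru_inblock);
    printf("Block Output Operations:      %ld\n", ru.ru_oublock);
    printf("Signals Received:             %ld\n", ru.ru_nsignals);
    printf("Voluntary Context Switches:   %ld\n", ru.ru_nvcsw);
    printf("Involuntary Context Switches: %ld\n", ru.ru_nivcsw);
    return 0;
}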

Active clients
When there are active clients, the ossvrstat utility also displays something like the following:

                              Client connections awaiting a client message:
                              Client #3 (atiq/26896/(unknown))
                                    priority=0x8000, duration=4652 seconds, work=0, no transaction on server
                              Client #5 (nanook/1346/(unknown))
                                    priority=0x8000, duration=2 seconds, work=2, transaction in progress
                              Client #7 (yukiko/14916/(unknown))
                                    priority=0x8000, duration=136 seconds, work=0, no transaction on server
This is a list of the clients that have initiated a connection to the Server. In the previous example, the Server is waiting for the next message from each client. Next to the client number, the information in parentheses indicates the client's host name, its process ID, and its client name; (unknown) appears when no client name is available.

The other information provided is as follows:
priority 
A hexadecimal number that indicates the priority assigned to this transaction with the os_transaction::set_transaction_priority() method. ObjectStore uses this value to choose the victim when a deadlock occurs; the transaction with the lower number is the victim. (A client-side sketch of setting the priority follows this list.)

duration 
The number of seconds since the last successful commit by the client.

work 
The amount of work done by the client, as measured by remote procedure calls to the Server during the current transaction. Each message to the Server counts as one work unit.

comment 
Indicates whether or not a transaction is in progress.
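
The following is a minimal sketch of assigning a priority from the client. The dynamic-transaction calls, the method signatures, and the priority value shown are assumptions based on the os_transaction interface named above; check the ObjectStore C++ API Reference for the exact declarations.

#include <ostore/ostore.hh>

// Minimal sketch, not a complete application.  The begin/commit calls and
// the argument to set_transaction_priority() are assumed; verify them
// against the C++ API Reference.
void do_background_update()
{
    os_transaction *tx = os_transaction::begin();   // assumed: defaults to an update transaction
    // A lower value makes this transaction a preferred deadlock victim;
    // ossvrstat reports the value as priority=0x... for the client.
    tx->set_transaction_priority(0x1000);           // hypothetical priority value
    // ... read and modify persistent objects here ...
    os_transaction::commit(tx);                     // assumed signature
}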

API
Class: os_dbutil
Method: svr_stat

ostest: Testing a Pathname for Specified Conditions

The ostest utility indicates whether or not a pathname meets a specified condition.

Syntax

ostest [option] pathname
pathname 
The pathname of a database or directory.

Options
-d 
pathname is a rawfs directory.

-f 
pathname is a rawfs database.

-p 
pathname is a file pathname.

-r 
I (requestor) have read access to pathname.

-s 
pathname is a database with a nonzero size.

-w 
I (requestor) have write access to pathname.

Description

You can specify one option when you run this utility. The ostest utility returns one of the following exit codes:

0 
The specified condition is true.

Nonzero 
The specified condition is false.

When you specify a file database, you cannot specify a remote file-server host in the pathname of the file database. The ostest utility passes the operation to a local native utility. If you specify a remote file-server host name, ObjectStore informs you that you specified an illegal pathname.
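
For example, a maintenance script might test for write access before modifying a database and proceed only when the exit code is 0. The pathname shown is hypothetical:

ostest -w /dbs/parts.db

The command exits with 0 if the requestor has write access to /dbs/parts.db, and with a nonzero code otherwise.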

API
Class: os_dbutil
Method: stat

osupgprm: Upgrading PRM Formats

The osupgprm utility upgrades a database's address space format.

Syntax

osupgprm database-name ... 

Description

This utility changes the address space format of a database to use an enhanced PRM (persistent relocation map) format that supports deferred address space reservation.

Immediate assignment
Prior to Release 5, address space for a segment was always reserved immediately. Immediate reservation means that the first time any page in a segment is accessed, all of the address space required for that segment, including pointers out of the segment to other segments, is reserved. In some cases, this results in excessive use of address space.

Deferred assignment
Deferred assignment means that the first time that a page in a segment is accessed, the minimal amount of address space required for that page, including pointers out of that page, is reserved. Any new databases created with Release 5 are automatically created using enhanced PRM entries, unless you explicitly specify the standard format.

Upgrade is recommended
To take advantage of deferred address assignment, upgrade existing databases created with previous releases of ObjectStore to the new enhanced PRM format. Object Design recommends this upgrade in almost all cases. Release 4 clients can access only databases that use the standard (old) PRM format.

The choice between immediate and deferred assignment for a segment is made each time the segment is first used in a transaction. The type of assignment remains constant for the duration of that transaction.

First run osprmgc
Before upgrading a database, use osprmgc to conserve currently reserved address space for the database.

Cross-database pointers
Release 5 clients can access databases that use either the standard (pre-release 5) or enhanced PRM format, but the cross-database pointers must be between databases that use the same PRM format. Specify the database you want to upgrade and its target databases in any order. Target databases are also upgraded to use deferred address space reservation.
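
For example, to upgrade a database together with a database it points into, you might run osprmgc on each database first and then name both on the osupgprm command line. The pathnames are hypothetical, and this assumes osprmgc accepts a database pathname as its only argument (see the osprmgc entry in this chapter for its options):

osprmgc /dbs/parts.db
osprmgc /dbs/suppliers.db
osupgprm /dbs/parts.db /dbs/suppliers.db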

osverifydb: Verifying Pointers and References in a Database

The osverifydb utility verifies all pointers and references in a database.

Syntax

osverifydb [options] pathname
pathname 
Specifies a file or rawfs database whose pointers you want to verify.

Options
-all 
Verifies all segments including the internal segment 0.

-end_offset integer 
Specifies the offset (in bytes) within the segment at which verification ends. Defaults to 0, which means verification continues to the end of the segment.

-ignore_references 
Suppresses verification of references.

-illegal_pointer_action {null | ask} 
When used with the -all option and the null argument, sets each illegal pointer to null. With the ask argument, uses the reference value supplied in response to the query (see the examples later in this section).

-info_sector_tag_verify_opt option 
Checks that a database created on an SGI machine with a 16 K page size in an ObjectStore release prior to 5.0 can be used by heterogeneous ObjectStore applications. This option can also be used to upgrade such a database for use with heterogeneous applications if needed.

Valid option values are:

0 - Skips verifying info segment sector tags (default).

1 - Verifies info segment sector tags and reports whether the database can be used heterogeneously.

2 - Upgrades the database for heterogeneous accessibility.

5 - Causes osverifydb to report information for this option only. Other verifications usually performed by osverifydb are not made.

6 - Performs an upgrade only. Other verifications usually performed by osverifydb are not made.

-L server-log-name 
When specified, the named file is used for the Server log file. When unspecified, a temporary file is used.

This option is only applicable when you are running the utility as an ObjectStore/Single application. If the file already exists, it must be a properly formed Server log.

-n segment-number 
Verifies only the segment specified by segment-number.

-nocoll 
Suppresses integrity checks that ensure that the ObjectStore collections in the database are valid. Object Design recommends that you use this option only on databases that do not contain collections.

-o 
Displays each object in the database using the metaobject protocol.

-start_offset integer 
Specifies the start offset (in bytes) within the segment where verification is done. Defaults to 0, which means start verifying at the beginning of the segment.

-tag 
Displays the tag value on an error.

-v 
Displays the value for each pointer.

-whohas hex_address 
Lists the objects that point to the object identified by hex_address.

Description

Verification means 
Checking that each pointer and reference in the database refers to valid persistent storage, that the storage has not been deleted, and that the actual type of the object matches the pointer's declared type. These are the conditions reported in the sample output later in this section.

When osverifydb detects an invalid pointer, it indicates the location and the value of the pointer. Whenever possible, it displays a symbolic path to the bad pointer, starting with the outermost enclosing object.

The osverifydb utility runs integrity checks to ensure that the ObjectStore collections in the database are valid. You can suppress verification of collections by specifying the -nocoll option when you run osverifydb.

Verifying references
Reference verification requires that the reference be resolved to an address before it can be verified. This requires additional address space resources. In some cases, the osverifydb utility might run out of address space. Turning off reference verification allows verification of a database in such circumstances.

You would not normally include the -ignore_references option unless you had already tried to verify the database and verification failed because the utility ran out of address space.

How often
How often you should verify database pointers and references depends on how often your data changes. Verifying databases before backups is a good practice, but verification can be time-consuming. You might want to verify databases every evening.
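
For example, a site that backs up its databases nightly might verify each database first and start the backup only if verification succeeds (the pathname is hypothetical):

osverifydb /dbs/parts.db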

Schema protection
When developing an application, if you run this utility on a protected schema database, make sure that the correct key is specified in the environment variables OS_SCHEMA_KEY_LOW and OS_SCHEMA_KEY_HIGH. If the correct key is not specified in these variables, the utility fails and ObjectStore signals

err_schema_key _CT_invalid_schema_key,
"<err-0025-0151> The schema is protected and the key provided did not 
match the one in the schema."
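
For example, on UNIX under the C shell you might set the key before running the utility; the values shown are hypothetical and must match the key built into the protected schema:

setenv OS_SCHEMA_KEY_LOW 0x1234
setenv OS_SCHEMA_KEY_HIGH 0x5678

On Windows and OS/2, use set OS_SCHEMA_KEY_LOW=0x1234 and set OS_SCHEMA_KEY_HIGH=0x5678 instead.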

Example

osverifydb -all -illegal_pointer_action null vtest1.db
The null argument causes osverifydb to null all illegal pointers.

osverifydb -illegal_pointer_action ask vtest2.db
The ask argument permits selective repair; that is, it causes osverifydb to prompt for an alternative value for the illegal pointer in the format used by os_reference::load(). Here is some sample output from osverifydb in such a circumstance:

The object at 0x6020000 (</daffy/home/daffy/daffy0/dbs/verifydb1 | 2 | 
0>)(type "c1"), contains a pointer at 0x6020000(c1.m1) with the illegal 
value 0x1. It points to nonpersistent storage.
Enter replacement pointer value in reference dump format (<database path | segment number | hex offset>:
You can then press Enter, in which case the illegal pointer is set to null, or you can enter a valid reference string, such as /daffy/home/daffy/daffy0/dbs/verifydb1 | 2 | 64, which identifies an object at offset 64 in segment 2 of the database verifydb1. If the new pointer value is valid, osverifydb uses it as the replacement value for the pointer in the database.

Caution
Use the null option with caution; using it indiscriminately can result in a corrupted database.

The following output is the result of running osverifydb on a database that contains an object of type c1, with the bad pointers identified by the error messages.

beethoven% osverifydb /camper/van
Verifying database beethoven::/camper/van 
Verifying segment 2 Size: 8192 bytes
Pointer to nonpersistent storage.
Pointer Location: 0x6010000. Contents: 0x1.
Lvalue expression for pointer: c1::m1
Pointer type mismatch; the declared type is incompatible with the actual 
type of the object
Pointer Location: 0x6010004. Contents: 0x601003c.
Declared type c2*. Actual type: c3*.
Lvalue expression for pointer: c1::m2
Pointer to deleted storage
Pointer Location: 0x6010008. Contents: 0x6010040.
Declared type c2*.
Lvalue expression for pointer: c1::m3
Pointer type mismatch; the declared type is incompatible with the actual 
type of the object
Pointer Location: 0x601000c. Contents: 0x6010028.
Declared type c2*. Actual type: c1*.
Lvalue expression for pointer: c1::m4
Lvalue expression for pointed to object: c1::ma[5]
Pointer type mismatch; the declared type is incompatible with the actual 
type of the object
Pointer Location: 0x6010010. Contents: 0x6010044.
Declared type c2*. Actual type: char*.
Lvalue expression for pointer: c1::m5
Lvalue expression for pointed to object: char[0]
Pointer to nonpersistent storage.
Pointer Location: 0x6010068. Contents: 0x1.
Lvalue expression for pointer: void*[5]
Verified 5 objects in segment
Verified 5 objects in database
beethoven%
API
Class: os_dbutil
Method: osverifydb

osversion: Displaying the ObjectStore Version in Use

The osversion utility displays the version of ObjectStore that is in use on your machine.

Syntax

osversion

Examples

SPARCstation
elvis% osversion 
ObjectStore Release 5.1 for SPARC Solaris 2
Windows
[D:\] osversion
ObjectStore Release 5.1 for Windows NT Systems
OS/2
[D:\] osversion
ObjectStore Release 5.1 for OS/2
API
Class: os_dbutil
Methods: release_name
release_major
release_minor
release_maintenance

Also see the file include/ostore/osreleas.hh.
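
A minimal sketch of calling these methods from a program follows. The header name for os_dbutil, the static linkage, and the return types shown are assumptions; check the ObjectStore C++ API Reference for the exact declarations.

#include <ostore/ostore.hh>
#include <ostore/dbutil.hh>   /* assumed header for os_dbutil; verify */
#include <stdio.h>

// Minimal sketch.  Assumes release_name() returns a character string, the
// remaining methods return integers, and all four are static members of
// os_dbutil; verify against the C++ API Reference.
void print_objectstore_version()
{
    printf("%s (release %d.%d.%d)\n",
           os_dbutil::release_name(),
           (int) os_dbutil::release_major(),
           (int) os_dbutil::release_minor(),
           (int) os_dbutil::release_maintenance());
}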


