STAR   Computing Tutorials main page
Process for Defining New or Evolved DB Tables
Offline computing tutorial Maintained by Jeff Porter
Last modified Fri Jul 03 00:19:00 1998


The process for defining new database tables has several steps, many of which serve to make the data available to the offline infrastructure. The steps are listed here and detailed further down. Currently, the only limitations on who can perform these steps are access protections, either for writing to the database or for updating the CVS repository. Such permissions are granted to specific individuals responsible for each domain level (e.g. Tpc, Svt, ...).

Several of the steps involve running perl scripts. Each of these scripts requires arguments; running a script with no arguments prints a description of what the script does and which arguments are needed or optional.

Schema Definition in Database

The data objects stored in the DB are pre-defined by c-structs in header files (located in the $STAR/StDb/include directories). The database is fed the information about these c-structs and stores it for use during later read and write operations. See Schema Definitions for the storage structure used during this process.

The process is performed by a set of perl scripts which do the following:

dbTableXml.pl Creates an XML version of the table schema, with added info that is not in the header file.
dbDefTable.pl Loads the schema & added information from XML into the database.
dbMakeFiles.pl Queries the DB & produces idl and/or header file representations of the structure.
dbGetNode.pl Requests the list of named tables in the database.
dbDefNode.pl Loads (updates) the list of named tables in the db.
dbGetConfig.pl Requests a hierarchical structure from a db.
dbDefConfig.pl Updates a hierarchical structure in a db.
These steps are done pointing to a specific database (and a specific MySQL server). That is, the schema used to store data in a database is kept within that database so there is no overlap between, say, a struct defined for the tpc Calibrations and that defined for the emc Calibrations.

Example of taking a new c-struct all the way through to making it available in a Configuration (such as is used in Offline).

Build an XML file for the daqDetector struct in "daqDetector.h" for insertion
into the Configurations_daq database.
Default output is xml/Configurations/daqDetector.xml relative to the cwd.
Step 1. dbTableXml.pl -f include/Configurations/daqDetector.h -d Configurations_daq

Use XML file to add daqDetector schema to DB in server onlsun1:3310
Step 2. dbDefTable.pl -f xml/Configurations/daqDetector.xml -s onlsun1.star.bnl.gov:3310

Request (for updating) the list of named references in the database
Step 3. dbGetNode.pl -d Configurations_daq -s onlsun1.star.bnl.gov:3306

After editing the produced file (daqNodes.xml), update the db.
Step 4. dbDefNode.pl -f daqNodes.xml -s onlsun1.star.bnl.gov:3310

Extract a hierarchy structure from the database for updating; the output
will be a file named something like Configurations_daq_reconV0.xml.
Step 5. dbGetConfig.pl -d Configurations_daq -s onlsun1.star.bnl.gov:3310

After updating the structure, add to the database
Step 6. dbDefConfig.pl -f Configurations_daq_reconV0.xml -s onlsun1.star.bnl.gov:3310

Extract an idl or header file of the structure referenced by name=daqDetector
from the database (Configurations_daq) on server onlsun1:3310,
and write it into idlDirectory and/or headerDirectory relative to the cwd.
Step 7. dbMakeFiles.pl -n daqDetector -d Configurations_daq -s onlsun1.star.bnl.gov:3310 -o idlDirectory -c headerDirectory

Data Loading into DataBase

Data loading into the database can take many forms. The most typical is via the C++ API, and it can be done from a ROOT-CINT macro once the c-struct has been compiled and exists in a shared library. Examples are provided on the How To Page and more will follow.

Idl definitions in the Offline Repository

For data to be accessed in the offline chain, a TTable class for the database struct must exist. The current prerequisite for this is the existence of an idl file for the structure; this is the reason the dbMakeFiles.pl script was written. Once dbMakeFiles.pl is run for a specific structure to produce the idl file from the database record of the schema, the file must be placed in a repository that the Offline build scripts know about in order for the appropriate TTable to be produced. This repository is currently:

Since this repository is not sub-divided by domains, it can only be updated by a few people. Such access will be expanded to more people as the process is put to further use.

To test new tables in the database with the TTable class:

- Add the table to the database, Steps 1 and 2 in the example above
- Check out the database idls: cvs co StDb/idl
- Add the header from the db to the idl directory, Step 7 with the -o StDb/idl option
- Build the StDb_Tables.so shared lib: cons +StDb
- If all goes well & you have permission to update the cvs repository: cvs ci StDb/idl
There are several constraints on c-struct definitions that originate from the StAF-to-ROOT migration in the Offline software.

Constraints on DB-data definitions

  • _st is appended to table names in offline code
  • Table names cannot be more than 16 chars
  • Element names cannot be more than 31 chars

Table (Query) List for St_db_Maker

(Now Obsolete)

To access a set of data in the offline chain, the St_db_Maker needs to know what data to request from the database. This is predetermined at the St_db_Maker's construction and Init() phases. Specifically, the St_db_Maker is constructed with a "const char* maindir" argument of the form,

which tells the object to make a request for a table list named "Geometry_hierarchy". (See the comment on storage structure.) Any new table must be added to the list in order for it to be accessed. There is only one list per database, so "versioning" of the retrieved list is not done directly; rather, the list may change as a function of the timeStamp.

The structure of this list is kept only inside the St_db_Maker and stored as a binary blob inside the params database. Updates must be done only on "www.star.bnl.gov". An example macro for defining such a list for Calibrations is Calib_v0.C; the structure is then loaded into the "params" database with writeConfig.C.

Table (Query) List from StDbLib

The C++ API was written to understand the Configuration table structure and to provide access to an ensemble of data with a single query. For example, the configuration list is requested by version name and provides "versioning" at the individual-table level. It also prepares the list with individual element identifiers, so that different rows (e.g. tpc-sectors, emc-modules, ...) can be accessed independently, and records whether the table carries a "baseLine" or "time-Indexed" attribute, which indicates how one should reference the data instances.

Storage of a Configuration can be done via the C++ API (see StDbManager::storeConfig()) but is generally done via the perl script dbDefConfig.pl. The idea behind the configuration is to allow sets of structures to be linked together in an ensemble. Many such ensembles could be formed and identified uniquely by name; however, we currently do not use this flexibility in the Offline access model, so only one named ensemble exists per database. In the current implementation of the Offline access model, the St_db_Maker requests this one configuration from the StDbBroker, which passes the request to the C++ API. The sole configuration is returned to the St_db_Maker as an array of c-structs identifying each table by name, ID, and parent association.