The Luminosity Database Application is a large-scale, complex project which, at its core, relies upon a set of 37 database tables and 2 views stored in the D0 offline Oracle database production instance (d0ofprd1), with a development version stored in the development instance (d0ofdev1).
The tables constitute an offline repository for information from many subsystems:
| Subsystem | Type/Use of information |
| --- | --- |
| Luminosity Monitor Detector | both fundamental and derived information needed for normalization calculation and cross checks |
| Trigger Framework | needed for normalization calculation and cross checks |
| Level 3 | needed for normalization calculation and cross checks |
| Fermilab Tevatron Accelerator | needed for normalization calculation and cross checks |
| Offline Data Processing (SAM) | used to verify and certify data integrity through offline reconstruction processing and re-processing |
| "Group" assigned quality | quality flags assigned offline by detector groups to rate the quality of their subsystem for a given LBN |
| "Consistency" determined quality | cross checks of internal luminosity detector information against that from systems outside the luminosity detector |
The Luminosity Database facilitates the following essential operations for the D0 experiment:
The program which delivers a Luminosity Normalization to the user is called lmAccess; it was previously a file-based system. lmAccess is being rewritten to access the more complete set of luminosity information available in the database, rather than what the current stage* files provide, which will allow luminosity normalizations to be better customized for individual analyses.
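As a rough illustration of the kind of query a database-backed lmAccess could issue, here is a minimal Python sketch. The table name lm_lbn_summary, the column names, and the account string are hypothetical placeholders, not the actual schema (which is described in the document referenced below).

```python
# Hypothetical sketch: fetch per-LBN luminosity with a quality cut applied.
# Table and column names are invented placeholders, not the real schema.
import cx_Oracle

def good_lbn_luminosity(conn, lbn_min, lbn_max):
    """Return (lbn, recorded_lum) pairs for LBNs flagged good."""
    cur = conn.cursor()
    cur.execute(
        """SELECT lbn, recorded_lum
             FROM lm_lbn_summary
            WHERE lbn BETWEEN :lo AND :hi
              AND quality_flag = 'GOOD'""",
        lo=lbn_min, hi=lbn_max)
    return cur.fetchall()

# Hypothetical read-only account on the development instance named above.
conn = cx_Oracle.connect("d0_reader/secret@d0ofdev1")
for lbn, lum in good_lbn_luminosity(conn, 100000, 100100):
    print(lbn, lum)
```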
Central to the Offline Luminosity Database Application is a set of 37 database tables and 2 database views in the D0 offline software environment. The database tables were laid out using the Oracle Designer tool, which generated the DDL (Data Definition Language) files that create the database entities (tables, views, indexes, keys, ...).
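For illustration, the generated DDL files might be applied to the development instance along these lines; the ddl/ directory layout, the ";"-separated statement format, and the account string are assumptions for the sketch, not Designer's documented output.

```python
# Sketch: apply Oracle Designer-generated DDL files to the development
# instance (d0ofdev1). Directory layout and ";"-delimited statements are
# assumptions about the generated files, not a documented convention.
import glob
import cx_Oracle

conn = cx_Oracle.connect("schema_owner/secret@d0ofdev1")
cur = conn.cursor()
for path in sorted(glob.glob("ddl/*.ddl")):
    with open(path) as f:
        for stmt in f.read().split(";"):
            if stmt.strip():
                cur.execute(stmt)  # CREATE TABLE / VIEW / INDEX / constraint
```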
The database, its tables, and all of the columns in every table are described in the following document:
Many tables in the Luminosity Database contain information derived exclusively from data taken online. This section summarizes the programs that have been used when large amounts of data (spanning many months or years) need to be loaded from the online system.
A description of these programs can be found here:
On the online side, we use a special version of newValidate.py to produce a python data file containing the data Helena needs for her database loading program. Greg has been running this program; it lives in /home/davisg/lm_work/lm_server/py/newValidateDB.py (and in CVS, in the lm_server package). It writes its output files to /mnt/d0olscrtchD/d0lum/db/. Because it is a special version of newValidate, it reads the "scaler" files, which will remain available even though d0olmon1's disk is nearly full. They are located at /luminosity/data/LBN-clean/. This program needs to be run on an online machine.
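The layout of these python data files is not specified here; the sketch below shows one plausible way a downstream program could consume them, assuming (purely for illustration) one Python-literal record per line.

```python
# Sketch of consuming the python data files written by newValidateDB.py to
# /mnt/d0olscrtchD/d0lum/db/. The one-record-per-line, Python-literal
# format is an assumption for illustration only.
import ast
import glob

def read_records(pattern="/mnt/d0olscrtchD/d0lum/db/*.py"):
    """Yield one record dict per non-comment line of each data file."""
    for path in sorted(glob.glob(pattern)):
        with open(path) as f:
            for line in f:
                line = line.strip()
                if line and not line.startswith("#"):
                    # Assumed format: e.g. {'lbn': 123456, 'scalers': [...]}
                    yield ast.literal_eval(line)

for rec in read_records():
    print(rec["lbn"])
```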
Helena's program then reads these python data files and creates files for loading into the database. This program is called lmDB.py; I am not sure where the most recent version is, but there is a five-month-old version in the lm_server CVS package. I believe lmDB.py also creates its files in the /mnt/d0olscrtchD/d0lum/ area. Both sets of files take a moderate amount of online disk space. Currently we are using d0olscrtchD, which we share with STT and which is slightly less than half full. Fortunately, if we need more space, Stu is bringing more disk online in the next few weeks.
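What lmDB.py actually writes is likewise not specified here; as one possibility, such a program might emit flat comma-separated data files suitable for a bulk loader such as Oracle's SQL*Loader, as in this sketch with an invented column set.

```python
# Sketch: turn parsed records into a comma-separated data file for bulk
# loading (e.g. with SQL*Loader). The output path mirrors the scratch area
# mentioned above; the column set is hypothetical.
import csv

def write_loader_file(records, path="/mnt/d0olscrtchD/d0lum/lbn_load.dat"):
    """Write one comma-separated row per LBN record; columns are invented."""
    with open(path, "w", newline="") as f:
        writer = csv.writer(f)
        for rec in records:
            writer.writerow([rec["lbn"], rec["run"],
                             rec["delivered"], rec["recorded"]])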
To make this a production version we need to combine these tools so that they work with the real-time DAQ; this is what Jeremy is working on. Hopefully we can skip the intermediate step of generating special python files that then need to be loaded; instead we want to send the data from newValidate directly into a version of Helena's code. Of course, this will require some sort of buffer, so we will not be able to eliminate the temporary files entirely (one possible layout is sketched below). Additionally, there needs to be some way of checking and re-validating the data once it is in the database. At least at first, the data will be sent from the "new" newValidate to the database loader after every run. The best machine to run this on will hopefully be a dedicated machine somewhere among the d0ol60's, once Stu moves other applications to the d0ol70's. In the meantime we will probably develop on d0ol61, which also runs our examine and some tasks for the GM and FPD. Finally, the need to re-validate data once it is in the database raises the question of how we are going to validate all the old data that is already in the database.
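One possible shape for that buffer is a per-run spool directory: the "new" newValidate drops one file per run, and a loader daemon picks each file up, loads it, and retains it for later re-validation. The paths and polling scheme below are assumptions, not an agreed design.

```python
# Sketch of a per-run spool buffer: newValidate drops one file per run into
# spool/, the loader daemon loads it and moves it to done/ so it remains
# available for re-validation. All paths are illustrative.
import os
import shutil
import time

SPOOL = "/mnt/d0olscrtchD/d0lum/spool"   # newValidate writes one file per run
DONE = "/mnt/d0olscrtchD/d0lum/done"     # kept around for re-validation

def loader_loop(load_run):
    """Poll the spool directory and load each run file exactly once."""
    while True:
        for name in sorted(os.listdir(SPOOL)):
            path = os.path.join(SPOOL, name)
            load_run(path)                               # insert into the DB
            shutil.move(path, os.path.join(DONE, name))  # keep for re-checks
        time.sleep(60)                                   # poll once a minute
```

Moving a file out of spool/ only after a successful load means nothing is lost if the loader dies mid-run, and the done/ area leaves an audit trail for the re-validation question raised above.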
I think that this portion of the project should be finished by September. It will need some time to test, so it will be very useful to see this working before the shutdown.
Use cases and the software and data entries associated with handling changes in the Luminosity Constant should be described or linked here.
Operations Reports are produced using dynamic queries of database tables in the Luminosity, Run Summary, and Trigger databases.
Initially, the goal for a 'production' mode is simply to duplicate the existing flat HTML files on the online system.
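A minimal sketch of such a dynamically generated report, assuming a hypothetical lm_run_summary table and account string: query the database and render the same kind of flat HTML table the online system serves.

```python
# Sketch: duplicate a flat HTML operations report from a dynamic query
# against the production database. Schema and account names are invented.
import cx_Oracle

def ops_report_html(conn):
    """Render a run summary as a flat HTML table."""
    cur = conn.cursor()
    cur.execute("SELECT run, lbn_count, recorded_lum "
                "FROM lm_run_summary ORDER BY run DESC")
    rows = ["<tr><td>%s</td><td>%s</td><td>%s</td></tr>" % row
            for row in cur.fetchall()]
    return ("<table border=1><tr><th>Run</th><th>LBNs</th>"
            "<th>Recorded Lum</th></tr>\n" + "\n".join(rows) + "\n</table>")

conn = cx_Oracle.connect("reader/secret@d0oflum1")  # production, per the links below
print(ops_report_html(conn))
```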
The documentation for the components producing the dynamically generated Operations Reports can be found here:
Official Links to the Operations Reports (pointing to the production database d0oflum1 unless noted otherwise):
Unofficial Developers' Links to Operations Reports in development: