0) HPSS access capability

All STAR data-sinking and data-reconstruction output is stored in HPSS. Our current configuration provides 6+6 9940A drives (data sinking and data analysis), with an intrinsic limit of 60 MB/sec for each class of service (COS). The current RCF planning and computing requirements include an upgrade of the aggregate theoretical data rate by migrating to 9940B drives (3x the speed), at the expense of one drive per COS (due to budget restrictions).

Unfortunately, the Y2 data analysis experience has shown that the reconstruction farm efficiency is strongly affected by the number of drives: one limiting factor is how many files we can access at a time, rather than the speed at which we access them. It has been demonstrated that we are at least 50% short in the number of drives. In addition, a doubling of the farm capacity (already planned and part of the requirements) will make the situation worse: the CPUs will sit idle most of the time for lack of data. Finally, the reduced number of drives in the two COS leaves no margin for failure recovery, while our computing projections assume much higher reliability in accessing the HPSS pool (production-mode data transfer to remote facilities, a computing model based on retrieving files from HPSS in a circular fashion (distributed disk model), more tools accessing HPSS, etc.).

To cope with these major limiting factors, we need the following extension: 1+1 drives to recover from the 9940A to 9940B transition shortage, and at minimum 3 more drives to recover the known 50% missing capacity. Note once again that this request only makes up for an already known deficit in our hardware backbone.
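The drive arithmetic above can be sketched as a back-of-envelope model for a single COS. The per-drive 9940A rate of 10 MB/s is an assumption, chosen so that 6 drives match the stated 60 MB/sec aggregate limit; the 9940B is quoted as 3x faster.

```python
# Illustrative capacity model for one HPSS class of service (COS).
# Assumption: each 9940A sustains ~10 MB/s, so 6 drives give the
# stated 60 MB/s aggregate; a 9940B is 3x faster (~30 MB/s).

def cos_capacity(n_drives, mb_s_per_drive):
    """One COS: (max concurrent file accesses, aggregate MB/s)."""
    return n_drives, n_drives * mb_s_per_drive

current  = cos_capacity(6, 10)   # 6x 9940A per COS today
migrated = cos_capacity(5, 30)   # 5x 9940B per COS after migration

print(current)    # (6, 60)
print(migrated)   # (5, 150)
```

The aggregate rate rises after migration, yet the number of files that can be read at once drops, and concurrent file access is exactly the quantity that the Y2 experience showed to be the bottleneck, already short by at least 50%.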
Estimated Cost:
  T9940B Full Fabric DRV LIB MNT,
    + cabinet tray, fiber cables & connectors               5 x 32.0k
  SILKWORM 3800 FC SWITCH, 2GB                              1 x 20.3k
  GBit FC adapter                                           1 x  2.2k
  =========================================================================
  Total                                                      $ 182.5k

1) Database Server Machines

STAR database data is served to user processes by several dedicated server machines which mirror the master database server. The number of servers needed to handle the user load (primarily STAR reconstruction) increases with the processing power of the RCF facility. To keep up with planned upgrades to STAR's allotment in RCF, we will need to double the number of database servers from 4 to 8.

Estimated Cost:
  Linux 1U dual-CPU, 1 GB memory, 200 GB internal SCSI disks   4 x 5.0k
  =========================================================================
  Total                                                         $ 20.0k

2) Network upgrade of DAQ and Control rooms

Online data analysis and monitoring rely on data flow from DAQ within the STAR network at the site. The proposed increases in event rate and event-processing capability expand our use of the local network bandwidth. We propose to alleviate any congestion through targeted use of Gigabit network connections on high-use data paths, such as database connections and event-pool access. This will require additional Gigabit cards in our Catalyst switch as well as one or more stand-alone multi-port switches.

Estimated Cost:
  18-port GB Catalyst switch modules                           2 x 8.0k
  24x100BaseTX + 2x1000BaseTX port switches                    2 x 4.0k
  =========================================================================
  Total                                                         $ 24.0k

3) STAR Printing capabilities upgrade

Most of the printers available to STAR users are overused, obsolete, and no longer worth maintaining from a speed/reliability/usage perspective. We would therefore like to replace our current equipment (LaserJet 5SiMX, Tek350A) with up-to-date technology.
Estimated Cost:
  Phaser 4400/DT color laser printer or equivalent             1 x 2.0k
  HP LaserJet 4100n or equivalent                              1 x 1.6k
  =========================================================================
  Total                                                          $ 3.6k

Estimated Grand Total: $ 230.1k
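As a sanity check on the grand total, the line items above can be summed directly (amounts in $k; item labels are shorthand for the entries in the cost tables):

```python
# Cross-check of the request totals (all amounts in $k).
items = {
    "T9940B drives (5 x 32.0)":           5 * 32.0,
    "SILKWORM 3800 FC switch":            20.3,
    "GBit FC adapter":                    2.2,
    "Database servers (4 x 5.0)":         4 * 5.0,
    "Catalyst GB switch modules (2 x 8.0)": 2 * 8.0,
    "Stand-alone switches (2 x 4.0)":     2 * 4.0,
    "Color laser printer":                2.0,
    "HP LaserJet 4100n":                  1.6,
}
grand_total = round(sum(items.values()), 1)
print(grand_total)   # 230.1
```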