Trigger Actions at STAR ONL Commands

Discussion and Proposals


I have started this page as a point of discussion for what our "final" system will look like. It will mainly be used for integration with STAR ONL, which in our case means integration between TRG ONL code and TRG VME code. At the moment we have a system which works for our purposes of moving TRG data and tokens around. We will need to make some modifications to have a smoothly running system controlled by STAR ONL, and I hope this page will help. The table below is a proposed division of TRG VME tasks per STAR ONL command. There may be some things which are not presently brought up, but I hope they will be with further discussion. The hope is to use this to write more efficiently the code needed to turn control over to ONL. Much of this came up in the meeting of Tuesday, January 18, 2000, so this can almost be thought of as notes from that meeting.
The table below lists, for each ONL command, the corresponding actions and comments for each crate: L1CTL, L1CTB, L1DC, L2CTL, and RCC.
(script run during power-up)

L1CTL:
# Load file for L1 crate system
# File for trgfe2
#
# initialize some variables
#
tonkoDbgMpic=0
ISPlogMsgEnable=0
DC_Win_Base=0x52000000
CB_Win_Base=0x50000000
#
# load all software
#
cd "../lib"
ld < sciLib
ld < memcpy32.o
cd ".."
cd "trglib"
#ld < date_str.o
ld < trgLogSocket.o
ld < logmessage.o
ld < string_id.o
ld < cnf_open.o
ld < trgScram.o
ld < trgdma.o
ld < dsm_config.o
cd ".."
cd "L1CTL/start"
ld < L1_Startup.o
cd ".."
cd "config"
ld < tcu_config.o
ld < L1_Hardware_Config.o
ld < L1_Software_Config.o
cd ".."
cd "io"
ld < L1_IO.o
cd ".."
cd "tokn_mngr"
ld < L1_Token_Manager.o
cd ".."
cd "hard_int"
ld < L1_Hardware_Interface.o
cd ".."
cd "control"
ld < L1Control.o
ld < L1Analysis.o
Only load code at startup.

L1CTB:
# Load file for CTB crate system.
# File for trgfe5
#
#
# initialize some variables
#
tonkoDbgMpic=0
ISPlogMsgEnable=0
DC_Win_Base=0x50000000
L1_Win_Base=0x52000000
#
# load all software
#
cd "../lib"
ld < sciLib
ld < memcpy32.o
#
# load trigger library code
#
cd ".."
cd "trglib"
#ld < date_str.o
ld < trgLogSocket.o
ld < logmessage.o
ld < string_id.o
ld < cnf_open.o
ld < trgScram.o
ld < trgdma.o
ld < dsm_config.o
cd ".."
cd "L1CTB"
ld < L1CTB_Startup.o
ld < CTB_Hardware_Config.o
ld < L1CTBConfig.o
ld < L1CTBControl.o
L1DC:
# Load file for DC crate system.
# File for trgfe1
#
# initialize some variables
#
tonkoDbgMpic=0
ISPlogMsgEnable=0
#
# load all software
#
cd "../lib"
ld < sciLib
ld < memcpy32.o
#
#
# load trigger library code
#
cd ".."
cd "trglib"
#ld < date_str.o
ld < trgLogSocket.o
ld < logmessage.o
ld < string_id.o
ld < cnf_open.o
ld < trgScram.o
ld < trgdma.o
ld < dsm_config.o
cd ".."
cd "L1DC/start"
ld < DC_Startup.o
cd ".."
ld < L1DCConfig.o
#ld < L1DCIO.o
#ld < L1DCTokenManager.o
ld < L1EventBuilder.o
ld < L1L2Interface.o
ld < L1TokenReturn.o
L2CTL:
# Load file for L2CTL crate system.
# File for trgfe4
#
# initialize some variables
#
tonkoDbgMpic=0
ISPlogMsgEnable=0
#
# load all software
#
cd "../lib"
ld < sciLib
ld < memcpy32.o
#
# load trigger library code
#
cd "../trglib"
ld < trgLogSocket.o
#ld < date_str.o
ld < logmessage.o
ld < string_id.o
ld < cnf_open.o
ld < trgScram.o
#
# load L2CTL specific code
#
cd "../L2CTL/start"
ld < L2_Startup.o
cd "../"
ld < L2CTLConfig.o
ld < L2L1Interface.o
ld < L2Control.o
ld < L2QueManager.o
ld < L2Analysis.o
ld < L2EventBuild.o
#
# load TDI software
#
cd "tdi"
ld < trg_tdi.o
#ld < trg_first_daq.o
RCC:
# this should be common
# 09-apr-99 je replace nfsMount to antares with startrg

routeAdd "0", "130.199.88.24"
hostAdd "startrg.star","130.199.88.142"
hostAdd "antares.star","130.199.88.155"
hostAdd "daq","130.199.88.55"

nfsAuthUnixSet("vxworks",1001,10,0,0)

#nfsMount "antares.star","/export/startrg/users","/home"
nfsMount "startrg.star","/export/startrg2/users","/home"

cd "/home/trg/trg_soft_dev"

tonkoDbgMpic=0
tonkoSuspendOnBerr=0

ld <rsh:wind_ppc/target/config/kern2306/uniDmaLib.o

# local difference
shellPromptSet("rccctl> ")

# users code after this point
cd "/home/trg/rcc_test"

ld < rcc_test.o68k

Only load code to go from disabled to enabled state.
disableToEnable: initialize variables, initialize networks, spawn tasks

L1CTL:

hostAdd ("trgfe1", "130.199.88.177");
hostAdd ("trgfe5", "130.199.88.144");
#
#  initialize SCI nodes
#
L1_NodeId=sciHostGetByName("trgfe2");
DC_NodeId=sciHostGetByName("trgfe1");
CB_NodeId=sciHostGetByName("trgfe5");
#
#  initialize SCI
#
sci_init (L1_NodeId,0x50000000, 0x04000000, 0x60000000, 0x1000000);
sciRemoteMap (0x50000000,CB_NodeId<<16,0x0,0x02000000);
sciRemoteMap( 0x52000000,DC_NodeId<<16,0x0,0x02000000);
#
# spawn processes
#
L1_Startup ();
taskSpawn("L1_SC",90,0x0,0x20000,(FUNCPTR)L1_Software_Config,0,0,0,0,0,0,0,0,0,0);
taskSpawn("L1_TM",90,0x0,0x20000,(FUNCPTR)L1_Token_Manager,0,0,0,0,0,0,0,0,0,0);
taskSpawn("L1_HI",110,0x0,0x20000,(FUNCPTR)L1_Hardware_Interface,0,0,0,0,0,0,0,0,0,0,0);
taskSpawn("L1_CN",90,0x0,0x20000,(FUNCPTR)L1Control,0,0,0,0,0,0,0,0,0,0);
taskSpawn("L1_AN",90,0x0,0x20000,(FUNCPTR)L1Analysis,0,0,0,0,0,0,0,0,0,0);
These lines should be taken out of the start-up script and placed somewhere in code. Candidates are trgServer (the ONL socket process on the VME processor), L1_Startup, or L1_IO. If placed in L1_IO, an error code should be returned to ONL on failure. In general, these steps are done once per power cycle of the crate, so there may be a need for flags to be set somewhere so that ONL knows we would not want to return to a "disabled" state. Another possibility is to configure hardware (DSM/TCU?) here as well.
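As a sketch of the flag idea above: a once-per-power-cycle initialization that reports a status code back to ONL. Everything here is hypothetical for illustration (the names trg_net_init, net_init_done, do_network_setup, and the TRG_* codes are made up); the real version would wrap the hostAdd/sci_init/sciRemoteMap calls.

```c
/* Hypothetical status codes returned to ONL */
#define TRG_OK            0
#define TRG_ALREADY_INIT  1
#define TRG_NET_FAIL     -1

/* Set once per power cycle so ONL knows not to re-run the
 * network/SCI setup (and not to return us to "disabled"). */
static int net_init_done = 0;

/* Stand-in for the real hostAdd/sci_init/sciRemoteMap calls. */
static int do_network_setup(void)
{
    /* ... network and SCI initialization would go here ... */
    return 0;  /* 0 = success */
}

int trg_net_init(void)
{
    if (net_init_done)
        return TRG_ALREADY_INIT;   /* done once per power cycle */
    if (do_network_setup() != 0)
        return TRG_NET_FAIL;       /* report failure to ONL */
    net_init_done = 1;
    return TRG_OK;
}
```

The point of the distinct TRG_ALREADY_INIT code is that ONL can tell a repeated disableToEnable apart from a genuine failure.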
L1CTB:
hostAdd("trgfe1", "130.199.88.177");
hostAdd("trgfe2", "130.199.88.178");
#hostAdd "trgfe5", "130.199.88.144"
#
#  define SCI nodes
#
L1_NodeId=sciHostGetByName("trgfe2");
DC_NodeId=sciHostGetByName("trgfe1");
CB_NodeId=sciHostGetByName("trgfe5");
#
#  initialize SCI
#
sci_init(CB_NodeId,0x50000000,0x04000000,0x60000000,0x1000000)
sciRemoteMap (0x50000000,DC_NodeId<<16,0x0,0x02000000);
sciRemoteMap (0x52000000,L1_NodeId<<16,0x0,0x02000000);
#
#  initialize message queues
#
L1CTB_Startup();
#
# spawn processes
#
taskSpawn("CTB_SC",90,0x0,0x20000,(FUNCPTR)L1CTBConfig,0,0,0,0,0,0,0,0,0,0);
taskSpawn("CTB_CN",90,0x0,0x20000,(FUNCPTR)L1CTBControl,0,0,0,0,0,0,0,0,0,0);
L1DC:
#hostAdd("trgfe1", "130.199.88.177");
hostAdd("trgfe2", "130.199.88.178");
hostAdd("trgfe5", "130.199.88.144");
#
#  define SCI nodes
#
L1_NodeId=sciHostGetByName("trgfe2");
DC_NodeId=sciHostGetByName("trgfe1");
CB_NodeId=sciHostGetByName("trgfe5");
#
#  initialize message queues
#
DC_Startup();
#
#  initialize scramnet
#
init_scram(1);
setup_isr(0x86,7,1);
#
# spawn processes
#
sp L1DCConfig
#sp DCTokenManager
taskSpawn("L1EVB",90,0x0,0x20000,(FUNCPTR)L1EventBuilder,0,0,0,0,0,0,0,0,0,0);
taskSpawn("L1L2I",90,0x0,0x200000,(FUNCPTR)L1L2Interface,0,0,0,0,0,0,0,0,0,0);
taskSpawn("L1TR",90,0x0,0x20000,(FUNCPTR)L1TokenReturn,0,0,0,0,0,0,0,0,0,0);
L2CTL:
#hostAdd "trgfe4", "130.199.88.208"
#
#   the following is commented out when in loopback mode
hostAdd("tdi", "172.16.1.4");
#
#
#  define SCI nodes
#
L2_NodeId=sciHostGetByName("trgfe4");
#   the following is commented out when in loopback mode...
DAQ_NodeId=sciHostGetByName("tdi");
#
# and replaced by this for loopback
#
#DAQ_NodeId=sciHostGetByName("trgfe4")+1;
#
#  initialize SCI
#
sci_init(L2_NodeId,0x50000000,0x02000000,0x60000000,0x1000000);
sciRemoteMap(0x50000000,DAQ_NodeId<<16,0x40000000,0x10000000);
#
# configure L2CTL
#
L2CTLRdConfig ("L2CTL");
#
#  initialize message queues
#
L2_Startup();
#
#  initialize scramnet
#
init_scram (2);
setup_isr (0x87,7,2);
#
# spawn processes
#
taskSpawn("tTdi",90,0x0,0x20000,(FUNCPTR)trigger_daq_interface,0,0,0,0,0,0,0,0,0,0);
taskSpawn("L2L1I",90,0x0,0x20000,(FUNCPTR)L2L1Interface,0,0,0,0,0,0,0,0,0,0);
taskSpawn("L2TR",90,0x0,0x20000,(FUNCPTR)L2TokenReturn,0,0,0,0,0,0,0,0,0,0);
taskSpawn("L2CTL",90,0x0,0x20000,(FUNCPTR)L2Control,0,0,0,0,0,0,0,0,0,0);
taskSpawn("L2QM",90,0x0,0x20000,(FUNCPTR)L2QueManager,0,0,0,0,0,0,0,0,0,0);
taskSpawn("L2Ana",90,0x0,0x20000,(FUNCPTR)L2Analysis,0,0,0,0,0,0,0,0,0,0);
taskSpawn("L2EVB",90,0x0,0x20000,(FUNCPTR)L2EventBuild,0,0,0,0,0,0,0,0,0,0);
RCC:
These lines would be added to either a TRG ONL process or a TRG VME process.

RCC_Write(0x18000004,0x0);
RCC_Read(0x18000004);
RCC_Write(0x18000008,0x0);
RCC_Read(0x18000008);
RCC_Write(0x1800000c,0x6e);
RCC_Read(0x1800000c);
RCC_Write(0x18000010,0x0);
RCC_Read(0x18000010);
RCC_Write(0x18000014,0x6e);
RCC_Read(0x18000014);
RCC_Write(0x18000018,0x0);
RCC_Read(0x18000018);
RCC_Write(0x1800001c,0x0);
RCC_Read(0x1800001c);
RCC_Write(0x18000020,0x0);
RCC_Read(0x18000020);
RCC_Write(0x18000024,0x0);
RCC_Read(0x18000024);
RCC_Write(0x18000028,0x0);
RCC_Read(0x18000028);
RCC_Write(0x1800002c,0x0);
RCC_Read(0x1800002c);
RCC_Write(0x18000030,0x0);
RCC_Read(0x18000030);
RCC_Write(0x1800003c,0x0);
RCC_Read(0x1800003c);
RCC_Write(0x18000034,0x0);
RCC_Read(0x18000034);
These lines would go in some function in either a TRG ONL or a TRG VME process.
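The RCC cell above writes each register and immediately reads it back. If that pattern is meant as a verification step, a table-driven write-then-verify loop would be more compact. This is only a sketch: rcc_init_and_verify and rcc_table are hypothetical names, the table below repeats only the first few entries of the sequence, and the RCC_Write/RCC_Read here are simulations against a local array rather than the real VME-mapped hardware.

```c
#include <stdint.h>

/* Simulated RCC register file keyed by offset from 0x18000004.
 * The real RCC_Write/RCC_Read would touch VME-mapped hardware. */
static uint32_t regs[16];

void RCC_Write(uint32_t addr, uint32_t val)
{
    regs[(addr - 0x18000004) / 4] = val;
}

uint32_t RCC_Read(uint32_t addr)
{
    return regs[(addr - 0x18000004) / 4];
}

/* Table-driven version of the write/read-back sequence. */
struct rcc_init { uint32_t addr; uint32_t val; };

static const struct rcc_init rcc_table[] = {
    {0x18000004, 0x0}, {0x18000008, 0x0}, {0x1800000c, 0x6e},
    {0x18000010, 0x0}, {0x18000014, 0x6e}, {0x18000018, 0x0},
};

/* Returns 0 on success, or the offending address on a read-back
 * mismatch, so the caller can report it to ONL. */
uint32_t rcc_init_and_verify(void)
{
    unsigned i;
    for (i = 0; i < sizeof rcc_table / sizeof rcc_table[0]; i++) {
        RCC_Write(rcc_table[i].addr, rcc_table[i].val);
        if (RCC_Read(rcc_table[i].addr) != rcc_table[i].val)
            return rcc_table[i].addr;
    }
    return 0;
}
```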
enableToReady: configure software, configure hardware

L1CTL:
L1_IO(3)
L1_IO(4) - ?
L1_IO(5) - ?

Separate out software configuration from hardware configuration tasks? Software configuration is fast and mostly internal to trigger (debugging/monitoring). Do we make separate commands for different hardware configuration tasks, e.g. TCU vs. DSM configuration? The reason is that it takes a long time to configure the DSMs. Maybe a separate command for configuring first/last vs. middle DSMs? Again, time savings is the issue. We will need a trigger task (not clear if this is in TRG ONL or TRG VME) to determine whether a piece of hardware needs to be configured between runs.
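One way the "does this hardware need reconfiguring between runs?" decision could work is to cache an identifier for the last configuration loaded and skip the slow download when it matches the request. A minimal sketch, assuming hypothetical names (dsm_configure_if_needed, dsm_download, and the cfg_id strings are all invented for illustration):

```c
#include <string.h>

#define CFG_ID_LEN 32

/* Identifier of the configuration last loaded into the (slow) DSMs;
 * empty string until the first load after a power cycle. */
static char dsm_loaded_cfg[CFG_ID_LEN];

/* Stand-in for the real, slow DSM download (dsm_config etc.). */
static int dsm_download(const char *cfg_id)
{
    /* ... actual DSM configuration would go here ... */
    strncpy(dsm_loaded_cfg, cfg_id, CFG_ID_LEN - 1);
    dsm_loaded_cfg[CFG_ID_LEN - 1] = '\0';
    return 0;
}

/* Returns 1 if a (re)configuration was performed, 0 if skipped
 * because the requested config is already loaded. */
int dsm_configure_if_needed(const char *cfg_id)
{
    if (strncmp(dsm_loaded_cfg, cfg_id, CFG_ID_LEN) == 0)
        return 0;              /* same config as last run: skip */
    dsm_download(cfg_id);
    return 1;
}
```

Whether this check lives in TRG ONL or TRG VME is exactly the open question in the comment above; the caching itself works either way.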
L1CTB:
L1CTBConfig()
initMem(); - ?

The initMem task does not exist at the moment, but we need to zero arrays in memory. The L1CTBConfig task exists and receives a message from the L1CTL task with the command to configure. Again, a separate command is needed for hardware configuration.

L1DC:
L1DCConfig();

initMem(); - ?

Here we will need to zero arrays, which is not being done at the moment. Would this task need to be in the readyToRunning task? Here too a separate command is needed for hardware configuration.

L2CTL:
L2CTLConfig();
initMem(); - ?

readyToRunning: start run

L1CTL:
L1_IO(1,X)

RCC:
RCC_Write(0x1800003c,0x0);
Need communication between TRG ONL and RCC.
runningToReady: stop run

L1CTL:
L1_IO(0)
It is not clear what happens here. At the moment, tokens are stopped from loading onto the TCU. We will also need to get an accounting of all tokens in the system. Anything else?

L1CTB:
L1CTBtokenManager(); - ?
Needs info at stop run to check on the status of tokens in this crate. Again, we need to define a communication protocol between either L1CTL and L1CTB or TRG ONL and L1CTB.

L1DC:
L1DCtokenManager(); - ?
Needs the stop-run status to check token status in this crate. Again, we need to establish a communication protocol between either TRG ONL and L1DC or L1CTL and L1DC.

L2CTL:
L2CTLtokenManager(); - ?

RCC:
RCC_Write(0x1800003c,0x0);
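The stop-run token accounting mentioned above could be done with a per-crate bitmap of outstanding tokens: set a bit when a token is issued, clear it on return, and at stop run any bits still set identify tokens stuck somewhere in the system. This is only an illustration, assuming a 12-bit token space (0..4095); the names token_issue, token_return, and tokens_outstanding are hypothetical, not existing TRG code.

```c
#include <stdint.h>

#define NTOKENS 4096  /* assumed 12-bit token space */

/* One bit per token: 1 = issued and not yet returned. */
static uint32_t token_map[NTOKENS / 32];

void token_issue(unsigned t)  { token_map[t / 32] |=  (1u << (t % 32)); }
void token_return(unsigned t) { token_map[t / 32] &= ~(1u << (t % 32)); }

/* Count tokens still outstanding, e.g. for the stop-run report. */
int tokens_outstanding(void)
{
    int n = 0;
    unsigned i, b;
    for (i = 0; i < NTOKENS / 32; i++)
        for (b = 0; b < 32; b++)
            if (token_map[i] & (1u << b))
                n++;
    return n;
}
```

Each token manager (L1CTB, L1DC, L2CTL) could report its own count; a nonzero total at stop run flags tokens that never made it back, which feeds directly into the communication-protocol question above.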


The main issues, as I see them, are:

1) Establish division of tasks, i.e. what does TRG VME do upon request of a certain command.
2) How to address configuration question. (DSM's take a long time and don't need to be configured run-to-run).
3) What needs to be done at "Stop Run" command.
4) Communication needs to be established between all processors and TRG ONL.

Anything else?