
Systems for scheduling nuclear physics analysis of large, remote datasets are being developed rapidly and independently by many separate experiments.  These separate efforts attempt to share implementation knowledge, but end up with different, non-interchangeable interfaces that in turn require substantial expertise from users.















Figure 1. The overall architecture of the proposed system for submitting and tracking abstract jobs to the data analysis grid.


Figure 1 shows the overall architecture of the proposed system.  The grid service exposes only a small interface to the client, parsing requests and returning tracking information.  On the back end of the grid service, requests are translated into system-specific calls to the component services that make up the data analysis system.  A meta-scheduler component will coordinate the operations needed to carry out client requests, including the ability to interact with more than one actual system scheduler.  The architecture scales to many clients because each client will create a new instance of the service from a factory service.

Summary: Tasks of Phase I

Task 1:    Define interface with community through PPDG

Within the umbrella of the PPDG, work with the HENP community to identify the common back-end components needed by the Abstract Job Handling Service, along with any common interfaces.  In addition, work towards convergence on a generalized Abstract Job Handling Description Language to serve as the communication mechanism for submission and tracking services across multiple virtual organizations.

Task 2:    Construct Service for current STAR analysis system

Construct a grid service interface in WSDL for passing requests written in the existing STAR/JLab Job Definition Language to a service that hands them off to the current STAR/JLab Scheduler.  This will involve modifying the JobInitializer class of the scheduler software and writing a service implementation that wraps the STAR/JLab Scheduler.

Task 3:    Construct Client Application to use STAR analysis service

Construct a client application with the latest version of the Globus Toolkit that initializes a proxy and sends a request.  The client will initially include a built-in editor for working with XML files written in the Job Definition Language.

Task 4:    Migrate Service to Process Abstract PPDG Submissions

Convert the grid service interface, service implementation, and client to use the agreed-upon Abstract Job Handling Description Language and Abstract Job Handling Service interface.  (Tracking will be left for Phase II work.)

Task 5:  Write the final report

Define interface with community through PPDG (Task 1)

An interface definition for a Grid service is ultimately expressed in the WSDL that describes the service method names, arguments, return types, and exceptions thrown.  However, embedded in the WSDL are XML schema definitions of any complex types used as arguments or return types for the service methods; these schema elements can be developed outside of the Grid service WSDL and form a large part of the task of defining the interface.  Therefore, in this task, we will begin by working with the PPDG to produce an agreed-upon Abstract Job Handling Description Language (AJHDL).  A reasonable starting point for the new AJHDL is the User Job Description Language (UJDL) proposed by the STAR/JLab collaboration, as mentioned in section 2.2.2 above.  The current STAR Scheduler does not implement the UJDL schema, but rather a simpler version called the Job Description Language (JDL) that has different tag names and less functionality.  As an example of what might be produced in this task, we discuss a possible version of AJHDL based closely on the UJDL work.  A rough schema for a request might then be:


<xsd:element name="request">
  <xsd:complexType>
    <xsd:sequence>
      <xsd:element ref="application"/>
      <xsd:element ref="task"/>
      <xsd:element ref="dataset"/>
      <xsd:element ref="custom"/>
    </xsd:sequence>
  </xsd:complexType>
</xsd:element>


where we give the definitions for the four element tags below, one by one, starting with the application tag.  An application is essentially a name and a version number.  Thus, the schema for the application tag includes “name” and “ver” attributes and is given a named type so that it can be extended if needed.

<xsd:element name="application" type="applicationType"/>
<xsd:complexType name="applicationType">
  <xsd:attribute name="name" type="xsd:string"/>
  <xsd:attribute name="ver" type="xsd:string"/>
</xsd:complexType>

The submission system can use the name to check whether the application is present, dispatch to the correct version, or return an error if the application is not installed.  In the future, the scheduler could also contact an application distribution and installation service to install the application if it is not present.  Examples of instances of the application tag are:


<application name="root4star" ver="2.0" />
<application name="root" />
<application name="csh" />


The next element is the task element.  Tasks describe what the user is going to do with the application and the data.  The task description is not strictly application dependent, since different applications can use the same task: for example, different versions of an application, or different applications that map to the same application repackaged to include different reconstruction algorithms.  In general, the task tag is defined with the following schema snippet:


<xsd:element name="task" type="taskType"/>
<xsd:complexType name="taskType">
  <xsd:attribute name="type" type="xsd:string"/>
</xsd:complexType>


In practice there are many common task types, and the task tag will often be extended.  For applications that are already standardized and experiment/site independent, it is possible to achieve a request definition that is also experiment/site independent.  Thus, for requests involving applications used across experiments (e.g. shell scripts, ROOT), the AJHDL will define a standard, more detailed request description.  For other applications, such as specific Monte Carlo simulations or bulk data processing applications tailored to a given experiment, users will need to use a more general set of request tags.  One common example of a useful extension would be for requests to run the ROOT application.  The “task” tag can be extended to make a “rootTask” type with the following schema XML:


<xsd:complexType name="rootTask">
  <xsd:complexContent>
    <xsd:extension base="taskType">
      <xsd:element name="macro">
        <xsd:attribute name="file" type="xsd:string"/>
        <xsd:element name="arg">
          <xsd:attribute name="value" type="xsd:string"/>
        </xsd:element>
      </xsd:element>
    </xsd:extension>
  </xsd:complexContent>
</xsd:complexType>


An example of the “rootTask” tag might be:


<macro file="myMacro.C">
  <arg value="10.0"/>
</macro>


Other applications will have similar extensions.  For example, one might construct task tags for Java applications or shell script applications, with the following being examples:


<mainclass class="MyJob"/>
<arg value="-h"/>
<pathelement location="~/jars/myjar.jar"/>
<jar url=""/>

<script type="sh">
  echo "My host is $theHost"
</script>

<script type="csh" file="" arguments="arg1 arg2"/>


The next element is the dataset element.  As was the case for the task tag, the dataset tag does not define much per se; we will share further syntax by defining types of datasets that are common among different experiments.

<xsd:element name="dataset" type="datasetType"/>
<xsd:complexType name="datasetType">
  <xsd:attribute name="name" type="xsd:string"/>
</xsd:complexType>


Examples of possible dataset extensions are:
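As an illustration only (the extension type names, tags, and file paths here are hypothetical and would be settled by the PPDG discussions), a dataset extension for an explicit file list, or one for a catalog query, might look like:

```xml
<!-- hypothetical extension: dataset given as an explicit list of files -->
<dataset name="myFiles" xsi:type="fileListDataset">
  <file url="file:/star/data/st_physics_5.MuDst.root"/>
  <file url="file:/star/data/st_physics_6.MuDst.root"/>
</dataset>

<!-- hypothetical extension: dataset resolved at run time by a catalog query -->
<dataset name="minbias" xsi:type="catalogDataset">
  <query>trgsetupname=minbias AND filetype=MuDst</query>
</dataset>
```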




The last element is the custom element.  While the previous three elements are enough to record the provenance of the result, they do not include any specific information that would help the scheduler translate the request in the best way.  The custom section should therefore include any additional information that the submission system can use to make its decisions.
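A hypothetical instance of the custom element (the tag and attribute names here are illustrative only, not part of any agreed schema) might pass scheduler hints such as:

```xml
<!-- illustrative only: scheduler hints carried in the custom section -->
<custom>
  <hint name="priority" value="low"/>
  <hint name="preferredSite" value="rcf.bnl.gov"/>
</custom>
```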

In addition to the AJHDL, which handles submission but not necessarily tracking in Phase I of the project, we will work with the PPDG to construct service method signatures for the interface.  Possible methods would be “int submit(AbstractJobRequest job)” and “int estimate(int jobReference)”, which return job reference integers, as well as “boolean execute(int jobReference)”.  Tracking could be added later in Phase II with a method such as “AbstractJobTracking getStatus(int jobReference)”.  Here the “AbstractJobRequest” object will have a one-to-one correspondence with the AJHDL, in that all the information contained in an instance of an AJHDL file can be represented by an instance of the AbstractJobRequest object.  Finally, appropriate exceptions should also be agreed upon as part of the service method signatures, so that clients can easily and gracefully handle problems.  The types of methods needed for a standard submission service have been hinted at through discussions at the recent PPDG collaboration meeting, so this task should be attainable using the mailing lists, already-scheduled phone meetings, and regular community workshops and gatherings.


A sample interface for a scheduling system, for example, is given below.
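As a sketch only (the portType, operation, and message names here are assumptions, to be finalized through the PPDG discussions described above), the WSDL portType for such a service might look like:

```xml
<!-- hypothetical WSDL portType for the Abstract Job Handling Service -->
<portType name="AbstractJobHandlingPortType">
  <operation name="submit">
    <!-- takes an AbstractJobRequest, returns a job reference integer -->
    <input message="tns:SubmitRequest"/>
    <output message="tns:SubmitResponse"/>
    <fault name="fault" message="tns:SubmissionFault"/>
  </operation>
  <operation name="estimate">
    <!-- takes a job reference, returns a job reference integer -->
    <input message="tns:EstimateRequest"/>
    <output message="tns:EstimateResponse"/>
  </operation>
  <operation name="execute">
    <!-- takes a job reference, returns a boolean -->
    <input message="tns:ExecuteRequest"/>
    <output message="tns:ExecuteResponse"/>
  </operation>
</portType>
```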
















1.1.3       Performance schedule

Months after contract initiation

1.   Define interface with community through PPDG
2.   Construct Service for current STAR analysis system
3.   Construct Client Application to use STAR analysis service
4.   Migrate Service to Process Abstract PPDG Submissions
5.   Write the final report