0th-level STAR HBT PWG plans for MDC3

Lanny Ray's very nice pair-wise correlator code hbt_event_processor takes files in STAR Geant Text format, diddles the momentum of "particles of interest" and then outputs the events also in STAR Geant Text format. Out of the box, it can correlate at most two different particles of interest (typically pi+ and pi-).

He has switches (read in at runtime) that control the effective radius, and whether one uses 3D Bertsch-Pratt, Yano-Koonin, etc. Switches also control which particles are the "particles of interest" (i.e. which PIDs to correlate, and which PIDs to just leave unaltered in the event).

We have modified Lanny's event-correlator code to be able to correlate particles according to a two-proton correlator (obtained by interpolating numbers generated for various source sizes by the code of Scott Pratt) and a Lambda-proton correlator (obtained by an empirical Gaussian parameterization of the published calculations for p-Lambda correlations by Fuqiang Wang and Scott Pratt). Mercedes has shown that the code produces reasonable results if the particle multiplicities per event are >1000.
Some results are on her page.
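
As a rough illustration of these two ingredients, here is a minimal Python sketch of a tabulated-and-interpolated p-p correlation function and a Gaussian-parameterized p-Lambda correlation function. The table values, lam, and radius below are placeholders, not the actual Pratt numbers or Wang-Pratt fit parameters.

    import numpy as np

    # Placeholder q_inv grid and values standing in for Scott Pratt's tabulated
    # two-proton correlation function at one source size; the real procedure
    # interpolates between numbers generated for several source sizes.
    q_table = np.linspace(0.0, 0.2, 41)                 # GeV/c
    c_table = 1.0 + 0.3 * np.exp(-(q_table / 0.02)**2)  # stand-in shape only

    def pp_correlation(q_inv):
        # two-proton C(q_inv) by interpolation in the tabulated values
        return np.interp(q_inv, q_table, c_table)

    def plambda_correlation(q_inv, lam=0.3, radius_fm=3.0):
        # empirical Gaussian form C(q) = 1 + lam * exp(-(q R / hbar c)^2);
        # lam and radius_fm are illustrative, not the fitted Wang-Pratt values
        hbarc = 0.1973                                   # GeV fm
        return 1.0 + lam * np.exp(-(q_inv * radius_fm / hbarc)**2)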

If all the multiplicities were high enough, the idea would be to run Lanny's mevsim fast multiplicity generator to make "plain" events, and then run his hbt_event_processor to correlate the pi+ and pi-. Then, take the output of that and run it again through hbt_event_processor to correlate the protons and Lambdas, etc.
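
Schematically, that chain looks like the following. The two functions are stand-ins for the real standalone programs (which read and write STAR Geant Text files and are steered by runtime switch files), so only the order of the passes is meaningful here.

    def mevsim(n_events):
        # stand-in for the fast multiplicity generator: "plain", uncorrelated events
        return [{"pi+": [], "pi-": [], "proton": [], "Lambda": []}
                for _ in range(n_events)]

    def hbt_event_processor(events, particles_of_interest):
        # stand-in: would diddle the momenta of the named species, leave the rest alone
        return events

    plain = mevsim(25)                                         # one batch of 25 events
    pass1 = hbt_event_processor(plain, ("pi+", "pi-"))         # first pass: pions
    final = hbt_event_processor(pass1, ("proton", "Lambda"))   # second pass: p, Lambda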

However, we have seen that Lanny's code fails under some operating conditions.

Instead of fully exploring this space of operating conditions, we are simply thankful that it works fine for p-p, p-Lambda, pi+-pi+, pi+-pi-... (i.e. the stuff we want) when we run batches of 25 events/batch with particle-of-interest multiplicities of 2500.

So, correlating pions for central collisions is no problem, multiplicity-wise. But what about protons (mult ~100) and kaons (mult ~ 200)? There are a couple of ways to handle that.

  1. Always generate ~2500 particles of all types in every event, and just randomly throw away most of the protons, Lambdas, etc. to get to the right multiplicities (a thinning sketch follows this list).
  2. Generate the low-multiplicity species at ~2500 per event in separate first-stage events, and then redistribute them among the final events (outlined in more detail below).
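
The thinning in method #1 would amount to something like this (a minimal sketch; the function name and the list-of-particles representation are mine, not part of any existing code):

    import random

    def thin_to_multiplicity(particles, target_mult):
        # keep a random subset so the event ends up with the desired multiplicity
        if len(particles) <= target_mult:
            return list(particles)
        return random.sample(particles, target_mult)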

I think method #1 is probably "more correct", but method #2 is more efficient, and it is what I propose. The reason I think #1 is in principle better is that all the particles in a given final event are then correlated within that single event. This event-by-event correlation is the unique and important feature of Lanny's code (it is what makes it useful for MDC3, for example, where Pratt's code is irrelevant).

So protons from event #2 are uncorrelated with those from event #1, and making correlation function backgrounds via event-mixing should be good. (As good as experiment, anyway.)

But with method #2 above, particles in different second-stage events (after the redistribution) would have correlations with each other. At the very least, this will be an effect along the lines of the familiar "residual correlations", but from a different (and unnatural) source.

However, I think it is a small price to pay. If Ndivide is the number of second-stage events into which each first-stage event is broken, and Nevents is the total number of second-stage events, then the effect should go as Ndivide/Nevents. So we have to keep that ratio small.
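
For example, taking Ndivide = 2500/Mprot (the number of second-stage events fed by one first-stage proton/Lambda event) and the example numbers used below, the ratio is tiny:

    first_stage_mult = 2500   # protons per first-stage proton/Lambda event
    Mprot   = 100             # protons per second-stage event
    Nevents = 100000          # total second-stage events

    Ndivide = first_stage_mult // Mprot   # 25
    print(Ndivide / Nevents)              # 0.00025 -- the ratio we want to keep small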


What I propose:

Say we want (in our "final" or "second-stage" events) Mprot protons, Mpi+ pi+, etc. on average (forget multiplicity fluctuations for now...). And say we want Nevents total events.

First stage

Let's run 3 sets of batches. Each batch will have 25 "first-stage" events in it (since we know hbt_event_processor likes batches of 25 events), generated by mevsim, and then passed to hbt_event_processor for correlation.
(The numbers in parentheses assume Nevents=100000, Mprot=Mlambda=100, Mkaon+=Mkaon-=Mkaon0s=250; the batch-count arithmetic is sketched just after the list.)
  1. a batch with only charged pions in the events. Each event will have Mpi of each type of pion. We will need [Nevents/25] (4000) of these batches.
  2. a batch with only protons and Lambdas in the events. Each event will have 2500 protons and 2500 Lambdas. We will need [Nevents*Mproton/(2500*25)] (160) of these batches.
  3. a batch with only neutral and charged kaons in the events. Each event will have 2500 kaons of each type. We will need [Nevents*Mkaon/(2500*25)] (400) of these batches.
    NOTE: Each of these kaon batches must be run through hbt_event_processor twice, once to do the charged kaons, and once for the K0's.
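
For concreteness, the batch counts quoted in parentheses follow from this little bit of arithmetic (same example numbers as above):

    Nevents, batch_size, pool_mult = 100000, 25, 2500
    Mprot, Mkaon = 100, 250

    pion_batches   = Nevents // batch_size                        # 4000
    proton_batches = Nevents * Mprot // (pool_mult * batch_size)  # 160
    kaon_batches   = Nevents * Mkaon // (pool_mult * batch_size)  # 400
    print(pion_batches, proton_batches, kaon_batches)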

Second stage

Each first-stage pion event is the base of a second-stage event (which is what we send on to GSTAR). We "just" randomly add in protons and Lambdas from the proton/Lambda events, and kaons from the kaon events, in the right proportion.

The only thing to be careful of: make sure that all protons added to a second-stage event come from the same first-stage proton event. Likewise, the Lambdas must all come from the same first-stage event. This is pretty obvious (no deep thinking involved), but let's not screw it up.
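
A minimal sketch of the assembly step, assuming the events can be treated as simple lists of particles (the real files are in STAR Geant Text format); the function and argument names are mine, not part of any existing code:

    import random

    def assemble_second_stage(pion_event, pools, rng=None):
        # pion_event : list of particles from one first-stage pion event (the base)
        # pools      : dict mapping species -> (list of first-stage particle lists,
        #              target multiplicity), e.g. {"proton": (proton_events, 100), ...}
        # The rule stressed above: all particles of a given species in one
        # second-stage event come from a single first-stage event, so the pair
        # correlations imprinted by hbt_event_processor are preserved.
        rng = rng or random.Random()
        event = list(pion_event)
        for first_stage_events, mult in pools.values():
            source = rng.choice(first_stage_events)    # ONE parent event per species
            event.extend(rng.sample(source, mult))     # take all of that species from it
        return event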

Just one Uber-collection of events at the end?

In the above, the HBT parameters (radius, etc.) are assumed to be the same for all events (although different for different particle types). For 100K events this is probably fine for Helen's K0 correlation functions, since she will likely need all the events to do anything at all.

But for the pion folks, this is kinda stupid. We should try in MDC3 to look at a changing source size for different impact parameters. So instead of one large Nevents=100000 collection, I would think of three collections.
(Again, the numbers in parentheses assume Nevents=100000.)

  1. a collection of ~Nevents/5 (20000) events with <Npi>=2500
  2. a collection of ~Nevents/3 (30000) events with <Npi>=1500
  3. a collection of ~Nevents/2 (50000) events with <Npi>=1000
(so roughly the same number of pions (not pairs) from each collection)

The pion multiplicity in the events would fluctuate with Poisson statistics; this is already in Lanny's code.
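
A quick cross-check of the pion bookkeeping, together with Poisson-fluctuated multiplicities of the kind just mentioned (example numbers only; this sketch is purely illustrative):

    import numpy as np

    collections = {2500: 20000, 1500: 30000, 1000: 50000}   # <Npi> : number of events
    for mean_npi, nev in collections.items():
        print(mean_npi, nev * mean_npi)   # 50e6, 45e6, 50e6 pions per collection

    rng = np.random.default_rng(0)
    print(rng.poisson(lam=1500, size=5))  # event-by-event Npi in the middle collection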

Each collection would have a different HBT source radius for the pions. It probably makes little sense to bother with changing the source radius for the K0's. Maybe for the charged kaons?

"Third" stage

Of course, the events from the collections would then have to be mixed together and then run through GSTAR, and in MDC3 we could check whether we see the "impact parameter" dependence.
Mike Lisa
Last modified: Fri Jan 21 19:10:25 EST 2000