Event Texture Analysis of heavy ion collisions -- SPS experience and RHIC prospects


Mikhail Kopytine, SUNY at Stony Brook
Talk at the PHENIX Global Physics Working Group meeting June 27, 2001, BNL
Abstract: NA44 used a Si pad array covering 1.5 < eta < 3.3 to study charged hadron production in 158 A GeV/c Pb+Pb collisions at the CERN SPS. We applied a multiresolution analysis, based on a Discrete Wavelet Transformation, to probe the texture of particle distributions in individual events by simultaneous localization of features in space and scale. Scanning a broad range of multiplicities, we looked for evidence of a phase transition in the power spectra of local density fluctuations. The measured results are compared with detailed simulations of the detector response, using heavy ion event generators as input, and with a reference sample created by event mixing. An upper limit is set on the probability and magnitude of dynamical fluctuations.
With RHIC applications in mind, I will review the prospects of addressing topical themes of the field by extending the analysis with pT and PID information.


When people talk about fluctuations, they often mean global fluctuations. That, however, is not what we deal with in this work, and here is one of the reasons. The problem is to see whether, in the history of a typical collision, there has been a special event -- a phase transition. The RMS fluctuation is an average taken over a set of observable events. Unlike condensed matter physicists, we only have access to the frozen-out stage of the time evolution, and a global fluctuation measured on an ensemble of such events is not very informative. Our case is more like that of cosmology: for the Universe we have, first, only one event realization, and second, we observe it after freeze-out. Nevertheless, there are ways to test dynamical hypotheses about its origin, and they are based on the study of texture, or local fluctuations. One can easily tell the actual Universe from the man-made image of the Universe shown on the gift-wrapper below. Trying to formulate the difference, we intuitively invoke a texture analysis procedure which tells us, on the basis of the visual information, that different scales play different roles in the two images.


This example shows how a phase transition manifests itself in texture. The images show how a collection of spins in a 2D Ising model goes through the critical point. The large correlation length, or large-scale texture enhancement, can serve as an experimental signature of the critical point. Similar models exist for QCD, in which the hadron field, rather than the magnetisation, plays the role of the order parameter in a second order phase transition. Instead of a temperature scan, we do a multiplicity scan (multiplicity is related to the baryon chemical potential).
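For orientation, here is a minimal Metropolis sketch of such a 2D Ising temperature scan, in Python with numpy. It is an illustration only, not the code behind the images above; lattice size, sweep counts and temperatures are my choices.

```python
# A minimal Metropolis sketch of the 2D Ising scan (illustration only; not
# the code behind the images above). Near the critical temperature
# T_c ~ 2.269 J/k_B, aligned-spin domains appear on all scales, i.e. the
# correlation length becomes large.
import numpy as np

rng = np.random.default_rng(0)

def metropolis_sweep(spins, T):
    """One Metropolis sweep over an L x L lattice with periodic boundaries."""
    L = spins.shape[0]
    for _ in range(L * L):
        i, j = rng.integers(0, L, size=2)
        nn = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
              + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
        dE = 2.0 * spins[i, j] * nn          # energy cost of flipping (i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            spins[i, j] = -spins[i, j]

L = 32
spins = rng.choice([-1, 1], size=(L, L))
for T in (4.0, 2.27, 1.5):                   # above, near, below criticality
    for _ in range(150):
        metropolis_sweep(spins, T)
    print(f"T = {T}: |magnetisation| = {abs(spins.mean()):.2f}")
```

Plotting the spin array at each temperature reproduces the qualitative picture: fine-grained noise above T_c, domains on all scales near it, and large ordered patches below.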


Commentary: an RQMD event in the GEANT simulation of the detector response. The magnetic field and delta electron generation are on. The target thickness is 1.15 g/cm**2. We only use the delta-free side of the detector.


Double differential multiplicity distributions of charged particles, plotted as a function of azimuthal angle phi (with different symbols representing different rings) and of pseudorapidity eta (with different symbols representing different sectors). The phi and eta are in the aligned coordinates.


Commentary: DWT (Discrete Wavelet Transformation) decomposition of three different kinds of images: a chess board, a smooth gradient surface, and a set of 1000 random white noise field samples. The first two cases are opposite in the scale localization of the information they carry. The third case has the remarkable property of scale independence. In the words of Norbert Wiener ("Generalized Harmonic Analysis"): "... the energy of a haphazard sequence of impulses is uniformly distributed in frequency. ... Theoretically this equipartition of energy might be used in the absolute calibration of acoustical instruments."
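The equipartition property is easy to check numerically. The sketch below uses PyWavelets (pywt) as a stand-in for the wavelet library used in the analysis; it decomposes 1000 white-noise samples with the Haar DWT and shows that the mean squared coefficient is the same at every scale.

```python
# A numerical check of the equipartition quoted from Wiener, using
# PyWavelets (pywt) as a stand-in for the wavelet software used in the
# analysis: for white noise, the mean squared Haar coefficient is the
# same at every scale.
import numpy as np
import pywt

rng = np.random.default_rng(1)
samples = rng.normal(size=(1000, 256))           # 1000 white-noise "events"

sums = None
for event in samples:
    coeffs = pywt.wavedec(event, "haar")         # [approx, coarsest detail, ..., finest detail]
    p = [np.mean(c ** 2) for c in coeffs[1:]]    # power per scale, this event
    sums = p if sums is None else [s + q for s, q in zip(sums, p)]

for m, s in enumerate(sums):
    print(f"scale {m} (coarse -> fine): mean power {s / len(samples):.3f}")
# every scale prints ~1.000: the noise power is uniform in scale
```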


The texture analysis with the DWT is based on the multiresolution analysis theorem. Any piecewise continuous function with a finite integral can be represented, to any desired accuracy, by a histogram. The histogram undergoes a number of rebinnings, and with each one the scale gets coarser. The right column shows the information lost in each rebinning step, which is the difference between the two rebinning stages. If you keep doing this, as much information as desired can be lost. You also notice that the histograms in the right column have a very simple structure: they consist of juxtaposed pairs of box functions of equal amplitude and opposite sign. These are Haar wavelets. Because all the lost information is found in the right column, the original function can be RECONSTRUCTED to any desired accuracy from the wavelets. The theorem states that the set of wavelets forms an orthonormal basis in which the given function can be decomposed and reconstructed with any degree of accuracy.
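The rebinning picture translates directly into code. Here is a minimal sketch (the function names are mine, not from the analysis software): each step halves the histogram, the lost differences are Haar-wavelet amplitudes, and keeping all of them lets us rebuild the original histogram exactly.

```python
# A minimal sketch of the rebinning picture: each step halves the histogram,
# the lost differences are Haar-wavelet amplitudes, and keeping all of them
# allows exact reconstruction of the original.
import numpy as np

def haar_decompose(hist):
    """Coarsen a histogram (length = power of 2) step by step; return the
    final single bin plus the detail lost at each rebinning step."""
    details = []
    h = np.asarray(hist, dtype=float)
    while len(h) > 1:
        pairs = h.reshape(-1, 2)
        details.append((pairs[:, 0] - pairs[:, 1]) / 2.0)  # lost information
        h = pairs.mean(axis=1)                             # coarser histogram
    return h, details

def haar_reconstruct(coarse, details):
    """Undo the rebinnings; recovers the original exactly."""
    h = coarse
    for d in reversed(details):
        h = np.column_stack((h + d, h - d)).ravel()
    return h

hist = np.array([3.0, 1.0, 0.0, 4.0, 2.0, 2.0, 5.0, 1.0])
coarse, details = haar_decompose(hist)
assert np.allclose(haar_reconstruct(coarse, details), hist)
```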

The family of 2D functions, obtained from the three colorful functions by translations (characterized by integers i and j) and dilations/contractions (integer m), forms an orthonormal basis in the space of piecewise continuous functions whose square is integrable. [1] Decomposition in a wavelet basis can be used to extract power spectra of fluctuations in density fields (see [2], where the method is explained in the 1D case). In this work, we use the WAILI library of wavelet software. The bottom formula defines the power spectrum in 2D.
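Since the formula itself was rendered on the slide, here is a plausible reconstruction, extending the 1D definition of [2] to 2D; the notation (lambda for the directional mode, m for the scale, i and j for the translations) is mine.

```latex
% Plausible reconstruction of the slide's 2D power spectrum (not copied
% from the slide): the squared wavelet coefficients d^lambda_{m,i,j} at
% scale m are averaged over the 2^m x 2^m translations (i, j), separately
% for each directional mode lambda.
P^{\lambda}(m) \;=\; \frac{1}{2^{2m}} \sum_{i,j=0}^{2^{m}-1}
\left( d^{\lambda}_{m,i,j} \right)^{2},
\qquad \lambda \in \{\text{horizontal},\ \text{vertical},\ \text{diagonal}\}
```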


Power spectra for one of the multiplicity bins we use. The first striking observation is that the "true" power spectra (red points) are indeed enhanced on the coarse scale, and one has to clarify the reason for this enhancement. Obviously, one of the reasons is the shape of the double differential d2N/d_eta/d_phi distribution. The averaged event, formed by summing the amplitude patterns of all the measured events in a given multiplicity bin (7374 in this case) and dividing by the number of events, has much reduced texture because fluctuations are averaged out. However, it does retain some of the texture associated with the d2N/d_eta/d_phi shape, with the dead channels, and with the finite geometrical cross-section of the beam. A still better way to get rid of the "trivial" texture is to use mixed events (shown). Our event mixing is done by event number scrambling (not to be confused with channel number scrambling!). Therefore, the mixed events preserve the texture associated with the geometrical distortion, the inherent dN/d_eta shape, and the dead channels, while fluctuations are not statistically suppressed. I call this static texture; its opposite is the dynamic texture we are after, which is not coupled to the detector.
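For concreteness, here is a minimal sketch of mixed-event construction by event number scrambling; the helper name mix_events and the toy pad layout are placeholders, not the actual NA44 code.

```python
# A minimal sketch of event mixing by event NUMBER scrambling. Each pad
# keeps its identity, so static texture (acceptance shape, dead pads,
# offsets) survives, while within-event dynamical correlations are
# destroyed.
import numpy as np

rng = np.random.default_rng(2)

def mix_events(amplitudes):
    """amplitudes: (n_events, n_pads) array of pad amplitudes per event.
    Returns mixed events: pad contents drawn from scrambled event numbers."""
    n_events, n_pads = amplitudes.shape
    mixed = np.empty_like(amplitudes)
    for pad in range(n_pads):
        mixed[:, pad] = amplitudes[rng.permutation(n_events), pad]
    return mixed

# toy usage: 1000 events, 512 pads, one dead pad; the dead pad (static
# texture) is still dead in the mixed sample
events = rng.poisson(lam=3.0, size=(1000, 512)).astype(float)
events[:, 100] = 0.0
mixed = mix_events(events)
assert mixed[:, 100].sum() == 0.0
```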

Background texture (static+dynamic):

Source | Treatment: subtract empty target | event mixing * | MC | subtract mixed events (preserve sectors) | Irreducible remainder estimate** (quoted for the diagonal texture correlation in the 379 < N < 463 multiplicity bin)

shape, dead pads, detector offset (dx = 1.1 mm, dy = 0.3 mm) | irrelevant | yes ! | OK | yes ! | ~0
finite beam cross-section (1 x 2 mm) | irrelevant | no | OK | yes ! | 0.14 for the coarsest scale, negligible otherwise
background hits | yes ! | yes ! | yes ! | no | >0.07, but <0.36; generally decreases with scale fineness
channel crosstalk (9% for neighbours, negligible otherwise) | irrelevant | yes ! | yes ! | no |
statistical fluctuations | irrelevant | yes ! | irrelevant | yes ! | 0

* event mixing = event NUMBER scrambling, NOT channel scrambling
** info for orientation only, see the data plots for details and all cases







The multifireball event generator was used to study the sensitivity of the method (including the detector response simulation) to the presence of local fluctuations/texture of varying "grain coarseness". In the model, a final state of given particle multiplicity Np is constructed as a superposition of isotropic fireballs undergoing longitudinal expansion. The longitudinal expansion is simulated by Lorentz-boosting the fireballs along the longitudinal axis. The total momentum of the system is fixed at 0, and the total transverse momentum of every fireball is fixed at 0. The grain coarseness is controlled by the Np/fireball parameter of the model: the smaller it is, the more fireballs one has to generate in a given event to deliver a particular total multiplicity, and the smoother the texture becomes. A minimal sketch of the idea follows.
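This sketch follows the description above; all function names, momentum scales and rapidity ranges are my assumptions, not the actual generator.

```python
# A minimal sketch of the multifireball picture (names, momentum scales and
# rapidity ranges are assumptions, not the actual generator). Each fireball
# emits n_per_fireball pions isotropically in its rest frame with zero net
# transverse momentum, and is then boosted along the beam axis. A smaller
# n_per_fireball means more fireballs per event -> smoother texture.
import numpy as np

rng = np.random.default_rng(3)
M_PION = 0.139  # GeV

def fireball(n_per_fireball, p_scale=0.3):
    """Isotropic pion momenta in the fireball rest frame; net pT forced to 0."""
    p = rng.normal(scale=p_scale, size=(n_per_fireball, 3))
    p[:, :2] -= p[:, :2].mean(axis=0)   # zero net px, py for this fireball
    return p

def boost_z(p, y):
    """Lorentz-boost pion momenta along the longitudinal axis by rapidity y."""
    E = np.sqrt(M_PION**2 + (p**2).sum(axis=1))
    pz = p[:, 2] * np.cosh(y) + E * np.sinh(y)
    return np.column_stack((p[:, 0], p[:, 1], pz))

def event(total_np, n_per_fireball):
    """Build one event; opposite-rapidity pairs keep the total momentum ~0."""
    n_fireballs = total_np // n_per_fireball
    ys = rng.uniform(0.5, 2.5, size=n_fireballs // 2)
    ys = np.concatenate((ys, -ys))
    return np.vstack([boost_z(fireball(n_per_fireball), y) for y in ys])

particles = event(total_np=400, n_per_fireball=20)
eta = np.arctanh(particles[:, 2] / np.linalg.norm(particles, axis=1))
print(f"{len(particles)} particles, eta range {eta.min():.2f} .. {eta.max():.2f}")
```

Feeding such events through the detector simulation and the DWT power spectrum, with Np/fireball scanned, calibrates how much "grain" the method can resolve.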





Q: Why use the DWT to study the power spectrum?
A: Because the DWT, unlike the Fourier transform, can do a scale decomposition of a single event without being disturbed by the inherent spikiness/discreteness of the observable or by the binning of the detector.



Event texture: physics + what it takes

observable | insight | texture | pT | PID
elliptic/directed flow | EOS | must | must | helps, helps
collective exotica -- "nutcracker" | stiff QGP EOS | must | may help | want heavy particles
QCD droplets, deflagration | 1st order phase transition | use | must |
large-scale eta texture | 2nd order phase transition | | helps | must
low-pT critical texture | 2nd order phase transition | use | use: low pT selects sigmas near the 2-pion threshold | mostly pions anyway
jets, minijets | partons probe the medium | must | must | must
DCC | chiral restoration | use | use | or charged
classical pion field | multiparticle coherence | use | use | use, or charged
P-, CP-odd vacuum bubbles | basic QCD | must | must | + or -


Summary:

References:
[1] I. Daubechies, "Ten Lectures on Wavelets", SIAM, 1992.
[2] L.-Z. Fang, J. Pando, "Large-scale Structures revealed by Wavelet Decomposition", astro-ph/9701228.