Noise studies in EEMC


I have been updating my diagnostic program and analyzing a number of runs to investigate how much of a noise problem we get under various conditions. The initial hypothesis was that there is a rate effect, so I compiled a list of minidaq .root-file runs with their start time, number of events, and rate, using a program that reads the .root files.
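The rate bookkeeping itself is simple; a minimal sketch of what that program does is below, assuming a placeholder tree name "events" and a time branch "time" (the real minidaq file layout may differ):

```cpp
// Sketch of the per-run rate bookkeeping for one minidaq .root file.
// The tree name "events" and time branch "time" are placeholders;
// the real minidaq file layout may differ.
#include "TFile.h"
#include "TTree.h"
#include <cstdio>

void runRate(const char* fname) {
  TFile f(fname);
  if (f.IsZombie()) { printf("cannot open %s\n", fname); return; }
  TTree* t = (TTree*)f.Get("events");      // placeholder tree name
  if (!t) { printf("no event tree in %s\n", fname); return; }
  double nEvents = t->GetEntries();
  double tStart  = t->GetMinimum("time");  // placeholder time branch, in seconds
  double tStop   = t->GetMaximum("time");
  double rate = (tStop > tStart) ? nEvents / (tStop - tStart) : 0.;
  printf("%s: %.0f events over %.0f s -> %.1f Hz\n",
         fname, nEvents, tStop - tStart, rate);
}
```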

Here are the error spectra for run 2003 on 3/22/03. This is the run where we triggered STAR. The set of error codes has been greatly expanded; the key is as follows:

  • -2: mismatched tokens
  • -3: crate ID not 3, 4, or 5
  • -4: crate ID out of order
  • -5: n*256 events
  • -6: too many zeros (pedestal missing)
  • -7: too few zeros (one card = 32 channels expected)
  • -8: extra zeros and n*256 at the same time
  • -9: missing zeros and n*256 at the same time
  • -10: 40 or more channels with adc >= 50 (ghost pedestal)
  • -11: ghost pedestal and n*256 together
  • -12: extra 0's and ghost pedestal together
  • -13: missing 0's and ghost pedestal together
  • -14: ghost pedestal & too few 0's & n*256
  • -15: ghost pedestal & too many 0's & n*256
  • -20 or less: any of the above with a bad header (-5 -> -20, etc.)
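To make the key concrete, here is a sketch of how the codes could be laid out, including my reading of the bad-header rule (-5 -> -20, i.e. a downward shift of 15); this is only an illustration of the scheme, not the actual diagnostic code.

```cpp
// Illustration of the error-code scheme from the key above;
// the names are mine and the real diagnostic program may differ.
enum EEmcDiagError {
  kMismatchedTokens   =  -2,
  kBadCrateId         =  -3,  // crate ID not 3, 4 or 5
  kCrateOutOfOrder    =  -4,
  kN256               =  -5,  // n*256 events
  kTooManyZeros       =  -6,  // pedestal missing
  kTooFewZeros        =  -7,  // one card = 32 channels expected
  kZerosAndN256       =  -8,
  kMissZerosAndN256   =  -9,
  kGhostPed           = -10,  // 40+ channels with adc >= 50
  kGhostPedAndN256    = -11,
  kExtraZerosGhost    = -12,
  kMissZerosGhost     = -13,
  kGhostFewZerosN256  = -14,
  kGhostManyZerosN256 = -15
};

// Bad-header variant: my reading of "-5 -> -20 etc." is a shift of 15,
// so e.g. kN256 (-5) with a bad header would be reported as -20.
inline int withBadHeader(int code) { return code - 15; }
```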
The first histogram gives the error spectrum, with the key above the graphs. It shows that multiple errors quite often occur at the same time. This is also true for the ghost pedestal, which we have seen happen in the FEE; this may point to some common clock-distribution problem. The crate sums and event sums were formed as a possible way of selecting out the ghost pedestal, and the numbers of 0's, n*256 values, and ghost-pedestal channels (adc >= 50) were plotted as well. Below is a 2D plot of the crate 4 adc sums vs n*256 and vs the number of ghost pedestals. These two categories were made mutually exclusive since they often occur at the same time. From these plots it appears that the peak in the crate sum after the main peak comes from ghost pedestals, while the highest entries come from n*256.
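For reference, the mutually exclusive filling of those 2D plots amounts to something like the sketch below; the histogram names, binning, and per-event counters are placeholders, and giving n*256 priority is my reading of why the ~20 ghost-pedestal peak (see run 1027 below) shows up only with n*256.

```cpp
// Sketch of the mutually exclusive 2D fills against the crate 4 adc sum.
// crateSum4, nN256 and nGhost would be computed per event by the
// diagnostic code; names and binning here are placeholders.
#include "TH2.h"

TH2F hSumVsN256 ("hSumVsN256",
                 "crate 4 adc sum vs n*256 count;n*256 count;adc sum",
                 50, 0., 50., 100, 0., 50000.);
TH2F hSumVsGhost("hSumVsGhost",
                 "crate 4 adc sum vs ghost-ped channels;channels with adc>=50;adc sum",
                 100, 0., 200., 100, 0., 50000.);

void fillCrate4(double crateSum4, int nN256, int nGhost) {
  // n*256 and ghost pedestals often occur in the same event, so the
  // two categories are kept exclusive, with n*256 taking priority.
  if (nN256 > 0)
    hSumVsN256.Fill(nN256, crateSum4);
  else if (nGhost > 0)
    hSumVsGhost.Fill(nGhost, crateSum4);
}
```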

The above plots are in fact for the run where we were generating the trigger. In an effort to understand the rate at which we see such data-corruption errors, I analyzed a number of runs in the same way.

  • The above run 2003 on 3/22/03: 48 Hz from my program, 55 Hz from the STAR web browser, error rate 5.3%. STAR run 4081031.
  • Diagnostics as above for run 1027 on 3/20/03: 24 Hz from my program, 22 Hz from the STAR web browser, error rate 4%. STAR run 4079057. Note the peak around 20 in the spectrum of the number of channels with adc over 50 (the ghost-pedestal spectrum). This was not enough to be counted in the error spectrum, and it does not appear in the correlation with the crate 4 sum. Because of the way the code was written, the absence of a peak near 20 in the 2D spectrum implies that it occurred ONLY together with n*256 values.
  • Diagnostics as above for run 1013 on 3/19/03: 71 Hz from my program, 64 Hz from the STAR web browser, error rate 1.9%. STAR run 4079002.
  • Diagnostics as above for run 1012 on 3/19/03: 40 Hz from my program, 38 Hz from the STAR web browser, error rate 1.7%. STAR run 4078052.
  • Diagnostics as above for run 1000 on 3/18/03: 29 Hz from my program, 27 Hz from the STAR web browser, error rate 0.6%. STAR run 4077023.
  • Diagnostics as above for run 0006 on 3/16/03: 21 Hz from my program, 30 Hz from the STAR web browser, error rate 0.7%. STAR run 4075044.
  • Diagnostics as above for run 0012 on 3/15/03: 18 Hz from my program, ?? Hz from the STAR web browser, error rate 0.6%.
  • Diagnostics as above for run 0001 on 3/13/03: 19 Hz from my program, ?? Hz from the STAR web browser, error rate 1.3% (add 2% token mismatch).
  • Diagnostics as above for run 1001 on 3/1/03: ?? Hz from my program, ?? Hz from the STAR web browser, error rate 5.6%.
  • Diagnostics as above for run 0002 on 3/3/03: ?? Hz from my program, ?? Hz from the STAR web browser, error rate 2.9%.
I conclude there is a rate dependence for the data corruption. However, the latest two runs (the first two in the list) seem to be under a different condition, where the corruption rate is much higher. We also remember a worse corruption rate in the data from earlier in the month, and I am currently scanning back looking for runs that show this. The rate data are summarized below.