Gene Van Buren - BNL
19 Jan 2007
The detector element size is not actually the area in which tracking will look for matching hits, but as an exercise, let's calculate a few quantities:
(A) Occupancy (hits/cm^2)
(B) Probability of at least one hit in an element
(C) Probability of at least two hits in an element
(D) Probability of at least two hits in an element, given that we have >=1 hit
(E) Average number of hits in an element, given that we have >=1 hit (Signal+Noise)
(F) Average number of hits in an element, given that we have >=1 hit, weighted by hits (Signal+Noise seen by tracks)
(G) Signal/Noise
(H) Signal/Noise with inefficiency

(A) | (B) | (C) | (D) | (E) | (F) | (G) | (H)
---|---|---|---|---|---|---|---
0.030000 | 0.000720 | 0.000000 | 0.000360 | 1.000360 | 1.000720 | 1388.889 | 3.985652
0.100000 | 0.002397 | 0.000003 | 0.001200 | 1.001200 | 1.002400 | 416.66667 | 3.952569
0.300000 | 0.007174 | 0.000026 | 0.003596 | 1.003604 | 1.007200 | 138.88889 | 3.861004
1.000000 | 0.023714 | 0.000283 | 0.011952 | 1.012048 | 1.024000 | 41.66667 | 3.571429
3.000000 | 0.069469 | 0.002471 | 0.035568 | 1.036432 | 1.072000 | 13.88889 | 2.941176
10.00000 | 0.213372 | 0.024581 | 0.115205 | 1.124795 | 1.240000 | 4.166667 | 1.818182
These were calculated using Poissonian distributions as follows:
```cpp
// For a given occupancy "occ" (hits/cm^2), element area "area" (cm^2), and hit
// reconstruction efficiency "eff":
double mu = occ*area;                            // expected hits in the element
res[0] = occ;                                    // (A)
res[1] = 1 - TMath::PoissonI(0,mu);              // (B) P(n>=1)
res[2] = res[1] - TMath::PoissonI(1,mu);         // (C) P(n>=2)
res[3] = res[2]/res[1];                          // (D) P(n>=2 | n>=1)
res[4] = mu/res[1];                              // (E) <n | n>=1>
// <n> = sum_i i*P(i,mu) = mu by definition, so only <n^2> needs to be summed:
double sum2 = 0;
for (int i = 1; i < 100; i++) sum2 += i*i*TMath::PoissonI(i,mu);
res[5] = sum2/mu;                                // (F) <n^2>/<n>
res[6] = 1.0/(res[5]-1.0);                       // (G)
res[7] = eff/(res[5]-eff);                       // (H)
```
The signal+noise must be weighted by the hits because the two are correlated: multiple tracks in one area will each see the elevated occupancy that comes from being in that same area. However, it is worthwhile to note here two mistakes I have made, which I have not yet determined exactly how to correct:
1. I have weighted by hits, while what I really want is to weight by tracks (which should be something like hits/eff, but I'm not sure exactly where in the math this goes).
2. I should have calculated signal+noise for the condition that I expect a hit, not that I have a hit. Again, this is because I know that I have a track, and therefore I expect a hit and have looked at the detector, but I do not necessarily have that hit in the detector.
Both mistakes make my signal-to-noise slightly worse than it should be.
NOTE: Before anyone gets too confused, I've figured out that my math for no inefficiency is actually exactly the same as Howard's formula. The hit-matching efficiency essentially comes out to be 1/(1+mu), where mu is the expected number of hits <n> in an area defined by the ellipse of area 2pi sigma_x sigma_y. My math got there in a slightly more complicated way: I determined signal+noise = <n^2>/<n>, so (with the signal being the 1 true hit) signal/(signal+noise) = 1/(signal+noise) = <n>/<n^2> = mu / (mu + mu^2) = 1/(1+mu).
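For reference, the Poisson moment identity behind that chain of equalities (with S the 1 true hit and N the noise hits, as above) is just

\[
\langle n^2 \rangle = \mu + \mu^2
\quad\Longrightarrow\quad
\frac{S}{S+N} = \frac{\langle n \rangle}{\langle n^2 \rangle} = \frac{\mu}{\mu + \mu^2} = \frac{1}{1+\mu} .
\]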
Anyhow, we do learn that a moderate inefficiency in hit reconstruction takes a big toll on the signal-to-noise ratio (the hit-matching efficiency = signal/(signal+noise) cannot exceed the hit reconstruction efficiency). This means that any simulation of the IST performance must include the hit reconstruction efficiency if it is not very close to 1.0.
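Written out in the same notation, formula (H) above treats the signal as eff and the noise as (1+mu) - eff, so the corresponding hit-matching efficiency is

\[
\frac{S}{S+N} = \frac{\mathrm{eff}}{1+\mu} \;\le\; \mathrm{eff},
\]

which is exactly the cap noted above.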
Next, let's take an element with properties similar to what is used for the SVT in Yuri's study:
area = 3 cm x 0.15 cm = 0.45 cm^2, where 0.15 cm is approximately the search window in r-phi, and
hit reconstruction efficiency = 0.7.
The columns (A)-(H) are as defined above.

(A) | (B) | (C) | (D) | (E) | (F) | (G) | (H)
---|---|---|---|---|---|---|---
0.030000 | 0.013409 | 0.000090 | 0.006735 | 1.006765 | 1.013500 | 74.07407 | 2.232855
0.100000 | 0.044003 | 0.000983 | 0.022331 | 1.022669 | 1.045000 | 22.22222 | 2.028986
0.300000 | 0.126284 | 0.008332 | 0.065982 | 1.069018 | 1.135000 | 7.407407 | 1.609195
1.000000 | 0.362372 | 0.075439 | 0.208182 | 1.241818 | 1.450000 | 2.222222 | 0.933333
3.000000 | 0.740760 | 0.390785 | 0.527547 | 1.822453 | 2.350000 | 0.740741 | 0.424242
10.00000 | 0.988891 | 0.938901 | 0.949448 | 4.550552 | 5.500000 | 0.222222 | 0.145833
Here we learn that we can reproduce ballpark numbers for Yuri's signal-to-noise ratios (about 2.0 for occupancies of about 0.03-0.10 for the 3 SVT barrels in minbias CuCu62) by assuming properties similar to what is used for the SVT in his study along with a notable hit reconstruction inefficiency. It would take only a little tweaking of the hit reconstruction efficiency to get closer to his signal-to-noise results, so there are unlikely to be any significant problems with his analysis.
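For completeness, here is a minimal standalone C++ sketch of the same calculation (a hand-rolled Poisson PMF standing in for TMath::PoissonI); with the area of 0.45 cm^2 and efficiency of 0.7 used above, it reproduces the occ = 0.10 hits/cm^2 row of this table to the quoted precision.

```cpp
#include <cstdio>
#include <cmath>

// Poisson probability of exactly i hits when mu are expected
// (stands in for ROOT's TMath::PoissonI).
double poisson(int i, double mu) {
  return std::exp(-mu + i*std::log(mu) - std::lgamma(i + 1.0));
}

int main() {
  const double occ = 0.10, area = 0.45, eff = 0.7;  // SVT-like element above
  const double mu = occ*area;                       // expected hits per element
  const double pGe1 = 1.0 - poisson(0, mu);         // (B) P(n>=1)
  const double pGe2 = pGe1 - poisson(1, mu);        // (C) P(n>=2)
  double sum2 = 0;                                  // <n^2>
  for (int i = 1; i < 100; i++) sum2 += i*i*poisson(i, mu);
  const double spn = sum2/mu;                       // (F) Signal+Noise seen by tracks
  std::printf("(B)=%.6f (C)=%.6f (D)=%.6f (E)=%.6f (F)=%.6f (G)=%.6f (H)=%.6f\n",
              pGe1, pGe2, pGe2/pGe1, mu/pGe1, spn,
              1.0/(spn - 1.0), eff/(spn - eff));
  return 0;
}
```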
Finally, let's take a search area chosen to match Howard's study:
area = (0.03 cm x 2.25 sigma) x ((3.8/sqrt(12)) cm x 2.25 sigma) = 0.1666 cm^2,
and let's assume a hit reconstruction efficiency of 0.8.
The columns (A)-(H) are again as defined above.

(A) | (B) | (C) | (D) | (E) | (F) | (G) | (H)
---|---|---|---|---|---|---|---
0.030000 | 0.004986 | 0.000012 | 0.002497 | 1.002501 | 1.004998 | 200.0781 | 3.902476
0.100000 | 0.016522 | 0.000137 | 0.008307 | 1.008353 | 1.016660 | 60.02342 | 3.692419
0.300000 | 0.048752 | 0.001208 | 0.024782 | 1.025198 | 1.049980 | **20.00781** | 3.200250
1.000000 | 0.153463 | 0.012429 | 0.080989 | 1.085613 | 1.166602 | **6.002342** | 2.182205
2.000000 | 0.283375 | 0.044594 | 0.157367 | 1.175837 | 1.333203 | **3.001171** | 1.500366
3.000000 | 0.393351 | 0.090145 | 0.229172 | 1.270633 | 1.499805 | 2.000781 | 1.143176
10.00000 | 0.811002 | 0.496127 | 0.611746 | 2.054270 | 2.666016 | 0.600234 | 0.428721
Now, admittedly, I have chosen the search window width of 2.25 sigma in each direction to try to reproduce Howard's results, and I have achieved that if you look at the bold numbers for the signal-to-noise with no inefficiency: for occupancies of {0.3,1.0,2.0} hits/cm^2, I find signal-to-noise of {20,6,3}, which converts to hit-matching efficiencies of {95%,86%,75%}, agreeing well with Howard's graph for these occupancies. This implies that my formulas can do a rather good job of matching both Howard's and Yuri's analysis results.
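The conversion from signal-to-noise to hit-matching efficiency used here is simply

\[
\epsilon_{\mathrm{match}} = \frac{S}{S+N} = \frac{S/N}{1+S/N}:
\qquad \tfrac{20}{21}\approx 95\%,\quad \tfrac{6}{7}\approx 86\%,\quad \tfrac{3}{4}=75\% .
\]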
It is also important that the pointing resolution of tracks to the IST be understood. In Yuri's analysis, the resolution is notably worse in r-phi than Howard assumes. In part this is because Yuri includes some tracks which do not have an SSD hit to further constrain the pointing. This probably needs to be incorporated into Howard's analysis.
As a concluding statement, I believe that the IST strips by themselves would be severely detrimental to tracking if the signal-to-noise gets anywhere near 1.0 (a hit-matching efficiency of 50%). Using a search area which matches Howard's results, I find that this will happen if the occupancies reach 6.0 hits/cm^2 for perfect hit reconstruction, 4.75 hits/cm^2 for 90% efficiency, and 3.6 hits/cm^2 for 80% efficiency. I am unsure of what hit reconstruction efficiency should be expected of the IST detectors, and I think that Howard's r-phi resolution is a bit optimistic, so I consider Howard's current numbers an upper bound on the performance of the strips alone. I provide below a plot of hit-matching efficiency vs. occupancy which demonstrates that we are considering a technology which will operate (at a maximum occupancy of perhaps 1-2 hits/cm^2) near the knee of these curves. Perhaps this is where we want to be (we don't want to over-design by a factor of 10, spending more than we need), but we are close to where we may have only a factor of 2 of margin.
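For reference, setting S/N = 1 in formula (H) gives these crossover occupancies in closed form:

\[
\frac{\mathrm{eff}}{1+\mu-\mathrm{eff}} = 1
\;\Longrightarrow\;
\mu = 2\,\mathrm{eff} - 1
\;\Longrightarrow\;
\mathrm{occ} = \frac{2\,\mathrm{eff}-1}{0.1666\ \mathrm{cm^2}}
\approx 6.0,\ 4.8,\ 3.6\ \mathrm{hits/cm^2}
\]

for eff = 1.0, 0.9, and 0.8, approximately reproducing the thresholds quoted above.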