% Measuring Resolution of New BPM Electronics
%\documentclass[twoside]{revtex4}
\documentclass[twoside]{article}
\flushbottom
\pagestyle{headings}
\setlength{\topmargin}{0.0in}
\setlength{\textwidth}{5.5in}
\setlength{\oddsidemargin}{.5in}
\setlength{\evensidemargin}{.5in}
\setlength{\textheight}{8in}
\addtolength{\parskip}{2pt}
\begin{document}
\title{Strategies for Measuring Resolution of New BPM Electronics}
\author{M. Fischler}
\maketitle
\begin{abstract}
A statement in the requirements for the new BPM electronics is that the
differential resolution not exceed 7 microns (one $\sigma$).
This note discusses how such a measurement can be done, and how accurately
$\sigma$ might be measured, given the availability
of only one prototype of the new BPM design.
The result is that $\sigma$ can be measured to 15\% accuracy using a sequence of
35 3-bump measurements, or if 30\% accuracy is adequate, a sequence of
14 3-bump measurements.
\end{abstract}
\tableofcontents
\section{Overview}
A statement in the requirements for the new BPM electronics is that the
differential resolution not exceed 7 microns (one $\sigma$).
Before proceeding with production of the new BPMs, it is prudent to
assess whether the proposed design will meet this requirement.
This note discusses how such a measurement can be done, and how accurately
$\sigma$ might be measured, given the availability
of only one prototype of the new BPM design.
It also discusses the extent to
which a second prototype might (or might not) improve such measurements.
(There is some debate about whether the physics of beam tuning is such that
this requirement is overly stringent, and thus could be relaxed. This note
does not address that issue; at any rate, if one knows a meaningful requirement
for resolution, one still faces the question of how to verify that this
requirement will be met, using only the prototype electronics.)
\subsection{Defining the Resolution}
``The differential resolution must not exceed 7 microns (one $\sigma$).''
This can be a confusing statement; let's clarify what we will consider it to
mean. This definition met with general acceptance at the BPM Upgrade meeting of
1/8/04:
Any given change in circumstances (a stray field, a shift in some position, or
what have you) will result in an actual shift in beam position at location
$P$, of $x$ microns. Consider an experiment of measuring the position at
$P$, using a BPM located at $P$, twice: once before and
once after the change in position.
This experiment will likely indicate some change in position, even if
the actual change is quite small
(since the {\em precision} of the new BPM devices
is at the level of about 2 microns). However, for sufficiently
small change $x$, it is
not certain that the direction of the measured change will correctly match
the direction of the actual change. There is some probability $F(x)$
that the change will be resolved,
that is, that a change will be observed {\em and that it will have
the correct sign}.
For $x \rightarrow 0$, $F(x) \rightarrow .5$. Assuming that the BPM is not
worthless, for large $x$, $F(x) \rightarrow 1$.
Assuming that the sum of various uncertainties in the BPM measurement takes the
shape of a Gaussian, $F(x)$ will have the form of an error function. More
precisely, the Gaussian assumption implies that
\begin{equation}
F(x) = \frac{1}{2}\left[1 + \mbox{erf}\left(\frac{x}{\sqrt{2}\sigma}\right)\right]
\end{equation}
Here $\sigma$ is the standard deviation of the Gaussian, and is
the resolution of the device.
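As a quick numerical illustration of this definition, $F(x)$ can be evaluated
directly from the error function; the following is a minimal sketch (the
function name is mine, and the $7 \mu$ value is just the requirement figure):

```python
import math

def resolve_probability(x, sigma):
    """Probability F(x) that a true shift of x microns is resolved
    with the correct sign, given Gaussian noise of width sigma."""
    return 0.5 * (1.0 + math.erf(x / (math.sqrt(2.0) * sigma)))

# Illustrative values for a 7-micron resolution:
for x in (0.0, 7.0, 14.0, 21.0):
    print("F(%4.1f) = %.3f" % (x, resolve_probability(x, 7.0)))
```

Note that $F(0) = 1/2$ and $F(x) \rightarrow 1$ for large $x$, as required.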
The definition given above differs from the picture people had in mind
when stating the requirement. That picture is that of a set of repeated
measurements of the same position, yielding a Gaussian shape, and taking the
deviation of that distribution from its mean to be the resolution.
The definitions differ in several ways:
\begin{itemize}
\item
The reason the definition given above was considered more satisfactory is
related to the discrete step size of BPM measurements: the Gaussian-shape
definition says nothing about how to deal with that, and in fact would be
grossly misleading if the step size were comparable to the ``measurement
noise,'' while the proposed definition is completely explicit and consistent
in its treatment of the discretized data.
Since the step size is fairly small, this difference in definition is largely
a matter of principle.
\item
A less important difference is that
the proposed definition follows more closely the logical concept of
``resolving'' a difference in measurements.
\item
There is a practical consideration of setting the calibrating scale of
the measured resolution. In the Gaussian definition, the controlling
uncertainty is how well you know the slope of output versus actual distance
in the {\em new BPM} readings. This involves the $\beta$-function
uncertainty and calibration of the new electronics.
In the proposed definition, the
controlling uncertainty is how well you know the response in position change to
a given DFG current. This involves the $\beta$-function uncertainty, and the
calibration scale for the {\em existing BPM} readings.
\item
The most important difference is that while the Gaussian definition deals with
a single measurement, the proposed definition deals with the difference between
two measurements. Therefore, ignoring the small discretization differences
and calibration differences, {\bf the proposed definition defines a resolution
which is larger than that of the gaussian definition by a factor of
$\sqrt{2}$}.
\end{itemize}
In consequence, when the new definition is adopted, it is important that the
requirements document reflect this by restating the resolution requirement as
$10 \mu$ rather than $7 \mu$. This is not a change in requirements, only a
restatement of the way a number is to be measured.
In this note I discuss $7 \mu$ resolutions, but the same reasoning applies
if the target is $10 \mu$, and in fact if the issue is how to get a
relative accuracy in measuring $\sigma$ of 15\% or 30\%, the results for
$7 \mu$ and $10 \mu$ are very similar. The studies will be re-done in the
near future for the actual target of $10 \mu$.
\subsection{The Intent of This Note}
Given this definition, the question is how best to measure $\sigma$ (or to
verify that $\sigma$ does not exceed 7 microns), given just one
prototype of the new BPM electronics at one fixed location.
There are four challenges in determining $\sigma$:
\begin{enumerate}
\item
How to set up $x$ for a given trial.
\item
How to determine an approximation to $F(x)$.
\item
How to estimate $\sigma$, either from that approximation to $F(x)$ or from
other considerations involving measurements.
\item
How to optimize the process, in the sense that we want to answer the
question of whether $\sigma$ exceeds 7 microns without planning more
measurements than necessary.
\end{enumerate}
In answering the above challenges, we can assume the following resources:
\begin{enumerate}
\item
Beam time for some small number of trials of setting DFG values and taking
measurements.
\item
Availability of a reasonably centered beam for a starting point.
\item
We assume that the $\beta$-functions at BPM points are known to some
relative accuracy $\epsilon$. Hopefully, as long as $\epsilon$ is small,
our conclusions will not depend strongly on its exact value.
\item
The DFG currents can be set with a precision and accuracy which is adequate
to reproducibly induce small changes (small compared to the 7 micron goal)
in beam position. And we can assume these currents do not drift significantly
in the time spans during which we will do our measurements to determine
$\sigma$.
\item
We are not assuming perfect knowledge of the relationship between DFG current
and position movement in response, but we do assume linearity in the region
near our central measurement point.
\item
In addition to the measurements taken at the new BPM,
we have at our disposal the measurements taken at the
existing BPM's. These have associated discretization errors on the scale of
150 microns, and other uncertainties which have not been measured but appear
(from looking at fluctuations) to also be on the order of 100 microns.
{\em (The procedure recommended will not use these other BPM measurements,
unless they are needed to pre-calibrate the scale of displacement versus
DFG currents.)}
\end{enumerate}
It will be sufficient to measure the resolution at the single available BPM
at point $P$,
and when the beam is nearly centered on its ideal orbit.
It is acceptable that the error on
the estimate of resolution be as large as 15\%, since if we could say that
the resolution were going to be 6-8 microns, that would satisfy the requirement
for all practical purposes. And, in designing our measurement, if we have a
scheme which would have greater errors if the resolution is far from 7 microns,
that is fine--the answers 14-28 microns or 1-4 microns both answer our
question as to whether the specified requirement of 7 is met.
However, there is
a soft but important cost constraint: Each experiment of setting DFG
currents and measuring positions takes some time, and beam study time is
a very precious commodity. The total number of measurements must be minimized,
and in fact if we were to find that we can't answer the 7 micron question
using a reasonable number of measurements, we would have to re-assess whether
this requirement is worth verifying at all.
\section{Measurements Using a Single New BPM}
The idea, of course, will be to set up a series of 3-bump trials, noting
the displacement measured at $P$, and to deduce from that data the value of
$\sigma$. We take advantage of the fact that by doing $N$ displacement
measurements, we effectively can extract $N(N-1)/2$ two-measurement comparisons.
The measurement program logically consists of three steps:
\begin{enumerate}
\item
Calibrate the beam movement per unit DFG current by doing a series of
3-bumps with displacements up to several times the current BPM precision,
and fitting to find the slope.
\item
By making small adjustments in the DFG currents, take a series of measurements
in the new BPM which can be plotted against the ``known'' displacements.
\item
Analyze the data from that series of measurements to evaluate the best
estimate of $\sigma$.
The statistics of this step are greatly simplified if current adjustments
in the previous step were taken to have equal step sizes.
\end{enumerate}
A series of calculations and perhaps simulations should allow us to
evaluate in advance
the step sizes and ranges to use in (1) and (2) to minimize the number
of trials, while determining $\sigma$ to decent accuracy.
In fact, step (1), the calibration of beam movement per unit DFG current,
is probably not necessary, because this is already known to better than
the 15\% accuracy needed to address the issue of whether $\sigma$ is smaller
than 7 microns.
\subsection{Determination of $\sigma$}
The idea behind determining $\sigma$ for the measurement given by the new BPM
at $P$ is to do a sequence of trial measurements, and use these to form an
ensemble of two-measurement trials. In each two-measurement trial, we ask
the question {\em``Did the BPM measure a change in the correct direction?''}
Since the separation between the two known displacements is about $x$,
where $x$ varies depending on which two measurements form the trial,
this allows us to form an estimate of $F(x)$, and from that, we can derive
$\sigma$.
In detail, one workable scheme is:
\begin{itemize}
\item
Choose a number of trials $N_t$.
For example, one could decide to do 21 trials.
\item
Choose a step size $\rho$
(in displacement, which via our calibration becomes a
step size in current) between each trial. $\rho$ must be small enough
that we would not worry that a displacement of $N_t \rho /2$ would put us in
a nonlinear region. But more crucially, $\rho$ must be small compared to
the 7 micron scale we wish to investigate, and $N_t \rho$ must be large
compared to that scale.
For example, with $N_t = 21$, one might choose $\rho = 1 \mu$ and thus go out
to separations of up to 21 microns.
\item
Starting at a current expected to produce a displacement of $-\rho (N_t-1)/2$,
create a sequence of 3-bumps using DFG's near $P$. For each 3-bump, increase
the current to move the expected displacement by the step size $\rho$.
Note the measurements on the new BPM.
This gives a sequence of $N_t$ measurements $y_i$.
(Use the type of measurement for which
you are interested in determining $\sigma$.
For example, do not average multiple
readings if you would not be doing so in actual BPM usage.)
\item
Form an ensemble of $N_t(N_t-1)/2$ pair-trials by pairing each of the $N_t$
measurements with another one. Each pair-trial represents measuring two
points, separated by a distance $x$ which is some multiple of $\rho$. The ensemble
will contain $N_t - j$ pairs where the distance is $j \rho$.
\item
For each value of $j$ from 1 to $N_t-1$, gather together the $N_t - j$ pairs
of measurements, and count how many $C_j$ have the correct relation among the
$y$ values (that is, for how many pairs do the $y$ values differ in the correct
direction). Let the fraction $C_j/(N_t - j)$ be called $f_j$.
For example, $f_1$ will presumably be near to $1/2$ (because at a separation of
one step the resolution is worthless) while $f_{N_t-1}$ will be near to $1$
because at that distance the resolution is very good.
\item
Assign to each $f_j$ an error according to the binomial distribution, of
$\sqrt{f_j(1-f_j)/(N_t - j)}$.
\item
Fit the function $F(j) = 2f_j-1$ versus $j$, with the given errors in $f_j$,
to the form $F(j) = \mbox{erf}(j/\mu)$.
\item
The best estimate for $\sigma$ is then $\rho \mu / \sqrt{2}$.
\end{itemize}
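The pairing, counting, and fitting steps of the scheme above can be sketched
in a few lines of Python. This is a minimal illustration only (the function
name is mine, and the erf fit is done by a simple grid search over $\mu$
rather than a proper minimizer); ties are counted as failures, and the
zero-or-all-correct bins get the ``half a trial the other way'' weight
discussed later in this note:

```python
import math
from itertools import combinations

def estimate_sigma(y, rho):
    """Estimate sigma from readings y[0..n-1] taken at predicted
    displacements 0, rho, 2*rho, ...; ties count as failures."""
    n = len(y)
    correct = [0] * n
    total = [0] * n
    for i, k in combinations(range(n), 2):
        j = k - i                       # separation in units of rho
        total[j] += 1
        if y[k] > y[i]:                 # change resolved with correct sign
            correct[j] += 1
    js, Fs, errs = [], [], []
    for j in range(1, n):
        f = correct[j] / total[j]
        # Binomial error on the fraction f; for f = 0 or 1 fall back on a
        # hypothetical half-trial with the other result.
        e = math.sqrt(max(f * (1.0 - f), 0.25 / total[j]) / total[j])
        js.append(j)
        Fs.append(2.0 * f - 1.0)
        errs.append(2.0 * e)
    # Least-squares fit of F(j) = erf(j / mu) by a grid search over mu
    best_mu, best_chi2 = None, float("inf")
    for m in range(1, 2000):
        mu = 0.05 * m
        chi2 = sum(((F - math.erf(j / mu)) / e) ** 2
                   for j, F, e in zip(js, Fs, errs))
        if chi2 < best_chi2:
            best_mu, best_chi2 = mu, chi2
    return rho * best_mu / math.sqrt(2.0)
```

With $N_t = 35$ readings spaced by $\rho = 2.2 \mu$ (the values eventually
recommended in this note), a call such as `estimate_sigma(y, 2.2)` returns the
best estimate $\rho \mu / \sqrt{2}$ directly.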
The only thing to decide becomes the optimal choice of step size $\rho$,
and number of trials $N_t$. These are dictated by the need for accuracy in
$\sigma$ which, remember, is about 15\% when $\sigma$ comes out to 7 microns.
\subsubsection{Discarding Small Intervals}
Because the ``ties fail'' rule
seriously distorts the fraction $f_j$ for separations near or below the
discretization step $\lambda$, it may be best
to exclude points below, say, $1.5 \lambda$ from the fit. That is, such
points will often exhibit a fraction significantly below one half, yet
the model fraction is rigorously $1/2$ at $x = 0$.
Disregarding small displacements turns out to
dramatically reduce the bias in the estimate for $\sigma$,
but actually increases the RMS error. It's unclear whether this
is a worthwhile tradeoff.
\subsubsection{Modification of Current Steps}
A small refinement in the selection of the sequence of currents may be
advantageous.
If one chooses a uniform step size $\rho$ and a number of steps $N$
which will give an accurate value of $\sigma$ when the actual $\sigma$
is around 7 microns,
then it may have no accuracy at all in the case that the BPM resolution is
not as good as we thought, with an actual $\sigma$ that is much higher.
For example, if we were to choose $N = 21$ and $\rho = 1 \mu$, then if the
true resolution were to be $25 \mu$, the measurements would only tell us that
the true resolution is somewhat more than or around $20 \mu$.
To improve the situation, one might wish to use a slowly increasing step size.
However,
if we wish to retain the idea of grouping several fits all separated by the same
distance $x$, then we cannot arbitrarily select step sizes, since then few
pairs will share the same separations.
The suggested refinement is to choose, out of the $N$ measurements,
some $B$-th measurement. Until that $B$-th measurement, the
current increases by $\rho$ each time; after $B$, the current increases by
some small integer multiple of $\rho$ (say $3 \rho$). This
will extend the range of meaningful answers, while not significantly
affecting the accuracy when the true $\sigma$ is small.
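A small helper (the function name and centering convention are my own) makes
the resulting sequence of predicted displacements concrete:

```python
def displacement_sequence(n, b, rho, factor=3):
    """Predicted displacements for n trials: uniform steps of rho up to
    the b-th measurement, then steps of factor*rho thereafter, with the
    whole sequence centred about zero."""
    x, seq = 0.0, []
    for i in range(n):
        seq.append(x)
        # step of rho while still before the b-th measurement
        x += rho if i + 1 < b else factor * rho
    mid = (seq[0] + seq[-1]) / 2.0
    return [s - mid for s in seq]
```

For example, `displacement_sequence(14, 10, 2.7)` reproduces the
$N = 14$, $B = 10$, $\rho = 2.7 \mu$ plan considered later in this note,
with the last four points spaced by $8.1 \mu$.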
\subsubsection{Alternative Modification of Current Steps}
Naively, one might wish to use a slowly increasing step size,
with each step greater than the previous one by some factor which is
slightly larger than 1. Unfortunately,
that complicates the analysis of results, since you no longer have a set
of reasonably precise fractions to which to fit the
$\mbox{erf}$ curve.
The analysis can still be done. Instead of a least-squares fit to find the best
$\mbox{erf}$ curve, $\mu$ is found by maximizing the likelihood function for
the points of data, against $\mu$ in the probability function
$\mbox{erf}(x/\mu)$. For each pair for which the correct sign of difference
is obtained, the likelihood function gets a factor of
$.5(1+\mbox{erf}(x/\mu))$,
while for each pair for which the incorrect sign of difference
is obtained, the likelihood function gets a factor of
$1-.5(1+\mbox{erf}(x/\mu)) = .5(1-\mbox{erf}(x/\mu))$.
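The maximum-likelihood analysis just described can be sketched as follows
(a minimal illustration, with names of my own choosing, and the maximization
again done by a simple grid search over $\mu$):

```python
import math

def log_likelihood(mu, pairs):
    """pairs: list of (x, resolved), where x is the known separation and
    resolved is True when the measured difference had the correct sign."""
    ll = 0.0
    for x, resolved in pairs:
        p = 0.5 * (1.0 + math.erf(x / mu))   # probability of correct sign
        p = min(max(p, 1e-12), 1.0 - 1e-12)  # guard against log(0)
        ll += math.log(p if resolved else 1.0 - p)
    return ll

def fit_mu(pairs):
    """Maximize the likelihood by a grid search over mu."""
    return max((0.05 * m for m in range(1, 2000)),
               key=lambda mu: log_likelihood(mu, pairs))
```

This removes the need for pairs to share common separations, at the cost of
giving up the simple binned-fraction picture.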
\subsection{Estimating $\sigma$ -- How Accurate Will the Estimate Be?}
This estimate of $\sigma$ will be off for three reasons:
\begin{itemize}
\item
The statistical error in determination of $\sigma$ from
the finite number of independent pair-measurements at varying values of $x$.
\item
A systematic inaccuracy induced by the fact that the pair-measurements
are themselves derived from the original sequence, and thus are not truly
independent.
\item
Inaccuracy that stems from inaccuracy in the calibration of current to
displacement.
\end{itemize}
The calibration inaccuracy is unimportant, assuming only that we know the
relation between current and displacement to 15\%, which is quite a conservative
assumption.
The statistical error and the systematic inaccuracy
induced by the fact
that the pair-measurements are not truly independent
can be studied together, using a series of simulations.
In each simulation, we make an assumption about the actual value of $\sigma$,
and for some given $N$ and $\rho$ generate a series of simulated measurements
and do the analysis to find a sample value for $\sigma_{\mbox{estimated}}$.
We repeat this
a large number of times $L$, and from this we can determine both the systematic
bias (which will manifest itself as an incorrect mean value for the
estimate of $\sigma$) and the accuracy in $\sigma$. There are some notorious
statistical subtleties involved
in estimating the variance of a sample, but here we are not doing that:
We will have $L$ independent measurements of a quantity, and it will be valid
to discuss the mean and variance of that collection in the usual way.
In the course of doing the simulation, there is a choice of how to generate each
data point:
\begin{itemize}
\item
We can simply generate a value for $j \rho$ based on the assumed $\sigma$.
\item
We can generate that value and then round to the nearest multiple of
$\lambda = 150/64 \approx 2.34$ microns. This would reflect the discretization in the new BPM's,
which have an additional six bits of accuracy as compared to the old 150 micron
step size.
\end{itemize}
The latter method
is more honest in assessing the resolution power, in that it
rolls in the effect of lucky/unlucky discretization and of ``ties''
counting as non-resolved displacements.
Thus it will result in the simulation yielding
a $\sigma$ estimate which is just a bit higher than the former.
However, there is no doubt that using the discretization in our simulation is
the right thing to do, because the question we are answering is ``how should
we measure the displacement resolution of the {\em actual} system.''
That is, when the actual measurements are done to estimate $\sigma$,
the result will be a combination of the true Gaussian noise fluctuations
and the discretization error. For discrete bin size $\lambda$, we will find
$\sigma = \sqrt{\sigma_{\mbox{noise}}^2+\lambda^2/12}$. Since this is
the relevant error quantity when using the BPM data to smooth the beam,
it is appropriate that this $\sigma$ (rather than just $\sigma_{\mbox{noise}}$)
be used in the simulation as well.
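As a quick numerical check of this combination in quadrature (the function
name is mine; the $\lambda^2/12$ term is the standard uniform-quantization
variance, appropriate when the noise is not small compared to the step):

```python
import math

def combined_sigma(sigma_noise, step):
    """Single-measurement spread when Gaussian noise of width sigma_noise
    is discretized on a grid of spacing step: quadrature sum of the noise
    and the uniform-quantization spread step/sqrt(12)."""
    return math.sqrt(sigma_noise**2 + step**2 / 12.0)

lam = 150.0 / 64.0   # new-BPM step size, about 2.34 microns
print(combined_sigma(5.0, lam))
```

For noise at the 5 micron level, the discretization contribution is seen to
be quite small.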
Once we note that what is being approximated is the true $\sigma$
(and not $\sigma_{\mbox{noise}}$), we can now
determine the bias inherent in the sequence measurement technique, by noting the
mean difference between the estimated $\sigma$ and the actual value.
\subsection{Results of the Simulation: Bias and Accuracy of $\sigma$}
The rules for the simulation become simple: For each value of $N$ and $\rho$
we wish to investigate, perform $L$ trials of the following form:
Each trial consists of generating $N$ Gaussian random numbers
with means ranging from $0$ to $(N-1) \rho$.
The deviations of these random numbers should not be the assumed net $\sigma$,
because the $\sigma$ appearing in the definition of resolution is greater than
the noise in these random numbers for two reasons:
\begin{enumerate}
\item
The actual $\sigma$ is $\sigma = \sqrt{\sigma_{\mbox{noise}}^2+\lambda^2/12}$.
\item
The actual $\sigma$ appearing in the definition of the resolution is the
difference in the deviations of the two numbers in the pair. That is, ignoring
the $\lambda$ effect, if
we chose the noise deviation to be $h$ micron, then $\sigma$ would be
$\sqrt{2}h$.
\end{enumerate}
Thus the correct value to use for the deviation of the random numbers is
$\sigma_{\mbox{noise}} = \sqrt{(\sigma_{\mbox{assumed}}^2 - \lambda^2)/2}$.
Generate $N$ numbers with means ranging from $0$ to $(N-1) \rho$ and with
this $\sigma_{\mbox{noise}}$, round the $N$ numbers
to the nearest $\lambda$, and then feed them into the analysis engine.
(The actual
study will likely go from $-(N-1) \rho/2$ to $+(N-1) \rho/2$ but that is
equivalent for the purposes of studying resolution to starting from 0.)
The analysis engine will form the ensemble of $N(N-1)/2$ pairs, and
evaluate the results
to form the correct resolution fractions $F(x)$, and assign weights based on
the binomial distribution to those function values. (In cases where the
resolution is perfect, or always wrong, assign a weight based on
a hypothetical finding of 1/2 a trial with the other result, rather than the
infinite weight the binomial distribution would tell you to assign.)
It will then fit $2f_j-1$ to the form $\mbox{erf}(j/\mu)$, which gives a best-fit
value for $\mu$. The measured value of $\sigma$ for this trial is then
$\sigma = \rho \mu / \sqrt{2}$.
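The data-generation step of one such trial can be sketched as follows (a
minimal stand-in for the actual study code; the function name and rounding
idiom are mine, and the noise width follows the formula given above):

```python
import random

LAMBDA = 150.0 / 64.0   # discretization step of the new BPM, microns

def simulated_readings(n, rho, sigma_assumed, lam=LAMBDA):
    """Generate one trial of n simulated BPM readings at predicted
    displacements 0, rho, ..., (n-1)*rho, rounded to the grid.
    Per the text, sigma_noise = sqrt((sigma_assumed**2 - lam**2) / 2)."""
    sigma_noise = ((sigma_assumed**2 - lam**2) / 2.0) ** 0.5
    readings = []
    for i in range(n):
        y = i * rho + random.gauss(0.0, sigma_noise)
        readings.append(lam * round(y / lam))  # nearest multiple of lam
    return readings
```

The resulting list of readings is then handed to the analysis engine, which
forms the pair ensemble and performs the erf fit as described above.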
The set of simulations to try to find the optimal beam study for assessing
$\sigma$ has four dimensions:
\begin{enumerate}
\item
The value of $N$. We want to keep $N$ as small as possible without
too badly affecting our estimate of $\sigma$, since $N$ represents how many
measurements we will really be taking.
\item
The value of $\rho$. The test will be sensitive in a range of a few times
$\rho$ up to about $\rho N/3$. If $\rho$ is too large or too small
we will have no chance of accurately estimating $\sigma$ when $\sigma$ is near
7 microns.
\item
The value of $B$, such that the step size past the $B$-th step becomes
$3 \rho$.
\item
The assumed value of $\sigma$. Of course we want to know how accurately we
will measure $\sigma$ if it is around 7 microns. But we also want to know how
accurate the proposed test will be if $\sigma$ is somewhat off from that.
\end{enumerate}
The suite of assumed $\sigma$ values I use in the set of simulations is
4$\mu$, 7$\mu$, 10$\mu$, 15$\mu$, 20$\mu$.
The results are a tad disconcerting, at least if one insists on an estimate
which is trustworthy to 15\% accuracy:
In order to expect to estimate $\sigma$ with a probable error of one micron
if the actual value of $\sigma$ is 7 microns, one would have to do about
{\bf 35 measurements}.
The optimum strategy seems to be to do 35 measurements,
separated by 2.2 microns each (that is, to do 35 steps ranging from $-37.4\mu$
to $+37.4\mu$ in predicted displacement). This procedure will estimate
$\sigma$ to about 15\% accuracy whether $\sigma$ is 4, 7, 10, 15, or 20 microns.
The procedure will
deliver a slightly biased estimate, due to two effects:
\begin{itemize}
\item
Imperfect independence of the pairs of data (they are derived from a single
sequence, rather than independent pairs of measurements).
\item
``Ties'' are counted as incorrect resolutions of the difference. This is
reflected by reducing the noise sigma, subtracting (in quadrature)
the step size over $\sqrt{12}$. That correction is not a perfect compensation
for the effect.
\end{itemize}
\noindent
However, the bias is a small fraction of a micron and can thus safely be
corrected for in interpreting the results (or even ignored).
\begin{verbatim}
N = 35, rho = 2.2
Sigma Sigma(Noise) Estimate RMS Error Bias
4 2.29395 4.07826 0.564083 0.0782645
7 4.665 7.16711 0.986588 0.167109 *****
10 6.87475 10.1571 1.47837 0.157105
15 10.4767 14.8629 2.33477 -0.137089
20 14.045 19.5328 3.31009 -0.467224
\end{verbatim}
What if one is willing to settle for 30\% accuracy ($7 \pm 2$)?
Then one can estimate $\sigma$ using only 14 measurements, with
a spacing of 2.7$\mu$. And by increasing the spacing to 8.1$\mu$ for the last
four points, one can even get reasonable accuracy if $\sigma$ is out to 15 or 20
microns.
\begin{verbatim}
N = 14, B = 10, rho = 2.7
Sigma Sigma(Noise) Estimate RMS Error Bias
4 2.29395 4.12501 1.04791 0.125005
7 4.665 7.14363 1.90382 0.143629 *****
10 6.87475 10.0565 3.10319 0.0565442
15 10.4767 15.0069 5.18455 0.00692129
20 14.045 19.9268 10.6554 -0.0731733
\end{verbatim}
\subsubsection{Simulation Output}
Each simulation used 1000 trials, which seems to give reproducibility of the
RMS error result at the scale of the second decimal place. The key number
is the RMS error for an actual $\sigma$ of 7 microns.
\begin{verbatim}
Lambda = 2.34
N = 20, B = 20, rho = 2.3
Sigma Sigma(Noise) Estimate RMS Error Bias
4 2.29395 4.12072 0.756898 0.120716
7 4.665 7.25442 1.43506 0.254416
10 6.87475 10.0632 2.22865 0.0632282
15 10.4767 15.2822 4.3081 0.282193
20 14.045 20.5305 7.53201 0.53047
N = 20, B = 20, rho = 2.0
Sigma Sigma(Noise) Estimate RMS Error Bias
4 2.29395 4.13176 0.775578 0.131756
7 4.665 7.2064 1.43785 0.206398 *********
10 6.87475 10.0883 2.31203 0.088268
15 10.4767 15.1837 5.01361 0.183737
20 14.045 20.7415 9.14923 0.741475
N = 20, B = 20, rho = 1.6
Sigma Sigma(Noise) Estimate RMS Error Bias
4 2.29395 4.03845 0.74782 0.0384473
7 4.665 7.28921 1.6335 0.289209
10 6.87475 10.049 2.61978 0.0489792
15 10.4767 15.6747 6.68759 0.6747
20 14.045 24.1372 28.1995 4.1372
N = 20, B = 20, rho = 1.3
Sigma Sigma(Noise) Estimate RMS Error Bias
4 2.29395 4.11997 0.748427 0.119971
7 4.665 7.3398 1.82566 0.339801
10 6.87475 10.4123 3.32788 0.412257
15 10.4767 17.696 20.1824 2.69596
N = 20, B = 20, rho = 1
Sigma Sigma(Noise) Estimate RMS Error Bias
4 2.29395 4.12879 0.782815 0.128791
7 4.665 7.57343 2.2545 0.573431
10 6.87475 11.4029 6.56861 1.40294
15 10.4767 20.5499 26.6345 5.54991
20 14.045 30.7018 50.3048 10.7018
N = 20, B = 10, rho = 1
Sigma Sigma(Noise) Estimate RMS Error Bias
4 2.29395 4.11584 0.898101 0.11584
7 4.665 7.09822 1.82658 0.0982154
10 6.87475 9.70755 2.6509 -0.292455
15 10.4767 14.3715 5.10924 -0.628488
20 14.045 19.6283 10.941 -0.371723
------------------------
N = 25, B = 25, rho = 1
Sigma Sigma(Noise) Estimate RMS Error Bias
4 2.29395 4.12863 0.662772 0.128631
7 4.665 7.25178 1.51272 0.251777
10 6.87475 10.4734 2.88141 0.473425
15 10.4767 16.9075 12.5682 1.90752
20 14.045 23.6257 18.8668 3.62572
------------------------
N = 30, B = 30, rho = 2.2
Sigma Sigma(Noise) Estimate RMS Error Bias
4 2.29395 4.06153 0.567861 0.0615253
7 4.665 7.148 1.11106 0.147999
10 6.87475 10.1477 1.6548 0.147665
15 10.4767 14.7701 2.61622 -0.229919
20 14.045 19.4981 4.01707 -0.501872
N = 30, B = 30, rho = 2.0
Sigma Sigma(Noise) Estimate RMS Error Bias
4 2.29395 4.08322 0.604192 0.0832204
7 4.665 7.15113 1.08552 0.151128 ****
10 6.87475 10.0463 1.63488 0.0463032
15 10.4767 14.7025 2.84273 -0.297457
20 14.045 19.5242 4.22875 -0.475807
N = 30, B = 30, rho = 1.8
Sigma Sigma(Noise) Estimate RMS Error Bias
4 2.29395 4.07108 0.576913 0.0710804
7 4.665 7.16681 1.12141 0.166814
10 6.87475 10.0651 1.67522 0.0650821
15 10.4767 14.8756 2.88059 -0.124387
20 14.045 19.6829 4.49619 -0.317066
N = 30, B = 15, rho = 1
Sigma Sigma(Noise) Estimate RMS Error Bias
4 2.29395 4.08468 0.698616 0.0846843
7 4.665 7.13025 1.31467 0.130253
10 6.87475 9.77334 2.00232 -0.226659
15 10.4767 14.0454 3.10638 -0.954601
20 14.045 18.5547 4.54355 -1.44531
N = 30, B = 30, rho = .5
Sigma Sigma(Noise) Estimate RMS Error Bias
4 2.29395 4.17619 0.718023 0.176191
7 4.665 7.71693 2.52577 0.716929
10 6.87475 12.4525 14.2588 2.45254
15 10.4767 21.4614 25.2098 6.46139
20 14.045 28.1866 37.333 8.18657
---------------------------------------------------
N = 35, B = 35, rho = 2.3
Sigma Sigma(Noise) Estimate RMS Error Bias
4 2.29395 4.04106 0.547013 0.041056
7 4.665 7.14681 1.02354 0.14681
10 6.87475 10.0374 1.4789 0.0374429
15 10.4767 14.8679 2.40904 -0.132082
20 14.045 19.6721 3.40111 -0.327948
N = 35, B = 35, rho = 2.2
Sigma Sigma(Noise) Estimate RMS Error Bias
4 2.29395 4.07826 0.564083 0.0782645
7 4.665 7.16711 0.986588 0.167109 *****
10 6.87475 10.1571 1.47837 0.157105
15 10.4767 14.8629 2.33477 -0.137089
20 14.045 19.5328 3.31009 -0.467224
N = 35, B = 35, rho = 2.1
Sigma Sigma(Noise) Estimate RMS Error Bias
4 2.29395 4.08379 0.54589 0.0837896
7 4.665 7.22475 0.98814 0.22475
10 6.87475 10.1281 1.47298 0.128147
15 10.4767 14.8804 2.37916 -0.119581
20 14.045 19.5986 3.28884 -0.401422
N = 35, B = 35, rho = 2.0
Sigma Sigma(Noise) Estimate RMS Error Bias
4 2.29395 4.1003 0.56438 0.100303
7 4.665 7.13677 1.01498 0.136773
10 6.87475 10.1414 1.46738 0.141444
15 10.4767 14.7257 2.29323 -0.274263
20 14.045 19.2648 3.5122 -0.735169
---------------------------------------------------
N = 40, B = 40, rho = 2.5
Sigma Sigma(Noise) Estimate RMS Error Bias
4 2.29395 4.07242 0.529969 0.07242
7 4.665 7.19897 0.950067 0.198973
10 6.87475 10.1332 1.26498 0.13317
15 10.4767 14.8435 2.02634 -0.156514
20 14.045 19.6748 2.92626 -0.325182
N = 40, B = 40, rho = 2.0
Sigma Sigma(Noise) Estimate RMS Error Bias
4 2.29395 4.08691 0.502204 0.0869131
7 4.665 7.18831 0.929609 0.188309 *****
10 6.87475 10.092 1.37636 0.0920412
15 10.4767 14.7654 2.14373 -0.234614
20 14.045 19.4553 3.08455 -0.544733
N = 40, B = 40, rho = 1.5
Sigma Sigma(Noise) Estimate RMS Error Bias
4 2.29395 4.10752 0.483877 0.10752
7 4.665 7.20806 0.942227 0.208056
10 6.87475 10.1264 1.36652 0.126446
15 10.4767 14.9508 2.39919 -0.0492463
20 14.045 19.6368 3.59537 -0.363184
N = 40, B = 40, rho = 1.0
Sigma Sigma(Noise) Estimate RMS Error Bias
4 2.29395 4.08407 0.497232 0.08407
7 4.665 7.2164 1.00879 0.216403
10 6.87475 10.1819 1.57399 0.181863
15 10.4767 15.1103 3.05256 0.110318
20 14.045 20.231 5.43581 0.231037
N = 40, B = 20, rho = 1
Sigma Sigma(Noise) Estimate RMS Error Bias
4 2.29395 4.07017 0.555451 0.0701727
7 4.665 7.12142 1.10394 0.12142
10 6.87475 9.87747 1.58434 -0.122528
15 10.4767 14.3544 2.64673 -0.645559
20 14.045 18.6587 3.57477 -1.34135
N = 40, B = 40, rho = .8
Sigma Sigma(Noise) Estimate RMS Error Bias
4 2.29395 4.06335 0.489429 0.0633478
7 4.665 7.20757 1.03638 0.207567
10 6.87475 10.1968 1.79374 0.196782
15 10.4767 15.492 3.92596 0.492027
20 14.045 21.4448 8.8844 1.44482
N = 40, B = 40, rho = .75
Sigma Sigma(Noise) Estimate RMS Error Bias
4 2.29395 4.10686 0.506805 0.106861
7 4.665 7.16496 1.06393 0.164962
10 6.87475 10.2316 1.85805 0.231617
15 10.4767 15.5351 4.66116 0.535062
20 14.045 21.546 8.64922 1.54598
N = 40, B = 40, rho = .5
Sigma Sigma(Noise) Estimate RMS Error Bias
4 2.29395 4.07752 0.575696 0.077519
7 4.665 7.13517 1.16759 0.135168
10 6.87475 9.6472 1.66239 -0.3528
15 10.4767 13.9893 2.97361 -1.01074
20 14.045 18.2343 5.10839 -1.76569
-------------
N = 50, B = 50, rho = 1.5
Sigma Sigma(Noise) Estimate RMS Error Bias
4 2.29395 4.06496 0.442941 0.0649556
7 4.665 7.1995 0.826773 0.199501
10 6.87475 10.0738 1.18978 0.0738001
15 10.4767 14.9383 1.81942 -0.0616957
20 14.045 19.857 2.54129 -0.142998
N = 50, B = 50, rho = 1.5
Sigma Sigma(Noise) Estimate RMS Error Bias
4 2.29395 4.06842 0.449385 0.068423
7 4.665 7.18546 0.825454 0.185457 ****
10 6.87475 10.1505 1.19271 0.150462
15 10.4767 14.8953 1.81766 -0.104663
20 14.045 19.7894 2.90863 -0.210592
N = 50, B = 25, rho = 1
Sigma Sigma(Noise) Estimate RMS Error Bias
4 2.29395 4.09588 0.496471 0.0958808
7 4.665 7.09844 0.91457 0.0984441
10 6.87475 9.95736 1.3842 -0.0426448
15 10.4767 14.4448 2.21003 -0.555167
20 14.045 18.8127 2.9749 -1.18733
N = 50, B = 50, rho = 1
Sigma Sigma(Noise) Estimate RMS Error Bias
4 2.29395 4.06662 0.425365 0.0666171
7 4.665 7.21871 0.866704 0.218705
10 6.87475 10.1595 1.27897 0.159459
15 10.4767 15.0363 2.23764 0.0363131
20 14.045 19.7479 3.78972 -0.252122
\end{verbatim}
\subsection{Calibrating Beam Movement and DFG Current (if Necessary)}
Almost certainly, the relation between DFG current and beam displacement is
known well enough that it need not be calibrated just for this study. If not,
this sub-section discusses how it can be measured, using existing BPMs.
Actually, it might be superior to rely on the new BPM, but I'm uncomfortable
with the possibility of circular reasoning in doing that.
The calibration step does not have to be very precise, because it will only
set the scale of the final resolution answer. For the purposes of addressing
the resolution requirement, if we can in the end say that the resolution is
6-8 microns, that will be fine.
So the calibration points can be chosen so as
to produce an accuracy of just 15\%.
Also, one has to worry in principle about non-linearities in position
response to changes in DFG current. But the same series of measurements
allows you to get a handle on those non-linearities, and thus to verify that
they, too, contribute to the answer at below the 15\% level.
The object is to get accurate enough calibration in as few measurements as
possible. Assuming we do not know this already (and thus cannot have the
calibration in hand without doing any trials), here is how we could
measure it:
The tricky part of planning the calibration step is that the current BPMs
have discrete outputs at the level of 150 microns, and that we don't have a
handle on the size of their random (non-reproducible) fluctuations. Also,
there are three major assets which we might
want to utilize to make the calibration accurate and quick:
\begin{itemize}
\item
We can set up the bump to cover some number of BPMs, not necessarily just the
two affected by the shortest possible 3-bump.
\item
We can take advantage of the fact that the $\beta$-functions
are slightly different at different BPMs.
\item
If we can do so without {\em a priori} knowledge of its resolution, we are
free to make use of the more precise measurements from the new BPM.
\end{itemize}
The strategy will consist of making $2k+1$ 3-bump measurements,
each using currents in the 3 DFGs which are separated by
$q_c$, the current (in that DFG) needed for a 3-bump which would be predicted
to produce a movement of roughly $\eta$ microns. The central point in this
sequence could be zero.
Thus we would measure with $q = m q_c$ where $m$ ranges from
$-k$ to $k$. We want $k \eta$ to be large compared to discretization error
and fluctuations, but small enough that we don't have to worry about moving
the beam too far or about large non-linearities. For example, $k \eta$
could be half a millimeter.
Given these measurements, we fit to a linear form, and that fit will have
associated error estimates for the slope and origin. The slope gives us our
calibration at the remote BPM points; it is trivial to use the form for
the orbit oscillation plus the $\beta$-functions at the other BPMs and at
$P$ to translate that to a calibration of displacement at $P$ versus current.
The error estimate for the slope tells us the uncertainty in our calibration.
Actually, given $n$ BPM readings, we would have $n$ such calibrations, and
all that remains is to check that they are consistent, within the purported
errors in each, and that the overall error is not more than 15\%. We could
then take the average value to be the calibration needed.
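As a concrete illustration, the fit step above can be sketched as follows. This is a hedged sketch, not the actual calibration procedure: the specific numbers ($k = 5$, a true response of 100 microns per current step, a 50-micron random fluctuation) are assumptions chosen for the example, and only the 150-micron discretization of the existing BPMs comes from the text.

```python
import numpy as np

def calibration_fit(m, readings):
    """Least-squares linear fit of BPM reading vs. bump multiple m.

    Returns (slope, slope_error): the slope is the microns-per-q_c
    calibration at that BPM, and slope_error is its one-sigma
    uncertainty taken from the fit covariance matrix.
    """
    coeffs, cov = np.polyfit(m, readings, deg=1, cov=True)
    return coeffs[0], np.sqrt(cov[0, 0])

# Illustrative data: k = 5 gives 2k+1 = 11 settings m = -5..5.
# Assumed true response: 100 microns per step; assumed random
# fluctuation: 50 microns; discretization: 150 microns (from the text).
rng = np.random.default_rng(0)
k, true_slope = 5, 100.0
m = np.arange(-k, k + 1)
raw = true_slope * m + rng.normal(0.0, 50.0, size=m.size)
readings = 150.0 * np.round(raw / 150.0)   # discretized BPM output

slope, err = calibration_fit(m, readings)
```

With $n$ BPMs one would repeat this fit $n$ times and check, as described above, that the $n$ slopes are consistent within their fitted errors.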
\section{Measurements Using Two New BPMs}
How much would the situation improve if there were two of the prototype BPMs
at our disposal? The naive answer is that the number of pairs would double,
therefore the number of points taken should be reduced by a factor of
$\sqrt{2}$. Actually, the situation would be a smidgen better than that,
because part of the error in the $\sigma$ estimate comes from the
non-independence of the pair measurements, and that factor would be
diminished given two independent sequences of measurements.
On the other hand, since there are now two probably incommensurate $\rho$
values, instead of being able to group many differences together to get a more
accurate fraction of correct results, we would get twice as many points with
half the data for each. This slightly hurts the estimate accuracy. Also,
since $\rho$ needs to be small compared to $\sigma$ to get resolving power, and
$j \rho$ needs to be large compared to $\sigma$, there is some worry that
we would not be able to reduce the number of measurements taken by the full
factor of $\sqrt{2}$.
If we had planned to do 35 measurements to achieve 15\% accuracy, then
the availability of a second BPM might reduce that to just 25.
On the whole, it doesn't seem worth changing any plans for this.
If there were dozens of BPMs available, we would still need on the order of
10 measurements, because of the need to span $\sigma$ by reasonable factors
on both ends.
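The naive $\sqrt{2}$ scaling above can be checked with a line of arithmetic; the figure of 35 measurements for 15\% accuracy is from the text, and the rounding up is an assumption of this sketch.

```python
import math

# Naive scaling: a second BPM doubles the number of usable pairs,
# so the required number of 3-bump measurements scales down by sqrt(2).
n_one_bpm = 35                              # measurements for 15% accuracy
n_two_bpm = math.ceil(n_one_bpm / math.sqrt(2))
print(n_two_bpm)   # 25, consistent with the estimate in the text
```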
\section{How Accurately Can Resolution Be Assessed in Full System?}
{\em
(Section not done yet.)
}
%\begin{thebibliography}{9}
%\bibitem{reqs}Jim
%``approved requirements'' get info from docs database.
%\bibitem{bumps}M. Fischler,
%``The Physics and Math of Bumps in Courant/Snyder,'' Fermilab Technical
%Memo (2003).
%\end{thebibliography}
\end{document}