Precision

In metrology, precision indicates the repeatability or reproducibility of an instrument (but does not indicate accuracy). In other words, it is the degree to which a measurement of a quantity can be repeated using the same method, under similar conditions.

In error theory, precision is the degree of “convergence” (or, conversely, “dispersion”) of individually collected data (the sample) about the mean value of the series to which they belong; in other words, it is characterised by their variance (or standard deviation) with respect to the sample mean.
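As a rough illustration, the sketch below (with made-up readings) computes the sample mean, variance, and standard deviation of a small set of repeated measurements; the smaller the spread about the mean, the more precise the series.

```python
# Minimal sketch with hypothetical readings: precision as the dispersion of
# repeated measurements about their own sample mean.
import statistics

readings = [10.03, 9.98, 10.01, 10.02, 9.97, 10.00]  # hypothetical repeated measurements

mean = statistics.mean(readings)
variance = statistics.variance(readings)  # sample variance
std_dev = statistics.stdev(readings)      # sample standard deviation

print(f"sample mean     = {mean:.4f}")
print(f"sample variance = {variance:.6f}")
print(f"std deviation   = {std_dev:.4f}")  # smaller spread -> higher precision
```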

By analogy with a series of arrows shot at a target, the more tightly the arrows are grouped together, the more precise the series of shots is. How close the centre of the group (the mean) comes to the centre of the target is a separate matter: that is determined by accuracy. See also: Accuracy vs Precision

Scatter of values can be produced by non-repeatable random variations (statistical error). In order to obtain a reliable average value, a sufficiently large number of measurements must be taken. In statistics, precision is expressed in terms of standard deviation.
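A small simulation can make the averaging point concrete. The sketch below assumes Gaussian random error with a made-up true value and spread; it shows that the average of many readings scatters far less than a single reading, which is why a sufficiently large number of measurements is needed for a reliable mean.

```python
# Illustrative sketch (assumed Gaussian statistical error): averaging more
# readings makes the estimated mean more reliable, because the scatter of the
# average shrinks roughly as 1/sqrt(n).
import random
import statistics

TRUE_VALUE = 10.0    # hypothetical true value of the measured quantity
NOISE_SIGMA = 0.05   # assumed standard deviation of the random error

def average_of_n(n: int) -> float:
    """Mean of n simulated readings of the same quantity."""
    return statistics.mean(random.gauss(TRUE_VALUE, NOISE_SIGMA) for _ in range(n))

for n in (5, 50, 500):
    # Repeat the whole experiment many times and see how much the average itself scatters.
    averages = [average_of_n(n) for _ in range(1000)]
    print(f"n = {n:3d}  scatter of the average = {statistics.stdev(averages):.4f}")
```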

The ability of a measuring instrument to give the same result when repeatedly measuring the same quantity is known as repeatability. Repeatability is random in nature and, by itself, does not assure accuracy, though it is a desirable characteristic. Precision refers to the consistent reproducibility of a measurement. A precise instrument is not necessarily accurate, unless the magnitude of its deviation (the systematic error) is known and the appropriate corrections are made.
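A toy sketch of that last point, with hypothetical numbers: a precise instrument with a known constant offset (systematic error) can be corrected after the fact, at which point its readings are both precise and accurate. The correction changes the mean but leaves the spread untouched.

```python
# Toy sketch (hypothetical numbers): a precise but biased instrument can be
# made accurate if the systematic offset is known and subtracted.
import statistics

TRUE_VALUE = 10.00
KNOWN_BIAS = 0.15  # systematic error, assumed to have been characterised beforehand

raw_readings = [10.16, 10.14, 10.15, 10.16, 10.15]   # tightly grouped -> precise
corrected = [r - KNOWN_BIAS for r in raw_readings]   # subtract the known offset

print("raw mean       =", round(statistics.mean(raw_readings), 3))   # ~10.15, biased
print("corrected mean =", round(statistics.mean(corrected), 3))      # ~10.00, accurate
print("std deviation  =", round(statistics.stdev(raw_readings), 4))  # unchanged by the correction
```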

Reproducibility is normally specified in terms of a scale reading over a given period of time. If an instrument is not precise, it will give different results for the same dimension when readings are repeated. In most measurements, precision assumes more significance than accuracy. It is important to note that the scale used for the measurement must be appropriate and conform to an internationally accepted standard.

If an instrument is used to measure the same input at different instants spread over the whole day, successive measurements may vary randomly. Precision is therefore also treated as a static characteristic of an instrument. The random fluctuations of the readings (usually following a Gaussian distribution) are often due to random variations of several other factors that have not been taken into account while measuring the variable.

A precise instrument gives successive readings that are very close to one another; in other words, the standard deviation \(\sigma_e\) of the set of measurements is very small. Quantitatively, precision can be expressed as:

\[\textrm{Precision}=\dfrac{\textrm{measured range}}{\sigma_e}\]
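A minimal sketch of this ratio with hypothetical readings, assuming that "measured range" means the spread between the largest and smallest readings (if the instrument's full-scale range is meant instead, substitute that value):

```python
# Sketch of the precision ratio above, using hypothetical readings.
import statistics

readings = [4.98, 5.02, 5.00, 4.99, 5.01]  # hypothetical repeated measurements

measured_range = max(readings) - min(readings)  # taken here as the spread of the readings
sigma_e = statistics.stdev(readings)            # standard deviation of the set

precision = measured_range / sigma_e
print(f"measured range = {measured_range:.2f}")
print(f"sigma_e        = {sigma_e:.4f}")
print(f"precision      = {precision:.2f}")
```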
