Accuracy

In error theory, accuracy is the degree of correspondence between the value that can be inferred from a series of measured values (a data sample) and the real or reference value, i.e. the difference between the average sample value and the true or reference value. It indicates how close the value found is to the real one. Accuracy is a qualitative concept that depends on both random and systematic errors. See also: Accuracy vs Precision.

Measurement accuracy

In accordance with the general concept, the accuracy of a measurement is the degree of agreement between the average value deduced from one or more measurements and the corresponding true value, that is, the value taken as reference.

The error resulting from the deviation between the measured value and the true value is called the accuracy error (or simply accuracy) and is normally not treated as a contribution in the evaluation of measurement uncertainty. This error is commonly expressed as:

\[E_{accuracy}=V_{measured}-V_{true\;value}\]

In the simplest case, the measured value is the value obtained from a single measurement; otherwise, especially where significant sources of random error are suspected, it is the average of a series of measurements made under the same conditions.
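As a minimal illustration of this definition, the Python sketch below computes the accuracy error as the difference between the mean of a series of repeated readings and the accepted true value. The readings and the reference value are hypothetical, chosen only to show the arithmetic.

```python
# Minimal sketch: accuracy error of a series of repeated measurements.
# The readings and the reference (true) value below are hypothetical.
from statistics import mean

readings = [20.3, 20.2, 20.1, 20.2]    # measured values, same conditions (g)
true_value = 20.0                      # reference value from a calibrated standard (g)

measured = mean(readings)              # best estimate of the measured value
accuracy_error = measured - true_value # E_accuracy = V_measured - V_true

print(f"measured value : {measured:.2f} g")
print(f"accuracy error : {accuracy_error:+.2f} g")
```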

It can also be defined as the maximum amount (error) by which the result differs from the true value or as the nearness of the measured value to its true value, often expressed as a percentage. It also represents a static characteristic of a measuring instrument.

The concept of the accuracy of a measurement is a qualitative one. An appropriate approach to stating this closeness of agreement is to identify the measurement errors and to quantify them by the value of their associated uncertainties, where uncertainty is the estimated range of values of an error. Accuracy depends on the inherent limitations of instruments and shortcomings in the measurement process. Often an estimate for the value of the error is based on a reference value used during the instrument's calibration as a surrogate for the true value. A relative error based on this reference value is estimated by:

\[\textrm{Accuracy (A)}=\dfrac{\left|\textrm{measured value}-\textrm{true value}\right|}{\textrm{reference value}}\times 100\]

Thus, if the accuracy of a temperature indicator with a full-scale range of 0 to 500 °C is specified as ±0.5%, the measured value will always be within ±2.5 °C of the true value when checked against a standard instrument during calibration. But if the indicator reads 250 °C, the error is still ±2.5 °C, which is now ±1% of the reading. It is therefore always better to choose a measurement scale on which the input is near the full-scale value. The true value itself is always difficult to obtain; in the laboratory, standard calibrated instruments are used to measure the true value of the variable.
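The arithmetic of this example can be reproduced with a short sketch. The full-scale range and the ±0.5% specification are those quoted above; the list of readings is hypothetical and only serves to show how the same absolute error grows as a percentage of smaller readings.

```python
# Sketch of the temperature-indicator example above:
# accuracy specified as a percentage of full scale vs. error relative to the reading.
full_scale = 500.0          # °C, upper limit of the 0 to 500 °C range
accuracy_fs_percent = 0.5   # ±0.5% of full scale (instrument specification)

abs_error = full_scale * accuracy_fs_percent / 100   # ±2.5 °C anywhere on the scale

for reading in (500.0, 250.0, 50.0):                 # hypothetical indicated values
    error_of_reading = abs_error / reading * 100     # the same ±2.5 °C as % of the reading
    print(f"reading {reading:5.1f} °C -> ±{abs_error:.1f} °C = ±{error_of_reading:.1f}% of reading")
```

Running it shows ±0.5% of the reading at full scale, ±1% at 250 °C and ±5% at 50 °C, which is why the input should be kept near the full-scale value.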

Relativity of accuracy

It must be noted that the true value is a conventional value, since no value can be perfectly known. The true value, even though conventional, is deduced from measurements made with very precise instruments, i.e. instruments whose measured values of the same physical quantity differ very little from one another. For example, we would never assume as the true value of the mass of an object the average of readings from a balance that produced the set: 20.3 g; 25.4 g; 32.5 g; 27.9 g. Conversely, the average value of the same quantity measured with a balance that produced this other set of readings: 20.3 g; 20.2 g; 20.1 g; 20.2 g can reasonably be taken as the true value. It follows that the concept of accuracy must always be related to the true value that operators consider "right" by choice, where this choice is motivated by the precision of the instrument with which that value was obtained: precision is objective, not subjective.
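The comparison between the two balances can be made quantitative by looking at the spread of each set of readings. The sketch below uses the numbers given above; taking the sample standard deviation as the measure of precision is one reasonable choice, not the only one.

```python
# Sketch: spread (sample standard deviation) of the two sets of readings above.
# The balance with the smaller spread is the one whose mean can reasonably
# be adopted as the conventional true value.
from statistics import mean, stdev

balance_a = [20.3, 25.4, 32.5, 27.9]   # widely scattered readings (g)
balance_b = [20.3, 20.2, 20.1, 20.2]   # closely grouped readings (g)

for name, readings in (("balance A", balance_a), ("balance B", balance_b)):
    print(f"{name}: mean = {mean(readings):.2f} g, spread = {stdev(readings):.2f} g")
```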

Instrumental accuracy

Instrumental accuracy is defined as the ability of a measuring instrument to give indications free from systematic errors, and tending to the true value of the measurand.

A deteriorated or altered instrument used to acquire a series of values may appear precise, because the values obtained are close to each other, yet be poorly accurate if those values differ from the true value of the measurand. Consider, for example, a measuring rule used at a high ambient temperature and therefore stretched by thermal expansion.

The evaluation of instrumental accuracy is made by calibrating the instrument against appropriate reference standards.
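A minimal sketch of such a calibration check is shown below. The certified values of the reference standards, the instrument readings and the acceptance limit are all hypothetical; a real calibration would follow the instrument's documented procedure.

```python
# Sketch of an instrumental accuracy check against reference standards.
# Reference values, instrument readings and the acceptance limit are hypothetical.
reference_standards = [100.0, 200.0, 300.0]   # certified values of the standards (g)
instrument_readings = [100.4, 200.9, 301.3]   # what the instrument indicates (g)
acceptance_limit = 1.0                        # maximum tolerated systematic error (g)

errors = [reading - ref for reading, ref in zip(instrument_readings, reference_standards)]
worst = max(abs(e) for e in errors)

print("systematic errors:", [f"{e:+.1f} g" for e in errors])
print("instrument is", "within" if worst <= acceptance_limit else "outside",
      f"the acceptance limit (worst error {worst:.1f} g)")
```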

Accuracy and costs

As the demand for accuracy increases, costs increase exponentially. If the tolerance of a component is to be measured, the accuracy requirement will typically be 10% of the tolerance value. Demanding higher accuracy than is required is not viable, as it increases the cost of the measuring equipment and hence the inspection cost. It can also make the measuring equipment less reliable, because higher accuracy increases sensitivity. Therefore, in practice, when designing measuring equipment, the balance between the desired or required accuracy and its cost depends on the quality and reliability requirements of the component or product and on the inspection cost.

As the industry moves toward greater adoption of machine learning, accuracy is becoming a primary design consideration. Training systems use a level of accuracy that is not possible when performing inference at the edge, and teams knowingly introduce inaccuracy to reduce costs. A better understanding of the implications of accuracy is required, particularly in safety-critical applications such as autonomous driving.
