Resolution

In metrology, the resolution of a measuring instrument is the smallest change in the value of a physical property that the instrument can detect. It is a static characteristic of an instrument. The resolution of an instrument can also be defined as the minimum incremental value of the input signal required to cause a detectable change in the output. Resolution is also defined as a percentage:

\[\textrm{Resolution}=\dfrac{\Delta I}{I_{max}-I_{min}}\times 100\]
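
For example (the figures are purely illustrative): an instrument with a measuring range of 0–10 V that can detect a change of \(\Delta I = 0.01\) V has

\[\textrm{Resolution}=\dfrac{0.01\ \textrm{V}}{10\ \textrm{V}-0\ \textrm{V}}\times 100=0.1\%\]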

The quotient of the measuring range and the resolution is often expressed as the dynamic range, defined as:

\[\textrm{Dynamic range}=\dfrac{\textrm{measurement range}}{\textrm{resolution}}\]

It is expressed in dB. The dynamic range of an n-bit ADC comes out to approximately \(6n\) dB.
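
This figure follows from expressing the \(2^{n}\) quantization levels of the converter as an amplitude ratio in decibels:

\[\textrm{Dynamic range}=20\log_{10}\left(2^{n}\right)\approx 6.02\,n\ \textrm{dB}\]

so that, for example, a 10-bit ADC has a dynamic range of roughly 60 dB.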

The resolution is actually the lower limit of subdivision of the measurement scale within which it still makes sense to define a reading value. Example: you can say that you have read 10.5 V with a measuring system having a resolution of 0.1 V (or 0.5 V); it does not make sense to say that you have read 10.5 V with a measuring system having a resolution of 1 V.

In common usage, the resolution is taken to be the value of the measuring instrument’s format unit (the smallest graduation of its scale). Example: a 20 cm ruler with notches in 1 mm increments is commonly referred to as a ruler with a resolution of 1 mm. This convenient approximation is not acceptable in metrology, where the distinction is not only conceptual but also practical: applying it indiscriminately can lead to significant errors.

An even more serious error, still very common in everyday practice, is to take the accuracy of a measurement to be equal to the resolution of the instrument, where “resolution” again means the instrument’s format unit. The accuracy of a measurement depends on a multiplicity of factors, of which the resolution is only one.

Resolution limitations

Although an ideal measurement should have infinite resolution (i.e., reveal even infinitesimal variations in the measurand), many limitations arise in real measurements. These can be divided into three broad categories:

  • instrumental limitations;
  • reading limitations;
  • limitations of the measurand.

Instrumental limitations

Instrumental limitations are those arising from the internal structure of the measuring instrument, which limits its ability to react to small variations in the measurand. The sources of these limitations are the operating principle, the physical structure, or the quality of construction; examples include:

  • presence of background noise from the electronics used;
  • frictional locking of moving parts;
  • deformation and inertia of moving parts;
  • discretization of signals by digital electronic components.

Reading limitations

Reading limitations are those arising from the ability to read small variations on the measuring instrument’s display. The sources of these limitations depend both on the structure of the scale and on the operator’s ability to read it; examples include:

  • number of digits (when the instrument is digital);
  • unit value, graduation length and index size (when the instrument is analog);
  • parallax error;
  • the operator’s ability to determine the position of the index in the graduated scale.

Limitations of the measurand

Limitations of the measurand are those that result from physical limitations of the measurement or from its boundary conditions. The sources of these limitations normally lie outside the operator’s control and constitute an irreducible limit to the measurement; examples include:

  • presence of small electrical interferences or magnetic fields;
  • presence of electrical background noise;
  • quantum effects on the indeterminacy of the measurement.

Resolution error

Failure to detect a change in the measurand, due to the limits mentioned above, constitutes a measurement error known as resolution error and is an element in the evaluation of measurement uncertainty. Correctly quantifying the resolution error of a measurement requires a two-step analysis:

  • a priori, by assessing before measurements are made the limits to resolution due to physical limitations, display structure, and known instrument resolution errors;
  • in the field, by observing during the measurement the presence of instabilities, discontinuities in the display, and others, which may indicate the presence of undocumented limits to the resolution.

The resolution error of an instrument is the resolution that the instrument would have under optimal conditions of use. In evaluating it, external factors due to the measurand, the environment, or the operator are considered irrelevant; in this sense, the resolution of an instrument represents its reading uncertainty, not to be confused with the instrumental measurement uncertainty (which must also take the other metrological parameters into account).

A correct evaluation of the instrumental resolution would require a specific analysis by a specialized laboratory, which ensures:

  • the optimal control of the boundary conditions;
  • adequate sample instrumentation;
  • a good knowledge of the working principle of the instruments;
  • a measurement uncertainty of its own that is at least one order of magnitude lower than the expected resolution of the instrument under examination.

Quantitative evaluation

Rigorous quantification of the resolution error would require a lengthy (and expensive) analysis. Fortunately, in practice, the following considerations work in our favor:

  • the “limitation to resolution” of highest value makes all the others unresolvable, so it is sufficient to identify that one to know the resolution error of the measurement system;
  • for a given application, the limitations that can be significant are few and always the same;
  • empirical checks in the field, during the execution of the measurements, highlight what the resolution error of the measuring system may be.

Below are some useful rules for the evaluation of the resolution error.

Reading digital instruments

Digital instruments discretize measurements, that is, starting from an analog input signal, they transform it into a numerical format. It is evident that the discretization of a measure constitutes a limit to its resolution. The classic example of these devices are digital indicators with numerical display.

The resolution error in the reading of a digital instrument is normally equal to the value of the least significant digit of its display. Example: a 4-digit voltmeter with a full scale of 10 V has a reading resolution error of 0.001 V (see the sketch after the list below). Exceptions to this are:

  • when the least significant digit varies with an increment other than 1 (typically in steps of 2 or of 5), in which case the reading resolution is equal to the value of the increment;
  • when measurements are made during the transition of the digit value, on a measurand that is certain to vary very slowly, in which case the resolution of the reading will be a fraction of a digit, depending on the reading speed of the instrument and the maximum speed of variation of the measurand.
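
As an illustration of the rule above, here is a minimal Python sketch; the function name and the simplifying assumption that the display spans 0 to full scale with a purely decimal count are ours, not a standard formula.

```python
def digital_reading_resolution(full_scale, digits, increment=1):
    """Reading resolution of a digital display (simplified sketch).

    Assumes the display spans 0 .. full_scale with `digits` decimal digits,
    so the least significant digit is worth full_scale / 10**digits.
    `increment` covers displays whose last digit steps by 2 or by 5.
    """
    return increment * full_scale / 10 ** digits

# Example from the text: a 4-digit voltmeter with a 10 V full scale
print(digital_reading_resolution(10.0, 4))               # 0.001 V
# Same display, but with the last digit stepping by 5
print(digital_reading_resolution(10.0, 4, increment=5))  # 0.005 V
```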

Reading analog instruments

The resolution error in reading an analog instrument depends on:

  • the value of the format unit;
  • the length of the graduation;
  • the size of the index;
  • the reading ability of the operator.

Example: for a pressure gauge with a format unit of 1 bar, a distance between two graduations of 5 mm and an index thickness of 1 mm, the theoretical limit of resolution is 0.2 bar (the index covers 1/5 of the distance between two graduations, hence 1/5 of the format unit).
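
A minimal Python sketch of this computation; the function name is ours, and it simply encodes the fraction-of-division argument used in the example above.

```python
def analog_reading_resolution(format_unit, graduation_spacing_mm, index_width_mm):
    """Theoretical reading-resolution limit of an analog scale (sketch).

    The index can be located on the scale only to within its own width,
    so the theoretical limit is the fraction of a scale division covered
    by the index, expressed in measurand units.
    """
    return format_unit * index_width_mm / graduation_spacing_mm

# Example from the text: 1 bar divisions, 5 mm apart, 1 mm thick index
print(analog_reading_resolution(1.0, 5.0, 1.0))  # 0.2 bar
```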

In practice, despite the use of optical aids (lenses, microscopes), the reading resolution of an analog instrument is severely limited by the operator’s ability to discern fractions of a graduation, as well as by the parallax error that can arise when the index and the graduated scale do not lie on the same plane. Although trained personnel can discern 1-2 microns when reading a micrometer equipped with a centesimal vernier (10-micron format unit), a general operator is unlikely to be able to reliably distinguish 1 mm shifts when the index is far from the scale markings.

To adjust for the subjectivity of these considerations, some industry or instrument-specific standards (e.g., for manometers or dynamometers) define precisely how the reading resolution is to be calculated. In all other cases, common sense and the principle of prudence apply: when in doubt, the operator should at least be able to discern to which notch the index is closest, in which case the reading resolution becomes equal to the format unit.

Exceptions are the cases:

  • when the index is larger than the scale division, in which case the reading resolution is a multiple of the format unit;
  • when measurements are made in conditions where there is a coincidence between index and notch, in which case the reading resolution will be equal to the theoretical limit.

Background noise and interference

When making measurements, the display may show instabilities that cannot be attributed to real variations in the measurand. This problem is typical of electronic instrumentation, especially when working in very low voltage ranges (< 1 mV) or when very high resolutions (< 0.1 of full scale) are required from an instrument. In these conditions the indicators may read background noise from their own electronics or noise from an external source.

If these noises cannot be shielded or filtered out, the instability found in the reading constitutes a limit to the resolution of the measurement; in this case the reading resolution is defined as equal to the amplitude of the observed oscillation.
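
A minimal sketch of how such an estimate might be made in practice, assuming a series of repeated readings of a nominally constant measurand is available (the values below are made up).

```python
import numpy as np

# Hypothetical repeated readings of a constant measurand, in mV,
# showing instability attributed to background noise.
readings = np.array([10.02, 10.05, 10.01, 10.04, 10.03, 10.06, 10.02])

# When the noise cannot be shielded or filtered out, the reading
# resolution is taken as the amplitude of the observed oscillation,
# estimated here as the peak-to-peak spread of the readings.
resolution_estimate = readings.max() - readings.min()
print(f"reading resolution ≈ {resolution_estimate:.2f} mV")  # ≈ 0.05 mV
```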

Mechanical Limits

When using instruments equipped with mechanical components (rods, gears, linkages, racks), jerky or sticking readings may be encountered. The most likely cause of these behaviors is a manufacturing defect or damage to the instrument; however, when working with very sensitive instruments, it may also indicate that the instrument is working at the limits of its mechanical capabilities. Under these conditions, the effects of friction, elastic deformation and inertia become visible, preventing the mechanics involved from “following” the variations in the measurand.

As already mentioned, a rigorous evaluation of these limits to resolution would require a specific analysis by a specialized laboratory. Fortunately, normally these limits are an order of magnitude smaller than the others, requiring only a quick check of the response of the display to changes in the measurand to ensure that this problem is irrelevant.

If the problem is significant, the reading resolution is defined as the largest “jump” or “sticking” detected.

Measurement discretization

We have already mentioned the resolution limitation due to the discretization performed by digital instruments, inherent in their reading; it is now worth pointing out that this is only one aspect of the broader problem of measurement discretization. The increasingly widespread use of digital electronic instrumentation extends the considerations made about the display to “measurement” in the broadest sense.

In fact, beyond the display itself, almost all digital electronic instruments perform an analog-to-digital conversion of signals and consequently introduce a corresponding resolution error. It is important to note that this error is present regardless of the presence or characteristics of the display device: a limiting case is an instrument used to acquire measurements to be stored in files, where there is no display device at all, but there is nevertheless a resolution error due to the discretization of the measurand.

In almost all cases, the analog-to-digital conversion works on electrical signals that constitute or represent (through the use of a transducer) the measurand. The conversion is performed by an electronic circuit called an ADC (analog-to-digital converter), whose main characteristic is the size (in bits) of the corresponding digital value; the latter is an indication of the resolution of the conversion: a 10-bit ADC is able to encode 1024 different values (\(2^{10}\)) within the measuring range, while an 8-bit converter is able to encode 256 values (\(2^{8}\)).
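
A small Python sketch of these relations; the helper names are ours, and note that some conventions divide the range by \(2^{n}-1\) rather than \(2^{n}\) when computing the step width.

```python
def adc_code_count(n_bits):
    """Number of distinct codes an n-bit ADC can produce."""
    return 2 ** n_bits

def adc_quantization_step(full_scale_range, n_bits):
    """Width of one quantization step (LSB) over the measuring range."""
    return full_scale_range / 2 ** n_bits

print(adc_code_count(10))               # 1024 codes
print(adc_code_count(8))                # 256 codes
print(adc_quantization_step(10.0, 10))  # ~0.0098 V per step on a 0-10 V range
```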

Similarly, resolution errors are also generated in digital-to-analog conversion: although in theory a perfect transformation of a digital signal into its analog equivalent is possible, the very fact that the starting signal is discrete prevents the generation of signals of arbitrary value. The electronic circuit used for this conversion is called a DAC (digital-to-analog converter). Three general cases can be envisaged:

  1. A/D instruments, instruments that provide a conversion of an analog signal representing the measurand, in order to display and store it in digital format (e.g. digital indicators or measurement acquirers);
  2. D/A instruments, instruments that generate an analog signal, starting from the relative command in digital format (as in function generators or calibrators used for the calibration of instrumentation);
  3. A/D/A instruments, instruments that perform operations (filtering, amplification, conversion, storage) on analog signals, after converting them into digital format, and then make them available again in analog format (e.g. some signal conditioners or measuring recorders).

In the absence of precise manufacturer’s instructions, the resolution error can be measured in the laboratory by slowly varying the measurand and detecting the jumps in the display due to discretization.
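
A minimal sketch of this procedure, assuming the displayed values have been logged while the measurand was varied slowly (the data below are made up).

```python
import numpy as np

# Hypothetical log of displayed values, in V, recorded while the
# measurand is varied very slowly.
displayed = np.array([1.000, 1.000, 1.001, 1.001, 1.001, 1.002, 1.002, 1.003])

# The jumps between consecutive displayed values reveal the
# discretization step; the smallest non-zero jump is taken here as
# an estimate of the resolution error.
jumps = np.diff(displayed)
step = np.min(jumps[jumps > 0])
print(f"estimated resolution error ≈ {step:.3f} V")  # ≈ 0.001 V
```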