Measurement

In metrology, the term measurement is closely associated with activities across the scientific, industrial, commercial, and human domains. It is defined as the assignment of a number to a characteristic of an object or event, so that it can be compared with other objects or events. Our knowledge of the reality that surrounds us is based on the measurement of physical quantities; in this sense, to know is to measure.

Measurement applications

Measurement applications can be classified into three major categories:

  1. Monitoring of processes and operations: refers to situations where the measuring device is being used to keep track of some physical quantity (without any control functions).
  2. Control of processes and operations: is one of the most important classes of measurement application. This usually refers to an automatic feedback control system.
  3. Experimental engineering analysis: is that part of engineering design, development, and research that relies on laboratory testing of one kind or another to answer questions.

Every application of measurement, including those not yet “invented,” can be put into one of the three groups just listed or some combination of them.

The primary objective of measurement in industrial inspection is to determine the quality of the manufactured component. Different quality requirements, such as permissible tolerance limits, form, surface finish, size, and flatness, have to be considered to check the conformity of the component to the quality specifications. To realize this, quantitative information about a physical object or process has to be acquired by comparison with a reference. The three basic elements of measurement, which are of significance, are the following:

  1. Measurand, a physical quantity to be measured (such as length, weight, and angle);
  2. Comparator, to compare the measurand (physical quantity) with a known standard (reference) for evaluation;
  3. Reference, the internationally accepted physical quantity or property against which quantitative comparisons are made.

These three elements together describe a direct measurement using a calibrated, fixed reference. For example, to determine the length of a component, the measurement is carried out by comparing it with a steel scale (a known standard).

Measurement chain

The measurement chain is the set of stages of a measuring instrument that process the information obtained from the physical quantity under study and then present a result: the measurement. The main stages of a measurement chain are three:

  • the first stage consists of a sensor and/or a transducer in contact with the physical quantity to be detected (also called the primary sensing element). In measurement chains that include more than one transducer, crosstalk effects can occur; the cause of this effect is to be found in capacitive and inductive couplings that can arise in the transducers themselves, in the connection cables, and, finally, in the signal-processing block;
  • the second stage consists of an intermediate signal-processing system or signal conditioner, which converts the information coming from the previous stage into a form suited to the acquisition system. Typical operations performed by the conditioning circuit are noise filtering, linearization of the transfer function, conversion, and amplification of the signal generated by the transducer. The output signal from the measurement chain can be analog or digital. The power supply provides the electrical power necessary for the operation of the various electronic devices used in the measurement chain. The sensor is not externally powered when it draws the power needed to carry the information directly from the outside world, as happens for thermocouples and piezoelectric sensors;
  • the third stage is the terminal instrument, which indicates the result of the operations carried out by the previous stages and provides the operator with the value of the measurement. A minimal sketch of such a chain follows this list.
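The following Python sketch illustrates these three stages for a hypothetical temperature measurement; the sensitivity and gain values are invented purely for illustration and do not refer to any specific device.

```python
# Minimal sketch of a three-stage measurement chain (all values are illustrative).

SENSITIVITY_MV_PER_C = 0.041   # assumed sensor sensitivity, mV/°C
GAIN = 100.0                   # assumed amplifier gain of the conditioner

def sensor(temperature_c: float) -> float:
    """Stage 1: primary sensing element, converts the measurand into millivolts."""
    return SENSITIVITY_MV_PER_C * temperature_c

def conditioner(raw_mv: float) -> float:
    """Stage 2: signal conditioning - amplify, then convert back to °C."""
    amplified_mv = raw_mv * GAIN
    return amplified_mv / (SENSITIVITY_MV_PER_C * GAIN)

def terminal_instrument(value_c: float) -> str:
    """Stage 3: terminal instrument, presents the result to the operator."""
    return f"T = {value_c:.1f} °C"

print(terminal_instrument(conditioner(sensor(23.7))))  # -> T = 23.7 °C
```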

Measurement uncertainty

In metrology, the estimate of the dispersion of the values “attributable” to the measurand is defined as the uncertainty of a measurement. Measurement uncertainty is the degree of uncertainty with which the value of a physical quantity or property is obtained through its direct or indirect measurement. The result of the measurement is therefore not a single value but a set of values derived from the measurement (direct or indirect) of the physical quantity or property itself. It is associated with the measured value as in the following example (a measurement of a diameter):

\[D=47\pm 0.1\;\textrm{mm}\]

Uncertainty means the range of possible values within which the true value of the measurement lies. This definition changes the usage of some other commonly used terms. For example, the term accuracy is often used to mean the difference between a measured result and the actual or true value. Since the true value of a measurement is usually not known, the accuracy of a measurement is usually not known either. Because of these definitions, the way laboratory results are reported also changes. For example, when students report results of lab measurements, they do not calculate a percent error between their results and the actual value. Instead, they determine whether the accepted value falls within the range of uncertainty of their result.

Components of uncertainty

In general, measurement uncertainty includes numerous sources of uncertainty, each of which is called a “component of uncertainty”. Some components arise from effects of a systematic nature (e.g., components associated with corrections, or with values assigned to measurement standards), and among these is definitional uncertainty. To estimate the overall uncertainty, it may be necessary to examine each component of uncertainty and treat it separately to assess its contribution to the total uncertainty. Most of the time, however, it is possible to evaluate the simultaneous effect of several components, which simplifies the uncertainty calculation. For a measurement result y we may have:

  • Combined standard uncertainty, \(u_c(y)\), is the total uncertainty of the measurement result \(y\); it is an estimated standard deviation, obtained as the positive square root of the total variance that results from combining all components of uncertainty.
  • Expanded uncertainty, \(U(y)\), obtained by multiplying the combined standard uncertainty \(u_c(y)\) by a coverage factor \(k\); it provides a range within which the value of the measurand is found with a higher level of confidence, and should be used in most cases of measurements in analytical chemistry. For a confidence level of about 95%, the coverage factor is \(k = 2\) (see the relation below).
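Under the common assumption of uncorrelated components, these two definitions can be summarized by the standard GUM-type relation, where the \(u_i(y)\) denote the individual standard-uncertainty components:

\[u_c(y)=\sqrt{\sum_{i} u_i^{2}(y)},\qquad U(y)=k\,u_c(y)\quad\left(k=2\ \text{for a confidence level of about }95\%\right)\]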

Uncertainty estimation procedure

The uncertainty estimation procedure (often loosely called error estimation), from a practical point of view, requires:

  • Specification of the measurand, i.e., clear and unambiguous definition of what is being measured;
  • Definition of the mathematical model, i.e. definition of the relationship that links the measurand to the quantities determined by the measurement procedure chosen;
  • Identification of the sources of uncertainty; there are various techniques, from the compilation of a structured list, to the use of cause-effect diagrams; the effect of several sources can be evaluated cumulatively;
  • Quantification of uncertainty components; it is generally sufficient to quantify only the most important sources. Type A uncertainties are estimated as standard deviations from experimental data distributions; Type B uncertainties must be derived from existing data and must be expressed and treated as standard deviations;
  • Converting uncertainty components: all uncertainty components must be expressed as standard uncertainties;
  • Computing the combined standard uncertainty; the expanded uncertainty is then calculated from it by applying the coverage factor (a worked sketch follows this list).
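As a worked sketch of this procedure (the numbers are illustrative only): a Type A component is estimated as the standard deviation of the mean of repeated readings, a Type B component is derived from an assumed rectangular resolution limit, and the two are combined and expanded with \(k = 2\).

```python
import math
import statistics

# Hypothetical repeated length readings in mm (illustrative data).
readings = [47.02, 46.98, 47.01, 47.00, 46.99, 47.03]

# Type A: standard uncertainty of the mean from the experimental scatter.
mean = statistics.mean(readings)
u_a = statistics.stdev(readings) / math.sqrt(len(readings))

# Type B: assumed instrument resolution of 0.01 mm, treated as a
# rectangular distribution of half-width 0.005 mm.
u_b = 0.005 / math.sqrt(3)

# Combine the standard uncertainties and apply the coverage factor k = 2.
u_c = math.sqrt(u_a**2 + u_b**2)
U = 2 * u_c

print(f"result: {mean:.3f} mm, u_c = {u_c:.4f} mm, U (k=2) = {U:.4f} mm")
```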

Influence quantities in measurements

In metrology, in cases where the environmental conditions of the actual use of the transducer deviate significantly from the environmental calibration conditions, the effects due to the influence quantities must be taken into account. In these cases, specific tests must be conducted on a population of transducers or, at least, on a single transducer.

It is necessary to highlight that attention must be paid to environmental conditions not only during sensor operation but also during the preceding phases such as storage and transport; if not checked and verified, these environmental conditions can significantly and, above all, unpredictably alter the metrological performance of the transducer. Some of the main influence quantities that occur in mechanical and thermal measurements are summarized below.

Effects due to temperature

For each transducer, the working temperature range within which it can be used without being damaged is indicated. Within this range of use, the trends of both the zero drift and the sensitivity drift are generally provided by the manufacturer. For example, for measurements carried out with resistance strain gauges, both the apparent strain as a function of temperature (zero drift) and the variation of the gauge (calibration) factor with temperature (sensitivity drift) are given.

A further way of expressing the effect of temperature is to identify a range of variation of the error it causes, expressed for example as a percentage of full scale. It is also necessary to know the maximum and minimum temperature to which the transducer can be exposed without permanent damage, that is, without its metrological characteristics changing. Changes in ambient temperature affect not only the static metrological characteristics but also the dynamic ones. The values supplied by the manufacturer must refer to a specific temperature variation range. Temperature can also produce significant effects when it undergoes step changes.
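As an illustration of how such manufacturer drift data might be applied (the coefficients below are hypothetical, not taken from any datasheet), a reading can first be corrected for zero drift and then rescaled for sensitivity drift:

```python
# Hypothetical correction of a transducer reading for temperature effects.

T_CAL = 23.0          # calibration temperature, °C
ZERO_DRIFT = 0.002    # assumed zero drift, output units per °C
SENS_DRIFT = -0.0004  # assumed relative sensitivity drift, 1/°C

def correct_for_temperature(reading: float, temperature_c: float) -> float:
    """Remove the zero drift, then rescale for the sensitivity drift."""
    dt = temperature_c - T_CAL
    zero_corrected = reading - ZERO_DRIFT * dt
    return zero_corrected / (1.0 + SENS_DRIFT * dt)

print(correct_for_temperature(10.05, 48.0))  # reading taken 25 °C above calibration
```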

Effects due to acceleration

Errors caused by acceleration can arise either directly in the sensitive element or in the connection or support elements, and they can be large enough to induce deformations that render the measurements meaningless. In general, a transducer shows greater acceleration sensitivity along certain axes; it is therefore necessary to indicate the chosen triad of reference axes and to express the error due to acceleration with respect to them.

The acceleration error is defined as the maximum difference between the output of the sensor in the absence and in the presence of a specified constant acceleration applied along a specific axis. Finally, it should be noted that some sensors are sensitive to the acceleration of gravity, so the orientation of the transducer with respect to the gravitational field is an essential constraint.

Effects due to vibrations

Variation in the frequency of vibrations applied along a specific reference axis can produce significant effects in the output signal of the transducer (for example, due to resonance phenomena).

To express the effect due to vibrations, it is necessary to define the maximum variation in the output, for each value of the physical input quantity, when a vibration of specified amplitude, over a given frequency range, is applied along an axis of the transducer.

Effects due to environmental pressure

Sometimes the transducer must operate under a pressure significantly different from that at which the calibration was carried out, which in general is ambient pressure. Pressures appreciably different from those of the calibration tests may alter the internal geometry of the transducer and thus change the metrological characteristics declared by the manufacturer.

Such a deviation from the calibration conditions is much more serious than damage to the transducer, which, by contrast, is easily detected by the experimenter. The error due to pressure is defined as the maximum variation of the transducer output, for each value of the input quantity within the measurement range, when the pressure at which the transducer operates is varied within specified intervals.

Effects due to commissioning of the transducer

If the commissioning of a transducer is not carried out with care, damage can occur (deformation of the structure, for example) that alters the operating conditions of the transducer. No data relating to this cause of error are available from the manufacturer, so the user must ensure the proper and correct installation of the device.

Measurement errors

The error of measurement is the difference between a measured value of a quantity and its true value. The term measurement uncertainty is often used as a synonym for measurement error. In metrology, the analysis of errors includes the study of uncertainties in measurements, since no measurement, however carefully it is carried out, is entirely free from uncertainty.

The term error does not necessarily imply an incorrect measurement procedure by the operator; it also covers the uncertainty introduced by the instrumentation, namely the fact that the measuring instrument provides the value of the measured quantity only to within a certain approximation. Measurement errors are caused by:

  • human factors (inaccuracies in the design of the measurement chain, distractions or poor operator accuracy);
  • technological factors (static and dynamic constructive and metrological qualities of the instruments);
  • environmental factors (external influence quantities present in the environment in which the measurement is made).

In statistics, an error is not a “mistake”. Variability is an inherent part of the results of measurements and of the measurement process. The measurement operation is always invasive: it introduces a perturbation into the system we want to investigate, so the variables involved are always altered when the measurement is performed.

The measurement error can depend on both the instrument and the observer. There are two main types of errors:

  • random errors (or accidental errors), which may vary from one observation to another;
  • systematic errors, which always occur with the same value when we use the instrument in the same way and in the same case.

Random error

Random error is always present in a measurement. It is caused by inherently unpredictable fluctuations in the readings of a measurement instrument, in operating and environmental conditions, or in the experimenter’s interpretation of the instrumental reading.

Random errors can be analyzed statistically, since it is empirically observed that they are generally distributed according to simple laws. In particular, it is often hypothesized that the causes of these errors act in a completely random manner, producing deviations from the average value that are both negative and positive. This leads us to expect that the effects vanish on average, i.e., that the mean value of the accidental errors is zero.

The smaller the random errors, the more precise the measurement is said to be.

Random (or accidental) errors have less impact than systematic errors because, by repeating the measurement several times and calculating the average of the values found (a reliable measurement), their contribution is generally reduced for probabilistic reasons.

This observation has a fundamental consequence: if we can correct all the gross and systematic errors, so that only accidental errors remain, we need only take repeated measurements and then average the results; the more measurements we include, the less the final result (the average of the individual results) is affected by accidental errors.
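A minimal simulation of this effect, with an assumed Gaussian accidental error: the scatter of the average of \(N\) readings shrinks roughly as \(1/\sqrt{N}\), which is the probabilistic reason mentioned above.

```python
import random
import statistics

random.seed(0)
TRUE_VALUE = 10.0
NOISE_SD = 0.5  # assumed standard deviation of the accidental error

def mean_of_n_readings(n: int) -> float:
    """Average of n simulated readings affected only by random error."""
    return statistics.mean(TRUE_VALUE + random.gauss(0.0, NOISE_SD) for _ in range(n))

for n in (1, 10, 100, 1000):
    # Scatter of the averaged result over many repeated experiments.
    means = [mean_of_n_readings(n) for _ in range(2000)]
    print(f"N = {n:4d}  scatter of the mean ≈ {statistics.pstdev(means):.4f}")
```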

Systematic error

Systematic errors are predictable and typically constant or proportional to the true value. If the cause of a systematic error can be identified, then it usually can be eliminated. An error is called systematic if the functional relationship between the magnitude of the error and the intensity of the physical quantity that causes it is known. Systematic errors always occur with the same sign (+ or −) and the same amplitude when the measurement of a physical quantity is repeated several times with the same instrumentation and under the same operating and environmental conditions.

Systematic errors are caused by imperfect calibration of measurement instruments or imperfect methods of observation (an error, voluntary or involuntary, committed by the observer), or interference of the environment with the measurement process, and always affect the results of an experiment in a predictable direction.

Incorrect zeroing of an instrument leading to a zero error is an example of systematic error in instrumentation. Other types of errors are: gross errors, static errors, and dynamic errors.

Gross error

Gross errors are those attributable to inexperience or distraction of the operator who is making the measurement; they may, for example, result from a wrong reading, improper use of measuring instruments, incorrect transcription of experimental data, or erroneous processing of such data. These errors do not occur when measurements are taken with care and attention and, in any case, can be eliminated by repeating the measurement.

Static error

Static errors are those errors evaluated in static conditions, that is, by performing the measurement of a constant physical quantity; they are:

  • Reading error
  • Mobility error
  • Hysteresis error
  • Fidelity error
  • Zero error
  • Calibration error

Reading error

In metrology, the reading error is the error made when evaluating the relative position of the index of the measuring instrument with respect to the scale; it is generally due to four causes:

  • resolving power of the human eye: defined as the minimum angular separation between two points that the eye is able to discern as two separate and distinct objects (at normal reading distance it corresponds to about 0.1 mm, i.e., 100 μm, although it depends on many physiological variables);
  • parallax error: due to the fact that the index and the scale of the measuring instrument are located on different planes (the operator’s gaze should always be perpendicular to the scale for a correct measurement). Parallax error is primarily caused by viewing the object at an oblique angle with respect to the scale, which makes the object appear to be at a different position on the scale. For example, if measuring the distance between two ticks on a line with a ruler marked on its top surface, the thickness of the ruler will separate its markings from the ticks. If viewed from a position not exactly perpendicular to the ruler, the apparent position will shift, and the reading will be less accurate than the ruler is capable of. In the context of reading a piece of volumetric glassware, such as a measuring cylinder, burette, or volumetric flask, the meniscus should be at eye level otherwise there will be an error in the reading. If the meniscus is above eye level an increased volume measurement will be made, conversely, if the eye is above the meniscus then a lower volume reading will be made. A similar error occurs when reading the position of a pointer against a scale in an instrument such as an analog multimeter. To help the user avoid this problem, the scale is sometimes printed above a narrow strip of mirror, and the user’s eye is positioned so that the pointer obscures its own reflection, guaranteeing that the user’s line of sight is perpendicular to the mirror and therefore to the scale. The same effect alters the speed read on a car’s speedometer by a driver in front of it and a passenger off to the side, values read from a graticule not in actual contact with the display on an oscilloscope, etc.
  • interpolation uncertainty: when the scale of the measuring instrument is linear, it is of the order of ±10% of the distance between two successive divisions. If the scale is not linear, this uncertainty can increase considerably. To reduce the interpolation error, reading systems with verniers (nonii), micrometric screws, and silicone scales can be used;
  • background noise: the set of all those causes that impose index movements superimposed on the displacement produced by the measurand. As for estimating the error produced by background noise when the average value of the observed or recorded signal must be read, it is conventionally taken as ±10% of the peak-to-peak amplitude of the oscillation.

Mobility error

Mobility error is mainly due to the friction that develops between the moving components of the measuring instrument and to the inevitable clearances between them.

Hysteresis error

The hysteresis error of a measuring instrument is defined as the maximum difference between the value given by the transducer when a specific value of the input quantity is reached through increasing inputs and the value given when the same input value is reached through decreasing inputs. In other words, the hysteresis error is the maximum difference between the value measured in the ascending direction and the corresponding value measured in the descending direction.
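A minimal sketch of how this definition could be applied to calibration data (the readings below are invented): the hysteresis error is the largest gap between the ascending and descending outputs recorded at the same input points.

```python
# Hypothetical calibration sweep: outputs recorded at the same input points,
# first with increasing inputs, then with decreasing inputs.
ascending  = [0.00, 2.01, 4.03, 6.02, 8.01, 10.00]
descending = [0.02, 2.06, 4.08, 6.06, 8.03, 10.00]

hysteresis_error = max(abs(up - down) for up, down in zip(ascending, descending))
print(f"hysteresis error = {hysteresis_error:.2f} output units")  # -> 0.05
```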

Hysteresis represents the history dependence of physical systems. If you push on something, it will yield: when you release, does it spring back completely? If it doesn’t, it is exhibiting hysteresis in some broad sense. The term is most commonly applied to magnetic materials: when the external field carrying the signal from the microphone is turned off, the little magnetic domains in the tape don’t return to their original configuration (by design, otherwise your recording of the music would disappear!). Hysteresis happens in lots of other systems: if you place a large force on your fork while cutting a tough piece of meat, it doesn’t always return to its original shape; the shape of the fork depends on its history.

Fidelity error

In metrology, a measuring instrument is said to be all the more “faithful” the less its indications differ from one another over several measurements of a constant physical quantity. The fidelity error is evaluated by performing a certain number of measurements of the same, constant measurand: the error is then given by the semi-difference between the maximum and minimum of the corresponding readings, as expressed below.
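In symbols, if \(x_{\max}\) and \(x_{\min}\) are the largest and smallest readings obtained for the constant measurand, the fidelity error can be written as:

\[E_f=\frac{x_{\max}-x_{\min}}{2}\]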

The fidelity error is mainly due to external influences: temperature, magnetic field, pressure, angular or linear acceleration, etc. These quantities act simultaneously and with different intensities at each instant, so the instrument provides different indications of the same quantity over time; therefore, an instrument is all the more faithful the more it has been constructed to be insensitive to the influence quantities.

Zero error

In metrology, zero error is the error that arises in long-term measurements when the zero of the measuring instrument undergoes a drift phenomenon, called zero drift. The zero error is expressed in units of the quantity to be measured.

Calibration error

The limiting factor of the calibration process is repeatability, because it is the only characteristic error that cannot be calibrated out of the measuring system; hence it curtails the overall measurement accuracy. Thus, repeatability could also be termed the minimum uncertainty that exists between a measurand and a standard.

Conditions that exist during calibration of the instrument should be similar to the conditions under which actual measurements are made. The standard that is used for calibration purpose should normally be one order of magnitude more accurate than the instrument to be calibrated. When it is intended to achieve greater accuracy, it becomes imperative to know all the sources of errors so that they can be evaluated and controlled.

Dynamic error

Dynamic error is the difference between the true value of a quantity changing with time and the value indicated by the measurement system, assuming zero static error. This error may have an amplitude and, usually, a frequency related to the environmental influences and to the parameters of the system itself.

In metrology, dynamic errors are caused by dynamic influences acting on the system such as vibration, roll, pitch, or linear acceleration; they are:

  • Insertion error
  • Rapidity error
  • Error band

For example, the dynamic error of a tracking device results from the dynamic loads that arise while the device is in motion tracking space objects.

Insertion error

In metrology, the insertion error is caused by the presence of the measuring instrument itself, inside the environment in which the measurement is carried out; in other words, the measuring instrument changes the measurement conditions and consequently also changes the final value of the measurand.

A measuring instrument is therefore said to be better the less it disturbs the phenomenon to be measured, that is, the smaller the disturbance caused by its presence. This interference can be evaluated if the characteristics of the instrument, and in particular of its sensor (or transducer), are known.

Rapidity error

In metrology, rapidity is the metrological quality of a measuring instrument that expresses its ability to follow the (dynamic) variations of the measurand over time, and the rapidity error expresses the limitation of this ability; it is essential because it allows evaluating the limits within which a measuring instrument is suitable for measuring quantities that vary over time (dynamic quantities).

Another practical definition states that the rapidity error is smaller the faster the index of the measuring instrument changes its position on the graduated scale. This speed, limited by the inertia of the moving parts of the instrument and by the damping to which they are subjected, is characterized differently depending on how the quantity varies in time.

  • In the case in which the quantity to be measured is constant, the speed of the instrument is characterized by the response time. This is defined as the time required for the index to reach its final position once the instrument is put in contact with the measurand.
  • In the case in which the quantity to be measured varies slowly over time, the speed is characterized by the delay with which the index of the instrument follows it. This delay is constant if the variation of the quantity under examination is constant, and it is larger the faster that variation. If instead the variation of the quantity is periodic, the index of the instrument provides a measurement whose maximum value is less than the maximum value of the quantity, with a delay that depends on the frequency of the variation.
  • Finally, in the case of quantities that vary rapidly over time, the speed is defined by the behavior of the moving parts of the instrument when the quantity varies sinusoidally. In general, the ratio between the indication provided by the instrument and the value of the input quantity decreases as the frequency increases.

Error band

The error band is a worst-case error specification; compared with a linearity specification alone, it is the best specification for determining the suitability of a measuring device for an application. The error band is defined as the range of maximum deviation of the transducer output from a reference curve; this deviation (generally expressed as a percentage of full scale) can be caused by non-linearity, non-repeatability, hysteresis, etc. It is determined over several consecutive calibration cycles so as to include repeatability.

Error band of measurement

It may also happen that the transducer has to operate only within a portion of the measurement range of the input quantity; it follows that, by varying the value of the static error considered acceptable, different fields of use can be obtained.

The error band specification describes a bipolar band (e.g., ±0.2%) around the ideal line. The “ideal line” is the line on which all dimensional changes produce perfect sensor output voltages. All measurements must fall within the error band for the instrument to be within specification. The magnitude of the band is equal to the worst-case error throughout the gauge’s measurement range. Using the worst-case error ensures that every measurement made by the gauge will fall within the error band specification.
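The sketch below illustrates this kind of check under assumed values (band width, ideal line, and data are all invented): each measured output is compared with the ideal line, and the instrument is within specification only if every deviation stays inside the band.

```python
# Hypothetical error-band check: ±0.2% of full scale around the ideal line.
FULL_SCALE = 10.0                 # assumed full-scale output, V
BAND = 0.002 * FULL_SCALE         # ±0.2% of full scale

def ideal_output(x: float) -> float:
    """Assumed ideal (perfectly linear) sensor response."""
    return 1.0 * x

measurements = [(1.0, 1.004), (5.0, 4.985), (9.0, 9.012)]  # (input, measured output)

within_spec = all(abs(out - ideal_output(inp)) <= BAND for inp, out in measurements)
print("within error band" if within_spec else "out of specification")
```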

Methods of measurement

When precision measurements are made to determine the values of a physical variable, different methods of measurement are employed. A measurement method is defined as the logical sequence of operations employed in measuring the physical quantities under observation.

The better the measurement method and the better the instruments and their technology, the closer the measurement comes to describing the true state of the measured physical quantity. In principle, therefore, a measurement represents physical reality with a certain approximation, or a certain error; this error can be made very small but never zero.

The choice of the method of measurement depends on the required accuracy and the amount of permissible error. Irrespective of the method used, the primary objective is to minimize the uncertainty associated with the measurement. The common methods employed for making measurements are as follows:

Direct method

In this method, the quantity to be measured is directly compared with a primary or secondary standard. Scales, vernier calipers, micrometers, bevel protractors, etc., are used in the direct method. This method is widely employed in the production field. In the direct method, a very slight difference exists between the actual and the measured values of the quantity. This difference occurs because of the limitations of the human being performing the measurement.

The main advantage of direct measurements is that gross errors are harder to make, since the instrument needed to make the comparison is generally simple and therefore not subject to hidden faults.

Indirect method

In this method, the value of a quantity is obtained by measuring other quantities that are functionally related to the required value. The related quantities are measured directly, and the required value is then determined by means of a mathematical relationship.

Most measurements are obtained indirectly, almost always for cost reasons. For example, the density of a given substance could be measured directly with a device called a densimeter, but it is usually more convenient to measure the mass and volume of the substance directly and then take their ratio.

Indirect measurements, on the other hand, are more subject to approximation, since error propagation occurs through the formula that represents the physical law. It is therefore necessary to pay particular attention to the approximations made in the underlying direct measurements.
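For the density example above, a first-order propagation of the uncertainties of the directly measured mass \(m\) and volume \(V\) (assumed uncorrelated) gives the relative uncertainty of the indirectly obtained density \(\rho = m/V\):

\[\frac{u(\rho)}{\rho}=\sqrt{\left(\frac{u(m)}{m}\right)^{2}+\left(\frac{u(V)}{V}\right)^{2}}\]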

Fundamental or absolute method

In this case, the measurement is based on the measurements of base quantities used to define the quantity. The quantity under consideration is directly measured and is then linked with the definition of that quantity.

Comparative method

In this method, as the name suggests, the quantity to be measured is compared with the known value of the same quantity or any other quantity practically related to it. The quantity is compared with the master gauge and only the deviations from the master gauge are recorded after comparison. The most common examples are comparators, dial indicators, etc.

Transposition method

This method involves making the measurement by direct comparison, wherein the quantity to be measured V is initially balanced by a known value X of the same quantity; next, X is replaced by the quantity to be measured and balanced again by another known value Y. The value of the quantity to be measured is then:

\[V=\sqrt{XY}\]
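This relation can be justified for a two-pan balance with (possibly unequal) arm lengths \(a\) and \(b\): the first balancing gives \(Va = Xb\), and the second, after the transposition, gives \(Vb = Ya\); multiplying the two conditions eliminates the arm lengths:

\[Va=Xb,\quad Vb=Ya\;\Rightarrow\;V^{2}\,ab=XY\,ab\;\Rightarrow\;V=\sqrt{XY}\]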

An example of this method is the determination of mass by balancing methods and known weights.

Coincidence method

This is a “differential” method of measurement wherein a very minute difference between the quantity to be measured and the reference is determined by careful observation of the coincidence of certain lines and signals. Measurements on vernier caliper and micrometer are examples of this method.

Deflection method

This method involves the indication of the value of the quantity to be measured directly by the deflection of a pointer on a calibrated scale. Pressure measurement is an example of this method.

Complementary method

The value of the quantity to be measured is combined with a known value of the same quantity. The combination is so adjusted that the sum of these two values is equal to the predetermined comparison value. An example of this method is the determination of the volume of a solid by liquid displacement.

Null measurement method

In this method, the difference between the value of the quantity to be measured and the known value of the same quantity with which comparison is to be made is brought to zero.

Substitution method

It is a direct comparison method. This method involves the replacement of the value of the quantity to be measured with a known value of the same quantity, so selected that the effects produced in the indicating device by these two values are the same. The Borda method of determining mass is an example of this method.

Contact method

In this method, the surface to be measured is touched by the sensor or the measuring tip of the instrument. Care needs to be taken to provide a constant contact pressure in order to avoid errors due to excessive pressure. Examples of this method include measurements using a micrometer, vernier caliper, and dial indicator.

Contactless method

As the name indicates, there is no direct contact with the surface to be measured. Examples of this method include the use of optical instruments, tool maker’s microscope, and profile projector.

Composite method

The actual contour of a component to be checked is compared with its maximum and minimum tolerance limits. Cumulative errors of the interconnected elements of the component, which are controlled through a combined tolerance, can be checked by this method. This method is very reliable for ensuring interchangeability and is usually effected through the use of composite GO gauges. The use of a GO screw plug gauge to check the thread of a nut is an example of this method.