Measurement

In metrology, the term measurement is closely associated with activities spanning scientific, industrial, commercial, and everyday human contexts. Measurement is defined as the assignment of a number to a characteristic of an object or event, which can then be compared with that of other objects or events. Our knowledge of the reality that surrounds us is based on the measurement of physical quantities; indeed, we can say that to know means to measure.

Measurement applications

Measurement applications can be classified into three major categories:

  1. Monitoring of processes and operations: refers to situations where the measuring device is being used to keep track of some physical quantity (without any control functions).
  2. Control of processes and operations: is one of the most important classes of measurement application. This usually refers to an automatic feedback control system.
  3. Experimental engineering analysis: is that part of engineering design, development, and research that relies on laboratory testing of one kind or another to answer questions.

Every application of measurement, including those not yet “invented,” can be put into one of the three groups just listed or some combination of them.

The primary objective of measurement in industrial inspection is to determine the quality of the manufactured component. Different quality requirements, such as permissible tolerance limits, form, surface finish, size, and flatness, have to be considered to check the conformity of the component to the quality specifications. To achieve this, quantitative information about a physical object or process has to be acquired by comparison with a reference. The three basic elements of measurement are the following:

  1. Measurand, a physical quantity to be measured (such as length, weight, and angle);
  2. Comparator, to compare the measurand (physical quantity) with a known standard (reference) for evaluation;
  3. Reference, the physical quantity or property to which quantitative comparisons are to be made, which is internationally accepted.

These three elements together describe a direct measurement made against a calibrated, fixed reference. For example, to determine the length of a component, the measurement is carried out by comparing it with a steel scale (a known standard).

Measurement chain

The measurement chain is the set of stages of a measuring instrument that process the information obtained from the physical quantity under study (the object of the measurement) and then present a result, i.e., the measurement. The main stages of a measurement chain are three:

  • the first stage consists of a sensor and/or a transducer in contact with the physical quantity to be detected (also called the primary sensitive element). In measurement chains that include more than one transducer, crosstalk effects can occur; the cause of these effects is to be found in capacitive and inductive couplings that can arise in the transducers themselves, in the connection cables, and in the signal-processing block;
  • the second stage consists of an intermediate signal-processing system, or signal conditioner, which converts the information coming from the previous stage into a form suitable for the acquisition system. Typical operations performed by the conditioning circuit are noise filtering, linearization of the transfer function, conversion, and amplification of the signal generated by the transducer. The output signal of the measurement chain can be analog or digital. The power supply provides the electrical power necessary for the operation of the various electronic devices used in the measurement chain. The sensor is not externally powered when it draws the power needed to carry the information directly from the outside world, as happens with thermocouples and piezoelectric sensors;
  • the third stage is the terminal instrument, which indicates the result of the operations carried out by the previous stages and provides the operator with the value of the measurement. A minimal software analogy of such a chain is sketched below.
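The following sketch is only a rough software analogy of the three stages; the sensitivity, gain, and stage names are hypothetical and chosen purely for illustration.

```python
# Illustrative three-stage measurement chain (hypothetical values).

def sensor(true_temperature_c: float) -> float:
    """First stage: a thermocouple-like sensor converting temperature into a small voltage (mV)."""
    return 0.041 * true_temperature_c  # assumed sensitivity of 41 uV/°C


def conditioner(raw_mv: float) -> float:
    """Second stage: signal conditioning (amplification and scaling back to °C)."""
    amplified = raw_mv * 100.0   # amplification
    return amplified / 4.1       # inverse of the assumed overall transfer function


def terminal_instrument(value_c: float) -> None:
    """Third stage: terminal instrument presenting the result to the operator."""
    print(f"Measured temperature: {value_c:.1f} °C")


terminal_instrument(conditioner(sensor(25.3)))  # prints "Measured temperature: 25.3 °C"
```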

Measurement uncertainty

In metrology, the uncertainty of a measurement is defined as the estimate of the dispersion of the values that can reasonably be attributed to the measurand. Measurement uncertainty characterizes the doubt that remains about the value of a physical quantity or property obtained through its direct or indirect measurement. The result of the measurement is therefore not a single value, but a set of values derived from the measurement (direct or indirect) of the physical quantity or property itself. The uncertainty is reported together with the measured value, as in the following example (a measurement of a diameter):

\[D=47.0\pm 0.1\;\textrm{mm}\]

Uncertainty denotes the range of possible values within which the true value of the measurement lies. This definition changes the usage of some other commonly used terms. For example, the term accuracy is often used to mean the difference between a measured result and the actual or true value. Since the true value of a measurement is usually not known, the accuracy of a measurement is usually not known either. These definitions also change how laboratory results are reported. For example, when students report the results of lab measurements, they do not calculate a percent error between their result and the accepted value; instead, they determine whether the accepted value falls within the range of uncertainty of their result.
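As a minimal illustration of that acceptance check, reusing the diameter example above with a hypothetical accepted value:

```python
# Check whether an accepted (reference) value lies within the reported uncertainty interval.
measured = 47.0      # mm, reported value
uncertainty = 0.1    # mm, half-width of the uncertainty interval
accepted = 47.06     # mm, hypothetical accepted value

consistent = abs(accepted - measured) <= uncertainty
print(f"Accepted value within {measured} ± {uncertainty} mm: {consistent}")  # prints True
```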

Components of uncertainty

In general, measurement uncertainty includes numerous sources of uncertainty, each of which is called a “component of uncertainty”. Some components arise from effects of a systematic nature (e.g., components associated with corrections, or values assigned to measurement samples), and among these is definitional uncertainty. To estimate the overall uncertainty, it may be necessary to examine each component of uncertainty and treat it separately to assess its contribution to the total uncertainty. Most of the time, however, it is possible to evaluate the simultaneous effect of several components, which allows us to simplify the uncertainty calculation. For a measurement result y we may have:

  • Combined standard uncertainty, \(u_c(y)\), is the total uncertainty of the measurement result \(y\); it is a standard deviation, estimated as the positive square root of the total variance obtained by combining all the components of uncertainty.
  • Expanded uncertainty, \(U(y)\), obtained by multiplying the combined standard uncertainty \(u_c(y)\) by a coverage factor \(k\); it provides a range within which the value of the measurand is found with a higher level of confidence and should be used in most cases of measurements in analytical chemistry. For a confidence level of approximately 95%, the coverage factor is \(k = 2\) (see the formulas below).
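Under the common assumption that the individual components \(u_i(y)\) are uncorrelated, the combination and the expansion described above can be written compactly as:

\[u_c(y)=\sqrt{\sum_i u_i^2(y)},\qquad U(y)=k\,u_c(y)\]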

Uncertainty estimation procedure

From a practical point of view, the uncertainty (error) estimation procedure requires:

  • Specification of the measurand, i.e., clear and unambiguous definition of what is being measured;
  • Definition of the mathematical model, i.e. definition of the relationship that links the measurand to the quantities determined by the measurement procedure chosen;
  • Identification of the sources of uncertainty; there are various techniques, from the compilation of a structured list, to the use of cause-effect diagrams; the effect of several sources can be evaluated cumulatively;
  • Quantification of uncertainty components; it is generally sufficient to quantify only the most important sources. Type A uncertainties are estimated as standard deviations from the distributions of experimental data; Type B uncertainties are derived from existing data and must be expressed and treated as standard deviations;
  • Combination of uncertainty components: all uncertainty components are first converted into standard uncertainties and then combined into the combined standard uncertainty;
  • Calculation of the expanded uncertainty, obtained from the combined standard uncertainty by applying the coverage factor. A numerical sketch of these steps follows the list.
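A minimal numerical sketch of the procedure, assuming a Type A component from repeated readings and a single Type B component derived from the instrument resolution (treated as a rectangular distribution), might look like this; all numbers are hypothetical:

```python
import math
import statistics

# Hypothetical repeated readings of a length, in mm.
readings = [20.02, 20.05, 19.98, 20.03, 20.01, 20.04]

# Type A: standard uncertainty of the mean, from the experimental dispersion.
mean = statistics.mean(readings)
u_type_a = statistics.stdev(readings) / math.sqrt(len(readings))

# Type B: assumed resolution of 0.01 mm -> rectangular distribution of
# half-width 0.005 mm -> standard uncertainty a / sqrt(3).
u_type_b = 0.005 / math.sqrt(3)

# Combined standard uncertainty (components assumed uncorrelated).
u_c = math.sqrt(u_type_a**2 + u_type_b**2)

# Expanded uncertainty with coverage factor k = 2 (~95% confidence level).
k = 2
U = k * u_c

print(f"Result: {mean:.3f} mm ± {U:.3f} mm (k = {k})")
```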

Influence quantities in measurements

In metrology, in cases where the environmental conditions of the actual use of the transducer deviate significantly from the environmental calibration conditions, the effects due to the influence quantities must be taken into account. In these cases, specific tests must be conducted on a population of transducers or, at least, on a single transducer.

It is necessary to highlight that attention must be paid to environmental conditions not only during sensor operation but also during earlier phases such as storage and transport; if not checked and controlled, these environmental conditions can significantly and, above all, unpredictably alter the metrological performance of the transducer. Some of the main influence quantities encountered in mechanical and thermal measurements are summarized below.

Effects due to temperature

For each transducer, the manufacturer indicates the range of working temperature within which it can be used without being damaged. Within this range, the trends of both the zero drift and the sensitivity drift are generally provided by the manufacturer. For example, for measurements carried out with resistance strain gauges, both the apparent strain as a function of temperature (zero drift) and the variation of the gauge calibration factor as a function of temperature (sensitivity drift) are given.

A further way of expressing the effect of temperature is to specify a range of variation of the error it causes, expressed for example as a percentage of the full scale. It is also necessary to know the maximum and minimum temperatures to which the transducer can be exposed without permanent damage, that is, without its metrological characteristics changing. Changes in ambient temperature affect not only the static metrological characteristics but also the dynamic ones. The values supplied by the manufacturer must refer to a specific temperature variation range. Temperature can also produce significant effects when it varies in steps. A simple correction model is sketched below.
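As an illustration only (the actual drift curves are supplied by the manufacturer), a linear model of zero drift and sensitivity drift around the calibration temperature could be used to correct a reading; every constant below is an assumption made for the sketch:

```python
# Illustrative correction of a transducer reading for temperature effects,
# assuming linear zero drift and sensitivity drift around the calibration temperature.

T_CAL = 23.0         # °C, calibration temperature (assumed)
ZERO_DRIFT = 0.002   # output units per °C (assumed manufacturer datum)
SENS_DRIFT = 0.0005  # relative sensitivity change per °C (assumed manufacturer datum)


def corrected_reading(raw_output: float, temperature_c: float) -> float:
    """Remove the zero drift, then rescale for the sensitivity drift."""
    dT = temperature_c - T_CAL
    zero_corrected = raw_output - ZERO_DRIFT * dT
    return zero_corrected / (1.0 + SENS_DRIFT * dT)


print(corrected_reading(raw_output=1.050, temperature_c=43.0))  # ~1.000
```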

Effects due to acceleration

Errors caused by acceleration can occur either directly in the sensitive element or in the connection or support elements, and they can be large enough to induce deformations that render the measurements meaningless. In general, transducers show a more pronounced sensitivity to acceleration along certain axes; it is therefore necessary to indicate the triad of reference axes selected when expressing the error due to acceleration.

The acceleration error is defined as the maximum difference between the output of the sensor in the absence and in the presence of a specified constant acceleration applied along a specific axis. Finally, it should be noted that some sensors are sensitive to the acceleration of gravity, so that the orientation of the transducer with respect to the gravitational field constitutes an essential constraint.

Effects due to vibrations

Variations in the frequency of vibrations applied along a specific reference axis can produce significant effects in the output signal of the transducer (for example, because of resonance phenomena).

To express the effect of vibrations, it is necessary to define the maximum variation in the output, for each value of the physical input quantity, when a vibration of specified amplitude, over a given frequency range, is applied along an axis of the transducer.

Effects due to environmental pressure

Sometimes the transducer must operate under conditions in which the pressure differs significantly from the pressure at which the calibration was carried out, which in general is the ambient pressure. Pressures appreciably different from those of the calibration tests may alter the internal geometry of the transducer and thereby change the metrological characteristics declared by the manufacturer.

Such a deviation from the calibration conditions is far more insidious than outright damage to the transducer, which, by contrast, is easily detected by the experimenter. The error due to pressure is defined as the maximum variation of the transducer output, for each value of the input quantity within the measurement range, when the pressure at which the transducer operates is varied within specified intervals.

Effects due to commissioning of the transducer

If the commissioning of a transducer is not carried out with care, damage can occur (deformation of the structure, for example) that alters the operating conditions of the transducer. No data relating to this source of error are available from the manufacturer, so the user must ensure the proper and correct installation of the device.

Measurement errors

The error of measurement is the difference between a measured value of a quantity and its true value. The term measurement uncertainty is often used as a synonym for measurement error. In metrology, the analysis of errors includes the study of uncertainties in measurements, since no measurement, however carefully it is carried out, is entirely free from uncertainty.

The term error does not necessarily imply an incorrect measurement procedure on the part of the operator; it also covers the uncertainty introduced by the instrumentation, namely the fact that the measuring instrument provides the value of the measured quantity only to a certain approximation. Measurement errors are caused by:

  • human factors (inaccuracies in the design of the measurement chain, distractions or poor operator accuracy);
  • technological factors (static and dynamic constructive and metrological qualities of the instruments);
  • environmental factors (external influence quantities present in the environment in which the measurement is made).

In statistics, an error is not a “mistake”. Variability is an inherent part of the results of measurements and of the measurement process. The measurement operation is always invasive, since it introduces a perturbation into the system under investigation; the variables involved are therefore always altered when the measurement is performed.

The measurement error can depend on both the instrument and the observer. There are two main types of errors:

  • random errors (or accidental errors), which may vary from one observation to another;
  • systematic errors, which always occur, with the same value, when the instrument is used in the same way and under the same conditions (a small numerical illustration of the difference follows).
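The distinction can be made visible with a small simulation using entirely hypothetical numbers: a systematic error shifts every reading by the same amount, while random errors scatter the readings around that shifted value.

```python
import random
import statistics

random.seed(0)

TRUE_VALUE = 10.00  # hypothetical true value of the quantity
BIAS = 0.15         # systematic error: the same offset in every reading
SIGMA = 0.05        # standard deviation of the random error

readings = [TRUE_VALUE + BIAS + random.gauss(0.0, SIGMA) for _ in range(1000)]

print(f"Mean of readings:   {statistics.mean(readings):.3f}  (shifted by the systematic error)")
print(f"Spread of readings: {statistics.stdev(readings):.3f}  (caused by the random error)")
```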

Methods of measurement

When precision measurements are made to determine the values of a physical variable, different methods of measurement are employed. A measurement method is defined as the logical sequence of operations employed in measuring the physical quantity under observation.

The better the measurement method and the better the instruments and their technology, the closer the measure comes to describing the true state of the measured physical quantity. In principle, therefore, the measure represents physical reality with a certain approximation, or with a certain error; this error can be made very small but never zero.

The choice of the method of measurement depends on the required accuracy and the amount of permissible error. Irrespective of the method used, the primary objective is to minimize the uncertainty associated with the measurement. The common methods employed for making measurements are as follows:

Direct method

In this method, the quantity to be measured is directly compared with a primary or secondary standard. Scales, vernier calipers, micrometers, bevel protractors, etc., are used in the direct method. This method is widely employed in the production field. In the direct method, a very slight difference exists between the actual and the measured values of the quantity. This difference occurs because of the limitations of the human being performing the measurement.

The main advantage of direct measurements is that gross errors are harder to make, since the instrument needed to make the comparison is generally simple and therefore not prone to hidden faults.

Indirect method

In this method, the value of a quantity is obtained by measuring other quantities that are functionally related to the required value. The related quantities are measured directly, and the required value is then determined by means of a mathematical relationship.

Most measurements are obtained indirectly, almost always for reasons of cost. For example, the density of a given substance could be measured directly with a device called a densimeter, but it is usually more convenient to measure the mass and volume of the substance directly and then take their ratio.

Indirect measurements, on the other hand, are more subject to approximation, since errors propagate through the formula that represents the physical law. It is therefore necessary to pay particular attention to the approximations made in the underlying direct measurements, as sketched below.
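Following the density example, a first-order propagation of the uncertainties of the direct mass and volume measurements into the indirectly obtained density might be sketched as follows; the values and uncertainties are purely illustrative:

```python
import math

# Direct measurements (hypothetical values and standard uncertainties).
mass = 25.40     # g
u_mass = 0.02    # g
volume = 10.10   # cm^3
u_volume = 0.05  # cm^3

# Indirect measurement: density = mass / volume.
density = mass / volume

# First-order propagation for a quotient: relative uncertainties add in quadrature.
u_density = density * math.sqrt((u_mass / mass) ** 2 + (u_volume / volume) ** 2)

print(f"density = {density:.3f} ± {u_density:.3f} g/cm³")
```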

Fundamental or absolute method

In this case, the measurement is based on the measurements of base quantities used to define the quantity. The quantity under consideration is directly measured and is then linked with the definition of that quantity.

Comparative method

In this method, as the name suggests, the quantity to be measured is compared with the known value of the same quantity or any other quantity practically related to it. The quantity is compared with the master gauge and only the deviations from the master gauge are recorded after comparison. The most common examples are comparators, dial indicators, etc.

Transposition method

This method involves making the measurement by direct comparison, wherein the quantity to be measured V is initially balanced by a known value X of the same quantity; next, X is replaced by the quantity to be measured, which is balanced again by another known value Y. The value of the quantity to be measured is then:

\[V=\sqrt{XY}\]

An example of this method is the determination of mass by balancing methods and known weights.
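The geometric mean arises because the two balancings cancel the unknown ratio of the balance arms. With hypothetical arm lengths \(l_1\) and \(l_2\), introduced here only for the derivation, the two equilibrium conditions give:

\[V\,l_1=X\,l_2,\qquad Y\,l_1=V\,l_2\;\;\Longrightarrow\;\;\frac{V}{X}=\frac{l_2}{l_1}=\frac{Y}{V}\;\;\Longrightarrow\;\;V=\sqrt{XY}\]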

Coincidence method

This is a “differential” method of measurement wherein a very minute difference between the quantity to be measured and the reference is determined by careful observation of the coincidence of certain lines and signals. Measurements on vernier caliper and micrometer are examples of this method.

Deflection method

This method involves the indication of the value of the quantity to be measured directly by the deflection of a pointer on a calibrated scale. Pressure measurement is an example of this method.

Complementary method

The value of the quantity to be measured is combined with a known value of the same quantity. The combination is so adjusted that the sum of these two values is equal to the predetermined comparison value. An example of this method is the determination of the volume of a solid by liquid displacement.

Null measurement method

In this method, the difference between the value of the quantity to be measured and the known value of the same quantity with which comparison is to be made is brought to zero.

Substitution method

It is a direct comparison method. This method involves the replacement of the value of the quantity to be measured with a known value of the same quantity, so selected that the effects produced in the indicating device by these two values are the same. The Borda method of determining mass is an example of this method.

Contact method

In this method, the surface to be measured is touched by the sensor or the measuring tip of the instrument. Care needs to be taken to maintain a constant contact pressure in order to avoid errors due to excessive pressure. Examples of this method include measurements using a micrometer, vernier caliper, and dial indicator.

Contactless method

As the name indicates, there is no direct contact with the surface to be measured. Examples of this method include the use of optical instruments such as the toolmaker's microscope and the profile projector.

Composite method

The actual contour of a component to be checked is compared with its maximum and minimum tolerance limits. Cumulative errors of the interconnected elements of the component, which are controlled through a combined tolerance, can be checked by this method. This method is very reliable for ensuring interchangeability and is usually carried out using composite GO gauges. The use of a GO screw plug gauge to check the thread of a nut is an example of this method.
