In metrology, the sensitivity of a measuring instrument is the metrological characteristic that describes the instrument's ability to detect small variations in the input quantity: the increment of the output signal (or response) produced by an increment of the input signal. Equivalently, it is the ratio of the incremental output to the incremental input.

The International Vocabulary of Metrology (VIM) defines sensitivity as the “quotient of the change in an indication of a measuring system and the corresponding change in a value of a quantity being measured.”

In defining the sensitivity, we assume that the input-output characteristic of the instrument is approximately linear over the range of interest. Sensitivity is a static characteristic of an instrument, and it can depend on the value of the quantity being measured.

In mathematical terms, the sensitivity of a measuring instrument or sensor is the ratio of the change in the measured value \(R\) to the change in the actual value \(E\) of the quantity under consideration, for arbitrarily small variations:

\[S = \dfrac{dR}{dE}\]

There is a limiting variation \(dE\) below which the corresponding \(dR\) either becomes undetectable or is lost in the intrinsic noise of the instrument. This limit determines the minimum detectable change in the physical quantity, i.e., the smallest variation capable of producing an observable effect.
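In practice, over an approximately linear range the sensitivity \(S = dR/dE\) can be estimated as the slope of a straight-line fit to calibration data. A minimal sketch, using hypothetical calibration points (the input values, readings, and units are invented for illustration):

```python
import numpy as np

# Hypothetical calibration data: input quantity E (e.g. temperature in °C)
# and instrument reading R (e.g. output voltage in mV).
E = np.array([0.0, 10.0, 20.0, 30.0, 40.0])
R = np.array([0.0, 5.1, 10.0, 14.9, 20.0])

# Over an approximately linear range, the sensitivity S = dR/dE is the
# slope of a least-squares straight-line fit to the calibration points.
S, intercept = np.polyfit(E, R, 1)
print(f"Sensitivity: {S:.3f} mV/°C")  # prints "Sensitivity: 0.498 mV/°C"
```

Fitting all the calibration points, rather than taking the ratio of a single pair, averages out random reading errors in the estimate of the slope.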

The sensitivity of an instrument may also vary with temperature or other external factors; this is known as sensitivity drift. To avoid sensitivity drift, sophisticated instruments are either kept at a controlled temperature or fitted with built-in temperature-compensation schemes.
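A simple compensation scheme can be sketched in software, assuming (hypothetically) that the sensitivity varies linearly with ambient temperature; the nominal sensitivity, drift coefficient, and reference temperature below are invented values, not data from any real instrument:

```python
# Hypothetical sensitivity-drift model: S is assumed to vary linearly
# with ambient temperature T around a reference temperature T0.
S0 = 0.5        # nominal sensitivity at T0 (mV per unit of input)
alpha = 0.002   # assumed drift coefficient, per °C
T0 = 20.0       # reference temperature, °C

def compensated_input(reading_mv: float, T: float) -> float:
    """Convert a raw reading back to the input quantity,
    correcting the sensitivity for temperature drift."""
    S = S0 * (1 + alpha * (T - T0))
    return reading_mv / S

# The same 10 mV reading maps to slightly different input values
# once the drifted sensitivity is accounted for.
print(compensated_input(10.0, 20.0))  # at reference temperature: 20.0
print(compensated_input(10.0, 40.0))  # sensitivity has drifted upward
```

The design choice here mirrors the in-built compensation the text mentions: rather than stabilizing the temperature physically, the instrument measures it and corrects the conversion factor numerically.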

Sensitivity in statistics

In statistics, and more precisely in epidemiology, the term sensitivity denotes the intrinsic ability of a screening test to detect diseased subjects in a reference population. It is complemented by specificity, the ability of the test to identify healthy subjects as negative.

It is given by the proportion of truly diseased subjects who test positive (true positives) out of the entire population of diseased subjects. The lower the share of false negatives (diseased subjects mistakenly identified by the test as healthy), the more sensitive the test. A highly sensitive test therefore limits the chance that a diseased subject tests negative.

Sensitivity does not depend on the prevalence of the disease in question.
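The two proportions follow directly from the four counts of a screening test's confusion matrix. A minimal sketch with invented counts (the numbers are hypothetical, chosen only for illustration):

```python
# Hypothetical screening-test counts (confusion-matrix cells).
tp = 90   # diseased subjects correctly testing positive (true positives)
fn = 10   # diseased subjects mistakenly testing negative (false negatives)
tn = 720  # healthy subjects correctly testing negative (true negatives)
fp = 180  # healthy subjects mistakenly testing positive (false positives)

# Sensitivity: true positives out of all diseased subjects.
sensitivity = tp / (tp + fn)
# Specificity: true negatives out of all healthy subjects.
specificity = tn / (tn + fp)

print(f"Sensitivity: {sensitivity:.2f}")  # prints "Sensitivity: 0.90"
print(f"Specificity: {specificity:.2f}")  # prints "Specificity: 0.80"
```

Note that both ratios are computed within the diseased and healthy groups separately, which is exactly why, as stated above, sensitivity does not depend on how prevalent the disease is in the population.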
