What is Metrology?
Metrology is the science that identifies the most appropriate and accurate methods for measuring any physical quantity, defines the corresponding units of measurement, and establishes how to correctly express and use the result of a measurement. It therefore deals only with physical quantities: for a property of an object or phenomenon to qualify as such, it must be measurable, that is, it must be possible to define units and methods for measuring it.
Metrology is a multidisciplinary science which, for its purposes, must be confronted with both purely theoretical aspects (e.g. fundamental physics, mathematics and statistics) and practical aspects (e.g. mechanical engineering, electronics), up to and including proper management (e.g. management of laboratories, cost-benefit analysis).
The need to establish uniform and reproducible methods for measuring goods and products has led authorities since ancient times to define standards of units of measurement. Often based on artifacts, modeled on parts of the human body (the thumb, the foot) and mainly local in scope, these standards were wholly inadequate for the development not only of science and technology, but also of modern industry and international trade.
The foundation of metrology as a science is a product of the Enlightenment, and its purpose from the beginning has been to harmonize locally distributed standards into a single system. Initially, only two units of measurement were adopted, for length and mass. In 1795, the French National Assembly debated the definition of the meter as the forty-millionth part of a terrestrial meridian, and that of the kilogram as the mass of a cubic decimeter of purified water. In 1799, the metric system progressed with the realization of physical standards for these units, a bar and a cylinder of platinum. From the beginning, it was clear that the realization of a standard was quite different from the definition of the unit of measurement.
During the 19th century, the metric system spread first in Europe and then in South America: many countries now used it, but each one had its own patterns, different from each other. To standardize the different national standards, in 1875 the International Metric Convention created the Bureau International des Poids et Mesures (BIPM), whose mission was to preserve the primary prototypes of the meter and the kilogram, to make copies and to compare the national standards with the primary prototypes. The International Committee of Weights and Measures (CIPM) and the General Conference of Weights and Measures (CGPM) were also created to decide on changes to the system of fundamental units. These institutions still exist today and continue their mission.
The International Metre Convention, which now has 51 member states, officially adopted the metric system, which, with the inclusion of the astronomical second as the unit of time, became known as the MKS system (from the initials of meter, kilogram and second). Throughout the twentieth century, new units of measurement were added and the definitions of existing units were repeatedly revised. In 1948, with the introduction of the ampere as the fundamental electrical unit, the MKSA system was established; it was replaced in 1960 by the current International System of Units (SI). This system comprises seven base units of measurement for as many physical quantities: the meter (m) for length, the kilogram (kg) for mass, the second (s) for time, the ampere (A) for electric current, the kelvin (K) for thermodynamic temperature, the mole (mol) for amount of substance, and the candela (cd) for luminous intensity.
Over the past century, the principles behind the choice of fundamental units have evolved: from units based on artifacts to units rooted in the natural world and physical laws. As early as 1870, James C. Maxwell argued for the need to define units of measurement by linking them to the physical properties of atoms and molecules, rather than to the motion or size of the Earth. Max Planck went further, arguing that units should not be defined by the properties of specific atoms, but rather by fundamental constants, such as the speed of light in a vacuum c, the universal gravitational constant G, Planck’s constant h, the elementary charge e, and so on. The main difficulty with such a system of units was its impracticality: these fundamental units were extremely advantageous for the microscopic world, but were not very adaptable to the needs of everyday life in the macroscopic world.
Today, Maxwell’s point of view is taken for granted, and thanks to the increased ability to relate the scale of the microscopic world to the human one, metrology is moving in the direction indicated by Planck. However, metrology is a cautious science: changes in the fundamental units proceed at a steady but slow pace and are adopted only when there is a consensus on the maturity of the scientific and technological knowledge involved and after an evaluation of the costs and benefits to science and society.
Physical quantity
A physical quantity is defined as a physical property of a body or phenomenon that can be quantified by measurement. A physical quantity is expressed as the combination of a magnitude, given by a number (usually a real number), and a unit of measurement. Physical quantities can be of two types: scalar or vector.
A scalar quantity is one that is fully described, from a mathematical point of view, by a “scalar,” that is, by a real number associated with a unit of measurement (examples are mass, energy, and temperature). The term “scalar” derives from the possibility of reading the value on the graduated scale of a measuring instrument, since no other element is needed to identify it.
It is more complex, on the other hand, to describe a physical quantity (such as velocity, acceleration, or force) whose value must be associated with additional information, such as a direction, a sense (orientation), or both; in this case, we are dealing with a vector quantity, described by a vector. Unlike vector quantities, scalar quantities are therefore not sensitive to the dimensionality of the space, nor to the particular reference or coordinate system used.
Furthermore, each physical quantity corresponds to a unit of measurement that is “base” (fundamental) if the quantity is one of the fundamental quantities of the International System, or “derived” if it is formed from the fundamental ones. Physical quantities can therefore be classified into two types: base and derived.
Base physical quantities (SI base units)
By convention, the base physical quantities used in the International System of Units (SI) are seven, organized in a system of dimensions and assumed to be mutually independent. Each of the seven base quantities used in the SI is regarded as having its own dimension, which is symbolically represented by a single sans-serif roman capital letter. The symbols used for the base quantities, and the symbols used to denote their dimensions, are given in the table below.
The dimension of a physical quantity does not include magnitude or units. The conventional symbolic representation of the dimension of a base quantity is a single upper-case letter in roman (upright) sans-serif type.
Base quantity | Symbol for quantity | Symbol for dimension | SI unit | SI unit symbol |
---|---|---|---|---|
length | l | L | meter | m |
mass | m | M | kilogram | kg |
time | t | T | second | s |
electric current | I | I | ampere | A |
thermodynamic temperature | T | Θ | kelvin | K |
amount of substance | n | N | mole | mol |
luminous intensity | I_v | J | candela | cd |
The value of a quantity is generally expressed as the product of a number and a unit. The unit is a particular example of the quantity concerned which is used as a reference. Units should be chosen so that they are readily available to all, are constant throughout time and space, and are easy to realize with high accuracy. The number is the ratio of the value of the quantity to the unit. For a particular quantity, many different units may be used. All other quantities are called derived quantities, which may be written in terms of the base quantities by the equations of physics.
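This number-times-unit view can be made concrete with a short sketch (the dictionary and function below are illustrative, not part of any standard library): changing the reference unit changes the numerical value, while the quantity itself stays the same.

```python
# Minimal sketch: a quantity value as a number times a unit, where the number
# is the ratio of the quantity to the chosen unit. Names are illustrative.

# conversion factors from each unit to the SI base unit of length (the meter)
TO_METER = {"m": 1.0, "mm": 1e-3, "km": 1e3}

def express_in(value: float, unit: str, target_unit: str) -> float:
    """Re-express the same length using a different reference unit."""
    return value * TO_METER[unit] / TO_METER[target_unit]

height = 1.75                          # 1.75 m
print(express_in(height, "m", "mm"))   # 1750.0: same quantity, larger number
```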
Derived physical quantities (SI derived units)
Derived units are products of powers of the base units. They are either dimensionless or can be expressed as a product of one or more base units, each raised to an appropriate power. Coherent derived units are products of powers of base units that include no numerical factor other than 1. The base units and the coherent derived units of the SI together form a coherent set, designated the set of coherent SI units.
The International System of Units (SI) gives special names to 22 units derived from the SI base units, including two dimensionless derived units, the radian (rad) and the steradian (sr).
Name | Symbol | Quantity | Equivalents | SI base unit equivalents |
---|---|---|---|---|
hertz | Hz | frequency | 1/s | s⁻¹ |
radian | rad | angle | m/m | 1 |
steradian | sr | solid angle | m²/m² | 1 |
newton | N | force, weight | kg·m/s² | kg·m·s⁻² |
pascal | Pa | pressure, stress | N/m² | kg·m⁻¹·s⁻² |
joule | J | energy, work, heat | N·m, C·V, W·s | kg·m²·s⁻² |
watt | W | power, radiant flux | J/s, V·A | kg·m²·s⁻³ |
coulomb | C | electric charge or quantity of electricity | s·A, F·V | s·A |
volt | V | voltage, electrical potential difference, electromotive force | W/A, J/C | kg·m²·s⁻³·A⁻¹ |
farad | F | electrical capacitance | C/V, s/Ω | kg⁻¹·m⁻²·s⁴·A² |
ohm | Ω | electrical resistance, impedance, reactance | 1/S, V/A | kg·m²·s⁻³·A⁻² |
siemens | S | electrical conductance | 1/Ω, A/V | kg⁻¹·m⁻²·s³·A² |
weber | Wb | magnetic flux | J/A, T·m² | kg·m²·s⁻²·A⁻¹ |
tesla | T | magnetic flux density | V·s/m², Wb/m², N/(A·m) | kg·s⁻²·A⁻¹ |
henry | H | electrical inductance | V·s/A, Ω·s, Wb/A | kg·m²·s⁻²·A⁻² |
degree Celsius | °C | temperature relative to 273.15 K | K | K |
lumen | lm | luminous flux | cd·sr | cd |
lux | lx | illuminance | lm/m² | m⁻²·cd |
becquerel | Bq | radioactivity (decays per unit time) | 1/s | s⁻¹ |
gray | Gy | absorbed dose (of ionizing radiation) | J/kg | m²·s⁻² |
sievert | Sv | equivalent dose (of ionizing radiation) | J/kg | m²·s⁻² |
katal | kat | catalytic activity | mol/s | s⁻¹·mol |
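As a minimal sketch of how coherent derived units arise as products of powers of base units (the dictionary representation and helper functions below are chosen here purely for illustration), the newton and the pascal can be composed as follows.

```python
# Minimal sketch: represent an SI unit as a dictionary mapping base-unit
# symbols to integer exponents, and build derived units as products of powers.

def combine(*factors: dict) -> dict:
    """Multiply units together by adding the exponents of each base unit."""
    result = {}
    for factor in factors:
        for base, exp in factor.items():
            result[base] = result.get(base, 0) + exp
    return {base: exp for base, exp in result.items() if exp != 0}

def power(unit: dict, n: int) -> dict:
    """Raise a unit to an integer power."""
    return {base: exp * n for base, exp in unit.items()}

m, kg, s = {"m": 1}, {"kg": 1}, {"s": 1}

newton = combine(kg, m, power(s, -2))   # kg·m·s⁻² (force)
pascal = combine(newton, power(m, -2))  # N/m² = kg·m⁻¹·s⁻² (pressure)
print(newton)  # {'kg': 1, 'm': 1, 's': -2}
print(pascal)  # {'kg': 1, 'm': -1, 's': -2}
```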
Definition of physical quantity value and true value
In metrology, the quantity value is the number and the unit that together express the magnitude of a physical quantity. The exact value of a quantity, by contrast, is called the true value: it corresponds to the correct measure, free of uncertainty. The true value may be defined as the mean of an infinite number of measured values, when the average deviation due to the various contributing factors tends to zero. In practice, the true value cannot be realized, because of the uncertainties (errors) of the measuring process, and hence it cannot be determined experimentally. Positive and negative deviations from the true value are not equal and will not cancel each other out, and one can never know whether the measured value coincides with the true value of the quantity. The sources of this uncertainty are many; for example (a numerical illustration follows the list):
- impossibility of ensuring the absolute absence of measurement errors;
- impossibility of having infinitely precise instrumentation;
- impossibility of perfect control of the boundary conditions, which modify the measurand;
- intrinsic instability present in practically all the measurands, linked to the nature of the measured quantity;
- quantum effects on matter and energy.
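A minimal numerical sketch (the readings below are invented for illustration): averaging repeated measurements reduces, but never eliminates, the deviation from the unknowable true value; the standard error of the mean indicates how far the estimate may still be from it.

```python
# Minimal sketch (invented readings): the mean of repeated measurements used
# as an estimate of the (unknowable) true value, with the standard error of
# the mean as an indication of the remaining uncertainty.
import statistics

readings_mm = [10.03, 9.98, 10.01, 10.00, 9.97, 10.02]  # hypothetical data

mean = statistics.mean(readings_mm)
std_error = statistics.stdev(readings_mm) / len(readings_mm) ** 0.5

print(f"estimate: {mean:.3f} mm  (standard error: {std_error:.3f} mm)")
```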
Conventionally true value
The conventionally true value is the value of a quantity which, in particular cases, can be treated as a true value. In general, for a given purpose, the conventionally true value is considered close enough to the true value that the difference can be regarded as negligible. When the concept of conventionally true value is applied to a physical quantity that characterizes an object, and this quantity can be considered stable, the conventionally true value is taken to be the nominal value of the object; an example is the nominal value of a standard weight.
Dimensionless physical quantity
A dimensionless quantity is a physical quantity to which no physical dimension is assigned; it is also known as a bare, pure, or scalar quantity, or as a quantity of dimension one. Its corresponding SI unit is the unit one (1), which is not explicitly shown. Dimensionless quantities are widely used in many fields, such as mathematics, physics, chemistry, engineering, and economics.
Concept of measurement
In metrology, the term measurement is closely associated with all activities of scientific, industrial, commercial, and everyday life. It is defined as the assignment of a number to a characteristic of an object or event, so that it can be compared with other objects or events. Our knowledge of the reality that surrounds us is based on the measurement of physical quantities; indeed, we can say that to know is to measure.
The need to establish precise rules for trade and for the organization of territory has made measurement a necessity since the Mesolithic period; the development of craft activities, the construction of dwellings and public works, exchange mediated by coins, and the organization of labor and property led the earliest civilizations to devise practical systems of measurement, such as the division of time into months and days (and then into smaller fractions), and the measurement of lengths and areas, of the weight of various objects, and of the value of monetary units.
Each people developed their own systems of measurement, but the transformation from one system of units to another was always approximate until equivalence criteria were established that took into account precise patterns for the various units of measurement.
The earliest examples of units of length and time, found in the Egyptian and Assyrian cultures, were kept in temples and other sacred buildings and were preserved for centuries with little change. More rigorous were the units of measurement of ancient Rome, which spread throughout Europe and, after the fall of the empire, gave rise to numerous systems of measurement, often very different from one another. It was not until the end of the 18th century that measurement systems emerged which gradually acquired a worldwide character.
Measurement applications
Measurement applications can be divided into three broad categories:
- Monitoring of processes and operations: refers to situations where the instrument is used to monitor some physical quantity (without any control functions).
- Control of processes and operations: is one of the most important classes of measurement applications. This usually refers to an automatic feedback control system.
- Experimental Engineering Analysis: is that part of engineering design, development, and research that relies on laboratory tests of one kind or another to answer questions.
Any application of measurement, including those that have not yet been “invented,” can be classified in one of the three groups just listed, or in any combination of them.
The primary goal of measurement in industrial inspection is to determine the quality of the manufactured component. Various quality requirements, such as allowable tolerance limits, shape, surface finish, size, and flatness, must be considered to verify that the component meets the specifications. To do this, quantitative information about a physical object or process must be obtained by comparing it with a reference. The three basic elements of measurement are the following:
- Measurand, a physical quantity to be measured (such as length, weight, and angle);
- Comparator, to compare the measurand (physical quantity) with a known standard (reference) for evaluation;
- Reference, the physical quantity or property to which quantitative comparisons are to be made and which is internationally recognized.
These three elements together describe a direct measurement against a calibrated, fixed reference: to determine the length of a component, for example, the measurement is made by comparing it with a steel rule (a known standard).
Definition of measurand
The measurand is defined as a physical quantity to be measured (such as length, weight, and angle). Specifying a quantity requires:
- knowledge of the type of physical quantity;
- a description of the state of the phenomenon, body, or substance of which the physical quantity is a property (including all relevant constituents);
- the chemical entities involved.
It is important to emphasize that the term “measurand” does not refer to the object or phenomenon on which a measurement is made, but to a specific physical quantity that characterizes it. For example, when we measure the temperature of a liquid, the measurand is not the liquid, but its temperature.
Interferences in measurements
In metrology, in cases where the environmental conditions of the actual use of the transducer differ significantly from the environmental calibration conditions, the effects of the influence quantities must be taken into account. In these cases, specific tests must be performed on a population of transducers or at least on a single transducer.
It must be emphasized that attention should be paid to the environmental conditions not only during the operation of the sensor, but also during the preceding phases, such as storage and transport; if these conditions are not checked and verified, they can significantly and, above all, unpredictably modify the metrological performance of the transducer. Some of the most important influence effects that occur in mechanical and thermal measurements are summarized below.
Temperature effects
For each transducer, an operating temperature range is specified within which it can be used without damage. Within this range, the trends of both the zero drift and the sensitivity drift are generally provided by the manufacturer. For example, in the case of resistance strain gages, both the trend of the apparent strain as a function of temperature (zero drift) and the variation of the gage factor as a function of temperature (sensitivity drift) are given.
Another way of expressing the effect of temperature is to give a range of variation of the temperature-induced error, expressed, for example, as a percentage of full scale. It is also necessary to know the maximum and minimum temperatures to which the transducer can be exposed without permanent damage, i.e. without its metrological characteristics changing. Changes in ambient temperature affect not only the static metrological characteristics but also the dynamic ones, and the values supplied by the manufacturer must refer to a specified temperature range. Temperature can also produce significant transient effects when step changes occur.
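A minimal correction sketch (all coefficients below are hypothetical, not taken from any manufacturer's datasheet): given the zero drift and the sensitivity drift declared for a transducer, a reading taken away from the calibration temperature can be corrected as follows.

```python
# Minimal sketch (hypothetical coefficients): correcting a transducer reading
# for temperature-induced zero drift and sensitivity drift.

T_CAL = 23.0          # calibration temperature in °C (assumed)
ZERO_DRIFT = 0.002    # output units per °C of deviation from T_CAL (hypothetical)
SENS_DRIFT = -0.0005  # relative sensitivity change per °C (hypothetical)
SENSITIVITY = 0.1     # output units per unit of measurand at T_CAL (hypothetical)

def corrected_measurand(raw_output: float, temperature: float) -> float:
    """Remove the zero drift, then divide by the temperature-corrected sensitivity."""
    dT = temperature - T_CAL
    zero_corrected = raw_output - ZERO_DRIFT * dT
    sensitivity_at_T = SENSITIVITY * (1.0 + SENS_DRIFT * dT)
    return zero_corrected / sensitivity_at_T

# Example: a raw reading of 1.250 output units taken at 35 °C
print(corrected_measurand(1.250, 35.0))
```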
Acceleration effects
Errors due to acceleration can arise either directly in the sensing element or in the connecting or supporting elements, and can be large enough to cause deformations that render the measurements meaningless. In general, a transducer shows greater sensitivity to acceleration along certain axes; it is therefore necessary to indicate the triad of reference axes chosen and to express the acceleration error accordingly.
The acceleration error is defined as the maximum difference between the output of the transducer in the absence and in the presence of a specified constant acceleration applied along a specific axis. Finally, it should be noted that some sensors are sensitive to the acceleration due to gravity, so the position of the transducer with respect to the gravitational field is an essential constraint.
Effects due to vibrations
The variation of the frequency of the vibrations, applied along a specific reference axis, can produce significant effects on the output signal of the transducer (for example, because of resonance phenomena).
To express the effect of vibrations, it is necessary to define the maximum variation of the output, for each value of the physical input quantity, when a vibration of given amplitude and within a given frequency range is applied along an axis of the transducer.
Effects of ambient pressure
Sometimes it is necessary to verify that the transducer will operate at pressures significantly different from the pressure at which the calibration was performed, which is generally ambient pressure. Pressures appreciably different from those of the calibration tests can alter the internal geometry of the transducer and thus change the metrological characteristics stated by the manufacturer.
Such a deviation from the calibration conditions is more insidious than outright damage to the transducer, which the experimenter can easily detect. The error due to pressure is defined as the maximum variation of the transducer output, for each value of the input quantity within the measurement range, when the pressure at which the transducer operates is varied over specified intervals.
Effects of transducer commissioning
If the transducer is not installed with care, it may be damaged (e.g. by deformation of its structure) and its operating conditions may change. The manufacturer has no control over this source of error, so the user must ensure that the instrument is installed properly and correctly.
Measurement methods
When precision measurements are made to determine the values of a physical quantity, different measurement methods are used. A measurement method is defined as the logical sequence of efficient operations used to measure the physical quantities under observation.
The better the measurement method and the better the instruments and their technology, the closer the measure comes to describing the actual state of the measured physical quantity. In principle, therefore, a measure represents physical reality with a certain approximation, or error, which can be made very small but never zero.
The choice of measurement method depends on the accuracy required and the amount of error that can be tolerated. Regardless of the method used, the primary objective is to minimize the uncertainty associated with the measurement. The most common methods of measurement are as follows:
Direct method
In this method, the quantity being measured is compared directly with a primary or secondary standard. Scales, calipers, micrometers, goniometers, etc. are used in the direct method, which is widely employed in production. In the direct method there is a very small difference between the actual value and the measured value of the quantity, due to the limitations of the person making the measurement.
The advantage of direct measurement is that it is more difficult to make gross errors, since the instrument used for comparison is generally simple and therefore not subject to hidden errors.
Indirect method
In this method, the value of a quantity is obtained by measuring other quantities that are functionally related to it: the related quantities are measured directly, and the required value is then determined through a mathematical relationship.
Most measurements are made indirectly, almost always for reasons of cost. For example, the density of a substance could be measured directly with a device called a densimeter, but it is usually more convenient to measure its mass and volume directly and then compute the density as their ratio.
Indirect measurements, on the other hand, are more subject to approximations, since errors propagate through the formula that represents the physical law. Special attention must therefore be paid to the approximations made in the underlying direct measurements, as the sketch below illustrates.
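A minimal sketch of such an indirect measurement (the numbers are invented for illustration): the density follows from directly measured mass and volume, and the relative uncertainties of the direct measurements propagate to the result.

```python
# Minimal sketch (invented values): indirect measurement of density from
# directly measured mass and volume, with first-order error propagation:
#   rho = m / V   =>   (u_rho / rho)² ≈ (u_m / m)² + (u_V / V)²

mass, u_mass = 0.250, 0.001        # kg, with its standard uncertainty (assumed)
volume, u_volume = 1.0e-4, 2.0e-6  # m³, with its standard uncertainty (assumed)

density = mass / volume
u_density = density * ((u_mass / mass) ** 2 + (u_volume / volume) ** 2) ** 0.5

print(f"density = {density:.0f} ± {u_density:.0f} kg/m³")
```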
Fundamental or absolute method
In this case, the measurement is based on the measurements of the base quantities used to define the quantity. The quantity under consideration is measured directly and then linked to the definition of that quantity.
Comparative method
In this method, as the name suggests, the quantity to be measured is compared with the known value of the same quantity or another quantity that is practically related to it. The quantity is compared to the reference and only the deviations from the reference are recorded after the comparison. The most common examples are comparators, dial indicators, etc.
Transposition method
In this method, the measurement is made by direct comparison: the quantity to be measured, V, is first balanced against a known value, X, of the same quantity; V is then transposed to the other side and balanced against another known value, Y. The value of the quantity is then
\[V=\sqrt{XY}\]
An example of this method is the determination of mass by balancing methods and known weights.
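A minimal numerical sketch (the balance readings are invented): in transposition weighing, any inequality of the balance arms cancels in the geometric mean of the two known values.

```python
# Minimal sketch (invented readings): transposition weighing. The unknown mass
# balances known weights X on one pan, then balances known weights Y after
# being moved to the other pan; the arm-length error cancels in sqrt(X * Y).
import math

X = 100.020  # grams of known weights balancing the object in position 1 (assumed)
Y = 99.980   # grams of known weights balancing the object in position 2 (assumed)

V = math.sqrt(X * Y)
print(f"estimated mass: {V:.3f} g")
```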
Coincidence Method
This is a “differential” method of measurement in which a very small difference between the quantity to be measured and the reference is determined by carefully observing the coincidence of certain lines and signals. Vernier calipers and micrometers are examples of this method.
Deflection method
In this method, the value of the quantity to be measured is indicated directly by the deflection of a pointer on a calibrated scale. Pressure measurement is an example of this method.
Complementary method
The value of the quantity to be measured is combined with a known value of the same quantity. The combination is adjusted so that the sum of the two values is equal to the predetermined reference value. An example of this method is determining the volume of a solid by displacing a liquid.
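As a minimal worked relation (the symbols are introduced here for the example): if liquid is added to a graduated vessel containing the solid until the combined contents reach a predetermined reference volume, the volume of the solid is obtained by difference,
\[V_{\text{solid}} = V_{\text{ref}} - V_{\text{liquid}}\]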
Zero measurement method
In this method, the difference between the value of the quantity to be measured and the known value of the same quantity to be compared with is set to zero.
Substitution method
This is a direct comparison method. In this method, the value of the quantity to be measured is replaced by a known value of the same quantity, chosen so that the effects produced by these two values in the indicator are the same. The Borda method of mass determination is an example of this method.
Contact method
In this method, the surface to be measured is touched by the sensor or measuring tip of the instrument. Care must be taken to maintain an appropriate, constant contact pressure to avoid errors due to excessive pressure. Examples of this method include measurements made with a micrometer, a caliper gauge, and a dial indicator.
Non-Contact method
As the name implies, there is no direct contact with the surface being measured. Examples of this method include the use of optical instruments, a toolmaker’s microscope, and a profile projector.
Composite method
The actual contour of a part to be inspected is compared to its maximum and minimum tolerance limits. This method can be used to check the cumulative errors of the interconnected elements of the component, which are controlled by a combined tolerance. This method is very reliable for ensuring interchangeability and is usually performed using composite GO gauges. The use of a GO plug gauge to check the thread of a nut is an example of this method.
Concept of Time
Time is a concept inextricably linked to nature and human experience; it is placed, along with the concept of space, at the foundation of man’s constructed models of the universe and the phenomena that occur within it.
The concept of time and its measurement are essentially based on cyclic processes. The first proposal to use linear processes, more congruent with the philosophical concept of time, was made in 1715 by E. Halley, who proposed the salinity of the sea, which increases with time, as an index of time. Similarly, W. Thomson (Lord Kelvin) proposed the cooling of the Earth, and H. Helmholtz the contraction of the Sun, as indices of time.
The discovery of radioactivity provided the most precise linear process: the corresponding unit is the half-life of a radioactive element, that is, the time required for half of the element’s nuclei to decay. Linear time scales proved particularly useful in the development of methods for measuring large time intervals (e.g., the carbon-14 method). Implicit in the use of radioactive natural clocks and the development of ultraprecise clocks is the notion that atoms obey the same physical laws in all places and at all times: the possibility that physical laws vary over time remains to be tested.
Understanding time in physics
In physics, time is considered a fundamental physical quantity, defined only by the method used to measure it. The problem of measuring time is one of the most important problems in science and technology.
From the metrological point of view, the concept of time should be studied under two aspects: the time scale and the unit of time. A time scale is an uninterrupted succession of phenomena that makes it possible to establish a chronology, that is, to assign a date to every other event; knowing the mechanics, that is, the set of physical laws that combine to define the scale, dates can be expressed in a uniform time, which in turn can be used to interpret any other natural phenomenon. The unit of time is the duration of the time interval separating two phenomena chosen once and for all along the time scale.
The history of time measurement shows that the phenomena chosen to form the time scale were periodic natural phenomena; multiples or submultiples of the period of the phenomena themselves were adopted as units of time. The precision with which the units of time are determined, that is, the uniformity of the time scale, is a function of the knowledge of the theory behind the phenomena forming the scale. For a long time, these phenomena were astronomical, related to the rotation of the Earth and its orbit around the Sun.
The units of time (day, second, year, etc.) were derived by comparing numerous observations. The unit of time, however, can be directly available if there is a reproducible duration at any place and any time: conditions of this kind are met today with the use of clocks, in particular atomic clocks, which use as the unit of time the period of a suitable atomic transition, chosen as the sample duration (the inverse of the sample duration is the sample frequency: for practical purposes it is actually more convenient to use frequencies rather than time). With the latter, it is possible to define a time scale, called atomic time, which is independent of the set of astronomical phenomena commonly considered for the evaluation of time.
The precision with which one operates today with an astronomical time scale, i.e. in the dating of astronomical phenomena, is limited only by observational errors. However, it should be emphasized that the theory of astronomical phenomena, especially of the Earth’s motion, is still imperfect, so the resulting unit of time is imprecise. Therefore, for the sake of homogeneity and convenience, conventional time scales are used to date astronomical phenomena. The most common ones are based on sidereal time and solar time: the former has as its unit the sidereal day, the latter the solar day, defined by the hour angle of the stars (or of an appropriately chosen star) and of the Sun, respectively. Both are therefore local times, that is, their value at any given moment depends on the position (longitude) of the observer on the Earth; this variability gave rise to the need to introduce time zones, 24 zones on the Earth, within each of which the same time conventionally applies.
The philosophical notion of time
In philosophy, the concept of time varies according to whether it is considered from an objectivist point of view, in which time is seen as something real and absolute in itself, independent of relations to the external world and the human subject, or from a subjectivist-idealist point of view, in which the origin of time is located in the subject. The concept of time also has a special place in contemporary existentialist thought. The concept of time held by the Greek thinkers, from the Pythagoreans to Plato, was fundamentally realist and objectivist, though with different approaches: they saw in time the image (in movement, but a cyclical, ever-returning movement, as in the cycles of the years, the seasons, and the regular motions of the stars) of the eternity and immutability of being.
Aristotle calls time the “measure of motion,” that is, the measurable expression of the regular and constant movements of the life of the cosmos. This concept was taken up in different forms by the great post-Aristotelian schools, as well as by the leading thinkers of the Christian Middle Ages, but it was surpassed by the religious thought of the late ancient world. Plotinus, in fact, identified time with the very life of the soul, with its passing from one moment of its inner existence to another; St. Augustine, relying on the three-dimensionality of time, asserted that the future is “expected” and the past is “remembered”: only the present is authentic temporality, though it always flows between the other two dimensions. Nevertheless, the Aristotelian concept of time remained dominant in philosophy until I. Kant, who brought about a real revolution by defining time as a “pure a priori intuition,” the “form of inner sense.” Far from conceiving of it as an absolute dimension, Kant sees in it a fundamental condition of the possibility of perception and thus of knowledge itself.
The Kantian concept of time, interpreted in a one-sided way, as in fact occurred in German idealism, undoubtedly leads to subjectivist reductions that betray the genuine thought of Kant, whose analysis of time must be integrated with those pages of the Analytic of Principles where he identifies the order of temporal succession with the causal order of phenomena: a thesis that has been reproposed in modern times by H. Reichenbach and that is also applied to Einstein’s theory, which always sees in time a value of causal succession, denying only the uniqueness and absoluteness of such an order.
A time of consciousness is then again contrasted with the “spatialized” time of contemporary science in many spiritualist currents, beginning with H. Bergson; and even in Husserlian phenomenology, albeit against a very different background, we find an interpretation of time as a current of lived experience. On the other hand, a very distinctive philosophical conception of time emerges with modern existentialism, and especially with M. Heidegger in his work Being and Time. In his interpretation of “being” in terms of possibility, project, and anticipation, Heidegger affirms the existential primacy of the future, in which lies the authentic temporality that the philosopher contrasts with the inauthentic temporality of datable and measurable time.
The sociological approach to time
Two main areas of time analysis can be distinguished in sociology. The first goes back to G. Friedmann’s research on natural time and mechanical time, understood as the opposition between the world of nature, governed by the rhythms of species, the organism, and the seasons, and the world of technology, subjected to the dictatorship of productivity and to the minute chronological organization of life and everyday existence (echoed even in the notion of the sports record).
The second declension of time refers, in the social sciences, to the now traditional and somewhat hackneyed notion of free time. This term has been used to translate the richer and more precise English concept of leisure (or the French loisir), denoting non-work opportunities for rest, recreation, and various sporting, recreational, or broadly cultural activities.
Some scholars, however, prefer to distinguish at least between free time (as simple non-work, although this does not adequately take into account certain social conditions, from that of the unemployed to that of the housewife or retiree) and liberated time, as a living space that is reclaimed from the rigid organization of work through individual or collective strategies of time reconquest. The theme of liberated time has been brought to the attention of the social sciences by women’s emancipation and liberation movements, as part of a critical analysis of the division of social roles and of work itself in contemporary technologically advanced societies.