When Is Calibration Important?



Perfection is relative and application specific. The perfect race car is not the car we use to commute to work. We need products for everyday use that are high quality, affordable, and solidly reliable. There will be times when we must use components that are not perfect, and this is when calibration becomes important. Calibration techniques reduce tolerances in imperfect manufacturing equipment while maintaining affordability.


Another term for superior calibration is first-rate workmanship. There was a time when guilds were formed to propagate quality workmanship. An apprentice would work for years to build a physical skill. A master craftsman might dedicate his entire life to expertly carving wood or stone, forging iron, or sculpting pottery to make beautiful buildings, artwork, or monuments.
Today, some argue we are more practical, but commonplace consumer devices are beautifully mind-blowing pieces of utilitarian art. Humankind has gone from a room-filling vacuum tube computer, to a transistor, to a laptop, to smartphones, tablets, and e-readers in about 60 years. We have become jaded instead of being constantly amazed; we just accept these wonders as everyday occurrences.
We insist on quality products, which require accurate manufacturing equipment. At the same time, equipment must be affordable. How can manufacturers deliver "perfect" equipment at a reasonable price? In a word, calibration. Electronic calibration enables the remote adjustment and testing of field devices such as sensors, valves, and actuators in factories. It also enables the creation of many low-cost consumer devices.
All practical components, both mechanical and electronic, have manufacturing tolerances. The more relaxed the tolerance, the more affordable the component. But when components are assembled into a system, the individual tolerances sum to create a total system-error tolerance. Through the proper design of trim, adjustment, and calibration circuits, it is possible to correct these system errors, making equipment safe, accurate, and affordable.
Calibration is a comparison of the equipment's performance to a standard of known accuracy, and then a correction (adjustment) to minimize any errors. It allows affordable tolerance components to produce products that surpass normal expectations. The benefits of calibration are many and can reduce cost in several areas. Calibration can be used to remove manufacturing tolerances, specify less-expensive components, increase reliability and customer satisfaction, reduce test time and customer returns, and speed product delivery.

Calibration in the Real World

A common advertising gimmick auto repair shops use illustrates the importance of careful calibration. A representative of the shop faces the audience as a car quickly approaches from behind. The vehicle comes to a screeching halt, seconds away from hitting the employee. The employee expresses confidence in the brakes and the company's workmanship with the phrase "we stand in front of our work." One quickly decides to trust the company's products and services.
Another story emphasizing the use of calibration is found in Dr. W.J. Youden's book, "Experimentation and Measurement".¹
In 1890, a British scientist, Lord Rayleigh, undertook a study in which he compared nitrogen obtained from the air with nitrogen released by heating ammonium nitrite. He wanted to compare the densities of the two gases; that is, their weights per unit of volume. He did this by filling a bulb of carefully determined volume with each gas in turn under standard conditions: sea level pressure at 0° Centigrade. The weight of the bulb when full minus its weight when the nitrogen was exhausted gave the weight of the nitrogen. One measurement of the weight of atmospheric nitrogen gave 2.31001 grams. Another measurement on nitrogen from ammonium nitrite gave 2.29849 grams. The difference, 0.01152, is small. Lord Rayleigh was faced with a problem: was the difference a measurement error or was there a real difference in the densities? On the basis of existing chemical knowledge there should have been no difference in densities. Several additional measurements were made with each gas, and Lord Rayleigh concluded that his data were convincing evidence that the observed small difference in densities was in excess of the experimental errors of measurement and therefore actually existed. There now arose the intriguing scientific problem of finding a reason for the observed difference in density. Further study finally led Lord Rayleigh to believe that the nitrogen from the air contained some hitherto unknown gas or gases that were heavier than nitrogen, and which had not been removed by the means to remove the other known gases. Proceeding on this assumption, he soon isolated the gaseous element argon. Then followed the discovery of the whole family of the rare gases, the existence of which had not even been suspected. The small difference in densities, carefully evaluated as not accidental, led to a scientific discovery of major importance.
The number of inventions and discoveries that have been made possible by careful, observant scientists using equipment that they can trust is immense. We cannot escape the fact that calibration is an important part of our lives.

Absolutely Perfect or Good Enough?

Very few things are absolutely perfect. Even with all the money spent on professional race cars, are they perfect? Would we want to drive one to pick up groceries with our children? Could we afford the fuel costs? Obviously we understand that "application specific" applies to cars.
Similarly, when is "good enough" really enough? This also depends on the application. If we buy a quart of milk, a fraction of an ounce is important. In contrast, the amount of drinking water in a reservoir with miles of shoreline is measured in millions or billions of gallons. In these types of measurements, an ounce is insignificant.
Electronic measurements are also application specific; a consumer device and a laboratory instrument have very different tolerance requirements. One way to build an electronic measuring instrument for a laboratory is to use extremely close tolerance parts for every component. This extra expense might be warranted in some applications. However, another approach can deliver the required accuracy at a lower cost. This second method, employing calibration, builds the instrument with components of an affordable tolerance level. Those tolerances are then calibrated out of the measurement by adjusting the instrument while comparing it with a reliable standard.
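As a minimal sketch of that second method, the comparison against a standard can be reduced to a two-point (offset and gain) correction. The readings and standard values below are made-up numbers for illustration only:

```python
def derive_calibration(raw_low, raw_high, std_low, std_high):
    """Return (gain, offset) so that corrected = gain * raw + offset
    maps the instrument's raw readings onto the known standards."""
    gain = (std_high - std_low) / (raw_high - raw_low)
    offset = std_low - gain * raw_low
    return gain, offset

# Suppose the uncalibrated instrument reads 0.12 V and 9.88 V when
# measuring 0 V and 10 V standards (hypothetical numbers).
gain, offset = derive_calibration(0.12, 9.88, 0.0, 10.0)

def corrected(raw):
    """Apply the stored calibration to a raw reading."""
    return gain * raw + offset

print(round(corrected(0.12), 6))  # 0.0: the low standard now reads true
print(round(corrected(5.0), 4))   # 5.0: midscale is corrected as well
```

The same idea extends to multipoint or polynomial corrections when the instrument's error is not linear.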
A simple illustration is a circuit made up of four amplifiers (Figure 1). The gain is set by the tolerances of eight resistors. If the overall circuit is required to be within ±1%, we could use 0.1% resistors (method one) or we could use calibration. We could decide to replace one resistor with an adjustable resistor or potentiometer (pot). Then, the other seven resistors could be 5% resistors (±35% total) or 1% resistors (±7% total). An analysis of the circuit is necessary to decide what tolerance is practical. Other parameters will also be considered, such as power consumption, the granularity of adjustment, and temperature stability.
Figure 1. A system with four amplifiers between other circuit functional blocks.
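The tolerance arithmetic above can be checked with a first-order worst-case sum, in which the individual resistor tolerances simply add:

```python
def worst_case_error(tolerances):
    """First-order worst case: individual fractional tolerances add."""
    return sum(tolerances)

# Seven fixed resistors, with the eighth replaced by the adjustable pot:
print(round(worst_case_error([0.05] * 7), 2))  # 0.35, the +/-35% total quoted above
print(round(worst_case_error([0.01] * 7), 2))  # 0.07, the +/-7% total
```

This simple sum is pessimistic (it assumes every part sits at its limit in the worst direction), which is exactly why a single calibration adjustment is so effective.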
The next consideration is the stability of the adjustment: will one adjustment at the factory meet the requirement, or will periodic adjustment be necessary? Again, this is application specific. A powerful tool, the Micro-Cap 10 circuit simulator from Spectrum Software, can compare stability and waveform changes against component tolerances. A free evaluation version can be found at www.maximintegrated.com/cal. The software allows resistor values to be swept and Monte Carlo analysis to be run to explore the effects of component tolerances.
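For readers without the simulator, the same Monte Carlo idea can be sketched in a few lines of plain Python. The amplifier topology, resistor values, and tolerance here are illustrative assumptions, not values from Figure 1:

```python
import random

def sample_resistor(nominal, tol):
    """Draw a resistor value uniformly within +/- tol of nominal."""
    return nominal * (1 + random.uniform(-tol, tol))

def gain_spread(rf_nom, rg_nom, tol, trials=10_000):
    """Spread of a non-inverting amplifier gain, G = 1 + Rf/Rg,
    over many random draws of both resistors."""
    gains = [1 + sample_resistor(rf_nom, tol) / sample_resistor(rg_nom, tol)
             for _ in range(trials)]
    return min(gains), max(gains)

random.seed(0)  # repeatable runs
lo, hi = gain_spread(rf_nom=9e3, rg_nom=1e3, tol=0.05)
print(f"gain with 5% parts: {lo:.2f} to {hi:.2f} (nominal 10)")
```

A full simulator adds temperature effects, distributions other than uniform, and transient behavior, but even this sketch shows how quickly 5% parts spread a nominal gain of 10.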

Calibration for Quality and Affordability

Calibration allows the design engineer to create solid, reliable products that are affordable. The inexperienced design engineer may be tempted to take shortcuts. Those with more experience have seen this error and lament that there is always time to fix something later, yet never enough time to do it correctly in the first place. In the long run, it is best to remember Mark Twain's famous saying, "Always do right. This will gratify some and astonish the rest."
Accurate automated adjustments with calibration digital-to-analog converters (CDACs) and calibration digital potentiometers (CDPots) make trimming away component tolerances easy. CDACs and CDPots share some unique attributes that enable automated calibration—upon power-on, they start in a known condition. That can be full-scale, midscale, low-scale, or a previously set level from self-contained nonvolatile memory. Figure 2 compares a DAC with a CDAC and CDPot.
Ordinary DACs allow a single reference voltage (VREF) to be applied; this reference voltage usually becomes the highest DAC setting. The lowest DAC setting is a fixed voltage, typically ground. The CDAC and CDPot allow both the top and bottom DAC voltage to be set to arbitrary voltages, thus removing excess adjustment range. Removing the unused adjustment range eliminates any possibility that the circuit could be grossly misadjusted. The high and low voltages for the CDAC and CDPot are arbitrary and, therefore, can be centered wherever the circuit calibration is required.
Figure 2. Comparing a DAC with a CDAC and CDPot.
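A small sketch of the arithmetic makes the benefit concrete. The voltage spans and resolution below are assumed values for illustration, not specifications of any particular part:

```python
def dac_output(code, v_low, v_high, bits=8):
    """Ideal DAC transfer: v_low at code 0, v_high at full scale."""
    full_scale = (1 << bits) - 1
    return v_low + (v_high - v_low) * code / full_scale

# Ordinary DAC: fixed 0 V to 5 V span over 256 codes.
step_plain = dac_output(1, 0.0, 5.0) - dac_output(0, 0.0, 5.0)

# CDAC/CDPot: both endpoints set to bracket only the needed trim window.
step_cdac = dac_output(1, 2.4, 2.6) - dac_output(0, 2.4, 2.6)

print(f"step size: {step_plain * 1000:.2f} mV vs {step_cdac * 1000:.2f} mV")
# The narrow window gives 25x finer resolution, and no code can drive the
# node outside 2.4 V to 2.6 V, so gross misadjustment is impossible.
```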

Benefits of Replacing Mechanical Trims with Electronic Equivalents

Digitally controlled adjustable devices offer several advantages over mechanical devices in industrial systems. The largest advantage is lower test cost. Automatic test equipment (ATE) can perform calibration precisely, time after time, eliminating the considerable costs associated with error-prone manual adjustments. Digital pots also allow periodic recalibration to occur more frequently, or over longer equipment lifespans, since they are guaranteed for 50,000 write cycles. The best mechanical pots can support only a few thousand adjustments.
Location flexibility and size are other advantages compared to mechanical pots. Digitally adjustable pots can be mounted on the circuit board directly in the signal path, exactly where they are needed. In contrast, mechanical pots can require human access, possibly necessitating long circuit traces or coaxial cables. In sensitive circuits, the capacitance, time delay, or interference pickup of these cables can reduce equipment performance.
Digital pots also maintain their calibration values better over time, whereas mechanical pots can continue to experience small movements even after they are sealed. For example, a wiper will move as the wiper spring relaxes when the pot is temperature cycled, or when the pot is subjected to shipping vibration. Calibration values stored in digital pots are not affected by these factors.
Additionally, a one-time programmable (OTP) CDPot can be used for extra safety. It permanently locks in the calibration setting, preventing an operator from making further adjustments (Figure 3). To change the calibration value, one must physically replace the OTP CDPot. A special variant of the OTP CDPot always returns to its stored value upon power-on reset, while allowing operators to make limited adjustments during operation at their discretion.
Figure 3. An adjustable filter with gain allows the calibration setting to be frozen via OTP.

Leveraging Precision Voltage References for Digital Calibration

Sensor and voltage measurements with precision analog-to-digital converters (ADCs) are only as accurate as the voltage reference used for comparison. Likewise, output control signals are only as accurate as the reference voltage supplied to the DAC, amplifier, or cable driver. Compact, low-power, low-noise, and low-temperature-coefficient voltage references are affordable and easy to use. In addition, some references, like the DS1859, have internal temperature sensors to aid in the tracking of environmental variations.
Common power supplies are not adequate to act as precision voltage references. Typical power supplies are only 5% to 10% accurate; they change with load and line changes, and they tend to be noisy. On the other hand, the MAX6325 has an initial accuracy of ±0.02%, noise less than 1.5µVP-P, and a 1ppm temperature coefficient.
In general, there are three kinds of serial calibration voltage references (CRefs), each of which offers unique advantages for different factory applications. This choice of serial voltage references lets the designer optimize the calibration of the exact circuit at hand.
The first type of reference enables a small trim range, typically 3% to 6% (Figure 4). This is an advantage for gain trim in many systems. It in effect allows an analog gain trim on a digital converter. For example, coupling a DAC with a trimmable CRef such as the MAX6350 allows the overall system gain to be fine-tuned by simply adjusting the CRef voltage.
Figure 4. Digital pot trims reference voltage to change system gain via DAC.
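The gain-trim arithmetic can be sketched as follows. The DAC resolution, reference value, and error figure are assumptions for illustration; the key point is that every output code scales with VREF, so one reference trim corrects the whole converter:

```python
def dac_out(code, vref, bits=12):
    """Ideal DAC: the output scales linearly with the reference voltage."""
    return vref * code / ((1 << bits) - 1)

nominal_vref = 5.0
measured_gain_error = 0.012  # say the assembled system reads 1.2% high

# Trimming the reference down by the same fraction corrects every code at once.
trimmed_vref = nominal_vref / (1 + measured_gain_error)

code = 2048  # midscale
before = dac_out(code, nominal_vref)
after = dac_out(code, trimmed_vref)
print(f"midscale: {before:.4f} V before trim, {after:.4f} V after")
```

A 1.2% correction sits comfortably inside the 3% to 6% trim range described above, leaving headroom for drift over the product's life.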
The second type is an adjustable reference (such as the MAX6037 or MAX6160) that allows adjustment over a wide range (e.g., 1V to 12V). This is advantageous for field devices that have wide-tolerance sensors and that must operate on unstable power. Portable maintenance devices might need to operate from batteries, automotive power, or emergency power generators.
The third type is an E2CRef (Figure 5), which integrates memory, allowing a single-pin command to capture any voltage between 0.3V and (VIN - 0.3V) and then hold that level indefinitely.
Figure 5. The DS4303 infinite sample-and-hold adjustable voltage reference block diagram.
E2CRefs benefit test and monitoring instruments that need to establish a baseline or warning-alert threshold.


When is calibration important? Only when we require accuracy, quality, and perfection. We need products for everyday use that are high-quality, affordable, and solidly reliable. Calibration helps us reduce component tolerance buildup in the system, while maintaining affordability. For more calibration information, products, application notes, and design tools, please visit www.maximintegrated.com/cal.


  1. Youden, W.J., Experimentation and Measurement, Applied Mathematics Division, National Bureau of Standards, 1961; reprinted May 1997 by the U.S. Department of Commerce, National Institute of Standards and Technology (NIST), as Special Publication 672.