Part 1 looked at general static analog-to-digital converter (ADC) inaccuracy errors, as well as ADC errors that involve bandwidth, with the goal of building a better understanding of ADC errors and how they influence the signal chain. With that, keep in mind that not all components are created equal: this is true for both active and passive devices, so there will be errors within the analog signal chain regardless of which part is ultimately selected to fit into the system.
In this article, the differences between accuracy, resolution, and dynamic range will be described. It will also be shown how inaccuracies accumulate within the signal chain and cause errors. This plays an important role in understanding how to properly specify or choose an ADC when defining system parameters for a new design.
Accuracy vs. Resolution vs. Dynamic Range
Many users of converters seem to use the terms accuracy and resolution interchangeably, but this is a mistake. The two terms are related, yet they are not equal and should not be swapped for one another. Think of accuracy and resolution as cousins, perhaps, but not twins.
Accuracy is simply error, or how much the value under measurement deviates from its true value. Accuracy error is also sometimes referred to as sensitivity error. Resolution is simply how finely the measured value can be represented or displayed. Just because a system has 12 bits of resolution doesn't mean it can measure a value to 12 bits of accuracy.
For example, say a multimeter has six digits to represent a measurement. The resolution of this multimeter is six digits, but if the last one or two digits seem to “flicker” between measurement values, then the resolution is compromised and so is the accuracy of the measurement.
The errors in any system or signal chain accumulate throughout, distorting the original measurement taken. Therefore, it is also key to understand the dynamic range of the system in order to gauge the accuracy and the resolution of the signal chain under design.
Let’s look again at the multimeter. If there are six digits of representation, then the dynamic range of this device should be 120 dB (or 6 × 20 dB/decade). Keep in mind, though, that the bottom two digits are still flickering, so the real dynamic range is only 80 dB. That means if the designer intends to measure 1 µV (0.000001 V), the measurement could be off by as much as 100 µV, since the device is only accurate to 100 µV (0.0001XX V, where XX represents the two flickering bottom digits).
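The multimeter arithmetic above is easy to sketch in code. The helper below is hypothetical (not part of any instrument API); it simply applies the 20 dB-per-decade rule from the text.

```python
def dynamic_range_db(stable_digits: int) -> float:
    """Approximate dynamic range of a display that is trustworthy to
    the given number of decimal digits: 20 dB per decade."""
    return 20.0 * stable_digits

# Six digits of resolution suggests 120 dB of dynamic range ...
print(dynamic_range_db(6))  # 120.0
# ... but with the bottom two digits flickering, only four digits are
# accurate, so the usable dynamic range is 80 dB.
print(dynamic_range_db(4))  # 80.0
# On a 1 V scale, four accurate digits means errors as large as 100 uV.
print(1.0 / 10**4)  # 0.0001 (V)
```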
Effectively, there are two ways to describe any system’s overall accuracy: dc and ac. DC accuracy represents the accumulated deviation of errors through a given signal chain; this is sometimes termed a “worst-case” analysis. AC accuracy represents the noise error terms that accumulate throughout the signal chain and defines the signal-to-noise ratio (SNR) of the system. These errors add up, lowering the SNR and yielding the true effective number of bits, or ENOB, of the entire design. Obtaining both parameters effectively tells the user how accurate the system can be with both static/wandering and dynamic signals.
Relationship Between Low Frequency SNR, ENOB, Effective Resolution, and Noise-Free Code Resolution
Remember, an ADC can “take in” many types of signals, typically classified as either dc or ac, and quantize them digitally. Understanding the ADC’s error contribution to the system means the designer must understand the type(s) of signals that will be sampled, because the signal type determines how the converter’s error contribution is defined. These converter errors are generally defined in two ways: noise-free code resolution, representing dc-type signals, and the “SNR equation,” representing ac-type signals.
All active devices, like ADC internal circuits, produce a certain amount of rms noise due to resistor noise and “kT/C” noise. This noise is present even for dc-input signals and accounts for the code transition noise in the converter’s transfer function. This is more commonly referred to as input referred noise. The input referred noise is most often characterized by examining the histogram of a number of output samples when a dc input is applied to the converter. The output of most high speed or high resolution ADCs is a distribution of codes, centered around the nominal value of the dc input. To measure its value, the input of the ADC is either grounded
or connected to a heavily decoupled voltage source, and a large number of output samples are collected and plotted as a histogram (sometimes referred to as a grounded-input histogram)—see Figure 1. Since the noise is approximately Gaussian, the standard deviation of the histogram, σ, can be calculated, corresponding to the effective input rms noise and expressed in terms of LSBs rms.
Although the inherent differential nonlinearity (DNL) of the ADC may cause some minor deviations from an ideal Gaussian distribution, it should be at least approximately Gaussian. If the code distribution has large and distinct peaks and valleys, this could be an indication of a bad PC board layout, poor grounding techniques, or improper power supply decoupling, among other things.
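A quick way to internalize the grounded-input histogram idea is to simulate it. The sketch below uses hypothetical numbers of my own choosing (a 12-bit converter idling at mid-scale code 2048 with 0.8 LSB rms of input referred noise); the standard deviation of the collected codes recovers the noise, just as the histogram method describes.

```python
import random
import statistics

random.seed(0)

# Simulate a grounded-input histogram test: the ADC sees a dc mid-scale
# input (code 2048) plus Gaussian input referred noise of 0.8 LSB rms,
# and we collect a large number of output codes.
true_noise_lsb = 0.8
codes = [round(random.gauss(2048.0, true_noise_lsb)) for _ in range(100_000)]

# The standard deviation of the code distribution estimates the input
# referred noise in LSBs rms. (Quantizing to integer codes adds roughly
# a further 1/sqrt(12) LSB in quadrature, so the estimate reads
# slightly high.)
measured_noise_lsb = statistics.pstdev(codes)
print(round(measured_noise_lsb, 2))
```

With enough samples the measured value settles near sqrt(0.8² + 1/12) ≈ 0.85 LSB rms, illustrating why code-transition noise is visible even with a perfectly quiet dc input.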
Typically input referred noise can be expressed as an rms quantity, usually having the units of LSBs rms. Specifications involving these types of quantities are more generally associated with high resolution precision type converters, because of the low sample rates and/or dc-type or slow moving signals they acquire. Σ-Δ ADCs designed for precision measurements, having resolutions in the 16- to 24-bit range, have data sheet specifications such as input referred noise, effective resolution, and noise-free code resolution to describe their dc dynamic range.
On the other hand, higher frequency Σ-Δ ADCs for audio applications are generally characterized exclusively in terms of total harmonic distortion (THD) and total harmonic distortion plus noise (THD + N).
Successive-approximation (SAR) converters cover a wide range of sampling rates, resolutions, and applications. They typically have the input referred noise specification but in addition have specifications for SNR, ENOB, SFDR, and THD for ac input signals.
Higher speed converters (such as pipelined ADCs) that sample in the hundreds of MHz or beyond are typically specified in terms of ac specifications such as SNR, SINAD, SFDR, and ENOB, but they can also capture dc-type or slow moving signals. It is therefore useful to understand how to derive the low frequency performance of a high speed converter from the ac specifications given on its data sheet.
Sidebar Discussion: SNR Equation
The maximum error an ideal converter makes when digitizing a signal is ±½ LSB, as shown in the transfer function of an ideal N-bit ADC. The quantization error for any ac signal that spans more than a few LSBs can be approximated by an uncorrelated sawtooth waveform having a peak-to-peak amplitude of q, the weight of an LSB. Another way to view this approximation is that the actual quantization error is equally probable to occur at any point within the range ±½ q.
The quantization error as a function of time is shown in more detail in Figure 2. A simple sawtooth waveform provides a sufficiently accurate model for analysis. The equation of the sawtooth error is given by

e(t) = st, \quad -\frac{q}{2s} \le t \le +\frac{q}{2s}

where s is the slope of the sawtooth. The mean-square value of e(t) can be written:

\overline{e^2(t)} = \frac{s}{q} \int_{-q/2s}^{+q/2s} (st)^2 \, dt

Performing the simple integration and simplifying,

\overline{e^2(t)} = \frac{q^2}{12}

The root-mean-square quantization error is therefore

e_{rms} = \sqrt{\overline{e^2(t)}} = \frac{q}{\sqrt{12}}

The sawtooth error waveform produces harmonics that extend well past the Nyquist bandwidth of dc to F_{S}/2, where F_{S} is the converter sample rate. However, all these harmonics fold (alias) back into the Nyquist bandwidth and sum together to produce an rms noise equal to q/\sqrt{12}.

Quantization noise is approximately Gaussian and spreads uniformly over the Nyquist bandwidth of interest, typically dc to F_{S}/2. The underlying assumption here is that the quantization noise is uncorrelated to the input signal. The theoretical signal-to-noise ratio can now be calculated assuming a full-scale input sine wave:

v(t) = \frac{q \cdot 2^N}{2} \sin(2\pi f t)

The rms value of the input signal is therefore

v_{rms} = \frac{q \cdot 2^N}{2\sqrt{2}}

The rms signal-to-noise ratio for an ideal N-bit converter is therefore

SNR = 20 \log_{10}\left(\frac{v_{rms}}{e_{rms}}\right) = 20 \log_{10}\left(\frac{q \cdot 2^N/(2\sqrt{2})}{q/\sqrt{12}}\right) = 6.02N + 1.76 \text{ dB}
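The 6.02N + 1.76 dB result can be checked numerically. The sketch below uses assumed parameters of my own (a 12-bit ideal quantizer over a ±1 V range, with the sine frequency chosen incommensurate with the sample rate so the codes are exercised uniformly) and measures the quantization SNR directly.

```python
import math

N = 12                        # resolution in bits
q = 2.0 / 2**N                # LSB weight for a +/-1 V range
num = 65536                   # number of samples
f_ratio = 12277 / 1_000_000   # input frequency / sample rate (incommensurate)

sig_sq = 0.0
noise_sq = 0.0
for k in range(num):
    x = math.sin(2 * math.pi * f_ratio * k)  # full-scale sine sample
    xq = round(x / q) * q                    # ideal quantizer
    sig_sq += x * x
    noise_sq += (xq - x) ** 2

snr_measured = 10 * math.log10(sig_sq / noise_sq)
snr_theory = 6.02 * N + 1.76
print(round(snr_theory, 1))    # 74.0
print(round(snr_measured, 1))  # lands very close to the theoretical value
```

The agreement confirms that the folded sawtooth error really does behave like a noise source of q/√12 rms.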
Relating both slow speed, dc-type signal and high speed, ac-type signal specification quantities does require some math. So break out your college math book, flip to the identity table in the back, and let’s review below how a relation can be struck between SNR, ENOB, effective resolution, and noise-free code resolution for low frequency inputs.
With FSR = full-scale range of the ADC and n = input referred noise (rms), effective resolution is defined as the following:

\text{Effective Resolution} = \log_2\left(\frac{FSR}{n}\right) \text{ bits}

Note that the peak-to-peak input noise is approximately 6.6 times the rms noise (a ±3.3 σ span covers a Gaussian distribution about 99.9% of the time):

n_{p\text{-}p} \approx 6.6 \times n

Therefore,

\text{Noise-Free Code Resolution} = \log_2\left(\frac{FSR}{6.6 \times n}\right) \text{ bits}

Or,

\text{Noise-Free Code Resolution} = \text{Effective Resolution} - \log_2(6.6) = \text{Effective Resolution} - 2.7 \text{ bits}

For ac analysis, use a full-scale sine wave input. Also, see the sidebar discussion above, where the rms value of a full-scale sine wave spanning FSR is:

v_{rms} = \frac{FSR}{2\sqrt{2}}

Therefore,

SNR = 20 \log_{10}\left(\frac{FSR/(2\sqrt{2})}{n}\right) \text{ dB}

Rearranging this a bit we get

SNR = 6.02 \log_2\left(\frac{FSR}{n}\right) - 9.03 \text{ dB}

Which yields the following:

SNR = 6.02 \times \text{Effective Resolution} - 9.03 \text{ dB}

Therefore, by substituting in the definition of ENOB, ENOB = (SNR − 1.76 dB)/6.02, we can derive the relationship between ENOB (ac-type signals) and the dc-type (slow moving) signal quantities. Or,

\text{ENOB} = \text{Effective Resolution} - 1.79 \text{ bits} = \text{Noise-Free Code Resolution} + 0.92 \text{ bits}
To verify this, let’s calculate the ENOB for an ideal N-bit ADC,

where the full-scale range is FSR = 2^{N} LSBs and the input referred noise is the quantization noise of an ideal converter, n = 1/\sqrt{12} = 0.289 LSB rms.

Substituting in these values,

\text{Effective Resolution} = \log_2\left(\frac{2^N}{1/\sqrt{12}}\right) = N + 1.79 \text{ bits}

Or,

\text{ENOB} = (N + 1.79) - 1.79 = N \text{ bits}

exactly the resolution of an ideal N-bit ADC, as expected.

To summarize: when looking at dc or slow moving signals, the ENOB of the system is roughly 1 bit larger (0.92 bits, to be exact) than the noise-free code resolution of the converter and roughly 2 bits (1.79 bits) less than the effective resolution of the converter.
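These relationships are easy to wrap in a small calculator. The helpers below are hypothetical (not from any vendor library) and simply restate the definitions above; running them for an ideal 12-bit ADC reproduces the offsets just discussed.

```python
import math

def effective_resolution(fsr, n_rms):
    """Effective resolution in bits; fsr and n_rms in the same units."""
    return math.log2(fsr / n_rms)

def noise_free_code_resolution(fsr, n_rms):
    """Peak-to-peak input noise is taken as 6.6x the rms noise."""
    return math.log2(fsr / (6.6 * n_rms))

def enob_from_snr(snr_db):
    """ENOB from a full-scale sine SNR in dB."""
    return (snr_db - 1.76) / 6.02

# Ideal 12-bit ADC: FSR = 2^12 LSBs, n = 1/sqrt(12) LSB rms.
fsr, n = 2**12, 1 / math.sqrt(12)
print(round(effective_resolution(fsr, n), 2))        # 13.79
print(round(noise_free_code_resolution(fsr, n), 2))  # 11.07
print(round(enob_from_snr(6.02 * 12 + 1.76), 2))     # 12.0
```

Note the spacing: ENOB (12.0) sits 0.92 bits above the noise-free code resolution (11.07) and 1.79 bits below the effective resolution (13.79).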
However, as the signals move faster (ac-type signals, where bandwidth is involved), the SNR and ENOB of the converter become frequency dependent and typically degrade for higher frequency inputs.
Converter Inaccuracies in a Signal Chain
Now that the converter errors are understood, these concepts can be applied at the system level to the rest of the signal chain. Figure 3 describes an example of a simple data acquisition signal chain. Here a sensor is connected through a long cable run that ultimately connects to the data acquisition card. The sensor’s ac signal passes through two stages of preconditioning amplifiers before arriving at the ADC’s inputs to be sampled. The goal here is to design a system that can accurately represent the sensor’s signal within ±0.1% of its original value. Hmmm … sound challenging?
In order to design such a system it is important to think about the types of errors that could be affecting the sensor’s original signal and where they are coming from throughout the signal chain. Imagine what the converter sees in the end when the signal is finally sampled.
Let’s suppose the ADC in this example has a 10 V full-scale input and 12 bits of resolution. If the converter were ideal, it would have a dynamic range, or SNR, of 74 dB (6.02 × 12 + 1.76 dB).
However, the data sheet shows the converter to have an SNR of only 60 dB, which works out to 9.67 ENOB ((60 − 1.76)/6.02).
A note on the calculation of SNR and ENOB: when calculating ENOB from an SNR number in the data sheet, it should be clear to the designer whether that number includes harmonics. If distortion is included, the appropriate specification is SINAD (signal-to-noise-and-distortion, sometimes written as SNR + D), which accounts for the harmonics captured by THD (total harmonic distortion).
Therefore, the effective LSB size can be defined as 12.2 mV, or VFS/2^{ENOB} = 10/2^{9.67}. This dramatically reduces the number of representations that can occur on the digital outputs. Remember, the bottom LSBs/bits are flickering because of the noise in the ADC!
This also means the converter has an accuracy of ±6.12 mV or 0.0612%.
Additionally, this implies that for a 1.00000 V input applied to the converter, the output can be between 0.99388 V and 1.00612 V.
Therefore, the 12-bit converter with 9.67 bits of ENOB can only measure a signal to about 0.1% accuracy, and the dynamic range of the converter is approximately 60 dB rather than the 74 dB of an ideal 12-bit ADC.
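The worked example above can be reproduced in a few lines. This is just arithmetic on the numbers quoted in the text, with variable names of my own choosing.

```python
vfs = 10.0      # full-scale input range, V
snr_db = 60.0   # data sheet SNR, dB

enob = (snr_db - 1.76) / 6.02          # effective number of bits
lsb = vfs / 2**enob                    # effective LSB size, V
accuracy_pct = (lsb / 2) / vfs * 100   # +/- half an effective LSB

print(round(enob, 2))          # 9.67
print(round(lsb * 1e3, 1))     # 12.2 (mV)
print(round(accuracy_pct, 4))  # 0.0612 (%)
```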
Visually this can be described as the following in Figure 4.
Table 1 describes some quick equalities for referencing desired system performance.
| Accuracy (%) | Error (mV, for a 1 V full scale) | Dynamic Range (dB) |
|---|---|---|
| 10 | 100 (0.1 V) | 20 |
| 1 | 10 (0.01 V) | 40 |
| 0.1 | 1 (0.001 V) | 60 |
| 0.01 | 0.1 (0.0001 V) | 80 |
Other System Inaccuracies
Be mindful of all the front-end components, as suggested in the signal chain example above. Even if the converter accuracy meets or beats the accuracy specification defined for the system, there are still more inaccuracies to account for: the front end, the power supply, and any other outside influences or environments.
The full design of a signal chain such as the one described in Figure 3 can be very involved and is beyond the scope of this paper. However, a quick view of the inaccuracies/errors associated with such a signal chain is summarized in Table 2.
| Device | Device Errors | Circuit Errors |
|---|---|---|
| Amplifier 1 | Offset voltage (mV) | Resistor tolerance (%) |
| | Offset voltage drift (mV/°C) | Resistor drift (ppm/°C) |
| | Input offset current (nA) | Resistor life (% per 1000 hrs) |
| | Input offset current drift (nA/°C) | R_{F}/R_{I} gain error |
| | Input bias current (nA) | |
| | PSRR (dB) | |
| | CMRR (dB) | |
| | Voltage noise (nV/√Hz) | |
| | Current noise (pA/√Hz) | |
| Multiplexer | On resistance (Ω) | |
| | Isolation (dB) | |
| | PSRR (dB) | |
| | CMRR (dB) | |
| Amplifier 2 | Offset voltage (mV) | Resistor tolerance (%) |
| | Offset voltage drift (mV/°C) | Resistor drift (ppm/°C) |
| | Input offset current (nA) | Resistor life (% per 1000 hrs) |
| | Input offset current drift (nA/°C) | R_{F}/R_{I} gain error |
| | Input bias current (nA) | |
| | PSRR (dB) | |
| | CMRR (dB) | |
| | Voltage noise (nV/√Hz) | |
| | Current noise (pA/√Hz) | |
| Analog-to-Digital Converter | Linearity (LSB) | Oscillator jitter (ps) |
| | Offset error (LSB) | Gate 1 jitter (ps) |
| | Offset error drift (ppm/°C) | Gate 2 jitter (ps) |
| | Gain error (%FS) | |
| | Gain error drift (ppm/°C) | |
| | PSRR (dB) | |
There are many errors that are present in any signal chain, not to mention the cable and other outside influences that can also play a big role in determining the design of such a system. Whatever the error accumulation, it ultimately gets sampled at the converter along with the presence of the signal—assuming the error is not great enough to mask the signal that is being acquired!
When designing with converters, keep in mind there are two parts to the equation when it comes to defining the accuracy of the system: the converter itself, as described above, and everything else used to precondition the signal before the converter. Remember, for every bit lost, a 6 dB decrease in dynamic range occurs; the corollary is that for every bit gained, the system’s sensitivity increases by 2×. Therefore, the front end’s accuracy must be specified to be much better than that of the converter chosen to sample the signal.
To illustrate this point, use the same front-end design shown in Figure 3. Let’s say the front end itself has 20 mV p-p of inaccuracies; that is, the accumulated noise, as shown in Figure 5. The system accuracy is still defined as 0.1%. Is the same 12-bit converter going to have enough accuracy to maintain the system specification defined? Answer—no, and here’s why.
Here is how to figure this out using the ADC that has an SNR of 60 dB.
Notice that 20 mV of front-end noise degrades the system by about 1 bit, or 6 dB, bringing the performance down to roughly 54 dB from the 60 dB required for the system’s performance. To get around this, maybe a better converter should be chosen in order to maintain the 60 dB, or 0.1%, system accuracy. Let’s choose an ADC that has 70 dB of SNR/dynamic range, or an ENOB of 11.34 bits, and see if this works.
It appears that the performance didn’t change much. Why? Because the noise of the front end is too great to achieve 0.1% accuracy, even though the converter’s performance itself is much better than the specification. What needs to change is the front-end design in order to get the performance desired. This is represented figuratively in Figure 6 below. See why this last configuration example won’t work? The designer can’t simply pick a better ADC to improve the overall system performance.
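One way to see why the better ADC barely helps is to combine the noise sources in root-sum-square fashion. The model below is an assumption of mine, not necessarily the article's exact computation: the 20 mV p-p front-end error is treated as uniformly distributed (rms = p-p/√12), and the ADC's own noise is back-computed from its SNR against a full-scale sine. Under those assumptions, the numbers land close to the ~54 dB figure discussed, and the 70 dB converter buys barely more than 1 dB.

```python
import math

def system_snr_db(vfs, adc_snr_db, fe_noise_pp):
    """System SNR when ADC noise and front-end noise add in
    root-sum-square. Assumes a full-scale sine reference and a
    uniformly distributed front-end error (rms = p-p / sqrt(12)) --
    a modeling choice, not the only possible one."""
    sig_rms = vfs / (2 * math.sqrt(2))             # full-scale sine, rms
    adc_noise = sig_rms / 10 ** (adc_snr_db / 20)  # ADC noise, rms
    fe_noise = fe_noise_pp / math.sqrt(12)         # front-end noise, rms
    total = math.hypot(adc_noise, fe_noise)
    return 20 * math.log10(sig_rms / total)

# 60 dB ADC with 20 mV p-p of front-end error: roughly 54 dB system SNR.
print(round(system_snr_db(10.0, 60.0, 0.020), 1))
# Swapping in a 70 dB ADC gains only about 1 dB: the front end dominates.
print(round(system_snr_db(10.0, 70.0, 0.020), 1))
```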
Bringing It All Together
The previously chosen 10 V full-scale, 12-bit ADC needs a dynamic range of 60 dB to achieve 0.1% accuracy. This means a total accumulated error of <10 mV, or 10 V/10^{60/20}, needs to be met in order to reach the 0.1% requirement. Therefore, the front-end components have to change in order to bring the front-end error down to 9 mV p-p, as shown in Figure 7, using a converter that has 70 dB of SNR.
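The error-budget arithmetic generalizes to any full-scale range and accuracy target. The helper below is a hypothetical convenience function that simply inverts the dB relationship used in the text.

```python
def error_budget(vfs, dynamic_range_db):
    """Total allowable accumulated error (same units as vfs) for a
    required dynamic range in dB: vfs / 10^(dB/20)."""
    return vfs / 10 ** (dynamic_range_db / 20)

# 10 V full scale at 60 dB (0.1% accuracy): 10 mV total error budget.
print(error_budget(10.0, 60.0))  # 0.01
```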
If the 14-bit, 74 dB ADC of Figure 8 were used instead, the requirements on the front end could be relaxed even further. However, such trade-offs have cost implications and need to be evaluated per the design and application. It may be worth paying more for tighter tolerance, lower drift resistors rather than splurging on a higher performing ADC, for example.