Ask The Applications Engineer – 17
Must a “16-bit” converter be 16-bit monotonic and settle to 16 ppm?
by Dave Robertson and Steve Ruscak
Q. I recently saw a data sheet for a low-cost 16-bit, 30-MSPS D/A converter. On examination, its differential nonlinearity (DNL) was only at the 14-bit level, and it took 35 ns (1/28.6 MHz) to settle to 0.025% (12 bits) of a full-scale step. Isn’t this at best a 14-bit, 28-MHz converter? And if the converter is only 14-bit monotonic, the last two bits don’t seem very effective; why bother to keep them? Can I be sure they’re even connected?
A. That’s a lot of questions. Let’s take them one at a time, starting with the last one. You can verify that the 15th and 16th bits are connected by exercising them and observing that 0..00, 0..01, 0..10, and 0..11 give a very nice 4-level output staircase, with each step of the order of 1/65,536 of full scale. You can see that they would be especially useful in following a waveform that spent some of its time swinging between 0..00 and 0..11, or providing important detail to one swinging through a somewhat wider range. This is the crux of the resolution spec: the ability of the DAC to output 2^16 individual voltage levels in response to the 65,536 codes possible with a 16-bit digital word.
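The staircase test above is easy to reason about numerically. Here is a minimal sketch; the 1.0-V unipolar full-scale range is an illustrative assumption, not a figure from the converter in question:

```python
# Sketch: ideal output levels for the two-LSB test codes of a 16-bit DAC.
# The 1.0-V unipolar full-scale range is assumed for illustration.
FULL_SCALE_V = 1.0
N_BITS = 16
LSB_V = FULL_SCALE_V / 2**N_BITS  # one step of the 65,536-level staircase

# Exercise only the 15th and 16th bits: codes 0..00 through 0..11.
for code in (0b00, 0b01, 0b10, 0b11):
    print(f"code {code:02b}: {code * LSB_V * 1e6:.2f} uV")
```

With a 1-V span, each step is about 15.3 µV, which is why observing this staircase confirms the LSBs really are connected.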
Systems that must handle both strong and weak signals require large dynamic range. A notable example of this is the DACs used in early CD player designs. These converters offered 16-20 bits of dynamic range but only about 14 bits of differential linearity. The somewhat inaccurate representation of the digital input was far less important than the fact that the dynamic range was much wider than that of LP records and allowed both loud and soft sounds to be reproduced with barely audible noise, and that the converters’ low cost made CD players affordable.
The resolution is what makes a 16-bit DAC a “16-bit DAC.” Resolution is closely associated with dynamic range, the ratio of the largest signal to the smallest that can be resolved. So dynamic range also depends on the noise level; the irreducible “noise” level in ideal ADCs or DACs is quantization noise.
Q. What is quantization noise?
A. The sawtooth-wave-shaped quantization noise of an ideal n-bit converter is the difference between a linearly increasing analog value and the stepwise-increasing digital value. It has an rms value of 1/(2^(n+1)·√3) of span, or (6.02 n + 10.79) dB below peak-to-peak full scale. For a sine wave with peak-to-peak amplitude equal to the converter’s span, the rms value is √2/4 of span, or 9.03 dB below span, so the full-scale signal-to-noise ratio of an ideal n-bit converter, expressed in dB, becomes the classical

SNR = 6.02 n + 1.76 dB.    (1)
As the analog signal varies through a number of quantization levels, the associated quantization noise resembles superimposed “white” noise. In a real converter, the circuit noise produced by the devices that constitute it adds to quantization noise in root-sum-of-squares fashion, to set a limit on the amplitude of the minimum detectable signal.
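Equation (1) and the root-sum-of-squares addition just described can be checked with a short sketch (the noise magnitudes below are illustrative, not from any particular converter):

```python
import math

def ideal_snr_db(n_bits):
    """Full-scale sine-wave SNR of an ideal n-bit converter, per eq. (1)."""
    return 6.02 * n_bits + 1.76

def total_noise_rss(quantization_noise_rms, circuit_noise_rms):
    """Quantization and circuit noise add in root-sum-of-squares fashion."""
    return math.hypot(quantization_noise_rms, circuit_noise_rms)

print(f"16-bit ideal SNR: {ideal_snr_db(16):.2f} dB")  # 98.08 dB
print(f"14-bit ideal SNR: {ideal_snr_db(14):.2f} dB")  # 86.04 dB

# Example: circuit noise equal to the quantization noise raises the
# total noise by a factor of sqrt(2), i.e., about 3 dB.
print(f"RSS of equal noises: {total_noise_rss(1.0, 1.0):.3f}x")  # 1.414x
```

This also makes the cost of circuit noise concrete: once circuit noise dominates, adding converter bits no longer buys the full 6.02 dB per bit.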
Q. But I still worry about that differential nonlinearity spec. Doesn’t 14-bit differential nonlinearity mean that the converter may be nonmonotonic at the 16-bit level, i.e., that those last two bits have little influence on overall accuracy?
A. That’s true, but whether to worry about it depends on the application. If you have an instrumentation application that really requires 16-bit resolution, 1/2-LSB accuracy for all codes, and 1-LSB full-scale settling in 31.25 ns (we’ll get to that discussion shortly), this isn’t the right converter. But perhaps you really need 16-bit dynamic range to handle fine structure over small ranges, as in the above example, while high overall accuracy is not needed, and is actually a burden if cost is critical.
What you need to consider in regard to DNL in signal-processing applications is 1) the noise power generated by the DNL errors and 2) the types of signals that the D/A will be generating. Let’s consider how these might affect performance.
In many cases, DNL errors occur only at specific places along the converter’s transfer function. These errors appear as spurious components in the converter’s output spectrum and degrade the signal-to-noise ratio. If the power in these spurs makes it impossible to distinguish the desired signal, the DNL errors are too large. Another way to think about it is as a ratio of the quantity of good codes to bad codes (those having large DNL errors). This is where the type of signal is important.
The various applications may concentrate on differing portions of the converter’s transfer function. For example, assume that the D/A converter must be able to produce both very large and very small signals. When the signal is large, it sweeps through much of the transfer function, so it encounters a high proportion of the codes with DNL errors. But, in many applications, the signal-to-noise ratio will still be acceptable, precisely because the signal is large.
Now consider the case where the signal is very small. The proportion of DNL errors that occur in the region of the transfer function exercised by the signal may be quite small. In fact, in this particular region, the spurs produced by the DNL errors could be at a level comparable to the converter’s quantization noise. When the quantization noise becomes the limiting factor in determining signal-to-noise ratio, 16 bits of resolution will really make a difference (12 dB!) when compared to 14 bits.
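The good-code/bad-code ratio can be made concrete with a toy model. The sketch below invents an error pattern for illustration: it assumes the large DNL errors fall at the major-carry codes (multiples of 4096 in a 16-bit converter, where many bits switch at once), which is a common but not universal pattern:

```python
# Hypothetical DNL model: large errors only at the major-carry codes
# of a 16-bit DAC (multiples of 4096). Invented for illustration.
BAD_CODES = {k * 4096 for k in range(1, 16)}

def bad_code_fraction(lo, hi):
    """Fraction of codes in [lo, hi) that carry a large DNL error."""
    return sum(code in BAD_CODES for code in range(lo, hi)) / (hi - lo)

# A near-full-scale signal sweeps through most of the major carries...
print(f"large signal: {bad_code_fraction(0, 65536):.6f}")
# ...while a small signal sitting between two carries may hit none at all.
print(f"small signal: {bad_code_fraction(30000, 32000):.6f}")
```

For the small signal the bad-code fraction is exactly zero in this model, so its noise floor is set by quantization noise, and the extra two bits of resolution pay off in full.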
Q. OK, I understand. That’s why there’s such a variety of converters out there, and why I have to be careful to interpret the specs in terms of my application. In fact, maybe data sheets that have a great number of “typical” plots of parameters that are hard to spec are providing really useful information. Now, how about the settling-time question?
A. Update rate for a D/A converter refers to the rate at which the digital input circuitry can accept new inputs, while settling time is the time the analog output requires to achieve a specified level of accuracy, usually with full-scale steps.
As with accuracy, time-domain performance requirements differ widely between applications. If full accuracy and full-scale steps are required between conversions, the settling requirements will be quite demanding (as in the case of offset correction with CCD image digitizers). On the other hand, waveform synthesis typically requires relatively small steps from sample to sample. The solid practical ground is that full-scale steps in consecutive samples mean operation at the Nyquist rate (half the sampling frequency), which makes it extremely difficult (how about “impossible”?) to design an effective anti-imaging filter.
Thus, DACs used for waveform reconstruction and many other applications* inevitably oversample. For such operation, full-scale settling is not required; and in general, smaller transitions require less time to settle to a given accuracy. Oversampled waveforms, taking advantage of this fact, achieve accuracy and speed greater than are implied by the full-scale specification.
*The AD768 is an example of such a DAC.
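Returning to the numbers in the original question, the link between settling time and update rate is a one-line computation. The 35-ns and 0.025% figures come from the data sheet quoted above; the conclusion drawn is the questioner’s worst-case reading, which the discussion above shows is too pessimistic for oversampled use:

```python
# If a full-scale step needs 35 ns to settle to 0.025% (12-bit accuracy),
# then the fastest rate at which EVERY sample could be a full-accuracy,
# full-scale step is 1 / (35 ns), slightly below the 30-MSPS update rate.
settling_time_s = 35e-9
max_full_scale_step_rate_hz = 1.0 / settling_time_s
print(f"Max full-scale-step rate: {max_full_scale_step_rate_hz / 1e6:.1f} MHz")  # 28.6 MHz
```

Since oversampled reconstruction never demands full-scale steps on consecutive samples, the converter can legitimately run at its full 30-MSPS update rate in those applications.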
