Frequently Asked Question

When a jitter number is specified without an associated bandwidth, what bandwidth should be assumed?

Jitter is a time-domain phenomenon that results from all of the noise affecting the node of interest. This noise is usually assumed to be random and therefore broadband in nature, although narrowband spurs and noise can certainly contribute to jitter as well. The most natural bandwidth is the actual bandwidth of sensitivity of the node where the jitter is of concern, such as the encode node of an ADC. Because this bandwidth is not always known in practice, a jitter measurement is usually referred to as a "broadband" measurement: all of the broadband noise that affects the node is accounted for in the measured jitter, and it has not been limited by an intentional filter.
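To put a jitter number in context at an ADC encode node, the familiar rule of thumb for a full-scale sine-wave input at frequency f_in sampled with rms jitter t_j is SNR = -20*log10(2*pi*f_in*t_j). For example, 100 fs rms of broadband jitter limits the achievable SNR on a 100 MHz input to roughly 84 dB, regardless of the converter's resolution.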

Because ADCs and DACs are sampled-data systems, they are also Nyquist systems. This means that the broadband noise of the sampling clock is folded (aliased) back into the Nyquist bandwidth, between 0 and Fs/2. Therefore, the Nyquist bandwidth is a natural bandwidth over which to measure the phase noise of the sampling clock in order to calculate the jitter to be expected in a converter.
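A minimal sketch of that calculation, assuming the clock's single-sideband phase noise L(f) is given in dBc/Hz at a handful of offset frequencies (the 1 GHz clock and the profile values below are arbitrary examples, not data for any particular device): the rms jitter follows from integrating the phase noise over the chosen bandwidth and referring the result to the carrier.

    import numpy as np

    def rms_jitter(offsets_hz, L_dbc_hz, f_clk_hz, f_lo, f_hi):
        # Interpolate the SSB phase-noise profile (dBc/Hz) onto a log-spaced
        # frequency grid spanning the integration limits.
        f = np.logspace(np.log10(f_lo), np.log10(f_hi), 4000)
        L = np.interp(np.log10(f), np.log10(np.asarray(offsets_hz, float)),
                      np.asarray(L_dbc_hz, float))
        p = 10.0 ** (L / 10.0)                               # dBc/Hz -> 1/Hz
        area = np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(f))   # trapezoidal integral
        # Factor of 2 counts both sidebands; dividing by the carrier's angular
        # frequency converts integrated phase (radians rms) to seconds rms.
        return np.sqrt(2.0 * area) / (2.0 * np.pi * f_clk_hz)

    # Hypothetical 1 GHz sampling clock, integrated from a 100 Hz offset to Fs/2.
    offsets = [100, 1e3, 10e3, 100e3, 1e6, 10e6, 500e6]
    L_dbc   = [-80, -100, -120, -140, -150, -155, -155]
    print(rms_jitter(offsets, L_dbc, f_clk_hz=1e9, f_lo=100.0, f_hi=500e6))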

Often the broadband phase noise of a high-quality clock source falls below the system noise floor long before reaching the Nyquist frequency (Fs/2). In this case, the noise floor dominates the jitter computed over the Nyquist bandwidth. However, the close-in phase noise of some sources rises significantly at small offsets from the carrier (Fs) and can have a meaningful impact on the jitter. In that case, a low-frequency limit for the phase noise measurement should also be specified, unless the jitter contributed by the close-in phase noise is insignificant in the system.
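Continuing the sketch above, the effect of the low-frequency limit is easy to check by integrating the same (arbitrary) example profile from two different starting offsets; when the result changes appreciably, that limit needs to be stated along with the jitter number.

    # Compare two lower integration limits for the example profile above.
    for f_lo in (100.0, 10e3):
        tj = rms_jitter(offsets, L_dbc, f_clk_hz=1e9, f_lo=f_lo, f_hi=500e6)
        print(f"f_lo = {f_lo:g} Hz  ->  rms jitter = {tj * 1e15:.0f} fs")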