In broad terms, a software-defined radio (SDR) is any radio in which part of the signal chain is implemented in software. In practical terms, it will have some or all of these features: wideband, multiband, multimode, multirate, and software-reconfigurable operation, with digital conversion (on receive or transmit) placed as close to the antenna as possible. Note that this description also fits RF instruments such as a modern signal (spectrum) analyzer.
E-Systems (now Raytheon) in Garland, Texas, is believed to have built the first software-defined baseband receiver in 1984. The first software-defined baseband transceiver was probably the WSC-3(v)9, built by E-Systems’ St. Petersburg, Florida, division circa 1987 for Patrick AFB. Hazeltine and Motorola developed a later radio, SpeakEasy, for Rome AFB circa 1989. Modern examples include satellite and terrestrial radio, the military Joint Tactical Radio System (JTRS), and almost any cellular or land mobile radio terminal or base station.
For digital conversion and signal processing to work, theory says we should have a linear, time-invariant system. Practice tells us that putting the analog pieces together is a series of compromises. By careful component selection and gain distribution, however, you can maximize the dynamic range of your SDR while still maintaining sensitivity. Whether the SDR is a communications base-station receiver or a signal analyzer, the same rules apply.
In some standards-based communications systems, such as cellular systems, SDRs operate in a controlled environment; the standard spells out the requirements for the receiver and transmitter, and the carrier adds margin to the standard's requirements. In other systems, such as military, amateur, and land mobile radio, the environment is uncontrolled: the nearest emitter can be next door and the farthest can be less than a whisper over the horizon.
Thus, when starting a design, you need to develop a checklist:
- What does the standard require?
- What are the desired minimum and maximum signal levels?
- How much filtering is needed?
- What image filter, channel filter, and antialiasing filters are available?
- Is group delay in the filters an issue?
- What architecture are you using? Zero IF, single, dual, or triple conversion?
- How are you generating quadrature?
- Are you working in the analog or digital (IF sampling) domains?
Choosing an ADC is a discussion in and of itself. The dynamic range of the ADC determines the system architecture, and vice versa. Start by looking at the signal bandwidth and sampling frequencies (the exact sampling frequency is usually determined by digital signal processing requirements such as clock rate, frame rate, or both). To get the full SNR of the ADC, especially when sampling high input frequencies, ask whether you can generate a clock clean enough to sample at the desired frequency without degrading the ADC's specified SNR. For the system to behave as a linear, time-invariant system, the ADC must provide enough dynamic range to accommodate the desired signal plus interferers, with added margin for signal fading and AGC response time.
So how much dynamic range is enough? The highest performance software-defined radios (and RF lab instruments) typically use 14-bit to 16-bit, high speed ADCs to sample at as high a frequency as possible for signals as wide as 250 MHz. For testing the widest signals in standards such as the alphabet soup that is 802.11, the industry preference is to use dual high speed ADCs such as the 14-bit AD9680 for quadrature sampling of I/Q signals at baseband with I/Q bandwidths of 500 MHz or more. Some applications need less dynamic range and will typically use a 12-bit GSPS ADC such as the AD9625 to grab a 500 MHz wide chunk of spectrum and bring it down to baseband using an integrated digital downconverter.
The dynamic range of the ADC is the fundamental limit on the dynamic range of the entire receiver.
It may seem obvious, but you must design the receiver to keep the input to the ADC linear under all operating conditions. This requires incorporating the AGC's response time into the ADC's margin: allow a certain number of dB to account for the change in the input signal while the AGC is reacting, so that a sudden change in signal level does not overload the receiver.
You may also want to add margin for signal fading, whether due to atmospheric conditions at lower frequencies or to blockage by buildings or foliage at UHF and microwave frequencies. On top of these, you also need to include the C/N ratio required for demodulation, plus margin for adjacent- and alternate-channel interferers and possible PA feedthrough in full-duplex systems.
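To see how these allowances combine, here is a minimal back-of-the-envelope sketch in Python; every number in it is an assumed placeholder, not a value taken from any standard or data sheet.

```python
# Rough receiver dynamic-range budget (illustrative numbers only).
# None of these figures come from a specific standard; they simply
# show how the individual margins stack up.

cn_required     = 12.0   # C/N needed by the demodulator, dB
fading_margin   = 20.0   # allowance for multipath/atmospheric fading, dB
agc_margin      = 10.0   # signal change while the AGC is still settling, dB
interferer_room = 30.0   # adjacent/alternate-channel interferers, dB
pa_feedthrough  = 6.0    # PA leakage in a full-duplex system, dB

required_dynamic_range = (cn_required + fading_margin + agc_margin
                          + interferer_room + pa_feedthrough)
print(f"Required instantaneous dynamic range: {required_dynamic_range:.1f} dB")

# Compare against the ADC: an ideal N-bit converter has SNR of about 6.02*N + 1.76 dB.
adc_bits = 14
ideal_adc_snr = 6.02 * adc_bits + 1.76
print(f"Ideal {adc_bits}-bit ADC SNR: {ideal_adc_snr:.1f} dB")
```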
Another thing to keep in mind is that narrow-band receivers have a wider AGC range than wideband receivers. Wideband receivers basically move a large slice of spectrum up or down a bit in amplitude—typically less than 10 dB—to keep it centered in the ADC’s linear window, as would be the case when digitizing an entire cellular band. In contrast, narrow-band receivers rely heavily on filtering to minimize the number of signals in the pass band, but must accommodate larger interferers, generally in uncontrolled environments, and their AGC works on signals in the much narrower pass band.
When you set up a cascaded noise figure and intercept model for a receiver design, you have to model the system three times: once for the minimum signal level (AGC off, maximum gain); again for the maximum signal level (AGC on, maximum gain reduction); and finally for the receiver's nominal input level. You also have to account for intermodulation effects in all three models. Fortunately, free tools such as ADI's ADIsimRF (Figure 1) will help do this for you; such tools often come with a built-in library of models for RF gain blocks, mixers, attenuators, baluns, filters, and high speed converters.
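For reference, the core of what such cascade tools compute is the Friis noise-figure equation together with the standard cascaded input-IP3 approximation. The short sketch below shows the arithmetic; the three-stage chain and all of its gain, NF, and IIP3 numbers are invented for illustration only.

```python
import math

def cascade_nf_iip3(stages):
    """Cascaded noise figure (Friis) and input IP3 for a list of stages.

    Each stage is (gain_dB, nf_dB, iip3_dBm). All values are illustrative,
    not data for any particular part.
    """
    gain_ahead = 1.0   # linear power gain of the stages ahead of the current one
    f_total = 1.0      # cascaded noise factor (linear)
    inv_iip3 = 0.0     # 1/IIP3 accumulator (1/mW), referred to the chain input
    for gain_db, nf_db, iip3_dbm in stages:
        g = 10 ** (gain_db / 10)
        f = 10 ** (nf_db / 10)
        iip3 = 10 ** (iip3_dbm / 10)
        f_total += (f - 1.0) / gain_ahead   # Friis term: later stages reduced by preceding gain
        inv_iip3 += gain_ahead / iip3       # refer this stage's IP3 to the chain input
        gain_ahead *= g
    return 10 * math.log10(f_total), 10 * math.log10(1.0 / inv_iip3)

# LNA -> mixer -> IF amplifier, AGC off (maximum gain); numbers are made up.
nf_db, iip3_dbm = cascade_nf_iip3([(18.0, 1.2, 5.0), (-7.0, 7.0, 20.0), (20.0, 4.0, 18.0)])
print(f"Cascaded NF = {nf_db:.2f} dB, cascaded IIP3 = {iip3_dbm:.2f} dBm")
```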
Frequency planning is another fun issue that requires extensive analysis. Not only do you need a mixer table (Figure 2) for each mixer, but you may also want to create a similar DAC table for the transmit path. You also have to consider which Nyquist zone to operate the converter in (either ADC or DAC). The system clock is often a multiple of the frame rate (which is why multiples of 1.2288 MHz and 13 MHz are commonplace). With luck, you can use a frequency high enough that its harmonics do not fall into a band or on a signal of interest. You need to minimize internal and external interference by careful selection of the system clock, intermediate frequencies, and local oscillator (LO) frequencies because these frequencies will mix with unforeseen consequences.
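The sketch below shows the kind of check a mixer table encodes: enumerate the m × LO ± n × RF products up to some order and flag any that land in the IF pass band. The frequencies, pass band, and maximum order are assumed values chosen only to illustrate the method.

```python
# Sketch of a mixer spur-table check: flag any |m*LO - n*RF| or (m*LO + n*RF)
# product, up to a chosen order, that lands in the IF pass band.
# Frequencies below are placeholders chosen for illustration.

rf, lo = 280.0e6, 210.0e6           # desired RF and low-side LO, Hz
if_center, if_bw = 70.0e6, 10.0e6   # wanted IF = rf - lo, and IF filter width, Hz
max_order = 5

hits = []
for m in range(max_order + 1):
    for n in range(max_order + 1):
        if (m, n) in ((0, 0), (1, 1)):
            continue                # skip DC and the desired 1x1 products
        for product in (abs(m * lo - n * rf), m * lo + n * rf):
            if abs(product - if_center) <= if_bw / 2:
                hits.append((m, n, product))

if hits:
    for m, n, f in hits:
        print(f"{m}xLO, {n}xRF product at {f/1e6:.1f} MHz falls in the IF pass band")
else:
    print(f"No spurious products up to order {max_order} fall in the IF pass band")
```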
Once you have set up your cascaded noise figure and intercept model for the number of stages and types of functions (filters, mixers, amplifiers, and so on), you need to perform some side calculations.
First, you’ll need to calculate the ADC’s noise figure (NF) using the equation

NF = PFS - SNR - 10 log10(B) + 174 dBm/Hz

where PFS is the ADC’s full-scale input power in dBm, PFS (dBm) = 10 log10[PFS (mW)/1 mW], SNR is the ADC’s signal-to-noise ratio in dB, and B is the bandwidth being digitized in Hz, taking into account the noise bandwidth of the input filter (Figure 3).
Note that this bandwidth may be wider than the signal bandwidth if you want to digitize the desired signal plus interferers in order to filter the interferers out in the digital domain.
Fortunately, you can improve an ADC’s noise figure by oversampling the input signal; in this case, the equation for noise figure becomes

NF = PFS - SNR - 10 log10(B) - 10 log10[fs/(2B)] + 174 dBm/Hz

where fs is the sample clock and B is still the signal bandwidth (or bandwidth to be digitized). Some IF sampling ADCs, such as the AD9874 and AD9864, use both oversampling and noise shaping in a band-pass Σ-Δ architecture. These ADCs are actually complete IF subsystems, accepting an IF input and providing SNRs approaching 100 dB and 16 or 24 bits of decimated I/Q data at their output.
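As a quick sanity check, both expressions can be evaluated in a few lines of Python. The full-scale power, SNR, bandwidth, and sample rate below are assumed, illustrative values rather than the specifications of any particular converter.

```python
import math

def adc_nf_db(pfs_dbm, snr_db, bw_hz, fs_hz=None):
    """ADC noise figure per the expressions above.

    pfs_dbm : full-scale input power in dBm
    snr_db  : ADC SNR in dB
    bw_hz   : noise bandwidth being digitized, Hz
    fs_hz   : sample rate, Hz; if given, the oversampling (processing-gain)
              term 10*log10(fs / (2*B)) is credited.
    """
    nf = pfs_dbm + 174.0 - snr_db - 10.0 * math.log10(bw_hz)
    if fs_hz is not None:
        nf -= 10.0 * math.log10(fs_hz / (2.0 * bw_hz))
    return nf

# Illustrative numbers (not from any data sheet): +4 dBm full scale,
# 73 dB SNR, a 20 MHz channel, sampled at 245.76 MSPS.
print(adc_nf_db(4.0, 73.0, 20e6))              # digitized bandwidth alone (conservative)
print(adc_nf_db(4.0, 73.0, 20e6, 245.76e6))    # with oversampling gain credited
```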
Oversampling isn’t the only way to improve the ADC’s noise figure. You can also use a transformer to increase the ADC’s input voltage with noiseless gain, as shown in Table 1 for the AD6645.
| Number of Poles | Ratio of Noise Bandwidth to 3 dB Bandwidth |
|---|---|
| 1 | 1.57 |
| 2 | 1.11 |
| 3 | 1.05 |
| 4 | 1.03 |
| 5 | 1.02 |
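To apply the table, multiply the antialiasing filter's 3 dB bandwidth by the tabulated ratio and use the result as B in the noise-figure expressions above; for example (assuming a 2-pole, 20 MHz filter):

```python
# Effective noise bandwidth from the table above: B_noise = ratio * B_3dB.
noise_bw_ratio = {1: 1.57, 2: 1.11, 3: 1.05, 4: 1.03, 5: 1.02}

poles, bw_3db_hz = 2, 20e6                    # assumed 2-pole, 20 MHz filter
b_noise = noise_bw_ratio[poles] * bw_3db_hz
print(f"Noise bandwidth = {b_noise/1e6:.1f} MHz")   # use this B in the NF equations
```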
The last side calculation you need to do is for the conversion clock. The ability (or lack thereof) to generate low jitter clocks is why ADCs are not yet placed at the antenna for high dynamic range receivers, as we shall see below. The equation for an ADC’s theoretical SNR as a function of clock jitter is SNR = 20 log10[1/(2πftj)], where f is the frequency being sampled in Hz and tj is the clock jitter in seconds.
Alternately, given a desired SNR in dB and maximum input frequency f in Hz, tj = 1/(2πf × 10^(SNR/20)).
When IF sampling, for example, if you want to sample a 20 MHz wide signal at an IF of 240 MHz, the highest frequency component being sampled is the IF plus half the signal bandwidth, or 250 MHz. Solving for 80 dB SNR at a maximum input frequency of 250 MHz, this yields a maximum clock jitter of 63.66 fs. You can simulate the performance of a PLL/VCO and calculate the jitter for various types of loop filters and circuit configurations using such free tools as ADIsimPLL™ and ADIsimCLK™. Using these tools, you can optimize the design of a loop filter for the best phase noise and thus minimum jitter at the expense of increasing the filter’s settling time—generally not a problem for a fixed frequency clock for a high speed ADC.
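The two jitter expressions are easy to verify numerically; the short sketch below reproduces the 63.66 fs result for 80 dB SNR at a 250 MHz maximum input frequency.

```python
import math

def snr_from_jitter_db(f_hz, tj_s):
    """Jitter-limited SNR: SNR = 20*log10(1 / (2*pi*f*tj))."""
    return 20.0 * math.log10(1.0 / (2.0 * math.pi * f_hz * tj_s))

def jitter_from_snr_s(f_hz, snr_db):
    """Maximum clock jitter for a target SNR: tj = 1 / (2*pi*f*10^(SNR/20))."""
    return 1.0 / (2.0 * math.pi * f_hz * 10.0 ** (snr_db / 20.0))

# The worked example above: 80 dB SNR at a 250 MHz maximum input frequency.
tj = jitter_from_snr_s(250e6, 80.0)
print(f"Maximum clock jitter: {tj*1e15:.2f} fs")        # ~63.66 fs
print(f"Check: {snr_from_jitter_db(250e6, tj):.1f} dB SNR")
```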
Thus far, we have reviewed just some of the circuit elements, calculations, and simulation tools that go into maximizing the dynamic range of a software-defined radio, with emphasis on the performance of the ADC and frequency planning. Due to space limitations, we’ve ignored the effects of gain and gain distribution on distortion, but that can be a topic for another day.