I. INTRODUCTION
The progression of data converter architectures and performance draws much attention from the scientific community [1-5]. New converter architectures and techniques emerge from time to time in response to diverse application requirements. Some of the new architectures evolve alongside time-tested ones, such as successive approximation ADCs, pipelined ADCs, and resistor-string DACs, while others do not last long, following a process somewhat akin to Darwinian selection.
Some innovation is driven purely by intrinsic converter technology challenges: for example, the need to mitigate linearity limitations associated with device matching, or those due to the impact of some finite transistor parameter. The intention, in these cases, is to push forward the converter's dynamic performance or to improve its energy efficiency.
In other cases the innovation drivers are rather more extrinsic to the converters themselves. These include, for example, the need to integrate ADCs/DACs in SoCs/SiPs so that their area or power fits certain constraints, or the need to efficiently interface the data converters to sensor/RF/mixed-signal functionality or to embed them with digital processing in complex signal chains.
In yet other circumstances, intrinsic and extrinsic drivers mix. Such is the case for the need to make converters viable in a finer lithography, which may, in turn, introduce new device and interconnect challenges.
Such a variety of requirements and underlying conditions leads to many completely different types of converters. It can challenge designers' ability to objectively assess and compare dissimilar architectural options, and it can make it hard to develop a consistent taxonomy for guiding solution selection.
One way to discriminate is to assess the power efficiency with which a given converter performs its function. Power efficiency is generally assessed and tracked by means of a couple of popular figures of merit (FOMs) [1, 3-4].
FOMs are meant for a quick comparison between similar ADCs/DACs and do, indeed, capture fundamental trade-offs between power consumption, signal bandwidth and spectral purity. But, over time, FOMs have also been employed to highlight performance trends and to point to architectural strengths and shortcomings. In some cases FOMs have nearly been promoted to the rank of another design specification, the deliberate optimization of which may end up being rewarded with a scientific publication. Such unintended effects of FOMs are being acknowledged by the technical community [7].
But new points in a FOM scatter plot regularly emerge as the result of what designers happen to be working on, which is influenced by application and business dynamics. So the emergence of new points should not be confused with an indication of what converter technology could do in an absolute sense: some level of correlation between a FOM trend and technology potential should not be hastily confused with causation.
With that in mind, in this paper, two classes of emerging converter architectures and techniques are reviewed: time-domain converters and compressive sensing converters. Neither of them quite aligns with the FOM lens, but both deserve attention from the data converter technical community. The paths that these innovative architectures open and tread are justified by a diverse set of objectives, the understanding of which can help guide the next steps.
What is covered here makes no pretense of being exhaustive, and references are provided for the reader to deepen many of the subjects. Rather, this paper attempts to bring the attention of the technical community to these interesting cases while offering some original observations about them.
This paper is organized as follows. Section II discusses how data converter innovation happens as a symbiosis between application needs and technology progression, and where the increasing popularity of power efficiency FOMs can introduce unnecessary blinders. Section III discusses time-domain converters and provides conjectures for their future evolution. Section IV discusses compressive sampling and provides a brief survey of the most recent architectural breakthroughs. Some conclusions are drawn in Section V.
II. PROGRESSION IS SELDOM A STRAIGHT LINE
A. What can be overlooked when focusing too much on FOMs?
Before discussing emerging converters, it is worth pointing out what a FOM focus risks obscuring.
A commonly used ADC FOM is the so-called Schreier FOM, measured in dB/J (although the unit "Joules" is usually dropped) and defined as follows [1]:

$$\mathrm{FOM}_{S,hf} = \mathrm{SNDR}_{hf} + 10\log_{10}\left(\frac{BW}{P}\right)$$

where SNDR_hf is the signal-to-noise-and-distortion ratio, in dB, measured for high-frequency inputs (hence the subscript hf in the FOM symbol), P is the corresponding power consumption, expressed in watts, and BW is the input signal bandwidth, measured in Hz. BW is generally assumed to be equal to the sample rate fs divided by the oversampling ratio OSR; this definition allows comparing Nyquist converters (for which BW = fs/2) and oversampled converters together [1]. A scatter plot based on ADCs published during the last twenty years at the ISSCC and VLSI conferences is depicted in Fig. 1 [6].
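As an aside, the FOM computation itself is trivial; the following minimal Python sketch (with made-up example numbers, not data from any published design) shows how a point of Fig. 1 would be computed:

```python
import math

def schreier_fom_hf(sndr_hf_db: float, bw_hz: float, p_watts: float) -> float:
    """Schreier FOM in dB: SNDR_hf + 10*log10(BW/P), following [1]."""
    return sndr_hf_db + 10.0 * math.log10(bw_hz / p_watts)

# Hypothetical converter: 70 dB SNDR over 100 MHz bandwidth at 50 mW.
print(round(schreier_fom_hf(70.0, 100e6, 50e-3), 1))  # 163.0 dB
```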
While this shows a comprehensive landscape of what has been published at these two conferences, it is easy to notice that the majority of new data points (those indicated by the squares and diamonds) correspond to the highest-bandwidth ADCs aligning to the diagonal dashed-line asymptote known as the "technology front". A similar distribution of new data points is found, year after year, with newer points pushing the dashed asymptotes toward wider bandwidth and higher FOM. In fairness, not all papers accepted at these conferences need to establish a substantially better FOM, provided that valuable innovations are demonstrated in other important dimensions, as seen by the few new points lying far away from the crowd along the dashed line.
However, this picture, while insightful from an energy-efficiency perspective, should be used carefully. A contrarian view is that it does not represent a truly conclusive innovation dashboard for this field and that it could even be misleading: while quantitative and objective, such a representation misses relevant architectural innovation that is either not submitted for publication in the first place, or that, while attacking other valuable problems, does not stand out in FOM and therefore risks being overlooked or not further developed. Let us consider some counter-examples to the FOM view.
For example, a number of companies developing, among other things, innovative high-speed data converters embedded in more complex systems simply do not publish. That is true both for commercial applications in ultra-wideband optical, wired and wireless infrastructure communication systems, and for defense and space applications (it should be noted that for defense-related applications there are, in fact, specific norms barring publication). Non-CMOS technologies, such as heterogeneous or optical technologies, are also sometimes used for these applications; these allow handling signal bands that, at any given time, can be an order of magnitude beyond the technology front of Fig. 1.
There are also cases where the electronics is allowed to use as much power as needed to meet ambitious performance objectives. For these, the FOM or the physical size would not compare favorably with what is shown in Fig. 1. While these are outliers, if their points were added to the scatter plot, they might distort the regularity of the distribution in Fig. 1.
Also, as noted earlier, the horizontal asymptote, known as the "architecture front", does not see many new points added year-on-year. This may suggest stale innovation in low-bandwidth ADCs1. In reality, there is a lot of relevant converter innovation in narrow-band applications that does not necessarily aim to optimize the FOM. As a matter of fact, the vast majority of commercial converters developed each year process much lower bandwidths than those close to the "technology front". Many remarkable such ADCs are introduced yearly, often termed "precision converters" (low bandwidth, high dynamic range), attacking very important application problems in very innovative ways, but they are intentionally seldom disclosed in publications. These converters rely on proprietary circuit and algorithmic techniques and leverage special process technology capabilities to achieve very high linearity and noise performance. All of these forms of innovation are protected by trade secrets and patents, and it is often deemed counterproductive to give them high visibility in the open literature. Expectably, none of these specifics will intentionally be disclosed here, although the interested reader can confirm such assertions by an in-depth browsing of the relevant cases, publicly available on the US and European patent office websites.
In conclusion, FOMs are very useful tools if used with care. However, conversion efficiency is only one lens for looking at converters' progress. First, overly emphasizing conversion efficiency at prestigious conferences will inevitably incentivize gregarious lines of research at the expense of other important directions. Secondly, and no less important, FOM-based trends can miss important industrial innovation.
B. When an application requirement triggers a major turn
Trends in application areas are a main driver, and the progression of signal specifications can change quite dramatically within an application space, hence forcing technology dislocations.
For instance, high speed converters are often required in cellular wireless infrastructure systems [10]. About six or so years ago, a receive signal path for a cellular base-station (BTS) would have been required to process a signal such as a multi-carrier GSM channel with an RF bandwidth BW of 75MHz or a CDMA channel with RF BW=100MHz. The prior generation's requirement was about 40MHz, while about three years later, a subsequent generation of BTSs required an RF bandwidth of BW=200MHz. Today, consensus on so-called fifth generation (5G) systems is for BTSs [12] to be capable of processing RF bands of BW=1GHz~1.2GHz.
So if an ADC is used to digitize baseband in a homodyne receive scheme, its sample rate would need to roughly double when going from the 40MHz generation to the 100MHz generation, and then double again to enable the 200MHz generation. But the following ADC generation would require a sample rate five to six times higher than its predecessor to process a 1~1.2GHz band. So while in the previous cases a suitable process technology transition for nearly the same ADC architecture could meet the requirement, in the last case a substantial architectural change is indispensable.
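The arithmetic behind this observation can be made explicit. The sketch below (Python, using the bandwidth figures quoted above and treating the required sample rate as simply proportional to RF bandwidth, which is a simplification) shows why the last transition is qualitatively different:

```python
# Successive BTS RF bandwidth generations quoted above (Hz);
# 1.1 GHz is taken as the midpoint of the 1-1.2 GHz 5G requirement.
bw_generations = [40e6, 100e6, 200e6, 1.1e9]

# The required ADC sample rate scales roughly in proportion to the RF
# bandwidth, so the generation-to-generation ratio indicates how
# disruptive each transition is to the converter architecture.
for prev, nxt in zip(bw_generations, bw_generations[1:]):
    print(f"{prev/1e6:.0f} MHz -> {nxt/1e6:.0f} MHz: x{nxt/prev:.1f}")
# x2.5 and x2.0 can be absorbed by a process node transition;
# the final x5.5 is what forces an architectural change.
```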
Continuing with the very same application space, converter requirements progression can actually get even less linear than in the previous example. For instance, if the popular heterodyne receive scheme is considered, the ADC can be used to digitize the desired communication channel with band BW but centered at an intermediate frequency fIF, rather than at baseband/zero IF. In the 100MHz BW systems generation, such an IF frequency was commonly chosen between 150MHz and 350MHz. In the 200MHz systems generation some BTS designs have moved their fIF to slightly higher frequencies. So, again, a sample rate doubling is very challenging but not necessarily disruptive to the adopted converter architecture.
However, in some more recent cases, the requirement on the input signal for the ADC has moved to much higher frequency. Namely, the RF-to-IF frequency down-conversion is moved from the analog domain, in front of the ADC, to the digital domain, right after digitization. In other words, the 200MHz wideband signal that the ADC needs to sample is no longer centered at a few hundred megahertz; it is now located at a few GHz. And while under-sampling is a possible avenue, the demand is to use the first Nyquist band for acquisition. As for 5G cellular communication, designers distinguish between sub-6GHz systems, where the RF channel is placed below 6GHz, and millimeter-wave systems, where the channel is located between 29GHz and 32GHz or so [12]. So, for example, while a 10-12GSPS ADC could be used as an RF digitizer [11] in the receive path of a sub-6GHz system, and doubling fS to 20-24GSPS could provide some incremental advantage in processing gain and in analog filtering requirements, a completely different approach is needed for the millimeter-wave systems.
Additionally, one of the other technologies required by 5G communication systems is beamforming. The ability to establish a spatially directed receive/transmit communication link between certain mobile devices and the BTS is obtained via phased arrays of antennas, each of which may have its own RF/mixed-signal chain. While processing power efficiency is certainly very important (FOM), the size and weight of the electronics introduce very restrictive conditions on the system design that trickle down to the data converters as well. Converter architectures that can be very compact in area, that scale well with nanometer process technologies, and that can then be integrated in large channel counts are receiving substantial attention. That includes classic SAR ADC architectures, but it also includes emerging classes of converters such as the time-to-digital and digital-to-time converters discussed in the following sections.
C. When a converter breakthrough is an enabler
The innovation cycle doesn't simply work in the direction of an application challenge driving an engineering solution. It also works in the opposite direction, when a technology breakthrough enables an application that wasn't practical or conceivable before.
For instance, while trimming has been fairly common practice in precision analog circuits for many decades, despite much research, self-calibration has truly become mainstream in industrial data converter design only in the last fifteen years or so. Self-calibration techniques have allowed designers to substantially relax analog design trade-offs among matching, area, noise and linearity, power consumption, and speed [8, 1]. Because of that, in the mid-2000s there was a rapid expansion in converter architecture innovation, significantly pushing the performance fronts forward in multiple directions, particularly in CMOS processes [1]. First, 8-10b ADCs went from sample rates of a few hundred MSPS to well into the GSPS range, thanks to a combination of substantial circuit size reduction (calibration correcting for matching limitations, hence allowing size reduction and hence speed acceleration) and simple two-way ("ping-pong") interleaving. Then further improvements in core self-calibration, plus higher-order time-interleaving (8 sub-ADCs or more) assisted by channel mismatch calibration, enabled Nyquist-rate 12-14b ADCs to break the GSPS speed barrier as well [1, 2, 11]. Different self-calibration techniques were employed in continuous-time ΔΣ ADCs to control parametric spread in the loop filters and in the feedback delays, and to linearize the feedback DACs, hence allowing such architectures to digitize hundreds of MHz of signal band centered at frequencies all the way up to the low-GHz range [10].
As a result, considering again the examples in the previous section, cellular wireless communication systems have been positively impacted by the ability to employ RF digitization and synthesis. That has made it possible to move much of the modulation/demodulation functionality from the analog/RF domain into the digital domain, with substantial benefits to integration, flexibility/programmability, development time etc.
Similarly, the substantial reduction in size and power enabled by new self-calibration techniques has allowed considerable miniaturization/integration in medical instrumentation systems, where data converters also constituted one of the bottlenecks, hence enabling the creation of affordable portable health monitoring systems such as ultrasound systems etc. with an appreciable benefit for our well-being.
Finally, while the philosophy of development of analog systems has traditionally been to design for best performance, leaving to trimming and calibration the role of making up for manufacturing imperfections, recent advances in self-calibration are rapidly changing this strategy. Looking ahead, deeper analog-digital co-design is anticipated. For example, in order to further overcome power/speed limitations, the architectural preference may go to converters that, while characterized by high but predictable and correctable nonlinearity, enable substantially higher speed, lower power or smaller area, leaving to self-calibration and software algorithms the task of linearization [1-3, 37].
III. TIME-TO-DIGITAL (TDC) AND DIGITAL-TO-TIME (DTC) CONVERTERS
A. Justification for exploring time-domain data converters
MOS device scaling is accompanied by voltage supply scaling. Difficult trade-offs between signal headroom, noise, linearity, bandwidth, power consumption and device matching introduce limitations to the performance of voltage-domain analog circuits; data converters included [8].
In the early nineties, in response to the shrinking headroom issue for the voltage-mode signal swing, researchers explored current-mode circuits [9]. But while a hard ceiling on the current range isn't always immediately explicit, currents and voltages are tied to one another by finite node impedances. Inevitably, the original boundary conditions on voltage-mode processing led to homologous challenges in current-mode systems. Moreover, many of the signal sources, sensors and actuators are voltage-mode devices, hence making the voltage-to-current and current-to-voltage transducers the inevitable new bottlenecks2.
Meanwhile, although the pace of reduction in supply voltages has since slowed down, the voltage headroom problem has not gone away. Analog designers have begun looking at another analog variable that can be used to represent and process information: time intervals3. Time-domain circuits such as phase-locked loops (PLLs) and delay-locked loops (DLLs) are mature architectures, and seminal work in time-domain data converters can be traced back many decades. Time-to-digital converters (TDCs) and digital-to-time converters (DTCs) have actually been important functional blocks for digital and semi-digital timing/clock systems [1].
B. TDCs/DTCs primitive circuits
Two of the most important analog circuit primitives for processing time are the CMOS inverter and the D-type edge-triggered flip-flop (DFF) [1, 13]. The voltage/current-domain signals processed by TDCs/DTCs generally have an approximately rectangular or, especially at high frequency, a distorted-sine shape. What really matters, though, is not their shape but when such signals cross a pre-established set of thresholds, hence determining the instant of transition from 0 to 1 or from 1 to 0. Such a transition instant is liberally referred to as the "zero crossing" time.
In TDCs/DTCs, the CMOS inverter is often current-starved in order to be able to adjust its gate delay by means of a control current IC or a control voltage VC, and it is employed to realize a voltage-controlled delay unit (VCDU) as in the example depicted in Fig. 2 [13]. The input is represented by a signal ϕin, while the output is a signal ϕout. The control variable, in this example VC, can vary the net gate delay 𝛥T. The small-signal gain G𝜙 at the VCDU's quiescent point on the voltage-to-time characteristic determines the ability of this primitive to process time [13].
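In behavioral terms, and ignoring noise and nonlinearity, the VCDU can be sketched as a delay that varies linearly with the control voltage around its quiescent point; the numbers below are hypothetical placeholders, not values from Fig. 2 or [13]:

```python
def vcdu_delay(vc: float, dt0: float = 50e-12, g_phi: float = -20e-12) -> float:
    """Linearized VCDU model: net gate delay dT (seconds) versus the
    control voltage VC (volts), with quiescent delay dt0 at vc = 0 and
    voltage-to-time gain g_phi in s/V (both placeholder values)."""
    return dt0 + g_phi * vc

# A 100 mV control step shortens the delay by |g_phi| * 0.1 = 2 ps here.
print(vcdu_delay(0.0), vcdu_delay(0.1))
```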
VCDUs like the one in Fig. 2, or alternate ones, especially those implemented in differential form, are building blocks for ring-oscillator VCOs and voltage-controlled delay lines, which are then used for continuous processing of time signals.
The other time-domain primitive is the D-type edge-triggered flip-flop (DFF), such as the one shown in Fig. 3. The DFF can be used as an analog primitive to realize a comparator function since, given two pulses, say 𝜙in and 𝜙ref, fed to its D input and clock input respectively as shown in Fig. 3, it will return a logic 1 at its Q output when 𝜙in leads 𝜙ref (𝜙in < 𝜙ref) and 0 otherwise (𝜙in ≥ 𝜙ref).
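Behaviorally, the DFF's comparator role reduces to comparing two edge-arrival times; a minimal sketch (edge times as floats, metastability around 𝜙in ≈ 𝜙ref ignored):

```python
def dff_time_compare(t_in: float, t_ref: float) -> int:
    """Time-domain comparator per Fig. 3: logic 1 when the phi_in edge
    arrives before the phi_ref edge, logic 0 otherwise."""
    return 1 if t_in < t_ref else 0

print(dff_time_compare(1.2e-9, 1.5e-9))  # 1: phi_in leads phi_ref
```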
The VCDU and the DFF can then be used to build a large variety of TDCs and DTCs, as exemplified by the many architectures described in excellent tutorials such as [13] and [14].
A rather simple one is the time-domain flash ADC depicted in Fig. 4. Here the VCDU's gate delay 𝛥T is used to set the time-domain comparator thresholds and hence determines the quantum size and the nominal resolution of the converter. Finer quanta can be obtained by phase interpolation between two delay elements' outputs or by using a sliding "time vernier" obtained by introducing a second, time-shifted, servo-ed voltage-controlled delay line [1, 13]. These techniques, however, introduce additional complexity, area, power consumption, noise and linearity issues that need to be carefully managed.
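A behavioral sketch of such a flash quantizer follows, with the delay line modeled as ideal (no mismatch between cells, no jitter, no metastability); the parameter values are illustrative only:

```python
def time_flash_tdc(t_in: float, dt: float, n_bits: int) -> int:
    """Idealized time-domain flash: tap k of the delay line adds a total
    delay of (k+1)*dt, and a DFF at each tap flags whether the input edge
    arrives after that tap's reference edge (a thermometer code), which
    is then summed into a binary output code."""
    levels = 2**n_bits - 1
    thermometer = [1 if t_in > (k + 1) * dt else 0 for k in range(levels)]
    return sum(thermometer)

# 3-bit example with a 50 ps quantum: a 180 ps input interval -> code 3.
print(time_flash_tdc(180e-12, 50e-12, 3))
```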
Another common way to realize a TDC consists of building a ring-oscillator VCO using VCDUs, with the analog input voltage controlling the VCDUs so that the frequency of oscillation of the VCO depends on the input to be digitized. Finally, a counter, or an array of DFFs properly connected to the output phases of the ring oscillator, is used to map the oscillator frequency to a digital representation of the analog input [1, 13, 14]. More advanced "VCO-ADC" architectures use such VCOs as quantizers embedded in time-domain or classic voltage-domain delta-sigma modulators [14].
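The counting operation can likewise be sketched behaviorally; the linear tuning law and its coefficients below are assumptions for illustration, not data from [13, 14]:

```python
import math

def vco_tdc_code(v_in: float, t_window: float,
                 f0: float = 1e9, kvco: float = 500e6) -> int:
    """Idealized VCO-based quantizer: the input voltage tunes the ring
    oscillator frequency (f0 + kvco*v_in, an assumed linear law) and a
    counter over one sample window maps frequency to a code. The truncated
    fractional period is the quantization error; carrying it over between
    windows is what gives VCO-ADCs first-order noise shaping."""
    return math.floor((f0 + kvco * v_in) * t_window)

print(vco_tdc_code(0.5, 1e-6))  # 0.5 V over a 1 us window -> 1250 counts
```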
C. So, do TDCs work then?
How well do TDCs perform compared with traditional ADCs? The aperture plot and the energy plot for all ADCs, traditional and time-domain, published at ISSCC and the VLSI Symposium during the last twenty years are shown in Fig. 5 [6]. In these two plots, the TDCs are highlighted using black squares. Some of the latter data points also include a few hybrid TDCs, namely ADC architectures that mix traditional voltage-mode circuits with time-domain sub-blocks. Moreover, the most recent data points tend to be those closer to the state-of-the-art lines (low-jitter contours, high in the aperture plot, and best figure-of-merit contours, low in the energy plot).
Overall, the performance of the architectures published so far spans the medium-SNDR, medium-BW range. The energy efficiency isn't the most competitive, though recent data points show appreciable improvement in conversion efficiency. Indeed, as stated in Section II.A, one should be careful in drawing conclusions based only on inspection of these plots, particularly with respect to the question of what type of performance it may be possible to attain with TDCs.
More in-depth examination of the papers behind these data points suggests that, especially in the case of industrial publications, these tend to target signal bandwidths in the tens of MHz with SNDR in the mid-70s dB, presently finding application as embedded ADCs in mobile handset SoCs.
The one differentiating aspect that generally distinguishes TDCs is their substantial area compactness, making them very competitive with comparably performing but physically larger pipelined and SAR ADCs.
Another application space where TDCs are finding increasing use is as part of digital temperature sensors [1, 22-24] and other low-frequency/low-power sensing and digitization systems, including those for the Internet of Things (IoT). That is due to their combination of very high compactness, low power and low cost.
D. How well do TDCs/DTCs scale?
As stated above, one of the motivations for pursuing TDCs and DTCs as alternate data converter architectures relates to their scalability with CMOS process technology. Based on that, and considering the primitives of Figs. 2 and 3, some initial observations can be made.
First of all, the area of these primitives scales approximately with Moore's law, which is expected to continue to hold down to 7nm and likely beyond. That is a net advantage over traditional ADCs and DACs, since amplifiers, for example, don't shrink that well.
The minimum gate delay 𝛥Tmin of a VCDU is process technology dependent. Based on actual data reported in [15], and accounting for the non-smooth transition from planar MOS to FinFET occurring around 22nm, it is possible to estimate that 𝛥Tmin shortens from one CMOS node to the next with an approximate geometric progression of factor 1.15~1.2. But since a reduction in 𝛥Tmin directly relates to the TDC's quantization capability, this is a relatively modest improvement.
The gate switching energy, on the other hand, has a more aggressive scaling profile. Based on the trends shown in [15], we can estimate a relative energy reduction of about 1.52~1.55 times from one CMOS generation to the next. That is rather impactful to the efficiency of the data conversion process and tends to be greater than what most traditional ADC architectures experience for the same node transition. So the conversion efficiency of TDCs/DTCs benefits strongly from scaling.
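Compounding these two per-node factors over a few generations makes the asymmetry explicit; the node labels and midpoint factors below are only illustrative of the estimates above:

```python
# Per-node improvement factors estimated in the text from the trends in [15]
# (midpoints of 1.15~1.2 for delay and 1.52~1.55 for switching energy).
delay_factor, energy_factor = 1.175, 1.535
nodes = ["node N", "N+1", "N+2", "N+3"]  # generic labels, not specific nodes

dt_min, energy = 1.0, 1.0  # normalized to the starting node
for node in nodes:
    print(f"{node}: dT_min = {dt_min:.2f}, E_switch = {energy:.2f}")
    dt_min /= delay_factor
    energy /= energy_factor
# Over three node transitions, dT_min shrinks only ~1.6x while the switching
# energy drops ~3.6x: efficiency scales much faster than time resolution.
```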
But while reducing 𝛥Tmin can be used to the benefit of higher resolution in TDCs/DTCs, the phase noise on the zero crossing would still limit the realizable dynamic range. A concern valid until recent years was that while the MOSFET's transconductance gm improved at a faster pace than the supply dropped, reducing thermal noise, the flicker-noise 1/f corner substantially increased in frequency. Beyond 90nm, the latter could actually be the dominant contributor to phase noise. This required, for example, various forms of mitigation in different architectures employing CMOS delay lines and oscillators, depending on the resulting noise modulation mechanisms contributing to phase noise/jitter [16, 17].
But with the introduction of FinFETs, both the flicker and the thermal noise of the FETs have substantially improved over planar high-K gate MOSFETs (e.g. about 3dB better in 16nm FinFET than in 28nm planar MOS [18, 19]). This is very encouraging news. While, to the best of the author's knowledge, a quantitative assessment of the impact on TDCs hasn't yet been published4, it should be expected that all TDC architectures will see a net jitter improvement larger than the previously cited decrease in 𝛥T. If that is indeed the case, then this points to renewed potential for developing higher dynamic range TDCs.
IV. COMPRESSIVE SAMPLING ADCS
A. Justification for compressive sampling ADCs
While applications such as those in communication systems or in high-performance instrumentation deal with very active signals, sensing applications in health/vital monitoring, seismic/environmental monitoring and some industrial process control, among others, deal with signals that experience very little change for extended lengths of time, followed by short bursts of activity [24, 25]. There are also classes of signals (e.g. audio) that can be represented either by a few significant components in the frequency domain, or by limited events of activity in the time domain. Because of that, such signals are said to be "sparse": sparse in the frequency domain or sparse in the time domain, respectively. A paradigm based on classic Shannon sampling theory, with time-uniform sampling at a rate at least twice the highest frequency component, while completely valid, is not very efficient for sparse signals: it results in a very long sample series that, while capturing the signal, requires too many samples/data to deliver the desired information content. A mathematically accurate description of signal sparsity can be found in [26-28].
This issue of signal sparsity and its associated processing, while well known for several decades in many engineering disciplines (e.g. compression algorithms are ubiquitous in software design and data storage; wavelet theory is also well established in signal processing), has recently found renewed attention in the circuit design community due to the rapid growth of the Internet of Things (IoT). This is particularly true in the case of wireless sensor networks (WSNs), in which a network of sensor nodes (SNs) senses, pre-processes and wirelessly delivers specific sensory information to a central hub/base station. Each SN is constituted by the sensor(s), the conditioning and data acquisition circuitry, a local DSP and the wireless transceiver (TRX), plus a power management unit, as shown in Fig. 6.
The requirements on size, weight and power (SWAP) associated with the SNs are extremely demanding, and while each circuit block in the signal chain making up the SN is subject to correspondingly challenging specifications, in several cases the real bottlenecks and the most power-hungry functions are either the data transmission from the SN to the hub (TRX) or the SN's digital signal processing (DSP) required to extract, from the data, the relevant information to be sent to the hub. Relatively speaking, the ADC consumes only a very modest share of the overall power budget (e.g. ~5% of the total SN power consumption) [29]. But if the ADC is a classic time-uniform sample rate (Shannon) converter, it produces a very large amount of data that then causes the DSP and/or the TRX to require more power.
So, in such cases, the data converter's architectural challenge consists in developing a compressive sampling (CS) architecture producing less data as a result of the analog-to-digital conversion, hence resulting in an overall lower power consumption budget for the SN as a whole. The compressed information is then transmitted to the hub, where, with a substantially larger computational capability and power budget, the reconstruction of the received compressed signal into the original sensed signal can be performed.
B. Architectures for compressive sampling
Different implementation approaches for compressive sampling frameworks have been proposed in the literature. In Shannon's uniform sampling theory, a time-domain sampled signal can be thought of as a modulation/convolution between the original continuous input signal and a Dirac pulse train. At a very high level, in compressive sampling the pulse train is replaced by pulse-amplitude-modulated signals whose amplitudes are defined by independent, identically distributed noise (ideally Gaussian) vectors (usually a pseudo-random binary sequence, or PRBS) that constitute an alternate representation basis. If the original input signal is sparse, then after convolution with the PRBS signals (the operation of compression), the resulting signal has far fewer samples [28]. In order to subsequently reconstruct the original signal, the operation needs to be reversible with tolerable/controllable losses/degradation.
So, in general, the compression operation can be thought of as a matrix multiplication between the vector of the samples of the original input signal and an encoding matrix made of appropriate PRBS vectors (the convolution operation consists of an inner product of the input signal sequence with the basis vectors), as sketched below. The compression can occur at different stages in the signal chain of Fig. 6. It can be done in the analog continuous-time domain before the ADC, in which case the ADC's sample rate can be reduced (to a sub-Nyquist rate), though the burden of the encoding falls on an analog convolution circuit. It can also be done in the digital domain, after the ADC, and performed by the DSP; in this case the ADC is a traditional uniform sampling converter (adhering to Nyquist sampling) and the burden of encoding is on the DSP. Or it can be performed in the analog domain, combined with the ADC function (running at a sub-Nyquist rate), leading to compressive sampling ADC architectures.
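A minimal numerical sketch of the encoding step (NumPy, with a made-up frequency-sparse input; the reconstruction at the hub, which solves a sparse inverse problem via e.g. l1 minimization or matching pursuit, is not shown):

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 256, 64                      # window length and measurement count, M < N

# Encoding matrix of i.i.d. +/-1 PRBS rows: each compressed measurement is
# the inner product of the input window with one pseudo-random basis vector.
phi = rng.choice([-1.0, 1.0], size=(M, N))

# Frequency-sparse test input: two tones within the 256-sample window.
n = np.arange(N)
x = np.sin(2 * np.pi * 7 * n / N) + 0.5 * np.sin(2 * np.pi * 31 * n / N)

y = phi @ x                          # compression: N samples -> M measurements
print(y.shape)                       # (64,): 4x fewer values to process/transmit
```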
The systems reported in [32, 33] are examples of the case in which the compression is performed in the analog domain before the ADC. The implementation of the encoder uses a so-called random-modulation pre-integrator (RMPI) architecture, which consists of an array of parallel signal paths, each one including a mixer with a different random basis function, followed by a low-pass/integrator stage and a reduced-sample-rate (usually SAR) ADC. While the mixers with +/-1 random components can be efficiently implemented in analog form, the filtering/integration requires power/area-hungry operational transconductance amplifiers (OTAs). So while the ADCs run at a low sample rate and don't require much power and area, the rest of the analog encoder can require substantial power and area. Moreover, the parallel paths require proper time-alignment, hence introducing additional design challenges.
In [29], on the other hand, the CS encoder is implemented in the digital domain. Here the integrators are replaced by energy efficient digital accumulators, though the ADC, while using a very power efficient implementation, runs at Nyquist rate.
A very different way to realize CS is presented in [34]. Here the mixing of the input signal with the PRBS basis functions and the subsequent integration are replaced by a much simpler architecture where the sampler in front of the ADC is directly controlled by the PRBS. In other words, instead of taking N consecutive samples at a uniform rate fs, this CS sampler effectively chooses only M of them at random (with M<N) from each successive length-N window of the input signal. The resulting non-uniform time sequence (NUS) of samples then corresponds to a lower average sample rate of (M/N)·fs and is digitized by an ADC that is structurally identical to a conventional asynchronous SAR ADC, but where each conversion cycle is edge-triggered by the sampler's PRBS clock. Another implementation that uses a similar non-uniform sampling principle is the one reported in [35]. Implementations using the NUS compressive sampling framework have the advantage of a rather simplified hardware implementation in the SN (shifting more of the burden of the decompression to the hub/base station). However, they also tend to show more limited performance in terms of the signal sparsity they can handle, compared to the alternatives [36].
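The NUS selection itself is almost trivially simple on the encoder side, which is precisely its appeal; a sketch under the same assumed N and M as before:

```python
import numpy as np

rng = np.random.default_rng(1)
N, M, fs = 256, 64, 1e6             # window length, kept samples, nominal rate (Hz)

# NUS encoding: instead of mixing with PRBS waveforms, the PRBS gates the
# sampler, keeping M of each length-N window of Nyquist-grid instants.
keep = np.sort(rng.choice(N, size=M, replace=False))
t_samples = keep / fs               # irregular sampling instants within the window

print(len(t_samples), "samples kept; average rate =", M / N * fs, "Hz")
```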
Lastly, a very effective approach is introduced in [36]. In this case a SAR ADC is augmented with an extended front-end that performs the CS encoding in discrete time. The mixing with the PRBS sequences is done similarly to the RMPI implementations, using four-switch passive mixers, though the discrete-time implementation has advantages over the continuous-time circuits used in the RMPI architectures.
In addition, the subsequent integration operation is performed in the charge domain using a reconfigurable extension of the capacitive DAC array of the SAR ADC itself, hence avoiding the power/area-hungry OTAs used in the previous RMPI architectures and relying only on passive switched-capacitor charge-domain circuitry.
The examples reported here show very encouraging progress over a limited span of time. Notably, the power efficiency of the converter in isolation is not the main point: the reason to develop a compressive sampling ADC lies in its substantial impact on the signal chain and on the overall power of the SN as a whole.
V. CONCLUSION
In summary, recent developments in the innovative field of data converters have been discussed. Special attention has been given to the promising technologies of TDCs/DTCs and compressive sampling converters. Neither of these emerging converter classes quite fits the popular emphasis on converter power efficiency, but both are demonstrating good results and visible progress in addressing valuable engineering problems. It is incumbent upon the technical community to look at architectural innovation with the widest possible perspective.