Modeling and Simulation of RF and Microwave Systems

Abstract

This application note describes system-level characterization and modeling techniques for radio frequency (RF) and microwave subsystem components. It illustrates their use in a mixed-signal, mixed-mode system-level simulation. The simulation uses an RF transmitter with digital predistortion (DPD) as an example system. Details of this complex system and performance data are presented.

A similar version of this article appeared in the November/December 2012 issue of IEEE Microwave Magazine.

Introduction

The radio frequency (RF) and microwave world has largely been exempt from the rigors of Moore's Law, whereas aggressive scaling in the digital world has been the norm for several decades. But now, as CMOS transistors have picosecond switching speeds and transition frequencies in the tens to hundreds of GHz, there is an opportunity for the integration of RF and microwave components into a complete system on a single integrated circuit. Single-chip transceivers for wireless LAN and femtocell base stations are already a commercial reality. As with complex digital systems, the demand for first-pass design success requires that accurate predictive models are available for the subsystem components of the whole system, enabling the system design to be simulated and any errors or faults remedied before the design is committed to hardware. This will save time and the cost of unnecessary fabrication and test of the complete system.

System-level modeling and simulation for RF and microwave design have become very sophisticated in recent years. Commercial simulation tools with extensive libraries of signal sources, component models, and data analysis tools are now available from EDA vendors such as AWR and Agilent [1, 2]. The development of nonlinear behavioral models for the RF and microwave components such as amplifiers and mixers, and the ability to measure these subsystems accurately in a typical application environment have been significant enablers of these recent simulator advances.

This tutorial describes the system-level characterization and modeling techniques for RF and microwave subsystem components, and illustrates their use in a mixed-signal, mixed-mode system-level simulation. I shall use an RF transmitter with digital predistortion (DPD) as an example system, as shown in Figure 1. This is a complex system that includes:

  • RF components such as the power amplifier (PA), for which we need a nonlinear dynamical model
  • Mixed-signal components such as data converters, and IQ modulators/demodulators
  • The digital components of the predistorter, which can be realized as a field-programmable gate array (FPGA) or custom IC, and which can be described by models in a hardware description language such as Verilog®, or by means of their algorithmic function using a mathematical language such as MathWorks' MATLAB® software.

Figure 1. Block diagram of a wireless infrastructure transmitter system with DPD capability and the observation path for the DPD receiver included. This figure shows the major blocks and simulation domains that we shall consider.

The RF transmitter with digital predistortion (DPD) is a good example of a practical complex system. Often in such a system, independent teams will design the subsystem components, and yet it is vital to know that the components will work together as a complete system, preferably before the hardware is bolted together. This will allow us to make any necessary architecture changes at the design stage, rather than after construction of the hardware.

We need a simulation environment that is capable of controlling several different simulator engines and handling the data transfer between them. For example, we may use a harmonic balance or envelope transient simulation for the RF components of the system, whereas the digital parts of the system will be simulated in a time-stepping or time-marching manner. We may also have to manage floating-point and fixed-point data representations in the different simulators. Having a simulator tool that can manage this cosimulation is essential.

Before we get into the details of the models, their construction, and their use in the simulation tools, let's review briefly some of the design challenges and considerations that we encounter in such a DPD-enabled RF transmitter system. While the design of a high-power RF amplifier to meet the power and efficiency demands of the modern spectrally efficient, digitally modulated communications signals such as wideband CDMA (WCDMA) and long-term evolution (LTE) is challenging enough, the addition of DPD brings an extra wrinkle or two.

Modern digitally modulated wireless communications signals such as WCDMA and LTE are designed to maximize the amount of data that can be transmitted in a given bandwidth. Details of modern wireless communications digital modulation can be found in [3]. Aside from the complexity of the modulation and coding schemes, one of the most obvious features of these signals is that they have a very high peak-to-average power ratio (PAPR), between 6dB and 10dB in practical cases. In other words, the signal peaks can carry up to 10 times the power of the average signal that we want to transmit. The average power determines the range of the transmission; the peak power determines the "size" of the power amplifier. This means that the amplifier must have a power capability four to 10 times that which is necessary for the base-station coverage area, just to handle the peaks. Running a Class AB PA "backed-off" from its peak capability generally means running at very low efficiency, which is unacceptable because of the cost of the electricity to run it. Instead, we find that high-efficiency PA architectures such as the Doherty are used in wireless infrastructure base-station PAs. While efficient, these PAs can produce high levels of distortion when operated at the high powers necessary. Hence the need for some form of linearization technique, and DPD is currently the method of choice.
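
To make the PAPR numbers concrete, the ratio can be estimated directly from the complex baseband (IQ) samples of a signal as the peak instantaneous power divided by the mean power. The short MATLAB sketch below does this for a synthetic multi-tone signal that stands in for a real WCDMA or LTE waveform; the signal is illustrative only, not a standards-compliant waveform.

    % Estimate the peak-to-average power ratio (PAPR) of a complex baseband signal.
    % A sum of random-phase tones is used as a stand-in for an OFDM/LTE-like signal.
    N = 1e5;                          % number of time samples
    K = 64;                           % number of tones
    n = 0:N-1;
    x = zeros(1, N);
    for k = 1:K
        x = x + exp(1j*(2*pi*k*n/N + 2*pi*rand));   % add one random-phase tone
    end
    p    = abs(x).^2;                               % instantaneous power
    papr = 10*log10(max(p)/mean(p));                % PAPR in dB
    fprintf('PAPR = %.1f dB\n', papr);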

Using DPD brings its own challenges. First, we need to build an observation path (see Figure 1) into the transmitter, to detect the distortion produced by the PA. The predistorter compares the PA output with the desired signal, and then uses a nonlinear function to generate distortion in the PA input signal, in such a way that the output of the PA is a replica of the desired signal. The parameters of this nonlinear function are continually adapted to minimize the difference between the input signal and the PA output. It's a control system. The predistorter counters the compression characteristic of the PA gain, so it is an expansive function. In the frequency domain, we see that the intermodulation and adjacent channel leakage power is compensated by the DPD. We can think of this as the predistorter adding frequency components to the PA input to counter these distortion products. More details of how the DPD operates can be found in [4].

The outcome of adding DPD to the transmitter is that the bandwidth of the predistorted signal is much wider than the original data bandwidth; up to five times wider is a common guideline. This means that the transmit chain from the DPD algorithm, through the DACs and RF section, must support this much wider bandwidth. For LTE, where carrier aggregation can result in a signal bandwidth of 100MHz, this is a significant design and implementation challenge. The observation path should also be of this bandwidth to be able to capture the high-order distortion products produced by the PA for the DPD system. Further, the observation path needs to be more linear than the rest of the transmitter, as any distortion introduced here will be indistinguishable from distortion generated by the PA, as far as the DPD is concerned, and the resulting correction signal will actually introduce distortion in the PA output. The observation path also needs to be low noise and have a high dynamic range to be able to discern the low-level distortion products. These are all significant design challenges, and they add to the complexity and cost of the transmitter.

RF Behavioral Models

There has been a considerable body of work produced over the past two decades on the development of behavioral models for nonlinear RF and microwave components [5, 6]. The recent focus is particularly on modeling the PA as this device is the major source of nonlinearity in a typical transmitter system. During this time, the identification of so-called "memory effects" has become one of the foremost challenges in modeling transmitter systems. While the nonlinearity can be modeled fairly straightforwardly using a polynomial model, for example, the inclusion of memory effects complicates our modeling and simulation task considerably.

There are two main approaches to RF behavioral modeling: in the frequency domain, and in the time domain. We will outline both styles. The nonlinear models are usually based on the linearization of the system-under-test around some operating point, such as the DC condition, or the average RF power. Anything more general is far too complicated to construct and implement, and doesn't simplify our modeling challenge.

Frequency-Domain Models

The frequency domain is the natural home for RF and microwave engineers. We have been using S-parameters for linear design for nearly 50 years since Kurokawa's original paper [7], and the introduction of the vector network analyzer in the 1960s. Hence, the popularity of this approach.

A modern and currently popular frequency-domain model is the X-parameters® model [8], along with its related and similar model structures, S-functions [9] and the "Cardiff" model [10]. These models are all based on linearization of the nonlinear response around a single large tone—the large signal drive to the PA, for instance. The nonlinear behavior is then probed in the frequency domain by measuring the scattering responses to small harmonic signals applied in addition to the large tone. The basic principles and the mathematical underpinnings have been clearly explained by Verspecht and Root in this magazine [11]. The X-parameters model uses only first-order derivatives to model the nonlinear behavior. This is quite an elegant approach, since the first derivatives are calculated automatically by the simulator in the Jacobian that it uses for converging to the solution, so this model is simple and fast. The S-functions operate in a similar manner, but can include the DC condition also as an explicit modeling variable. The Cardiff model extends the X-parameters by including higher-order derivatives in the model, which, like a Taylor Series, extends the region of validity of the model and makes it more general.

The value of the X-parameters lies not only in the model formulation, but also in the fact that the data from which the models are constructed can be measured using off-the-shelf equipment, such as the Agilent® PNA-X nonlinear vector network analyzer. This instrument can also build an X-parameters model file that is compatible with the Agilent ADS simulator. High-power PAs do require some extra care in setting up an external measurement system around the PNA-X, but this can extend the capability of the instrument to measure X-parameters on PAs of upwards of 100W output power [12]. We can also generate an X-parameter description of a circuit from simulation. This means that we can create a nonlinear model of our circuit or subsystem in a fairly straightforward manner, even quite early on in the design process, for use in a higher-level system simulation.

One drawback with the X-parameters approach is that these models are memoryless by construction, although there have been recent reports on how to include long- and short-term memory effects into the X-parameters model structure using a Volterra series approach based on the Nonlinear Integral Model [13]. The memory effects can be measured using pulsed techniques to observe the transient behavior, and the memory model parameters extracted using NIM methods described in [5], Chapter 3. The memory component is included into the X-parameters formulation through a product term. Since the long-term memory effects are often a result of the bias supply components and design of the bias line in the PA fixture, this suggests that it may be possible to model these circuit effects as a separate model component that could be "added in" to the X-parameters model of the transistor to yield the complete PA model.

Time-Domain Models

The natural home for nonlinear dynamical phenomena is the time domain, because we can capture transients, and therefore the energy storage or memory effects, as well as the steady-state behavior. A significant historical drawback to using time-domain data for RF and microwave circuits is that the sampling rate for measurement must be extremely high, resulting either in undersampling or in short time-span data sets. A similar problem arises in simulation: the time step for transient simulation must be very short, resulting in very long simulation times. These problems have nowadays been largely overcome. Recently, real-time sampling oscilloscopes with bandwidth in excess of 60GHz, enabling characterization using signals up to 50Gbps, have been announced [14]. And the development of envelope transient simulation techniques has enabled the simulation of circuits using modulated RF and microwave signals in a manageable time.

The PAs used in cellular communications are usually quite narrow band, and the output matching network filter attenuates any harmonics. When such PAs are driven into compression, it is the envelope of the signal that is compressed, not the carrier signal, which remains sinusoidal (and may not, in fact, be present explicitly). This means that the nonlinear characteristics of the PA can be completely described by the envelope behavior. We can use the envelope, or more precisely, the modulation signal to characterize the PA: the timescales for the data capture are now at the modulation rate. We can also use envelope transient simulation to describe how the modulation signal is affected by the PA nonlinearities. Thus, our time-domain model of the PA can be constructed at the modulation rate, not at the microwave frequency.

The digital modulation used in modern cellular communications is usually created in the form of in-phase (I) and quadrature (Q) components. The I and Q are combined to create the desired modulation signal, usually some form of quadrature phase-shift keying (QPSK) or quadrature amplitude modulation (QAM). The time-domain I and Q data can be measured or simulated at the input and output of the PA, and the nonlinear and memory behavior is observed in the AM-to-AM or gain response and the AM-to-PM (phase) response, as shown in Figure 2. What can be seen in these figures are the general trends of gain compression and phase distortion, but also that the responses are in the form of a "cloud" around some average response. It is this cloud that indicates the presence of memory effects. If no memory effects were present, these two responses would be single lines—the "instantaneous" response. What the clouds tell us is that the gain at a given output power depends not only on the input power at that instant, but also on what the value of the signal was at previous times: its history. Not every point in the cloud will have exactly the same history.
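
The AM-to-AM and AM-to-PM plots are easy to generate once time-aligned input and output IQ records are available. The MATLAB sketch below does this for a toy saturating PA expression, which is an illustrative stand-in; measured records would first need delay alignment and gain normalization, and with this memoryless toy model the plots collapse to single lines, whereas measured PA data produces the clouds shown in Figure 2.

    % AM-to-AM and AM-to-PM extraction from input/output complex baseband records.
    x = (randn(1e4,1) + 1j*randn(1e4,1))/4;                   % noise-like input IQ
    y = x ./ (1 + 0.5*abs(x).^2) .* exp(1j*0.3*abs(x).^2);    % toy PA: compression + phase

    Pin   = 20*log10(abs(x) + eps);                      % input envelope in dB
    gain  = 20*log10((abs(y) + eps)./(abs(x) + eps));    % instantaneous gain (AM-to-AM)
    phase = angle(y .* conj(x))*180/pi;                  % insertion phase (AM-to-PM)

    subplot(2,1,1); plot(Pin, gain,  '.'); xlabel('Input (dB)'); ylabel('Gain (dB)');
    subplot(2,1,2); plot(Pin, phase, '.'); xlabel('Input (dB)'); ylabel('Phase (deg)');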

Figure 2. AM-to-AM and AM-to-PM plots of the magnitude and phase of the input-output IQ data for a power amplifier. The blue dots are the measured data, the red dots are the reduced Volterra model predictions.

We can build a model of the PA by fitting a curve through the AM-to-AM and AM-to-PM characteristics, using well-known nonlinear data- or function-fitting techniques. Essentially we try to fit a known function to the data. Perhaps the most popular and simplest technique is a polynomial fit to the data. In practice, this is usually done using "Least-Squares" methods, which minimize the sum of the squared errors between the function and the data. By building the polynomial model, we are basically fitting a Taylor series expansion to the data.
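
A minimal version of this fitting step, assuming the same toy x and y records as the sketch above and a complex odd-order polynomial of the common baseband form y ≈ a1·x + a3·x|x|^2 + a5·x|x|^4 (the order is an arbitrary choice), looks like this in MATLAB:

    % Least-squares fit of a memoryless (instantaneous) complex polynomial PA model.
    x = (randn(1e4,1) + 1j*randn(1e4,1))/4;                   % toy input, as above
    y = x ./ (1 + 0.5*abs(x).^2) .* exp(1j*0.3*abs(x).^2);    % toy PA output

    X = [x, x.*abs(x).^2, x.*abs(x).^4];    % regression matrix: x, x|x|^2, x|x|^4
    a = X \ y;                              % complex coefficients via least squares
    nmse = 10*log10(sum(abs(y - X*a).^2)/sum(abs(y).^2));
    fprintf('Memoryless polynomial fit: NMSE = %.1f dB\n', nmse);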

But this procedure would yield an instantaneous model, failing to capture the memory effects. We need a model function that includes the history of the time domain signal explicitly in its formulation. The Volterra series comes to the rescue. Volterra series can be used to model a time-invariant nonlinear dynamical system; in other words, a nonlinear system with memory. The Volterra series can be thought of as a Taylor series with memory, and a brief outline is provided in Appendix B: Volterra Series as a Development of a Taylor Series. Since the Volterra series is related to the Taylor series—both are polynomial functions—then the Volterra series suffers from similar limitations [15]. Primarily, the nonlinearities in the system must not be "strong." In the context of PA modeling, a "strong" nonlinearity would be a discontinuity in the response to the input signal, caused by clipping of the waveform, for instance. It is often stated that the (PA) system must be weakly nonlinear for it to be amenable to Volterra analysis: what this means is that the system response must be continuous, and therefore can be represented by a finite series of contributing terms. Sometimes, "weakly nonlinear" is interpreted as a small number of terms—no more than cubic!—but this is unnecessarily artificial and constrictive.

The accuracy of the Volterra series can be improved by increasing the number of terms in the polynomial series. In other words, the model is approximating the actual data to a smaller tolerance. Whereas increasing the number of terms in a Taylor series is a straightforward exercise, with a Volterra series the cross-terms and memory terms cause the number of terms in the series to increase dramatically as the polynomial degree and memory depth are increased. This is perhaps one reason why Volterra series modeling has historically seen only limited application. With the computational power now available in modern laptop computers, such limitations on the Volterra series polynomial degree are a thing of the past.

Further, sophisticated "pruning" techniques can be used to limit the number of coefficients in the polynomial to a manageable level, without any significant loss of model fidelity. Such techniques were pioneered by Filicori, Ngoya, and coworkers [16, 17], using a technique known as Dynamic Deviation. In this method, the Volterra series is expanded explicitly in the deviation of the signal from some operating point, either the DC condition [16] or the average signal power of the PA [17]. Truncating the resulting series to first order in the dynamic deviation was shown to produce satisfactory results. Unfortunately, rewriting the Volterra series in this way meant that standard least-squares techniques for the identification of the multinomial coefficients could no longer be used, and so the model parameters were difficult to extract. By recasting the dynamic deviation expressions, Zhu recovered the linear-in-parameters structure of the Volterra series, enabling straightforward parameter extraction by standard mathematical techniques. This approach also allowed explicit control over the level of dynamics used in the model [18]. This technique is known as Dynamic Deviation Reduction, and it can produce accurate power amplifier models with a relatively small number of coefficients; typically around 30 to 50 are sufficient.
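
To give a flavor of how a pruned, linear-in-parameters Volterra model is identified, the MATLAB sketch below fits a simple memory polynomial (one of the most common pruned forms, and simpler than a full Dynamic Deviation Reduction model) to input/output IQ data. The nonlinear orders, memory depth, and the crude memory term added to the toy PA are all illustrative choices:

    % Memory polynomial fit: y(n) ~ sum over m,k of a(m,k) * x(n-m)*|x(n-m)|^(k-1).
    N = 1e4;
    x = (randn(N,1) + 1j*randn(N,1))/4;                       % toy input IQ
    y = x ./ (1 + 0.5*abs(x).^2) .* exp(1j*0.3*abs(x).^2);    % static nonlinearity
    y = y + 0.1*[0; y(1:end-1)];                              % crude memory effect

    K = [1 3 5];                      % nonlinear (odd) orders
    M = 3;                            % memory depth in samples
    X = [];
    for m = 0:M
        xm = [zeros(m,1); x(1:end-m)];                        % delayed input x(n-m)
        for k = K
            X = [X, xm.*abs(xm).^(k-1)];                      %#ok<AGROW> basis column
        end
    end
    a    = X \ y;                                             % least-squares solution
    nmse = 10*log10(sum(abs(y - X*a).^2)/sum(abs(y).^2));
    fprintf('Memory polynomial: %d coefficients, NMSE = %.1f dB\n', numel(a), nmse);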

The Power Amplifier Model

The power amplifier is generally the largest contributor to nonlinearity in the transmitter, and the memory effects associated with its frequency response over the wide bandwidth demanded for digital predistortion nowadays require sophisticated nonlinear models to describe the PA behavior accurately enough for DPD. Typically, a polynomial (Volterra)-based approach is used, although there are notable exceptions to this in the marketplace. Simple memory-polynomial models used for narrowband signals have been replaced by reduced Volterra series models that include some cross-coupling between the memory terms of the series—the cross terms. These models are generally built from the demodulated IQ data, and hence are baseband models of the RF PA. They are often constructed as a mathematical model in an environment such as MATLAB. The AM-to-AM and AM-to-PM characteristics of a PA and behavioral model are shown in Figure 2. These are plots of the magnitude and phase of the output IQ against the input IQ time-domain data. Such a model can be used easily in a system simulation.

Modeling the DPD System

The predistortion function is a nonlinear function that compensates the PA gain compression, phase transfer characteristic, and memory effects, to produce the linear output from the PA. As noted above, a Volterra-based function is often used in this application.

The DPD algorithms for solving the Volterra model are usually developed in a mathematical language such as MATLAB, which affords a rich set of tools and functions for solving the nonlinear equations, optimizing the function coefficients, and so forth. Two commonly used DPD approaches for adaptive linearization are direct adaptation and indirect learning, shown schematically in Figure 3. Direct adaptation is a classical control approach; it compares the input signal and (scaled) PA output signal directly and uses the error to drive changes in the DPD function parameters to minimize the error on the next calculation. Indirect learning is a slightly more complex mathematical process, but can appear to be easier to implement. Here we compare the predistorted input signal with the postdistorted PA output; we use the same nonlinear predistortion function. It has been shown mathematically for Volterra series that pre- and post-distortion produce the same behavior, within certain limits. Again we use the error to drive changes in the predistortion function parameters. This seems easier to implement because the DPD function parameters are used in the calculation of the two signals used for the error (cost) function, and the updated parameters are a direct outcome of the minimization routine.

The adaptation routine used for estimating the new set of DPD function coefficients is usually a least-squares solver. The least-squares approach can be used since the nonlinear Volterra model used for the DPD is set up as a linear-in-parameters expression, as noted earlier. Least-squares minimization routines such as least mean squares (LMS) are often used, and several thousand data (IQ) samples are taken to solve this over-determined set of equations. The LMS routine is generally quite stable, though it can be slow to converge. More aggressive minimization techniques such as recursive least squares (RLS) and affine projection (AP) have been demonstrated [19], offering faster convergence, although they can be more prone to instability arising from noise in the input data. This can be overcome with care in the implementation.
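
The sketch below shows the indirect-learning coefficient update in MATLAB, using a memory polynomial as the predistortion function and a simple block least-squares solve in place of LMS or RLS. The toy PA model, the target gain G, the orders, the memory depth, and the helper function mpbasis are all illustrative assumptions, not the implementation of any particular DPD product (local functions in scripts require a recent MATLAB release):

    % Indirect-learning DPD: fit a postdistorter that maps the scaled PA output y/G
    % back to the PA input z, then copy its coefficients into the predistorter.
    N  = 2e4;
    u  = (randn(N,1) + 1j*randn(N,1))/4;                          % desired signal
    pa = @(z) z ./ (1 + 0.4*abs(z).^2) .* exp(1j*0.2*abs(z).^2);  % toy PA model
    G  = 1;                                                       % target linear gain
    K  = [1 3 5];  M = 2;                                         % DPD orders, memory

    a = zeros(numel(K)*(M+1), 1);  a(1) = 1;                      % start as a pass-through
    for it = 1:3                                                  % a few adaptation passes
        z = mpbasis(u, K, M) * a;                                 % predistorted signal
        y = pa(z);                                                % PA output
        a = mpbasis(y/G, K, M) \ z;                               % postdistorter LS fit
    end
    err = pa(mpbasis(u, K, M)*a)/G - u;                           % residual after DPD
    fprintf('Residual NMSE with DPD = %.1f dB\n', 10*log10(sum(abs(err).^2)/sum(abs(u).^2)));

    function X = mpbasis(x, K, M)
    % Memory-polynomial regression matrix: columns are x(n-m).*|x(n-m)|.^(k-1).
    X = [];
    for m = 0:M
        xm = [zeros(m,1); x(1:end-m)];
        for k = K
            X = [X, xm.*abs(xm).^(k-1)];   %#ok<AGROW>
        end
    end
    end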

Figure 3. Schematic representations of DPD functions: (a) is adaptive control, (b) is indirect learning.

Once the DPD algorithms have been demonstrated in the MATLAB environment, the DPD model can then be imported into the system simulator, and the linearization of the PA in a more complex, multisimulator environment can be simulated. After the system performance has been verified, the MATLAB model code can then be downloaded directly into an FPGA using development software provided by FPGA manufacturers such as Xilinx and Altera. The mathematical model can then be run in hardware, in real time, and any practical implementation errors can be addressed quickly.

Mixed-Signal Models and Simulation

We are now left with modeling and simulating the "glue" circuits that connect the digital pieces—the predistorter—to the RF power amplifier. Often, these circuits and components are omitted from the system specification, with the verification of the linearized performance focusing only on the PA and DPD models. But actually, these components do rather more than act as glue. As we mentioned earlier, the DPD observation path is a critical component of this linearized transmitter system, and its performance can have major impact on the overall operation and capability of the transmitter.

With suitable models for the data converters, modulators and demodulators (mod/demod), and the RF components such as driver and low-noise amplifiers, we can simulate the behavior of the complete transmitter in detail. We can use the simulation to study the impact of these components' specifications on the overall performance of the linearized transmitter. For example, we can study the effects of the phase noise performance of the local oscillator in the mod/demod circuit on the accuracy and dynamic range of the observation path, and hence test the limits of the linearization capability of the complete system. The models that we choose for these components and circuits will depend largely on how sophisticated we want the simulated behavior to be, and how important the impairments are to the performance of the linearized system.

The Data Converters

Models for the digital-to-analog converters (DACs) and analog-to-digital converters (ADCs) range from simple input/output models, through Verilog descriptions of the functional behavior, to sophisticated nonlinear models that approach (or even exceed) the complexity of the PA nonlinear models. The data converters are nonlinear devices. At a high level, the nonlinearity can be modeled as harmonic generation in the frequency domain. More sophisticated models will use a polynomial or Volterra approach. At the system-level description for the transmitter, it may be sufficient to include a simple nonlinearity and the noise figure, particularly for the ADC in the observation path, to model and investigate the limitations on the linearization capability arising from the low power levels of the distortion products that must be captured.
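
As an example of the simplest useful abstraction, the MATLAB sketch below models an ADC as a clipping, uniform N-bit quantizer with an additive noise floor, and checks the resulting SNR against the familiar 6.02N + 1.76dB rule of thumb for a full-scale sine. The resolution, full-scale level, and noise value are placeholders to experiment with, not the specification of any particular converter:

    % Idealized N-bit ADC for system studies: add noise, clip to full scale, quantize.
    nbits = 12;                        % resolution (placeholder)
    vfs   = 1.0;                       % full-scale amplitude
    nrms  = 50e-6;                     % additive input-referred noise, RMS (placeholder)
    q     = 2*vfs/(2^nbits - 1);       % quantization step size

    adc = @(v) q*round(max(min(v + nrms*randn(size(v)), vfs), -vfs)/q);

    t   = (0:1e5-1).'/1e5;                         % test with a near-full-scale sine
    v   = 0.9*sin(2*pi*37*t);
    vq  = adc(v);
    snr = 10*log10(sum(v.^2)/sum((vq - v).^2));
    fprintf('Simulated SNR = %.1f dB (ideal full-scale %d-bit sine: %.1f dB)\n', ...
            snr, nbits, 6.02*nbits + 1.76);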

System-level simulation tools such as Agilent's SystemVue and AWR's Virtual System Simulator (VSS) come with built-in models for many commercial data converters. While this is useful for verification of the final system, using the more generic data-converter models available in these tools allows us to investigate some of the more straightforward limitations of these components, such as the number of data bits, noise figure, clock frequency and frequency response, and so forth. These investigations can give us a better understanding of the nature of the limitations of these factors or impairments on the overall system performance.

The IQ Modulator/Demodulator Circuits

In our simulation, we can use ideal models for the IQ modulation and demodulation functions, or analog circuit descriptions or macromodels that include many physical behaviors that contribute to the performance, and performance limitations of these devices. Our main concerns are likely to be the impacts of these impairments to ideal demodulator behavior on the performance of the observation path. Such impairments include the injection of noise from the local oscillator (LO), so the LO phase noise performance is of interest, and the introduction of cross-talk between the I and Q paths in the demodulator.

These IQ demodulator imperfections can be modeled using a simple analytical circuit model described by Cavers [20] and shown in Figure 4. The output from the ideal demodulator, which is usually available as a built-in model in the RF or system-level simulator, is fed through two linear amplifiers, allowing a gain imbalance to be modeled. The phase imbalance, Φ, or the difference from quadrature phase between the I and Q paths, is modeled using the cross-talk and in-line amplifiers with gains of sin(Φ) and cos(Φ), respectively. The model also allows DC offsets to be included in the demodulator outputs. This is important for zero-IF downconversion architectures.
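
A few lines of MATLAB capture the essence of this impairment model. The formulation below (gain imbalance on each path, a cross-talk term in sin(Φ), an in-line term in cos(Φ), and DC offsets added at the outputs) follows the description above rather than being a literal transcription of the model in [20], and the impairment values are placeholders:

    % IQ demodulator impairments: gain imbalance, phase imbalance, DC offsets.
    gI  = 1.00;  gQ  = 0.95;           % gain imbalance between the I and Q paths
    phi = 3*pi/180;                    % phase imbalance: 3 degrees from quadrature
    dcI = 0.01;  dcQ = -0.02;          % DC offsets (important for zero-IF receivers)

    x = (randn(1e4,1) + 1j*randn(1e4,1))/sqrt(2);   % ideal demodulated IQ signal
    I = real(x);  Q = imag(x);

    Iout = gI*I + dcI;                              % in-line I path
    Qout = gQ*(Q*cos(phi) + I*sin(phi)) + dcQ;      % Q path with cross talk from I
    y    = Iout + 1j*Qout;                          % impaired complex output

    % Estimate the resulting image rejection ratio (desired term vs. image term).
    alpha = mean(y.*conj(x))/mean(abs(x).^2);
    beta  = mean(y.*x)/mean(abs(x).^2);
    fprintf('Image rejection ~ %.1f dB\n', 20*log10(abs(alpha)/abs(beta)));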

Figure 4. Analytical circuit model for IQ demodulator impairments.

The RF Amplifiers and Filters

The PA driver amplifiers in the transmit path are usually not driven too hard, so their nonlinear behavior is less of a concern than that of the PA itself. This means that we can use either simpler behavioral models for these components, or the nonlinear models that are provided by the component vendors for use in the RF circuit simulator. We should check that these models can be used in a transient simulation before we adopt them, though, since the system simulation will use a time-stepping (transient) algorithm. Any distortion produced by the driver amplifiers' nonlinearities will be included in the net output from the PA, and will in practice be accommodated by the predistorter. Our main concern in the system-level simulation is that these nonlinearities are captured sufficiently well and that their impact on the overall nonlinear behavior of the transmitter is described. As noted above, the major source of nonlinear behavior is the PA itself. Therefore, it is most important that the PA's nonlinear dynamical behavior is properly described.

The low-noise amplifier (LNA) in the observation path plays an important role in determining the noise level or sensitivity of the predistortion system. Again, it is usually sufficient to use a system-level model available in the system or RF simulator, and adjust the gain, noise figure, and frequency response of this component, to understand and ultimately control the contributions of these performance specifications to the overall performance of the linearized system.
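
A quick way to see how the LNA gain and noise figure set the sensitivity of the observation path is the standard Friis cascade calculation, sketched below in MATLAB for a made-up three-stage path (coupler/attenuator, LNA, downconverter/ADC stage); all of the stage values are placeholders:

    % Cascade gain and noise figure of the observation path (Friis formula).
    gain_dB = [-10, 18, -2];        % stage gains in dB: attenuator, LNA, demod/ADC
    nf_dB   = [ 10,  1, 20];        % stage noise figures in dB (placeholders)

    g = 10.^(gain_dB/10);           % linear gains
    f = 10.^(nf_dB/10);             % linear noise factors

    ftot = f(1);
    gcum = g(1);
    for k = 2:numel(f)
        ftot = ftot + (f(k) - 1)/gcum;   % later stages are divided by the preceding gain
        gcum = gcum * g(k);
    end
    fprintf('Cascade gain = %.1f dB, cascade NF = %.1f dB\n', ...
            10*log10(prod(g)), 10*log10(ftot));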

The transmit and observation paths will also contain passive components such as filters and attenuators, to control the frequency response and signal levels at the inputs to the main parts of the system: the PA drivers and the LNA. We can use the S-parameter descriptions of these components directly in the system simulator, provided that there isn't too much dispersion. The S-parameter model will be subject to convolution to produce a time-domain description.

Putting It All Together

We now have a set of models, in a variety of simulation tools, that optimally describe the components of the transmitter system. We will then usually turn to a commercial system-level simulation tool such as Agilent's SystemVue and the AWR® VSS to provide the framework, and some of the models, to put the system description together and manage the simulation. I've used the word "manage" here; it gives a nice feeling of how the simulation proceeds. The system simulator is usually a time-stepping simulator, and so we watch the data proceed through our system step by step. Because not all of the models that we want to use work in this manner, or may not be described in the system simulator's native language, there is provision for "cosimulation."

Figure 5. Screenshot of Agilent SystemVue software DPD example project indicating the building of a memory polynomial DPD algorithm from the PA AM-to-AM characteristic. © 2011 Agilent Technologies, Inc. Used with permission.

By cosimulation, we mean that the host system simulator will start, and control, another simulator engine, such as MATLAB or ADS Harmonic Balance or Circuit Envelope, for example; it will perform the simulation of the PA model or DPD algorithm to obtain the appropriate data for the next time step in the system simulation. Described this way, it sounds like system simulation would take a huge amount of time. But this isn't necessarily so. The secret lies in managing the cosimulation efficiently, so that the relevant data can be transferred at the right time. System simulation can be very fast compared to circuit simulation, as we can "bundle" the main nonlinearities into just a few components, whereas in a circuit simulation with more than a few transistors, the convergence of the simulation containing several nonlinear models can become slow. These other simulation engines are run "in the background," without any need for explicit control by the user.

Figure 6. Screenshot of AWR VSS showing how National Instruments Labview tools can be used to import real-time measured data from a PA into the system simulation. ©2012 National Instruments, used with permission.

What if we don't have the models we need? This is not a valid excuse for not conducting the system simulation and verifying your design. For example, we can import measured data, say from the PA, directly into the system simulation, and get an idea of how the complete system will work. This is illustrated in Figure 6, using National Instruments® LabVIEW® tools with the AWR VSS as an example. This is a very useful way of checking how the system will be put together; it allows changes to be made in the different parts of the system to accommodate real, measured effects. This approach is particularly useful if you are developing a DPD system that needs to be generic enough to work with several different target PAs, as is usually the requirement. The models for the PAs are not always available or accurate, and cosimulation with measured characterization data provides a way forward.


Concluding Remarks


In this tutorial, we have had but a glimpse of how a system simulation can be put together, using models, and indeed measurements, from different sources. These models can be combined, and the system simulator manages the cosimulation of the models in their native environments, quickly and efficiently. We have described how some of these models can be constructed and used at the system level to investigate how specifications can be set, by studying the impairments to the ideal performance of the crucial components in the system.


Acknowledgments


Thanks to Professor Slim Boumaiza and Mr. Farouk Mkadem of the University of Waterloo for providing the LTE IQ signal measurements used for the Volterra model extraction shown in Figure 2. Thanks to Nilesh Kamdar of Agilent Technologies for providing the screenshot image used in Figure 5. Thanks to Ms. Maegan Quejada of National Instruments for providing the screenshot image used in Figure 6.

Appendix A: Memory Effects

"Memory effects" is the term used to describe the influence of the history of the signal on its present value. In other words, the values of the signal at past time instances contribute to the present value. This behavior should come as no surprise to an electrical engineer: a simple series R-C circuit exhibits memory effects, because the current flowing in the circuit depends on how much charge has already built up on the capacitor. Memory effects are a result of energy storage in the system. They are also a manifestation of the dynamical behavior of the system, which for linear systems can be described by the time derivative or time-delay expressions. For nonlinear systems such as a power amplifier, the history of the signal influences the output. The dynamic effects in a PA can occur over a wide range of timescales.

"Short-term" memory effects occur roughly at the RF carrier rate. Examples of contributors to short-term memory are the frequency-dependent matching networks in the amplifier, and the internal capacitances and charge transit time of the transistor itself. If these capacitances are also nonlinear, as functions of the supply voltage, for instance, then the memory effect is also nonlinear.

"Long-term" memory effects occur at a much slower rate, and can affect the behavior of the PA over some time. Examples of long-term memory effects include energy storage in the bias and supply lines to the PA, through the decoupling capacitance and printed line inductance, and temperature changes caused by the heating and cooling of the transistor under drive by modulated signals; the thermal time constant of the semiconductor is much longer than the modulation frequency.

Appendix B: Volterra Series as a Development of a Taylor Series

One way of building up a Volterra series expression is as a development of a nonlinear instantaneous model that can be described by a Taylor series expansion. We take as our prototype a polynomial model to describe a nonlinear instantaneous system:

Equation 1. y(t) = a0 + a1u(t) + a2u(t)^2 + ... + aNu(t)^N

The polynomial coefficients an are found from the Taylor series of the input-output relationship, expanded around some operating point, u0:

Equation 2. y(t) = f(u0) + (df/du)(u - u0) + (1/2!)(d^2 f/du^2)(u - u0)^2 + ..., so that an = (1/n!)(d^n f/du^n), with the derivatives evaluated at u = u0

Now, instead of the instantaneous relationship, let's consider the output y(t) as a function of u(t) and the values of u at some previous times, thus describing the memory effects:

y(t) = ƒ(u, u1, u2, ..., un)

where

u = u(t), u1 = u(t - τ1), u2 = u(t - τ2), ...

The Taylor series expansion for this is:

Equation 5. y(t) = f(u0, u0, ..., u0) + (∂f/∂u)(u - u0) + (∂f/∂u1)(u1 - u0) + (∂f/∂u2)(u2 - u0) + ... + (1/2!)[(∂^2 f/∂u^2)(u - u0)^2 + 2(∂^2 f/∂u∂u1)(u - u0)(u1 - u0) + (∂^2 f/∂u1^2)(u1 - u0)^2 + ...] + ..., with all partial derivatives evaluated at (u0, u0, ..., u0)

which is a multinomial series. The memory terms in (u1 - u0) and the so-called Volterra cross-terms in (u - u0)(u1 - u0) can be readily observed in this expression.

The Volterra series can be thought of as a Taylor series with memory. The nonlinearities described by such a series must satisfy some "smoothness" criteria for the series to be convergent. This is another way of saying that the series will approximate the true value of the function to within some specified tolerance, that is, the truncation error becomes smaller than this tolerance value.
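
For reference, the discrete-time form in which the Volterra series is usually written for baseband PA and DPD modeling is the standard expression below (shown in LaTeX notation; P is the nonlinear order and M the memory depth). The memory polynomial and dynamic-deviation-reduced models discussed in the main text are pruned special cases of this general series.

    y(n) = \sum_{p=1}^{P} \sum_{m_1=0}^{M} \cdots \sum_{m_p=0}^{M}
           h_p(m_1, \ldots, m_p) \prod_{i=1}^{p} u(n - m_i)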