Abstract

This paper reviews some of the promising doors that functional analysis techniques have recently opened in the field of electronic circuit simulation. Because of the modulated nature of radio frequency (RF) signals, the corresponding electronic circuits seem to operate on a slow time scale for the aperiodic information and on another, much faster, time scale for the periodic carrier. This apparent multirate behavior can be appropriately described using partial differential equations (PDEs) within a bivariate framework, which can be solved in an efficient way using hybrid time-frequency techniques. With these techniques, the aperiodic information dimension is treated in the discrete time domain, while the periodic carrier dimension is processed in the frequency domain, in which the solution is evaluated within a space of harmonically related sinusoidal functions. The objective of this paper is thus to provide a general overview of the most important hybrid time-frequency techniques, such as the ones found in commercial tools or recently published in the literature.

1. Introduction

Numerical simulation plays an important role in electronics, helping engineers verify correctness and debug circuits during their design, thereby avoiding breadboarding and physical prototyping. The advantages of numerical simulation are especially significant in integrated circuit design, where manufacturing is expensive and probing internal nodes is difficult or prohibitive.

Circuit simulation emerged in the early 1970s, and many numerical techniques have been developed and improved over the years. Radio frequency (RF) and microwave system design is a field that was an important driver for the development of numerical simulation, and it continues to be so nowadays. Indeed, computing the solution of some current electronic circuits, as is the case of modern wireless communication systems, is still a hot topic today. Serious difficulties arise when these nonlinear systems are highly heterogeneous circuits operating on multiple time scales. Current examples are wireless RF integrated circuits (RFICs) and systems-on-a-chip (SoCs), which combine RF, baseband analog, and digital blocks in the same circuit.

Signals handled by wireless communication systems can usually be described as a high-frequency RF carrier modulated by some kind of slowly varying baseband information signal. Hence, the analysis of any statistically relevant information time frame requires the processing of thousands or millions of time points of the composite modulated signal, rendering any conventional numerical integration of the circuit's system of ordinary differential equations (ODEs) highly inefficient, or even impractical. However, if the waveforms produced by the circuit are not excessively demanding on the number of harmonics needed for a convenient frequency-domain representation, this class of problems can be efficiently simulated with hybrid time-frequency techniques. By handling the response to the slowly varying baseband information signal on a conventional time-step by time-step basis, while representing the reaction to the periodic RF carrier as a small set of Fourier components (a harmonic balance algorithm for computing the steady-state response to the carrier), modern circuit simulators profit enormously from functional analysis techniques. Beyond overcoming the signals' time-scale disparity, one of the recently proposed hybrid time-frequency techniques is also able to deal with highly heterogeneous RF circuits in an efficient way, by applying different numerical strategies to state variables in different parts (blocks) of the circuits.

2. Theoretical Background Material

2.1. Mathematical Model of an Electronic Circuit

The behavior of an electronic circuit can be described with a system of equations involving voltages, currents, charges, and fluxes. This system of equations can be constructed from a circuit description using, for example, nodal analysis, which involves applying the Kirchhoff current law to each node in the circuit, and applying the constitutive or branch equations to each circuit element. Systems generated this way have, in general, the following form:

$$\mathbf{p}\left[\mathbf{y}(t)\right] + \frac{d\,\mathbf{q}\left[\mathbf{y}(t)\right]}{dt} = \mathbf{x}(t), \tag{1}$$

where $\mathbf{x}(t)$ and $\mathbf{y}(t)$ stand for the excitation (independent voltage or current sources) and state-variable (node voltages and branch currents) vectors, respectively. $\mathbf{p}[\cdot]$ stands for all memoryless linear or nonlinear elements, such as resistors, nonlinear voltage-controlled current sources, and so forth, while $\mathbf{q}[\cdot]$ models dynamic linear or nonlinear elements, such as capacitors (represented as linear or nonlinear voltage-dependent electric charges) or inductors (represented as linear or nonlinear current-dependent magnetic fluxes).
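
To make the notation of (1) concrete, the short sketch below assembles $\mathbf{p}(\cdot)$, $\mathbf{q}(\cdot)$, and $\mathbf{x}(t)$ for a hypothetical one-node circuit (a sinusoidal current source driving a linear capacitor in parallel with a mildly nonlinear conductance) and integrates it with the Backward Euler rule. All element values, the source amplitude, and the time step are arbitrary illustration choices, not taken from the paper.

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical scalar example of (1):  p(y) + d q(y)/dt = x(t)
R, C, a = 50.0, 1e-12, 1e-3          # linear resistor, capacitor, cubic conductance term
f0 = 1e9                             # excitation frequency (1 GHz)

p = lambda y: y / R + a * y**3       # memoryless (resistive) elements
q = lambda y: C * y                  # dynamic (charge) elements
x = lambda t: 0.02 * np.cos(2 * np.pi * f0 * t)   # independent current source

# Plain transient analysis: Backward Euler on (1),
#   p(y_n) + (q(y_n) - q(y_{n-1})) / h = x(t_n),
# solving one scalar nonlinear equation at every time step.
h = 1.0 / (f0 * 50)                  # 50 points per excitation period
t = np.arange(0.0, 5.0 / f0, h)      # five carrier periods
y = np.zeros_like(t)
for n in range(1, len(t)):
    res = lambda yn: p(yn) + (q(yn) - q(y[n - 1])) / h - x(t[n])
    y[n] = fsolve(res, y[n - 1])[0]

print("node voltage after 5 periods: %.4f V" % y[-1])
```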

The system of (1) is, in general, a differential algebraic equations' (DAE) system, which represents the general mathematical formulation of lumped problems. However, as reviewed in [1], this DAE circuit model formulation can even be extended to include linear distributed elements. For that, distributed elements are either substituted, one by one, by their lumped-element equivalent circuit models, or, whenever larger distributed linear networks are dealt with, replaced as whole sub-circuits by reduced-order models derived from their frequency-domain characteristics.

The substitution of distributed devices by lumped-equivalent models is especially reasonable when the size of the circuit elements is small in comparison to the wavelengths, as is the case of most emerging RF technologies (e.g., new systems on chip (SoCs), or systems in package (SiPs), integrating digital high-speed CMOS baseband processing and RFCMOS hardware).

2.2. Steady-State Simulation

The most natural way of simulating an electronic circuit is to numerically time-step integrate, in the time domain, the ordinary differential system describing its operation. This straightforward technique was used in the first digital computer programs for circuit analysis and is still widely used nowadays. It is the core of all SPICE (simulation program with integrated circuit emphasis) [2] or SPICE-like computer programs. The issue is that these tools focus on transient analysis, and sometimes electronics designers, as is the case of RF and microwave designers, are not interested in the circuits' transient response but, instead, in their steady-state regimes. This is because certain aspects of circuit performance are better characterized, or simply only defined, in steady state (e.g., distortion, noise, power, gain, impedance, etc.). Time-step integration engines, such as linear multistep methods or Runge-Kutta methods, which were tailored for finding the circuit's transient response, are not adequate for computing the steady state because they have to pass through the lengthy process of integrating all transients and waiting for them to vanish. In circuits presenting extremely different time constants, or high resonances, as is typically the case of RF and microwave circuits, time-step integration can be very inefficient. Indeed, in such cases, the frequencies in the steady-state response are much higher than the rate at which the circuit approaches steady state, or the ratio between the highest and the lowest frequency is very large. Thus, the number of discretization time steps used by the numerical integration scheme will be enormous, because the time interval over which the differential equations must be numerically integrated is set by the lowest frequency or by how long the circuit takes to achieve steady state, while the size of the time steps is constrained by the highest frequency component.

It must be noted that there are several different kinds of steady-state behavior that may be of interest. The first one is DC steady state, in which the solution does not vary with time. Stable linear circuits driven by sinusoidal sources may exhibit a sinusoidal steady-state regime, which is characterized as being purely sinusoidal except, possibly, for some DC offset. If the steady-state response of a circuit consists of generic waveforms presenting a common period, then the circuit is said to be in a periodic steady state. Directly computing the periodic steady-state response of an electronic circuit, without having to first integrate its transient response, involves finding the initial condition, $\mathbf{y}(t_0)$, for the differential system that describes the circuit's operation, such that the solution at the end of one period matches the initial condition, that is, $\mathbf{y}(t_0) = \mathbf{y}(t_0 + T)$, where $T$ is the period. Problems of this form, those of finding the solution to a system of ordinary differential equations that satisfies constraints at two or more distinct points in time, are referred to as boundary value problems. In this particular case, we have a periodic boundary value problem that can be formulated as

$$\mathbf{p}\left[\mathbf{y}(t)\right] + \frac{d\,\mathbf{q}\left[\mathbf{y}(t)\right]}{dt} = \mathbf{x}(t), \qquad \mathbf{y}(t_0) = \mathbf{y}(t_0 + T), \tag{2}$$

where the condition $\mathbf{y}(t_0) = \mathbf{y}(t_0 + T)$ is known as the periodic boundary condition.
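
The periodic boundary condition $\mathbf{y}(t_0) = \mathbf{y}(t_0 + T)$ of (2) can be enforced numerically in several ways. As a minimal illustration (a simple shooting search, not the harmonic balance method the text focuses on next), the sketch below looks for the initial voltage of a hypothetical first-order circuit whose one-period integration returns exactly to itself; the circuit and its element values are assumptions made only for this example.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

# Hypothetical circuit written in explicit form:  C dy/dt = x(t) - p(y)
R, C, a = 50.0, 1e-12, 1e-3
f0 = 1e9
T = 1.0 / f0
p = lambda y: y / R + a * y**3
x = lambda t: 0.02 * np.cos(2 * np.pi * f0 * t)

def one_period(y0):
    """Integrate over exactly one period T, starting from y(0) = y0."""
    sol = solve_ivp(lambda t, y: [(x(t) - p(y[0])) / C],
                    (0.0, T), [y0], rtol=1e-9, atol=1e-12)
    return sol.y[0, -1]

# Shooting: find y0 such that y(T; y0) - y0 = 0 (the periodic boundary condition)
y0_star = brentq(lambda y0: one_period(y0) - y0, -5.0, 5.0)
print("periodic steady-state initial condition y(0) = y(T) = %.6f V" % y0_star)
```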

In the following, we will focus our attention on the most widely used technique for computing the periodic steady-state solution of RF and microwave electronic circuits: the harmonic balance method [3–5].

2.3. Harmonic Balance

Harmonic balance (HB) is a mature computer steady-state simulation tool that operates in the frequency domain [3]. Frequency-domain methods differ from time-domain steady-state techniques in that, instead of representing waveforms as a collection of time samples, they represent them using coefficients of sinusoids in trigonometric series. The main advantage of the trigonometric series approach is that the steady-state solution can often be represented accurately with a small number of terms. For example, if the circuit is linear and its inputs are all sinusoidal of the same frequency, only two terms (magnitude and phase) of the trigonometric series will represent the solution exactly, whereas an approximate time-domain solution would require a much larger number of sample points.

Another advantage of operating directly in the frequency domain is that linear dynamic operations, like differentiation or integration, are converted into simple algebraic operations, such as multiplying or dividing by frequency, respectively. For example, when analyzing linear time-invariant circuit devices, the coefficients of the response are easily evaluated by exploiting superposition within phasor analysis [6]. Computing the response of nonlinear devices is obviously more difficult than for linear devices, in part because superposition no longer applies, and also because, in general, the coefficients of the response cannot be computed directly from the coefficients of the stimulus. Nevertheless, in the case of moderate nonlinearities, the steady-state solution is typically achieved much more easily in frequency-domain than in time-domain simulators.

HB handles the circuit, its excitation and its state variables in the frequency domain, which is the format normally adopted by RF designers. Because of that, it also benefits from allowing the direct inclusion of distributed devices (like dispersive transmission lines) or other circuit elements described by frequency-domain measurement data, for which we cannot find an exact time-domain representation.

In order to provide a brief and illustrative explanation of the conventional HB theory, let us start by considering again the boundary value problem of (2), describing the periodic steady-state regime of an electronic circuit. For simplicity, let us momentarily suppose that we are dealing with a scalar problem, that is, that we have a simple circuit described by a unique state variable $y(t)$, and that this circuit is driven by a single source $x(t)$ verifying the periodic condition $x(t) = x(t + T)$. Since the steady-state response of the circuit will also be periodic with period $T$, both the excitation and the steady-state solution can be expressed as the Fourier series

$$x(t) = \sum_{k=-\infty}^{+\infty} X_k\, e^{jk\omega_0 t}, \qquad y(t) = \sum_{k=-\infty}^{+\infty} Y_k\, e^{jk\omega_0 t}, \tag{3}$$

where $\omega_0 = 2\pi/T$ is the fundamental frequency. By substituting (3) into (2), and adopting a convenient harmonic truncation at some order $K$, we will obtain

$$p\!\left(\sum_{k=-K}^{+K} Y_k\, e^{jk\omega_0 t}\right) + \frac{d}{dt}\, q\!\left(\sum_{k=-K}^{+K} Y_k\, e^{jk\omega_0 t}\right) = \sum_{k=-K}^{+K} X_k\, e^{jk\omega_0 t}. \tag{4}$$

The HB method consists in converting this differential system into the frequency domain, in order to obtain an algebraic system of equations in which the unknowns are the Fourier coefficients $Y_k$. It must be noted that, since $p(\cdot)$ and $q(\cdot)$ are, in general, nonlinear functions, it is not possible to directly compute the Fourier coefficients in this system. In fact, we only know a priori the trivial solution corresponding to the absence of excitation. So, we can guess an initial estimate $Y^{[0]}$ for the vector of Fourier coefficients and then adopt an iterative procedure to compute the steady-state response of the circuit. For that, we use a first-order Taylor-series expansion, in which each expansion point corresponds to the previously iterated solution. Indeed, we expand the left-hand side of the DAE system in (2) to obtain

$$p\!\left(y^{[r]}\right) + \frac{\partial p}{\partial y}\bigg|_{y^{[r]}}\left(y^{[r+1]} - y^{[r]}\right) + \frac{d}{dt}\left[ q\!\left(y^{[r]}\right) + \frac{\partial q}{\partial y}\bigg|_{y^{[r]}}\left(y^{[r+1]} - y^{[r]}\right)\right] = x(t), \tag{5}$$

which results in

$$\frac{\partial p}{\partial y}\bigg|_{y^{[r]}}\left(y^{[r+1]} - y^{[r]}\right) + \frac{d}{dt}\left[\frac{\partial q}{\partial y}\bigg|_{y^{[r]}}\left(y^{[r+1]} - y^{[r]}\right)\right] = x(t) - p\!\left(y^{[r]}\right) - \frac{d\, q\!\left(y^{[r]}\right)}{dt}. \tag{6}$$

The difficulty now arising in solving (6) is that we want to transform this system entirely into the frequency domain, but we do not know how to compute the Fourier coefficients of $p(y^{[r]})$, $q(y^{[r]})$, $\partial p/\partial y|_{y^{[r]}}$, and $\partial q/\partial y|_{y^{[r]}}$ at each iteration $r$. So, one possible way to do that consists of computing each of these nonlinear functions in the time domain and then calculating their Fourier coefficients. Therefore, according to the properties of the Fourier transform, the time-domain products $\partial p/\partial y|_{y^{[r]}}\,(y^{[r+1]} - y^{[r]})$ and $\partial q/\partial y|_{y^{[r]}}\,(y^{[r+1]} - y^{[r]})$ will become spectral convolutions, which can be represented as matrix-vector products using the conversion matrix formulation [5, 7]. This way, and because of the orthogonality of the Fourier series, (6) can be expressed in the form

$$P\!\left(Y^{[r]}\right) + j\Omega\, Q\!\left(Y^{[r]}\right) + \left(G^{[r]} + j\Omega\, C^{[r]}\right)\left(Y^{[r+1]} - Y^{[r]}\right) = X, \tag{7}$$

where

$$\Omega = \operatorname{diag}\left(-K\omega_0, \ldots, -\omega_0, 0, \omega_0, \ldots, K\omega_0\right). \tag{8}$$

In (7), $P(Y^{[r]})$ and $Q(Y^{[r]})$ are vectors containing the Fourier coefficients of $p(y^{[r]}(t))$ and $q(y^{[r]}(t))$, respectively, and $G^{[r]}$ and $C^{[r]}$ denote the conversion matrices (Toeplitz) [5, 7] corresponding to $\partial p/\partial y|_{y^{[r]}}$ and $\partial q/\partial y|_{y^{[r]}}$. If we rewrite (7) as

$$\left(G^{[r]} + j\Omega\, C^{[r]}\right)\left(Y^{[r+1]} - Y^{[r]}\right) = -\left[\,j\Omega\, Q\!\left(Y^{[r]}\right) + P\!\left(Y^{[r]}\right) - X\right], \tag{9}$$

we can obtain

$$J\!\left(Y^{[r]}\right)\left(Y^{[r+1]} - Y^{[r]}\right) = -F\!\left(Y^{[r]}\right), \tag{10}$$

in which

$$F(Y) = j\Omega\, Q(Y) + P(Y) - X = 0 \tag{11}$$

is known as the harmonic balance equation, and the composite conversion matrix

$$J(Y) = \frac{\partial F(Y)}{\partial Y} = G + j\Omega\, C \tag{12}$$

is known as the Jacobian matrix of the error function $F(Y)$.

The iterative procedure of (5)–(12) is the so-called harmonic-Newton algorithm. In order to achieve the final solution of the problem, we have to perform the following operations at each iteration $r$: (i) perform an inverse Fourier transformation to obtain $y^{[r]}(t)$ from $Y^{[r]}$; (ii) evaluate $p(y^{[r]})$, $q(y^{[r]})$, $\partial p/\partial y|_{y^{[r]}}$, and $\partial q/\partial y|_{y^{[r]}}$ in the time domain; (iii) calculate their Fourier coefficients to obtain $P(Y^{[r]})$, $Q(Y^{[r]})$, $G^{[r]}$, and $C^{[r]}$, and thus $F(Y^{[r]})$ and $J(Y^{[r]})$; (iv) solve the linear system of algebraic equations of (10) to compute the next estimate $Y^{[r+1]}$. Consecutive iterations will be conducted until a final solution satisfies the HB equation of (11) with a desired accuracy, that is, until

$$\left\| F\!\left(Y^{[r+1]}\right) \right\| < \text{tol}, \tag{13}$$

where tol is an allowed error ceiling and $\|\cdot\|$ stands for some norm of the error function $F(Y)$.
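
A compact numerical sketch of steps (i)-(iv) is given below for a hypothetical scalar circuit with $p(y) = y/R + a y^3$, $q(y) = Cy$, and a single-tone current excitation. The conversion matrices are built directly from FFT samples of the derivative waveforms, and the element values, harmonic order $K$, and oversampling factor are arbitrary illustration choices rather than anything prescribed by the paper.

```python
import numpy as np

# Hypothetical scalar circuit in the form of (1):  p(y) + d q(y)/dt = x(t)
R, C, a = 50.0, 1e-12, 1e-3          # linear resistor, linear capacitor, cubic term
A, f0 = 0.02, 1e9                    # excitation amplitude and fundamental frequency
w0 = 2 * np.pi * f0

K = 7                                # harmonic truncation order
N = 4 * K + 1                        # time samples per period (oversampled to limit aliasing)
t = np.arange(N) / (N * f0)          # one period of the steady-state solution
k = np.arange(-K, K + 1)             # harmonic indices -K..K
Omega = 1j * np.diag(k * w0)         # frequency-domain differentiation operator, eq. (8)

def to_freq(v):                      # time samples -> Fourier coefficients -K..K
    V = np.fft.fft(v) / N
    return np.concatenate((V[-K:], V[:K + 1]))

def to_time(Vk):                     # Fourier coefficients -K..K -> real time samples
    V = np.zeros(N, dtype=complex)
    V[:K + 1], V[-K:] = Vk[K:], Vk[:K]
    return np.real(np.fft.ifft(V) * N)

def conversion_matrix(g):            # Toeplitz conversion matrix of a periodic waveform g(t)
    Gk = np.fft.fft(g) / N
    return Gk[(k[:, None] - k[None, :]) % N]

X = to_freq(A * np.cos(w0 * t))      # Fourier coefficients of the excitation
Y = np.zeros(2 * K + 1, dtype=complex)   # initial estimate Y[0]

for r in range(50):
    y = to_time(Y)                                        # (i)  IDFT: Y[r] -> y[r](t)
    P, Q = to_freq(y / R + a * y**3), to_freq(C * y)      # (ii)+(iii) p, q and their spectra
    F = P + Omega @ Q - X                                 # HB error function, eq. (11)
    if np.linalg.norm(F) < 1e-10:
        break
    G = conversion_matrix(1 / R + 3 * a * y**2)           # conversion matrix of dp/dy
    Cm = conversion_matrix(C * np.ones(N))                # conversion matrix of dq/dy
    J = G + Omega @ Cm                                    # Jacobian, eq. (12)
    Y = Y + np.linalg.solve(J, -F)                        # (iv) linear system, eq. (10)

print("converged in", r, "iterations;  |Y| at the fundamental =", abs(Y[K + 1]))
```

The time grid is deliberately oversampled ($N = 4K+1$) so that the products generated by the cubic nonlinearity do not alias back into the retained harmonics.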

Since, in a digital computer, both time and frequency domains are represented by discrete quantities, the mathematical tools used to perform Fourier and inverse Fourier transformations are, respectively, the discrete Fourier transform (DFT) and the inverse discrete Fourier transform (IDFT), or their fast algorithms, the fast Fourier transform (FFT) and the inverse fast Fourier transform (IFFT).

The system of (10) is typically a sparse linear system in the case of a generic circuit with $n$ state variables. In general, several methods can be used to solve this system, such as direct solvers, sparse solvers, or iterative solvers. However, for very large systems, iterative solvers are usually preferred. Krylov subspace techniques [8] are a class of iterative methods for solving sparse linear systems of equations. An advantage of Krylov techniques is that (10) does not need to be fully solved at each iteration; the iterative process needs only to proceed until the new estimate $Y^{[r+1]}$ is such that it decreases the error function. This approach to the solution, called inexact Newton, can provide significantly improved efficiency. Today, there is a general consensus that a technique called the generalized minimum residual (GMRES) [9] is the preferred one among the many available Krylov subspace techniques for harmonic-balance analysis [10–12].
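
The sketch below illustrates the matrix-free flavor of a Krylov-based HB solver: the product of the Jacobian of (12) with a vector is evaluated through FFT/IFFT pairs and time-domain waveform products, and SciPy's GMRES is then applied to the resulting LinearOperator. The linearization waveform and all sizes are arbitrary assumptions made only for this illustration.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

K = 15
N = 4 * K + 1
w0 = 2 * np.pi * 1e9
k = np.arange(-K, K + 1)
Omega = 1j * k * w0                                    # diagonal of the Omega matrix of (8)

def to_freq(v):                                        # time samples -> coefficients -K..K
    V = np.fft.fft(v) / N
    return np.concatenate((V[-K:], V[:K + 1]))

def to_time(Vk):                                       # coefficients -K..K -> time samples
    V = np.zeros(N, dtype=complex)
    V[:K + 1], V[-K:] = Vk[K:], Vk[:K]
    return np.fft.ifft(V) * N                          # kept complex: inputs need not be real

# Linearization waveforms dp/dy and dq/dy around an assumed HB iterate y[r](t)
t = np.arange(N) / (N * 1e9)
y_r = np.cos(w0 * t)                                   # assumed current iterate
gp = 1 / 50.0 + 3e-3 * y_r**2                          # dp/dy for p(y) = y/50 + 1e-3*y^3
gq = 1e-12 * np.ones(N)                                # dq/dy for q(y) = C*y, with C = 1 pF

def jacobian_matvec(v):
    """Apply J = G + j*Omega*C to a coefficient vector without ever forming J."""
    vt = to_time(v)                                    # go to the time domain
    return to_freq(gp * vt) + Omega * to_freq(gq * vt) # spectral convolutions via FFT products

J = LinearOperator((2 * K + 1, 2 * K + 1), matvec=jacobian_matvec, dtype=complex)

# Inexact Newton step: solve J * dY = -F for a single-tone test residual F
F = np.zeros(2 * K + 1, dtype=complex)
F[K + 1] = F[K - 1] = 0.01
dY, info = gmres(J, -F)
print("gmres exit flag:", info, "  |dY| at the fundamental:", abs(dY[K + 1]))
```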

The generalization of the above-described harmonic-Newton algorithm to the case of a generic electronic circuit with $n$ state variables is straightforward. Indeed, in such a case we will simply have

$$Y = \left[Y_1^T, Y_2^T, \ldots, Y_n^T\right]^T, \tag{14}$$

where each one of the $Y_i$, $i = 1, \ldots, n$, is a vector containing the $2K+1$ Fourier coefficients of the corresponding state variable $y_i(t)$. The matrix $\Omega$ will now be defined as

$$\Omega = \operatorname{diag}\left(\Omega_1, \Omega_2, \ldots, \Omega_n\right), \qquad \Omega_i = \operatorname{diag}\left(-K\omega_0, \ldots, 0, \ldots, K\omega_0\right), \tag{15}$$

and the Jacobian matrix will have a block structure, consisting of an $n \times n$ matrix of square submatrices (blocks), each one with dimension $(2K+1) \times (2K+1)$. Each block contains information on the sensitivity of changes in a component of the error function $F(Y)$ resulting from changes in a component of $Y$. The general block of row $i$ and column $j$ can be expressed as

$$J_{i,j} = \frac{\partial F_i}{\partial Y_j} = G_{i,j} + j\Omega_i\, C_{i,j}, \tag{16}$$

where $G_{i,j}$ and $C_{i,j}$ denote, respectively, the Toeplitz conversion matrices [7] of the vectors containing the Fourier coefficients of $\partial p_i/\partial y_j$ and $\partial q_i/\partial y_j$.

3. Hybrid Time-Frequency Simulation

3.1. Modulated Signals

Signals containing components that vary at two or more widely separated rates are usually referred to as multirate signals and are especially common in RF and microwave applications, such as mixers (up/down converters), modulators, demodulators, power amplifiers, and so forth. Multirate signals can appear in RF systems due to the existence of excitation regimes of widely separated time scales (e.g., baseband stimuli and high-frequency local oscillators) or because the stimuli can be, themselves, multirate signals (e.g., circuits driven by modulated signals). The general form of an amplitude- and phase-modulated signal can be defined as

$$x(t) = e(t)\cos\left(\omega_c t + \phi(t)\right), \tag{17}$$

where $e(t)$ and $\phi(t)$ are, respectively, the amplitude (or envelope) and phase slowly varying baseband signals modulating the fast-varying carrier of frequency $\omega_c$. Circuits driven by this kind of signal, or themselves presenting state variables of this type, are common in RF and microwave applications. Since the baseband signals have a spectral content of much lower frequency than the carrier, that is, because they are typically slowly varying signals while the carrier is a fast-varying entity, simulating nonlinear circuits containing this kind of signal is often very challenging. Because the aperiodic nature of the signals obviates the use of any steady-state technique, one might think that conventional time-step integration would be the natural method for simulating such circuits. However, the large time constants of the bias networks determine long transient regimes and, as a result, the obligation of simulating a large number of carrier periods. In addition, computing the RF carrier oscillations long enough to obtain information about its envelope and phase properties is, itself, a colossal task. Time-step integration is thus inadequate for simulating this kind of problem because it is computationally expensive or prohibitive.
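
A small numeric illustration of this time-scale disparity, assuming a hypothetical 1 GHz carrier modulated by a 100 kHz envelope: sampling the composite signal densely enough for the carrier over a single envelope period requires orders of magnitude more points than sampling each rate on its own scale.

```python
import numpy as np

f_c, f_env = 1e9, 1e5                  # hypothetical carrier and envelope rates
spp = 20                               # samples per period used for either rate

# Brute-force univariate sampling: resolve the carrier over one envelope period
n_univariate = int(spp * f_c / f_env)  # 200000 points

# Sampling each rate on its own scale (the idea exploited by the bivariate methods below)
n_bivariate = spp * spp                # 400 points

t = np.linspace(0.0, 1.0 / f_env, n_univariate, endpoint=False)
x = (1 + 0.5 * np.cos(2 * np.pi * f_env * t)) * np.cos(2 * np.pi * f_c * t)

print("univariate samples :", n_univariate)
print("bivariate grid     :", n_bivariate)
print("peak of |x(t)|     : %.3f" % np.abs(x).max())
```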

3.2. Hybrid Time-Frequency ETHB Technique

The envelope transient harmonic balance (ETHB) [13–16] is a hybrid time-frequency technique that was conceived to overcome the inefficiency revealed by SPICE-like engines (time-step integration schemes) when simulating circuits driven by modulated signals or presenting state variables of this type. It consists in calculating the response of the circuit to the baseband and the carrier by treating the envelope and phase in the time domain and the carrier in the frequency domain. For that, it assumes that the envelope and phase baseband signals are extremely slow when compared to the carrier, so that they can be considered practically constant during many carrier periods. Taking this into account, ETHB samples the baseband signals at an appropriately slow time rate and assumes a staircase version of both amplitude and phase, which leads to a new modulated version of these signals. The steady-state response of the circuit to this new modulated version is then computed at each time step with the frequency-domain HB engine.

In order to provide a very brief theoretical description of the ETHB technique, let us suppose that we have a circuit driven by a single source $x(t)$ of the form of (17). If we rewrite $x(t)$ as

$$x(t) = \operatorname{Re}\left[e(t)\, e^{j\phi(t)}\, e^{j\omega_c t}\right], \tag{18}$$

and assume that the circuit is stable, then all its state variables can be expressed as the time-varying Fourier series

$$y(t) = \sum_{k=-\infty}^{+\infty} Y_k(t)\, e^{jk\omega_c t}, \tag{19}$$

where $Y_k(t)$ represents the time-varying Fourier coefficients of $y(t)$, which are slowly varying in the baseband time scale. Now, if we take into consideration the disparity between the baseband and the carrier time scales and assume that they are also uncorrelated, which is normally the case, then we can rewrite (17) and (19) as

$$\hat{x}(t_1, t_2) = e(t_1)\cos\left(\omega_c t_2 + \phi(t_1)\right), \tag{20}$$

$$\hat{y}(t_1, t_2) = \sum_{k=-\infty}^{+\infty} Y_k(t_1)\, e^{jk\omega_c t_2}, \tag{21}$$

where $t_1$ is the slow baseband time scale and $t_2$ is the fast carrier time scale. Then, if we discretize the slow baseband time scale using a grid of successive time instants $t_{1,0} < t_{1,1} < t_{1,2} < \cdots$ and adopt a convenient harmonic truncation at some order $K$, we will obtain, for each $t_{1,i}$, a periodic boundary value problem that can be solved in the frequency domain with HB. In order to compute the whole response of the circuit, a set of successive HB equations of the form

$$j\Omega\, \mathbf{Q}\!\left(\hat{\mathbf{Y}}(t_{1,i})\right) + \mathbf{P}\!\left(\hat{\mathbf{Y}}(t_{1,i})\right) - \hat{\mathbf{X}}(t_{1,i}) = \mathbf{0} \tag{22}$$

has to be solved, in which $\hat{\mathbf{X}}(t_{1,i})$ and $\hat{\mathbf{Y}}(t_{1,i})$ represent the vectors containing the time-varying Fourier coefficients of the excitation and the solution, respectively.
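
The notion of time-varying Fourier coefficients in (19)-(21) can be illustrated numerically: at each slow instant $t_{1,i}$ the modulated excitation is sampled over one carrier period and a DFT is taken, yielding coefficients that track the complex envelope. The waveform parameters below are arbitrary illustration choices.

```python
import numpy as np

f_c, f_env = 1e9, 1e6                     # hypothetical carrier and envelope frequencies
K = 3                                     # harmonic truncation order
N = 2 * K + 1                             # carrier samples per period
T2 = 1.0 / f_c

e   = lambda t1: 1.0 + 0.8 * np.cos(2 * np.pi * f_env * t1)   # envelope e(t1)
phi = lambda t1: 0.3 * np.sin(2 * np.pi * f_env * t1)          # phase phi(t1)

t1_grid = np.linspace(0.0, 1.0 / f_env, 11)                    # slow time instants t_{1,i}
t2 = np.arange(N) * T2 / N                                     # one carrier period

for t1 in t1_grid:
    x_hat = e(t1) * np.cos(2 * np.pi * f_c * t2 + phi(t1))     # x(t1, t2) as in (20)
    Xk = np.fft.fft(x_hat) / N                                  # time-varying Fourier coefficients
    # the fundamental coefficient tracks the complex envelope: X_1 ~ e(t1) e^{j phi(t1)} / 2
    print("t1 = %5.2f us   |X_1| = %.3f   e(t1)/2 = %.3f"
          % (t1 * 1e6, abs(Xk[1]), e(t1) / 2))
```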

Two different ways can be conceived to expose the system's dynamics with respect to the time-varying envelope, depending on whether the circuit elements' constitutive relations are described in the frequency domain or can be formulated in the time domain.

In one possibility, we rely on the frequency-domain description of each of the constitutive elements, and so of the entire system represented in (22). Assuming that the envelope time evolution is much slower than that of the carrier, we no longer consider that each harmonic component of the carrier occupies a single frequency (constant amplitude and phase carrier) but spreads through its vicinity (slowly varying amplitude and phase modulation). For example, any dynamic linear component whose frequency-domain representation is $H(j\omega)$ can be approximated by a Taylor series (or any other polynomial or rational function) in the vicinity of each of the carrier harmonics, $k\omega_c$, that is, at $\omega = k\omega_c + \Delta\omega$, where $\Delta\omega$ is a slight frequency perturbation, as

$$H\!\left(j(k\omega_c + \Delta\omega)\right) \approx \sum_{m=0}^{M} c_{k,m}\,(j\Delta\omega)^m, \qquad c_{k,0} = H(jk\omega_c), \tag{23}$$

which leads to

$$\hat{Y}_k(j\Delta\omega) = H_k(j\Delta\omega)\, \hat{X}_k(j\Delta\omega), \tag{24}$$

$$\hat{Y}_k(j\Delta\omega) \approx \sum_{m=0}^{M} c_{k,m}\,(j\Delta\omega)^m\, \hat{X}_k(j\Delta\omega), \tag{25}$$

with $\hat{X}_k(j\Delta\omega)$ and $\hat{Y}_k(j\Delta\omega)$ being the low-pass equivalents of the excitation and response spectra around the $k$'th harmonic, and $H_k(j\Delta\omega) = H(j(k\omega_c + \Delta\omega))$ the corresponding low-pass equivalent of $H(j\omega)$. Since $c_{k,0} = H(jk\omega_c)$ is a constant, and $(j\Delta\omega)^m\,\hat{X}_k(j\Delta\omega)$ can be interpreted as the $m$'th order derivative of the time-domain envelope $\hat{x}_k(t_1)$ with respect to the time $t_1$, (25) can be rewritten as

$$\hat{y}_k(t_1) \approx \sum_{m=0}^{M} c_{k,m}\, \frac{d^m\, \hat{x}_k(t_1)}{dt_1^m}, \tag{26}$$

which, substituted in (22), would evidence the desired system's dynamics to the amplitude and phase modulations. Therefore, the ETHB technique consists in the transient simulation, on an envelope time-step by time-step basis, $t_{1,i}$, of the harmonic balance equation of (22).
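
The narrowband assumption behind this frequency-domain formulation can be checked numerically. The sketch below compares a first-order Taylor expansion of a hypothetical transfer function $H(j\omega) = 1/(1 + j\omega RC)$ around the carrier with its exact value at increasing frequency offsets, showing why the approach degrades as the modulation bandwidth grows (the limitation discussed next).

```python
import numpy as np

R, C = 50.0, 3e-12                      # hypothetical first-order low-pass element
f_c = 1e9
w_c = 2 * np.pi * f_c

H  = lambda w: 1.0 / (1.0 + 1j * w * R * C)
dH = lambda w: -1j * R * C / (1.0 + 1j * w * R * C) ** 2    # dH/dw

for bw in (1e6, 1e8, 5e8):              # modulation bandwidths (Hz)
    dw = 2 * np.pi * bw
    exact  = H(w_c + dw)
    taylor = H(w_c) + dH(w_c) * dw      # first-order expansion around the carrier
    err = abs(exact - taylor) / abs(exact)
    print("offset %6.0e Hz   relative error of the Taylor model: %.2e" % (bw, err))
```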

This formulation of ETHB is, nowadays, a mature technique in the RF simulation community. However, its basic assumption also constitutes its major drawback. By requiring the envelope and phase to be extremely slowly varying signals when compared to the carrier frequency, this mixed frequency-time technique becomes restricted to circuits whose stimuli occupy only a small fraction of the available bandwidth.

In an alternative ETHB formulation, we assume that every element can be described in the time domain. Hence, we can substitute the time-varying Fourier description of (21) into (1) and then treat the carrier time, $t_2$, in the frequency domain (converting the DAE system into an algebraic one), while keeping the envelope time, $t_1$, in the time domain. This way, we obtain another hybrid time-frequency description of the system that no longer suffers from the narrow bandwidth restriction just mentioned and whose formulation and solution will be discussed in more detail in Section 3.4.

3.3. Multivariate Formulation

We will now introduce a powerful strategy for analyzing nonlinear circuits handling amplitude- and/or phase-modulated signals, as well as any other kind of multirate signals. This strategy consists in using multiple time variables to describe the multirate behavior, and it is based on the fact that multirate signals can be represented much more efficiently if they are defined as functions of two or more time variables, that is, if they are defined as multivariate functions [17, 18]. With this multivariate formulation, circuits will no longer be described by ordinary differential algebraic equations in one-dimensional time but, instead, by partial differential algebraic systems.

Let us consider the amplitude- and phase-modulated signal of (17), and let us define its bivariate form as

$$\hat{x}(t_1, t_2) = e(t_1)\cos\left(\omega_c t_2 + \phi(t_1)\right), \tag{27}$$

where $t_1$ is the slow envelope time scale and $t_2$ is the fast carrier time scale. As can be seen, $\hat{x}(t_1, t_2)$ is a periodic function with respect to $t_2$ but not to $t_1$, that is,

$$\hat{x}(t_1, t_2) = \hat{x}(t_1, t_2 + T_2), \qquad T_2 = \frac{2\pi}{\omega_c}, \tag{28}$$

and, in general, this bivariate form requires far fewer points to represent the original signal numerically, especially when the $t_1$ and $t_2$ time scales are widely separated [17, 18].

Let us now consider the differential algebraic equations' (DAE) system of (1), describing the behavior of a generic RF circuit driven by the envelope-modulated signal of (17). Taking the above considerations into account, we will adopt the following procedure: for the slowly varying parts (envelope time scale) of the expressions of the vectors $\mathbf{x}(t)$ and $\mathbf{y}(t)$, $t$ is replaced by $t_1$; for the fast-varying parts (RF carrier time scale), $t$ is replaced by $t_2$. The application of this bivariate strategy to the DAE system of (1) converts it into the following multirate partial differential algebraic equations' (MPDAE) system [17, 18]:

$$\mathbf{p}\left[\hat{\mathbf{y}}(t_1, t_2)\right] + \frac{\partial\, \mathbf{q}\left[\hat{\mathbf{y}}(t_1, t_2)\right]}{\partial t_1} + \frac{\partial\, \mathbf{q}\left[\hat{\mathbf{y}}(t_1, t_2)\right]}{\partial t_2} = \hat{\mathbf{x}}(t_1, t_2). \tag{29}$$

The mathematical relation between (1) and (29) establishes that, if $\hat{\mathbf{x}}(t_1, t_2)$ and $\hat{\mathbf{y}}(t_1, t_2)$ satisfy (29), then the univariate forms $\mathbf{x}(t) = \hat{\mathbf{x}}(t, t)$ and $\mathbf{y}(t) = \hat{\mathbf{y}}(t, t)$ satisfy (1) [18]. Therefore, the univariate solutions of (1) are available on the diagonal lines $t_1 = t$, $t_2 = t$, along the bivariate solutions of (29), that is, $\mathbf{y}(t)$ may be retrieved from its bivariate form $\hat{\mathbf{y}}(t_1, t_2)$ by simply setting $t_1 = t_2 = t$. Consequently, if one wants to obtain the univariate solution in a generic interval $[0, t_f]$, due to the periodicity of the problem in the $t_2$ dimension we will have

$$\mathbf{y}(t) = \hat{\mathbf{y}}\left(t,\, t \bmod T_2\right) \tag{30}$$

on the rectangular domain $[0, t_f] \times [0, T_2]$, where $t \bmod T_2$ represents the remainder of the division of $t$ by $T_2$. The main advantage of this MPDAE approach is that it can result in significant improvements in simulation speed when compared to DAE-based alternatives [17–20].
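
The retrieval rule of (30) can be verified directly: build the bivariate form of a hypothetical envelope-modulated signal on a modest rectangular grid and read the univariate signal along the wrapped diagonal $t_1 = t$, $t_2 = t \bmod T_2$. The signal and the grid sizes are arbitrary illustration choices.

```python
import numpy as np

f_c, f_env = 1e9, 1e6                      # hypothetical carrier / envelope rates
w_c = 2 * np.pi * f_c
T2 = 1.0 / f_c
t_f = 1.0 / f_env

e     = lambda t1: 1.0 + 0.8 * np.cos(2 * np.pi * f_env * t1)
x_hat = lambda t1, t2: e(t1) * np.cos(w_c * t2)          # bivariate form, as in (27)
x_uni = lambda t:      e(t)  * np.cos(w_c * t)           # original univariate signal

# The bivariate form only needs a coarse rectangular grid [0, t_f] x [0, T2]
n1, n2 = 64, 32
grid = x_hat(np.linspace(0, t_f, n1)[:, None], np.linspace(0, T2, n2)[None, :])

# Univariate signal recovered on the wrapped diagonal t1 = t, t2 = t mod T2, as in (30)
t = np.linspace(0.0, t_f, 5000, endpoint=False)
err = np.abs(x_hat(t, t % T2) - x_uni(t)).max()

print("bivariate grid storage :", grid.size, "samples")
print("dense univariate storage (20 pts/carrier period):", int(20 * f_c / f_env), "samples")
print("max |diagonal - univariate| = %.2e" % err)
```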

Envelope-modulated responses to excitations of the form of (17) correspond to a combination of initial and periodic boundary conditions for the MPDAE. This means that the bivariate forms of these solutions can be obtained by numerically solving the following initial-boundary value problem [18]:

$$\mathbf{p}\left[\hat{\mathbf{y}}(t_1, t_2)\right] + \frac{\partial\, \mathbf{q}\left[\hat{\mathbf{y}}(t_1, t_2)\right]}{\partial t_1} + \frac{\partial\, \mathbf{q}\left[\hat{\mathbf{y}}(t_1, t_2)\right]}{\partial t_2} = \hat{\mathbf{x}}(t_1, t_2),$$
$$\hat{\mathbf{y}}(0, t_2) = \mathbf{g}(t_2), \qquad \hat{\mathbf{y}}(t_1, t_2) = \hat{\mathbf{y}}(t_1, t_2 + T_2), \tag{31}$$

on the rectangle $[0, t_f] \times [0, T_2]$. $\mathbf{g}(t_2)$ is a given initial-condition function defined on $[0, T_2]$, satisfying $\mathbf{g}(t_2) = \mathbf{g}(t_2 + T_2)$, and the periodic boundary condition is due to the periodicity of the problem in the fast carrier time scale. The reason why bivariate envelope-modulated solutions do not need to be evaluated on the entire $[0, t_f] \times [0, t_f]$ domain (which would be computationally very expensive and would render the multivariate strategy useless), and are restricted to the rectangle $[0, t_f] \times [0, T_2]$, is that the solutions repeat along the $t_2$ time axis.

3.4. Multitime Envelope Transient Harmonic Balance

Multitime envelope transient harmonic balance is an improved version of the previously described ETHB technique, which is based on the multivariate formulation [21, 22]. To achieve an intuitive explanation of multitime envelope transient harmonic balance, let us consider the initial-boundary value problem of (31), and let us also consider the semidiscretization of the rectangular domain $[0, t_f] \times [0, T_2]$ in the slow time dimension $t_1$ defined by the grid

$$0 = t_{1,0} < t_{1,1} < \cdots < t_{1,K_1} = t_f, \qquad h_i = t_{1,i} - t_{1,i-1}, \tag{32}$$

where $K_1$ is the total number of steps in $t_1$. If we replace the derivatives of the MPDAE in $t_1$ with a finite-differences approximation (e.g., the Backward Euler rule), then we obtain, for each slow time instant $t_{1,i}$, from $i = 1$ to $i = K_1$, the periodic boundary value problem defined by

$$\mathbf{p}\left[\hat{\mathbf{y}}_i(t_2)\right] + \frac{\mathbf{q}\left[\hat{\mathbf{y}}_i(t_2)\right] - \mathbf{q}\left[\hat{\mathbf{y}}_{i-1}(t_2)\right]}{h_i} + \frac{d\, \mathbf{q}\left[\hat{\mathbf{y}}_i(t_2)\right]}{dt_2} = \hat{\mathbf{x}}(t_{1,i}, t_2), \qquad \hat{\mathbf{y}}_i(t_2) = \hat{\mathbf{y}}_i(t_2 + T_2), \tag{33}$$

where $\hat{\mathbf{y}}_i(t_2) = \hat{\mathbf{y}}(t_{1,i}, t_2)$. This means that, once $\hat{\mathbf{y}}_{i-1}(t_2)$ is known, the solution at the next slow time instant, $\hat{\mathbf{y}}_i(t_2)$, is obtained by solving (33). Thus, for obtaining the whole solution in the entire domain $[0, t_f] \times [0, T_2]$, a total of $K_1$ boundary value problems have to be solved. With multitime ETHB, each one of these periodic boundary value problems is solved using the harmonic balance method. The corresponding HB system for each slow time instant $t_{1,i}$ is the algebraic equation set given by

$$\mathbf{P}\!\left(\hat{\mathbf{Y}}(t_{1,i})\right) + \frac{\mathbf{Q}\!\left(\hat{\mathbf{Y}}(t_{1,i})\right) - \mathbf{Q}\!\left(\hat{\mathbf{Y}}(t_{1,i-1})\right)}{h_i} + j\Omega\, \mathbf{Q}\!\left(\hat{\mathbf{Y}}(t_{1,i})\right) = \hat{\mathbf{X}}(t_{1,i}), \tag{34}$$

where $\hat{\mathbf{X}}(t_{1,i})$ and $\hat{\mathbf{Y}}(t_{1,i})$ are the vectors containing the Fourier coefficients of the excitation sources and of the solution (the state variables), respectively, at $t_1 = t_{1,i}$. $\mathbf{P}(\cdot)$ and $\mathbf{Q}(\cdot)$ are the vectors of Fourier coefficients of the nonlinear functions $\mathbf{p}(\cdot)$ and $\mathbf{q}(\cdot)$, $\Omega$ is the diagonal matrix of (15), and the vector $\hat{\mathbf{Y}}(t_{1,i})$ can be expressed as

$$\hat{\mathbf{Y}}(t_{1,i}) = \left[\hat{Y}_1(t_{1,i})^T, \hat{Y}_2(t_{1,i})^T, \ldots, \hat{Y}_n(t_{1,i})^T\right]^T, \tag{35}$$

where each one of the state-variable frequency components, $\hat{Y}_l(t_{1,i})$, $l = 1, \ldots, n$, is a vector defined as

$$\hat{Y}_l(t_{1,i}) = \left[\hat{Y}_{l,-K}(t_{1,i}), \ldots, \hat{Y}_{l,-1}(t_{1,i}), \hat{Y}_{l,0}(t_{1,i}), \hat{Y}_{l,1}(t_{1,i}), \ldots, \hat{Y}_{l,K}(t_{1,i})\right]^T. \tag{36}$$
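
A minimal sketch of the multitime ETHB stepping of (32)-(34) is given below for a hypothetical linear circuit ($p(y) = y/R$, $q(y) = Cy$) driven by an envelope-modulated current source; because the circuit is linear, each slow-time step collapses to a single linear solve instead of the Newton iteration of (38)-(41) that a nonlinear circuit would require. All values are illustration choices.

```python
import numpy as np

# Hypothetical linear circuit: p(y) = y/R, q(y) = C*y, driven by an
# envelope-modulated current source x(t1, t2) = e(t1) * cos(w_c * t2).
R, C = 50.0, 2e-12
f_c, f_env = 1e9, 1e6
w_c = 2 * np.pi * f_c

K = 3                                        # harmonic truncation order
k = np.arange(-K, K + 1)
Omega = 1j * np.diag(k * w_c)                # frequency-domain differentiation, as in (15)

e = lambda t1: 0.02 * (1.0 + 0.8 * np.cos(2 * np.pi * f_env * t1))

def X_hat(t1):                               # Fourier coefficients of x(t1, .) over t2
    X = np.zeros(2 * K + 1, dtype=complex)
    X[K + 1] = X[K - 1] = 0.5 * e(t1)        # cos -> +/- first harmonic
    return X

# Backward-Euler grid in the slow time t1, as in (32)
K1 = 200
t1_grid, h = np.linspace(0.0, 1.0 / f_env, K1 + 1, retstep=True)

G = (1.0 / R) * np.eye(2 * K + 1)            # conversion matrix of dp/dy (constant)
Cm = C * np.eye(2 * K + 1)                   # conversion matrix of dq/dy (constant)

Y = np.zeros(2 * K + 1, dtype=complex)       # initial condition Y(t1 = 0)
fund = []
for t1 in t1_grid[1:]:
    # HB system (34): P(Y_i) + (Q(Y_i) - Q(Y_{i-1}))/h + j*Omega*Q(Y_i) = X(t1_i).
    # For this linear circuit it reduces to one linear solve per slow step.
    A = G + (np.eye(2 * K + 1) / h + Omega) @ Cm
    Y = np.linalg.solve(A, X_hat(t1) + (Cm @ Y) / h)
    fund.append(abs(Y[K + 1]))               # envelope of the fundamental component

print("min/max fundamental amplitude over the envelope period: %.4f / %.4f"
      % (min(fund), max(fund)))
```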

As seen in Section 2.3, since $\mathbf{p}(\cdot)$ and $\mathbf{q}(\cdot)$ are, in general, nonlinear functions, one possible way to compute $\mathbf{P}(\hat{\mathbf{Y}}(t_{1,i}))$ and $\mathbf{Q}(\hat{\mathbf{Y}}(t_{1,i}))$ in (34) consists in evaluating $\mathbf{p}(\hat{\mathbf{y}}_i(t_2))$ and $\mathbf{q}(\hat{\mathbf{y}}_i(t_2))$ in the time domain and then calculating their Fourier coefficients. The HB system of (34) can be rewritten as

$$\mathbf{P}\!\left(\hat{\mathbf{Y}}(t_{1,i})\right) + \left(\frac{1}{h_i}\mathbf{I} + j\Omega\right)\mathbf{Q}\!\left(\hat{\mathbf{Y}}(t_{1,i})\right) - \frac{1}{h_i}\mathbf{Q}\!\left(\hat{\mathbf{Y}}(t_{1,i-1})\right) - \hat{\mathbf{X}}(t_{1,i}) = \mathbf{0}, \tag{37}$$

or, in its simplified form, as

$$\mathbf{F}\!\left(\hat{\mathbf{Y}}(t_{1,i})\right) = \mathbf{0}, \tag{38}$$

in which $\mathbf{F}(\cdot)$ is the error function at $t_{1,i}$. In order to solve the nonlinear algebraic system of (38), a Newton-Raphson iterative solver is usually used. In this case, the Newton-Raphson algorithm conducts us to

$$\mathbf{J}\!\left[\hat{\mathbf{Y}}(t_{1,i})^{[r]}\right]\left(\hat{\mathbf{Y}}(t_{1,i})^{[r+1]} - \hat{\mathbf{Y}}(t_{1,i})^{[r]}\right) = -\mathbf{F}\!\left(\hat{\mathbf{Y}}(t_{1,i})^{[r]}\right), \tag{39}$$

which means that, at each iteration $r$, we have to solve a linear system of equations to compute the new estimate $\hat{\mathbf{Y}}(t_{1,i})^{[r+1]}$. Consecutive Newton iterations will be computed until a desired accuracy is achieved, that is, until $\|\mathbf{F}(\hat{\mathbf{Y}}(t_{1,i})^{[r+1]})\| < \text{tol}$, where tol is the allowed error ceiling.

The system of (39) involves the derivative of the error-function vector $\mathbf{F}(\hat{\mathbf{Y}}(t_{1,i}))$ with respect to the vector $\hat{\mathbf{Y}}(t_{1,i})$. The result is a matrix, the so-called Jacobian of $\mathbf{F}$,

$$\mathbf{J}\!\left[\hat{\mathbf{Y}}(t_{1,i})\right] = \frac{\partial\, \mathbf{F}\!\left(\hat{\mathbf{Y}}(t_{1,i})\right)}{\partial\, \hat{\mathbf{Y}}(t_{1,i})}. \tag{40}$$

In the same way as in Section 2.3, this matrix has a block structure, consisting of an $n \times n$ matrix of square submatrices (blocks), each one with dimension $(2K+1) \times (2K+1)$. The general block of row $l$ and column $m$ can now be expressed as

$$\mathbf{J}_{l,m} = \mathbf{G}_{l,m} + \left(\frac{1}{h_i}\mathbf{I} + j\Omega_l\right)\mathbf{C}_{l,m}, \tag{41}$$

where $\mathbf{G}_{l,m}$ and $\mathbf{C}_{l,m}$ are the Toeplitz conversion matrices of the Fourier coefficients of $\partial p_l/\partial y_m$ and $\partial q_l/\partial y_m$ evaluated at the current iterate.

In summary, multitime ETHB handles the solution dependence on $t_2$ in the frequency domain, while treating the evolution of the solution along $t_1$ in the time domain. So, it is a hybrid time-frequency technique similar to the ETHB engine previously presented in Section 3.2. However, an important advantage of multitime ETHB over conventional ETHB is that it does not suffer from bandwidth limitations [21]. For example, in circuits driven by envelope-modulated signals, the only restriction that has to be imposed is that the modulating signal and the carrier must not be correlated in time (which is typically the case).

4. Advanced Hybrid Time-Frequency Simulation

One limitation of the ETHB and multitime ETHB engines is that they do not perform any distinction between nodes or blocks within the circuit, that is to say, they treat all the circuit's state variables in the same way. Thus, if the circuit exhibits some heterogeneity, as is the case of modern wireless architectures combining radio frequency, baseband analog, and digital blocks in the same circuit, these tools cannot benefit from such a feature. To overcome this difficulty, an innovative mixed-mode time-frequency technique was recently proposed by the authors [23, 24]. This technique splits the circuit's state variables (node voltages and mesh currents) into fast and slowly varying subsets, treating the former with multitime ETHB and the latter with a SPICE-like engine (a time-step integration scheme). This way, the strong nonlinearities of the circuit are appropriately evaluated in the time domain, while the moderate ones are computed in the frequency domain [23, 24].

4.1. Time-Domain Latency within the Multivariate Formulation

In order to provide an illustrative explanation of the issues under discussion in this section, let us start by considering an RF circuit in which some of its state variables (node voltages and branch currents) are fast-carrier envelope-modulated waveforms, while the remaining state variables are slowly varying aperiodic signals. For concreteness, let us suppose that the signals

$$y_a(t) = \sum_{k=-K}^{+K} Y_{a,k}(t)\, e^{jk\omega_c t}, \qquad y_l(t) = b(t), \tag{42}$$

are two distinct state variables in different parts of the circuit. $Y_{a,k}(t)$ represents the Fourier coefficients of $y_a(t)$, which are slowly varying in the baseband time scale, $\omega_c$ is the carrier frequency, and $b(t)$ is a slowly varying aperiodic baseband function. We will denote signals of the form of $y_a(t)$ as active and signals of the form of $y_l(t)$ as latent. The latency revealed by $y_l(t)$ indicates that this variable belongs to a circuit block where there are no fluctuations dictated by the fast carrier. Consequently, due to its slowness, it can be represented efficiently with far fewer sample points than $y_a(t)$. On the other hand, since it does not evidence any periodicity, it cannot be processed with harmonic balance. On the contrary, if the number of harmonics is not too large, the fast carrier oscillation components of $y_a(t)$ can be efficiently computed in the frequency domain. Therefore, it is straightforward to conclude that, if we want to simulate circuits having such signal format disparities in an efficient way, distinct numerical strategies will be required.

Let us now consider the bivariate forms of $y_a(t)$ and $y_l(t)$, denoted by $\hat{y}_a(t_1, t_2)$ and $\hat{y}_l(t_1, t_2)$ and defined as

$$\hat{y}_a(t_1, t_2) = \sum_{k=-K}^{+K} Y_{a,k}(t_1)\, e^{jk\omega_c t_2}, \qquad \hat{y}_l(t_1, t_2) = b(t_1), \tag{43}$$

where $t_1$ and $t_2$ are, respectively, the slow envelope time dimension and the fast carrier time dimension. As we can see, $\hat{y}_l(t_1, t_2)$ has no dependence on $t_2$, so it has no fluctuations along the fast time axis. In fact, this is so because $y_l(t)$ does not oscillate at the carrier frequency. Consequently, for each slow time instant $t_{1,i}$ defined on the grid of (32), while $\hat{y}_a(t_{1,i}, t_2)$ is a waveform that has to be represented by a certain quantity of harmonic components, $\hat{y}_l(t_{1,i}, t_2)$ is merely a constant (DC) signal that can be simply represented by its $k = 0$ component. Therefore, there is no necessity to perform the conversion between time and frequency domains for $\hat{y}_l(t_1, t_2)$, which means that this state variable can be processed in a purely time-domain scheme.

4.2. Mixed Mode Time-Frequency Technique

In the above, we illustrated that the bivariate forms of latent state variables have no undulations in the fast time scale. So, while active state variables have to be represented by a set of harmonic components arranged in vectors of the form of (36), latent state variables can be represented as scalar quantities, that is,

$$\hat{Y}_l(t_{1,i}) = \hat{Y}_{l,0}(t_{1,i}). \tag{44}$$

By considering this, it is straightforward to conclude that the size of the vector $\hat{\mathbf{Y}}(t_{1,i})$ defined by (35) can be considerably reduced, as can the total number of equations in the HB system of (37). An additional and crucial detail is that there is no longer an obligation to perform the conversion between time and frequency domains for the latent state variables expressed in the form of (44), as well as for the components of the error function corresponding to latent blocks of the circuit. Since the zero-order Fourier coefficient is exactly the same as the constant time-domain value, the use of the discrete Fourier transform (DFT) and the inverse discrete Fourier transform (IDFT)—or their fast algorithms, that is, the fast Fourier transform (FFT) and the inverse fast Fourier transform (IFFT)—will be required only for the components in the HB system of (37) that depend on active state variables. Significant Jacobian matrix size reductions will be achieved, too. In effect, by taking this multirate characteristic (the subset circuit latency) into consideration, some of the blocks of (40) become merely scalar elements that contain dc information on the sensitivity of changes in components of $\mathbf{F}$ resulting from changes in latent components of $\hat{\mathbf{Y}}(t_{1,i})$.
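
The bookkeeping implied by this partitioning can be sketched as follows, assuming a hypothetical circuit with a given number of active and latent state variables: active unknowns keep $2K+1$ Fourier coefficients each, latent unknowns collapse to their dc value as in (44), and FFT/IFFT conversions are performed only for the active partition. The counts below are arbitrary illustration values.

```python
import numpy as np

K = 10                        # harmonic truncation order used for the active variables
n_active, n_latent = 2, 9     # hypothetical partition of the circuit's state variables

# Unknown-vector sizes per slow time instant t1_i
full_size  = (n_active + n_latent) * (2 * K + 1)       # every variable kept in the form of (35)-(36)
mixed_size = n_active * (2 * K + 1) + n_latent         # latent variables collapse to (44)

print("unknowns per slow step : %d (multitime ETHB)  vs  %d (mixed-mode)" % (full_size, mixed_size))
print("Jacobian entries       : %d  vs  %d" % (full_size**2, mixed_size**2))

# Only the active partition requires time <-> frequency conversions:
N = 4 * K + 1
t2 = np.arange(N) / N
y_active = np.cos(2 * np.pi * t2) ** 3                 # an active waveform over one carrier period
Y_active = np.fft.fft(y_active) / N                    # 2K+1 of these bins are retained
y_latent = 0.37                                        # a latent variable: a single (dc) value
print("active variable -> %d Fourier coefficients (|Y_1| = %.3f); latent variable -> 1 scalar"
      % (2 * K + 1, abs(Y_active[1])))
```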

With this strategy of partitioning the circuit into active and latent subcircuits (blocks), significant computation and memory savings can be achieved when finding the solution of (37). Indeed, with the state-variable and error-function vector size reductions, as well as the resulting Jacobian matrix size reduction, it is possible to avoid dealing with large linear systems in the iterations of (39). Thus, a much less computationally expensive Newton-Raphson iterative solver is required.

5. Performance of the Methods

The performance and efficiency of the ETHB and multitime ETHB techniques have already been attested and recognized by the RF and microwave community. In the same way, the performance and efficiency of the advanced hybrid technique described in the previous section (the mixed-mode time-frequency simulation technique) have also been demonstrated through its application to several illustrative examples of practical relevance. Indeed, electronic circuits with distinct configurations and levels of complexity were specifically selected to illustrate the significant gains in computational speed that can be achieved when simulating the circuits with this method [23, 24]. Nevertheless, in order to provide the reader with a realistic idea of the potential of this recently proposed technique, we include in this section a brief comparison between this method and the previous state-of-the-art multitime ETHB. For that, we considered two distinct circuits: the resistive FET mixer depicted in Figure 1 and the RF polar transmitter described in [23] and depicted in Figure 2.

The circuits were simulated in MATLAB with the mixed-mode time-frequency simulation technique and with the multitime ETHB. In our experiments a dynamic step-size control tool was used in the slow time scale, and a fixed maximum harmonic order was considered for the HB evaluations. Numerical computation times (in seconds) for simulations over the [0, 0.5 μs] and [0, 5.0 μs] intervals are presented in Tables 1 and 2.

As we can see, speedups of approximately 2 times were obtained for the simulation of the resistive FET mixer, and speedups of more than one order of magnitude were obtained for the RF polar transmitter. These efficiency gains were achieved without compromising accuracy. Indeed, for both cases, the maximum discrepancy between solutions (for all the circuits' state variables) was on the order of 10⁻⁸.

These two circuits, which have different levels of complexity, were chosen to illustrate how the gain in computational efficiency becomes more evident as the ratio between the number of active and latent state variables increases. In the first example, this ratio is 1, whereas in the second one it is 4.5.

6. Conclusion

Although significant advances have been made in RF and microwave circuit simulation over the years, the use of more elaborate functional analysis techniques has kept this subject a hot topic of scientific and practical engineering interest. Indeed, emerging wireless communication technologies continuously bring new challenges to this scientific field, as is now the case of heterogeneous RF circuits containing state variables of distinct formats and running on widely separated time scales. Taking into account the popularity of HB, and especially ETHB, in the RF and microwave community, in this paper we have briefly reviewed the use of some functional analysis methods to address numerical simulation challenges using hybrid time-frequency techniques. A comparison between two state-of-the-art hybrid techniques in terms of computational speed is also included to evidence the efficiency gains that can be achieved by partitioning heterogeneous circuits into blocks, treating latent blocks in a one-dimensional space and active ones in a two-dimensional space.

Acknowledgment

The authors would like to acknowledge the financial support provided by the Portuguese Science and Technology Foundation (FCT) under the Project SOPAS-PTDC/EEA-TEL/114530/2009.