Abstract

Electronic circuit simulation, especially for radio frequency (RF) and microwave telecommunications, is being challenged by increasingly complex applications that present signals of very different natures evolving on widely separated time scales. In this paper, we briefly review some recently developed ways to address these challenges, describing advanced numerical simulation techniques based on multirate Runge-Kutta schemes, which operate in the natural one-dimensional time as well as within multidimensional frameworks.

1. Introduction

In recent years, radio frequency (RF) and microwave electronic circuit simulation has been confronted with an increasingly demanding scenario of heterogeneous broadband and strongly nonlinear wireless communication circuits, presenting a wide variety of slowly varying and fast-changing state variables (node voltages and branch currents). In such circuits, the baseband analog blocks, the digital blocks, and the RF blocks may all be intricately mixed. Classical simulation tools are not capable of handling this kind of circuit in an efficient way because they make no distinction between nodes or blocks within the circuit according to their time constants and/or excitation regimes. Thus, all the blocks in the circuit are treated in the same way, which means that the same numerical algorithm is required to simultaneously compute the response of the digital blocks, the baseband analog blocks, and the RF blocks. Taking into account that signals in different blocks (different parts of the circuit) have completely different formats and evolve on widely separated time scales (which may differ by three, or more, orders of magnitude), it is easy to conclude that applying the same numerical scheme to all the blocks will result in high inefficiency.

To cope with this scenario, some innovative time-domain techniques have been proposed in the literature over the last few years. In order to benefit from the different rates of variation of slowly varying and fast-varying currents and voltages (the circuits' state variables) in different parts of the circuit, several advanced numerical techniques based on multirate Runge-Kutta schemes have been proposed, operating within one-dimensional and multivariate frameworks. Such techniques include diverse circuit partitioning strategies, which allow the simulator to automatically split the network into subcircuits according to the different time rates of change of their state variables. In order to provide a general overview of these simulation techniques, this paper is organized as follows.

After this brief introduction, Section 2 provides some general background material on standard time-domain circuit simulation techniques commonly used for computing the numerical solution of ordinary unirate problems. Then, Section 3 is essentially devoted to the presentation of some mathematical details of the numerical simulation algorithms based on multirate Runge-Kutta schemes. Finally, Section 4 concludes this paper.

2. Time-Domain Circuit Simulation Fundamentals

2.1. Mathematical Model of an Electronic Circuit

The behavior of an electronic circuit can be described by a system of equations involving voltages, currents, charges, and fluxes. This system can be constructed from a circuit description using, for example, nodal analysis, which consists in applying the Kirchhoff current law to each node in the circuit and the constitutive (branch) equations to each circuit element. In general, systems generated this way have the form

\[ \mathbf{p}(\mathbf{y}(t)) + \frac{d\,\mathbf{q}(\mathbf{y}(t))}{dt} = \mathbf{x}(t), \quad (2.1) \]

where $\mathbf{x}(t) \in \mathbb{R}^n$ and $\mathbf{y}(t) \in \mathbb{R}^n$ stand for the excitation (independent voltage and current sources) and state-variable (node voltages and branch currents) vectors, respectively. $\mathbf{p}(\mathbf{y}(t))$ represents memoryless linear or nonlinear elements, such as resistors, nonlinear voltage-controlled current sources, and so forth, while $\mathbf{q}(\mathbf{y}(t))$ models dynamic linear or nonlinear elements, such as capacitors (represented as linear or nonlinear voltage-dependent electric charges) or inductors (represented as linear or nonlinear current-dependent magnetic fluxes).

The system of (2.1) is, in general, a differential algebraic equations' (DAE) system. To give an intuitive illustration of the mathematical formulation of (2.1), let us consider the basic example depicted in Figure 1. This circuit is composed of a current source connected to a linear inductance and two nonlinear circuit elements commonly used when modeling semiconductor devices (a nonlinear capacitance and a nonlinear voltage-dependent current source). These nonlinearities are assumed to be quasistatic [1, 2] and are thus described by algebraic constitutive relations of voltage-dependent charge and voltage-dependent current. A nodal analysis of this circuit leads to the following system of equations in the capacitor voltage $v_C(t)$ and the inductor current $i_L(t)$:

\[ i_{\mathrm{NL}}(v_C(t)) + i_L(t) + \frac{d\,q_{\mathrm{NL}}(v_C(t))}{dt} = i_S(t), \qquad -v_C(t) + \frac{d\,[L\, i_L(t)]}{dt} = 0. \quad (2.2) \]

This system can be seen as a particular case (in $\mathbb{R}^2$) of the DAE system of (2.1), in which the excitation vector $\mathbf{x}(t)$ and the vector of state variables $\mathbf{y}(t)$ are given by

\[ \mathbf{x}(t) = \begin{bmatrix} i_S(t) \\ 0 \end{bmatrix}, \qquad \mathbf{y}(t) = \begin{bmatrix} v_C(t) \\ i_L(t) \end{bmatrix}. \quad (2.3) \]

The DAE system of (2.1) may obviously be written in other forms. For instance, if we apply the chain differentiation rule to the dynamic term of its left-hand side, we obtain

\[ \frac{d\,\mathbf{q}(\mathbf{y})}{d\mathbf{y}} \frac{d\,\mathbf{y}(t)}{dt} = \mathbf{x}(t) - \mathbf{p}(\mathbf{y}(t)), \quad (2.4) \]

or

\[ \mathbf{M}(\mathbf{y}(t)) \frac{d\,\mathbf{y}(t)}{dt} = \mathbf{x}(t) - \mathbf{p}(\mathbf{y}(t)), \quad (2.5) \]

in which $\mathbf{M}(\mathbf{y}(t))$ is usually known as the mass matrix. If this matrix is nonsingular, then the DAE system of (2.4) may degenerate into the following ordinary differential equations' (ODE) system:

\[ \frac{d\,\mathbf{y}(t)}{dt} = \mathbf{M}(\mathbf{y}(t))^{-1} \bigl[ \mathbf{x}(t) - \mathbf{p}(\mathbf{y}(t)) \bigr], \quad (2.6) \]

which can be rewritten in the classical form

\[ \frac{d\,\mathbf{y}(t)}{dt} = \mathbf{f}(t, \mathbf{y}(t)), \quad (2.7) \]

commonly used in the mathematical literature. When $\mathbf{M}(\mathbf{y}(t))$ is singular, the DAE system of (2.4) will not degenerate into an ODE system, but it is often possible to express it as a set of algebraic equations combined with a set of differential equations of the form (2.7).

From the above, we conclude that, in many cases, electronic circuits may be described by ODE systems instead of DAE systems. For example, if we return to the simple nonlinear dynamic circuit of Figure 1 and rewrite (2.2) as

\[ \frac{d\,q_{\mathrm{NL}}(v_C)}{dv_C} \frac{d\,v_C(t)}{dt} = i_S(t) - i_{\mathrm{NL}}(v_C(t)) - i_L(t), \qquad L \frac{d\,i_L(t)}{dt} = v_C(t), \quad (2.8) \]

or, in its vector-matrix form, as

\[ \begin{bmatrix} \dfrac{d\,q_{\mathrm{NL}}(v_C)}{dv_C} & 0 \\[2mm] 0 & L \end{bmatrix} \begin{bmatrix} \dfrac{d\,v_C(t)}{dt} \\[2mm] \dfrac{d\,i_L(t)}{dt} \end{bmatrix} = \begin{bmatrix} i_S(t) - i_{\mathrm{NL}}(v_C(t)) - i_L(t) \\[1mm] v_C(t) \end{bmatrix}, \quad (2.9) \]

we can easily see that if $d\,q_{\mathrm{NL}}(v_C)/dv_C \neq 0$, then the mass matrix is nonsingular, and the circuit may be described by an ODE system expressed in the classical form of (2.7), which, in this case, simply results in

\[ \frac{d\,v_C(t)}{dt} = \left[ \frac{d\,q_{\mathrm{NL}}(v_C)}{dv_C} \right]^{-1} \bigl[ i_S(t) - i_{\mathrm{NL}}(v_C(t)) - i_L(t) \bigr], \qquad \frac{d\,i_L(t)}{dt} = \frac{1}{L} v_C(t). \quad (2.10) \]
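To make the preceding derivation concrete, the following minimal sketch implements the right-hand side of (2.10) in Python. The device functions q_NL and i_NL, the inductance value, and the excitation used here are hypothetical placeholders chosen only for illustration; they are not the element values of Figure 1, which are not specified in the text.

```python
# A minimal sketch of the classical ODE form (2.10) for the circuit of Figure 1.
import numpy as np

L = 1e-9            # assumed inductance [H] (hypothetical value)

def q_nl(v):        # hypothetical nonlinear charge q_NL(v_C)
    return 1e-12 * v + (0.1e-12 / 3) * v**3

def dq_nl_dv(v):    # its derivative dq_NL/dv_C (always nonzero here, see text)
    return 1e-12 + 0.1e-12 * v**2

def i_nl(v):        # hypothetical nonlinear current i_NL(v_C)
    return 1e-3 * np.tanh(v)

def i_s(t):         # assumed excitation: a 1 GHz sinusoidal current source
    return 1e-3 * np.sin(2 * np.pi * 1e9 * t)

def f(t, y):
    """Right-hand side f(t, y) of (2.10), with y = [v_C, i_L]."""
    v_c, i_l = y
    dv_c = (i_s(t) - i_nl(v_c) - i_l) / dq_nl_dv(v_c)
    di_l = v_c / L
    return np.array([dv_c, di_l])
```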

2.2. Transient Analysis: Time-Step Integration

Obtaining the solution of (2.1) over a specified time interval $[t_0, t_{\mathrm{Final}}]$ with a specific initial condition $\mathbf{y}(t_0) = \mathbf{y}_0$ is what is usually known as an initial value problem, and computing such a solution is frequently referred to as transient analysis. The most natural way to evaluate $\mathbf{y}(t)$ is to numerically time-step integrate (2.1) directly in the time domain. One possible way to do that consists in simply converting the differential equations into difference equations, in which the time derivatives are approximated by appropriate incremental ratios. With this strategy, the nonlinear differential system of (2.1) is converted into a purely algebraic nonlinear system.
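As an illustration of this strategy, the following sketch applies the Backward Euler rule to (2.1), turning each time step into a purely algebraic nonlinear system that is then handed to a generic nonlinear solver. The functions p, q, and x are placeholders for the circuit-dependent quantities of (2.1).

```python
# A minimal sketch of the finite-difference strategy described above: Backward
# Euler turns the DAE (2.1) into p(y_k) + (q(y_k) - q(y_{k-1}))/h = x(t_k),
# solved for y_k at every time step.
import numpy as np
from scipy.optimize import fsolve

def backward_euler(p, q, x, y0, t0, t_final, h):
    ts = np.arange(t0, t_final + h, h)
    ys = [np.asarray(y0, dtype=float)]
    for tk in ts[1:]:
        y_prev = ys[-1]
        residual = lambda yk: p(yk) + (q(yk) - q(y_prev)) / h - x(tk)
        ys.append(fsolve(residual, y_prev))   # y_prev serves as the initial guess
    return ts, np.array(ys)
```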

The above formulation derives directly from the intuitive idea that derivatives can be approximated, and thus simply replaced, by finite-difference schemes. Although this technique can be used to compute the transient response of a generic electronic circuit described by (2.1), there is an alternative strategy which is more often employed to find the solution of initial value problems. This strategy consists in using initial value solvers, such as linear multistep methods [2–5] or Runge-Kutta methods [3–5] (the most popular time-step integrators). Both of these classes of methods can provide a wide variety of explicit and implicit numerical schemes, with very distinct properties in terms of order (accuracy) and numerical stability. However, since a substantial part of the research work reviewed in this paper is based on modern multirate Runge-Kutta schemes, we will restrict our presentation to these numerical techniques. Linear multistep methods will not be addressed.

2.3. Runge-Kutta Methods

In view of the fact that the well-established theory of numerical integration is oriented toward the solution of standard ODEs, we will now consider the form of (2.7) for the mathematical description of a circuit's operation. So, let us consider a generic initial value problem with $n$ state variables, expressed in its classical form by the system of (2.7) and the initial condition $\mathbf{y}(t_0) = \mathbf{y}_0$, that is,

\[ \frac{d\,\mathbf{y}(t)}{dt} = \mathbf{f}(t, \mathbf{y}(t)), \qquad \mathbf{y}(t_0) = \mathbf{y}_0, \qquad t_0 \le t \le t_{\mathrm{Final}}, \qquad \mathbf{y}(t) \in \mathbb{R}^n. \quad (2.11) \]

Definition 2.1 (Runge-Kutta (RK) method). A standard $s$-stage RK method, expressed by its Butcher tableau $(\mathbf{b}, \mathbf{A}, \mathbf{c})$ [5],

\[ \begin{array}{c|cccc} c_1 & a_{11} & a_{12} & \cdots & a_{1s} \\ c_2 & a_{21} & a_{22} & \cdots & a_{2s} \\ \vdots & \vdots & \vdots & & \vdots \\ c_s & a_{s1} & a_{s2} & \cdots & a_{ss} \\ \hline & b_1 & b_2 & \cdots & b_s \end{array} \quad (2.12) \]

for obtaining the numerical solution of (2.11) at the time instant $t_1 = t_0 + h$ is defined as [3, 5]

\[ \mathbf{y}_1 = \mathbf{y}_0 + h \sum_{i=1}^{s} b_i \mathbf{k}_i \approx \mathbf{y}(t_0 + h), \quad (2.13) \]

where

\[ \mathbf{k}_i = \mathbf{f}\Bigl( t_0 + c_i h,\; \mathbf{y}_0 + h \sum_{j=1}^{s} a_{ij} \mathbf{k}_j \Bigr), \qquad i = 1, 2, \ldots, s. \quad (2.14) \]
The algorithm defined by (2.13), (2.14) allows the numerical solution $\mathbf{y}_i$ at any generic time instant $t_i$ to be evaluated from its previously calculated value $\mathbf{y}_{i-1}$. If we have $a_{ij} = 0$ for $j \ge i$, $i = 1, 2, \ldots, s$, then each of the $\mathbf{k}_i$ in (2.14) is explicitly given in terms of the previously computed $\mathbf{k}_j$, $j = 1, 2, \ldots, i-1$, and the method is an explicit Runge-Kutta method. If this is not the case, then the method is implicit and, in general, it is necessary to solve a nonlinear system of $n \times s$ algebraic equations to simultaneously compute all the $\mathbf{k}_i$. In general, any iterative technique (e.g., fixed-point iteration or Newton-Raphson iteration) may be used to solve the nonlinear system of (2.14).
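The following minimal sketch shows one step of an explicit RK method driven directly by its Butcher tableau, that is, a direct transcription of (2.13), (2.14) for the explicit case $a_{ij} = 0$, $j \ge i$. The classical 4th-order tableau at the end is included only as a usage example.

```python
# A minimal sketch of one explicit RK step (2.13)-(2.14) from a Butcher tableau.
import numpy as np

def explicit_rk_step(f, t0, y0, h, A, b, c):
    s = len(b)
    k = [None] * s
    for i in range(s):
        # (2.14): only previously computed stages enter, so no system solve is needed
        y_stage = y0 + h * sum(A[i][j] * k[j] for j in range(i))
        k[i] = f(t0 + c[i] * h, y_stage)
    return y0 + h * sum(b[i] * k[i] for i in range(s))     # (2.13)

# Usage example: the classical 4th-order RK tableau
A = [[0, 0, 0, 0], [0.5, 0, 0, 0], [0, 0.5, 0, 0], [0, 0, 1, 0]]
b = [1/6, 1/3, 1/3, 1/6]
c = [0, 0.5, 0.5, 1]
```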

Runge-Kutta methods are universally utilized for time-step integrating initial value problems and differ from linear multistep methods in several aspects. Since they have a genuine one-step format, one of their main advantages is that there is no difficulty in changing the step length during a dynamic time-step integration process (in contrast to multistep methods, where considerable difficulties may be encountered when the step length has to be changed [5]). It must be noted that small step sizes can provide good accuracy in the simulation results but may lead to long computation times. Conversely, large step sizes will reduce the computation time but will definitely lead to poorer accuracy. A good compromise between accuracy and simulation time is achieved when $h$ is dynamically selected according to the solution's rate of change. Automatic step-size control is based on the estimation of local errors, for which diverse techniques can be used, such as extrapolation techniques or embedded RK formulas. All theoretical and practical details of the implementation of these techniques, as well as many other aspects of RK methods, such as consistency, convergence, order conditions, numerical stability, and so forth, can be found, for example, in [3] or [5].
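As an illustration of automatic step-size control with embedded RK formulas, the following sketch implements one adaptive step of the order 3(2) Bogacki-Shampine pair [26]; the safety factor and the step-size bounds used here are common choices, not values prescribed in the references.

```python
# A minimal sketch of embedded-pair step-size control: the difference between
# the 3rd- and 2nd-order results estimates the local error, which drives h.
import numpy as np

def bs23_adaptive_step(f, t, y, h, tol):
    k1 = f(t, y)
    k2 = f(t + 0.5 * h, y + 0.5 * h * k1)
    k3 = f(t + 0.75 * h, y + 0.75 * h * k2)
    y3 = y + h * (2 * k1 + 3 * k2 + 4 * k3) / 9               # 3rd-order solution
    k4 = f(t + h, y3)
    y2 = y + h * (7 * k1 / 24 + k2 / 4 + k3 / 3 + k4 / 8)     # 2nd-order solution
    err = np.linalg.norm(y3 - y2)                             # local error estimate
    h_new = h * min(5.0, max(0.1, 0.9 * (tol / max(err, 1e-16)) ** (1 / 3)))
    if err <= tol:
        return t + h, y3, h_new      # step accepted
    return t, y, h_new               # step rejected: retry with the smaller h_new
```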

2.4. Conventional Transient Circuit Simulation: SPICE

In the above, we have seen that the most natural way of simulating an electronic circuit is to numerically time-step integrate the ordinary differential system describing its operation. So, it should come as no surprise that this straightforward technique was used in the first digital computer programs for circuit analysis and is still today the most widely used numerical method for that purpose. It is present in all SPICE (Simulation Program with Integrated Circuit Emphasis) [6] or SPICE-like computer programs.

SPICE was initially developed at the Electronics Research Laboratory of the University of California, Berkeley, in the early 1970s. The real popularity of SPICE started with SPICE2 [6] in 1975, which was a much-improved program compared with its original version (SPICE1), containing several analyses (AC analysis, DC analysis, DC transfer curve analysis, transient analysis, etc.) and the device models needed to design the integrated circuits of that time. Other versions of SPICE have been developed over the years, and today many commercial simulators are based on SPICE. However, its application to RF circuits may cause some problems resulting from the specific behavior of RF systems. To understand this, we must recall that RF signals are typically narrowband. This means that a data signal with a relatively low bandwidth is transmitted at a very high carrier frequency. To simulate a sufficient portion of the data signal, for example, to estimate the bit error rate in modern wireless systems, a large number of carrier periods must be time-step integrated, and thus a huge number of time samples is required (a large consumption of memory and computational time).

2.5. Steady-State Analysis

Although some simulation tools focus on transient analysis (SPICE-like simulation), the steady-state behavior of circuits is of primary interest to RF and microwave designers. The main reason for this is that wireless systems are expected to handle a sinusoidal RF carrier modulated by a slowly varying envelope, so that most aspects of system performance are easier to characterize and verify in the carrier's steady state. For instance, harmonic or intermodulation distortion, noise, power, and transfer characteristics such as gain or impedance are examples of quantities that are best defined in the frequency domain and thus measured when the circuit is in steady state.

Time-step integration engines, such as the ones mentioned above, which are tailored for finding a circuit's transient response, are not adequate for computing the steady state. As discussed above, time-step integration is a numerical technique intended to obtain the solution of an initial value problem, as it evaluates $\mathbf{y}(t)$ at a set of successive time instants (time steps) given an initial condition $\mathbf{y}(t_0)$. However, if the objective is the determination of the steady state, there is no alternative other than to pass through the lengthy process of integrating all transients and waiting for them to vanish. In circuits presenting extremely different time constants, or high-$Q$ resonances, as is typically the case of narrowband RF and microwave circuits, time-step integration can be very inefficient. Indeed, in such cases, the frequencies in the steady-state response are much higher than the rate at which the circuit approaches this regime, or, in other words, the ratio between the state variables' highest and lowest frequency components is very large. Thus, the number of discretization time steps used by the numerical integration scheme will be enormous, because the time interval over which the differential equations must be numerically integrated is set by the lowest frequency, or by how long the circuit takes to achieve the steady state, while the size of the time steps is determined by the highest frequency component.

If the steady-state response of a circuit consists of generic waveforms presenting a common period, then the circuit is said to be in periodic steady state. Computing the periodic steady-state response of an electronic circuit involves finding the initial condition, $\mathbf{y}(t_0)$, for the differential system that describes the circuit's operation, such that the solution at the end of one period matches the initial condition, that is, $\mathbf{y}(t_0) = \mathbf{y}(t_0 + T)$, where $T$ is the period. Problems of this form, that is, those of finding the solution to a system of ordinary differential equations that satisfies constraints at two or more distinct points in time, are referred to as boundary value problems. In this particular case, we have a periodic boundary value problem that can be formulated as

\[ \mathbf{p}(\mathbf{y}(t)) + \frac{d\,\mathbf{q}(\mathbf{y}(t))}{dt} = \mathbf{x}(t), \qquad \mathbf{y}(t_0) = \mathbf{y}(t_0 + T), \qquad t_0 \le t \le t_0 + T, \qquad \mathbf{y}(t) \in \mathbb{R}^n, \quad (2.15) \]

or, in the classical ODE form, as

\[ \frac{d\,\mathbf{y}(t)}{dt} = \mathbf{f}(t, \mathbf{y}(t)), \qquad \mathbf{y}(t_0) = \mathbf{y}(t_0 + T), \qquad t_0 \le t \le t_0 + T, \qquad \mathbf{y}(t) \in \mathbb{R}^n, \quad (2.16) \]

where the condition $\mathbf{y}(t_0) = \mathbf{y}(t_0 + T)$ is known as the periodic boundary condition.

Solving (2.16) involves computing a numerical solution that simultaneously satisfies the differential system and the two-point periodic boundary condition. Certainly, there is no shortage of mathematical literature describing methods for solving boundary value problems. However, a particular technique has been found especially useful for electronic circuit problems. This technique is known as the shooting-Newton method [7].

2.6. The Shooting-Newton Method

The time-domain method most commonly used for numerically evaluating the periodic steady-state solution of an electronic circuit is the shooting method. Shooting solves boundary value problems by computing the solution to a succession of initial value problems with progressively improved guesses of the initial condition, which ultimately converge to the steady state. In a circuit's steady-state simulation, shooting begins by simulating the circuit for one period using some guessed initial condition (generally determined from a previous DC analysis). Then, the computed solution at the end of the period is checked, and if it does not agree with the initial condition, the initial condition is judiciously modified. The circuit is then resimulated with the adjusted initial condition, and this process is repeated until the solution after one period matches the initial condition.

In order to provide some mathematical details on the implementation of the shooting method, let us consider (2.16). Now suppose that we want to numerically time-step integrate the differential system in (2.16) with an initial value solver. As stated above, time-step integration is tailored for transient analysis but is inadequate for computing steady-state responses. The problem comes from the fact that we do not know a priori which initial condition $\mathbf{y}(t_0)$ will lead to the steady-state solution within one period $T$, that is, which one will satisfy the periodic boundary condition $\mathbf{y}(t_0) = \mathbf{y}(t_0 + T)$. So, we are trying to solve a boundary value problem with an initial value solution technique. One possible way to convert the initial value solution procedure into a boundary value problem solver consists of guessing an initial estimate of $\mathbf{y}(t_0)$, or shooting for $\mathbf{y}(t_0)$, time-step integrating the differential system from $t = t_0$ until $t = t_0 + T$, comparing the resulting $\mathbf{y}(t_0 + T)$ with $\mathbf{y}(t_0)$, and then judiciously updating the initial estimate. So, shooting is an iterative solver that uses an initial value technique to solve a boundary value problem. In the end, it relies on finding the solution of

\[ \mathbf{y}(t_0 + T) - \mathbf{y}(t_0) = \mathbf{0}. \quad (2.17) \]

Let us now define $\mathbf{y}(t_0 + T) = \boldsymbol{\phi}(\mathbf{y}(t_0), T)$ and rewrite (2.17) as

\[ \boldsymbol{\phi}\bigl(\mathbf{y}(t_0), T\bigr) - \mathbf{y}(t_0) = \mathbf{0}, \quad (2.18) \]

where $\boldsymbol{\phi}$ is the state-transition function [7, 8]. An easy way to solve (2.18) consists in using the fixed-point iteration solver, which, in this case, would simply result in

\[ \mathbf{y}^{[r+1]}(t_0) = \boldsymbol{\phi}\bigl(\mathbf{y}^{[r]}(t_0), T\bigr). \quad (2.19) \]

However, shooting with the fixed-point iteration technique is obviously equivalent to integrating the original differential system from $t = t_0$ until all transients decay. So, it is generally a useless technique because the convergence to the periodic steady-state solution may be extremely slow. A well-known tactic to accelerate the route to steady state, that is, to accelerate the convergence of (2.18) to its solution, is the so-called shooting-Newton technique. As happens with any other shooting technique, shooting-Newton is based on guessing initial conditions. However, it can take advantage of the fact that, although electronic circuits can be strongly nonlinear, their state-transition functions are usually only mildly nonlinear. This means that slight perturbations of the initial condition (starting state) produce almost proportional perturbations in the subsequent time states. Taking this into account, it is easy to conclude that (2.18) can be iteratively solved in an efficient way with Newton's method, which in this case leads us to

\[ \boldsymbol{\phi}\bigl(\mathbf{y}^{[r]}(t_0), T\bigr) - \mathbf{y}^{[r]}(t_0) + \left[ \left. \frac{\partial \boldsymbol{\phi}(\mathbf{y}(t_0), T)}{\partial \mathbf{y}(t_0)} \right|_{\mathbf{y}(t_0) = \mathbf{y}^{[r]}(t_0)} - I \right] \bigl[ \mathbf{y}^{[r+1]}(t_0) - \mathbf{y}^{[r]}(t_0) \bigr] = \mathbf{0}, \quad (2.20) \]

where $I$ is the $n \times n$ identity matrix. The only entity of (2.20) that is difficult to compute is the Jacobian of the state-transition function (usually referred to as the sensitivity matrix). In order to compute this matrix, we must take the chain differentiation rule into consideration. In fact, since $\boldsymbol{\phi}(\mathbf{y}(t_0), T)$ is nothing but the numerical value $\mathbf{y}_K$, with $K$ being the total number of time steps in the interval $[t_0, t_0 + T]$, which depends on the previous value $\mathbf{y}_{K-1}$, which, itself, depends on $\mathbf{y}_{K-2}$, and so forth, the sensitivity matrix can be given by

\[ \frac{\partial \boldsymbol{\phi}(\mathbf{y}(t_0), T)}{\partial \mathbf{y}(t_0)} = \frac{\partial \mathbf{y}_K}{\partial \mathbf{y}_{K-1}} \frac{\partial \mathbf{y}_{K-1}}{\partial \mathbf{y}_{K-2}} \cdots \frac{\partial \mathbf{y}_1}{\partial \mathbf{y}_0}. \quad (2.21) \]

It is easy to see that all the matrices in (2.21) can be individually computed along the time-step integration process. For concreteness, let us suppose that a standard Runge-Kutta method (2.13), (2.14) is being used to perform the time-step integration in the consecutive iterations of the shooting method. If, for example, we want to evaluate the first matrix, then we have

\[ \frac{\partial \mathbf{y}_1}{\partial \mathbf{y}_0} = I + h \sum_{i=1}^{s} b_i \frac{\partial \mathbf{k}_i}{\partial \mathbf{y}_0}. \quad (2.22) \]

Now, if we rewrite (2.14) as

\[ \mathbf{k}_i = \mathbf{f}\Bigl( t_0 + c_i h,\; \mathbf{y}_0 + h \sum_{j=1}^{s} a_{ij} \mathbf{k}_j \Bigr) = \mathbf{f}\bigl( t_0 + c_i h,\; \mathbf{Y}_i \bigr), \qquad i = 1, 2, \ldots, s, \quad (2.23) \]

we obtain

\[ \frac{\partial \mathbf{k}_i}{\partial \mathbf{y}_0} = \left. \frac{\partial \mathbf{f}}{\partial \mathbf{y}} \right|_{(t_0 + c_i h,\, \mathbf{Y}_i)} \frac{\partial \mathbf{Y}_i}{\partial \mathbf{y}_0}, \qquad i = 1, 2, \ldots, s, \quad (2.24) \]

with

\[ \frac{\partial \mathbf{Y}_i}{\partial \mathbf{y}_0} = I + h \sum_{j=1}^{s} a_{ij} \frac{\partial \mathbf{k}_j}{\partial \mathbf{y}_0}. \quad (2.25) \]

Although solving (2.20) and computing the sensitivity matrix may involve some extra computational cost, shooting-Newton converges to the steady-state solution much faster than the normal time-step integration procedure (shooting with fixed-point iteration). This is the reason why it is the time-domain steady-state engine most widely used in circuit simulation.
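The following minimal sketch illustrates the shooting-Newton iteration of (2.20). For brevity, the sensitivity matrix (2.21) is approximated here by finite differences on the state-transition function, instead of the stage-by-stage recursion (2.22)-(2.25) that a production implementation would use; `integrate` stands for any time-step integrator, such as an RK scheme.

```python
# A minimal sketch of shooting-Newton (2.20) with a finite-difference
# approximation of the sensitivity matrix (2.21).
import numpy as np

def transition(f, y0, t0, T, integrate):
    """State-transition function phi(y0, T): integrate over one period."""
    return integrate(f, t0, y0, T)

def shooting_newton(f, y0, t0, T, integrate, tol=1e-10, eps=1e-7):
    y0 = np.asarray(y0, dtype=float)
    n = y0.size
    for _ in range(50):
        phi = transition(f, y0, t0, T, integrate)
        r = phi - y0                       # residual of (2.18)
        if np.linalg.norm(r) < tol:
            break
        J = np.empty((n, n))               # d(phi)/d(y0), built column by column
        for j in range(n):
            yp = y0.copy()
            yp[j] += eps
            J[:, j] = (transition(f, yp, t0, T, integrate) - phi) / eps
        # Newton update from (2.20): (J - I) * delta = -r
        y0 = y0 + np.linalg.solve(J - np.eye(n), -r)
    return y0
```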

3. Numerical Simulation Algorithms Based on Multirate Runge-Kutta Schemes

3.1. Time-Step Integration with Different Step Sizes

As stated above, the dynamic behavior of some electronic circuits involves signals with widely separated rates of variation. Analog circuits presenting extremely different time constants, coupled systems of analog and digital networks, or combined technologies of RF and baseband analog (or even digital) blocks in the same circuit are typical examples where the corresponding state variables may evolve according to very distinct time scales. In such cases, node voltages or branch currents presenting slow (latent) and very fast (active) time evolution rates may coexist in the same problem. This phenomenon, in which some of the state variables vary very slowly (or even remain practically constant) within a specific time interval, while other variables exhibit fast variations in that interval, is frequently referred to as time-domain latency [9–15]. Another common situation where time-domain latency can be found is in purely digital circuits. For example, in large-scale integration (LSI) or very large-scale integration (VLSI) applications, usually only a small part of the circuit (a small number of state variables) is active in a certain time interval, whereas the majority is latent (a large number of state variables may remain practically constant within that interval). In summary, many kinds of electronic circuits may exhibit time-domain latency.

As described in Section 2, time-step integration is the conventional technique used by SPICE-like computer programs for simulating electronic circuits. However, when integrating differential systems whose components (state variables) evolve at different rates, one would like to use numerical schemes that do not expend unnecessary work on slowly changing components. In fact, in such cases, traditional time-step integrators that use the same step size for all of the system's components, like standard Runge-Kutta or linear multistep methods, become inefficient. To cope with this, some modern multirate Runge-Kutta (MRK) schemes have been proposed in the literature in recent years [11–14]. These powerful schemes split the differential system of (2.11) into coupled active and latent subsystems

\[ \frac{d\,\mathbf{y}_A(t)}{dt} = \mathbf{f}_A(t, \mathbf{y}_A, \mathbf{y}_L), \qquad \mathbf{y}_A(t_0) = \mathbf{y}_{A,0}, \qquad \frac{d\,\mathbf{y}_L(t)}{dt} = \mathbf{f}_L(t, \mathbf{y}_A, \mathbf{y}_L), \qquad \mathbf{y}_L(t_0) = \mathbf{y}_{L,0}, \quad (3.1) \]

with

\[ \mathbf{y} = \begin{bmatrix} \mathbf{y}_A \\ \mathbf{y}_L \end{bmatrix}, \qquad \mathbf{y}_A \in \mathbb{R}^{n_A}, \qquad \mathbf{y}_L \in \mathbb{R}^{n_L}, \qquad n_A + n_L = n, \quad (3.2) \]

where $\mathbf{y}_A$ is the vector of active (fast-varying) state-variable components, and $\mathbf{y}_L$ is the vector of latent (slowly varying) state-variable components. The active components are then integrated with a small step size $h$ (microstep), while the latent components are integrated with a much larger step size $H$ (macrostep). The number of microsteps within each macrostep is an integer that will be denoted by $m$, that is to say, $H = mh$. This is illustrated in Figure 2.

In summary, the vector of active state variables is calculated at each of the time instants $t_0 + h, t_0 + 2h, t_0 + 3h, \ldots, t_1 = t_0 + mh$, defined on the fine grid $t = t_0 + \lambda h$, $\lambda = 1, 2, \ldots, m$, whereas the vector of latent state variables is evaluated only at the coarse time instant $t_1 = t_0 + H$. A general definition of a multirate Runge-Kutta time-step integrator is presented in the following.

Definition 3.1 (multirate Runge-Kutta (MRK) method). Consider two Runge-Kutta methods of $s$ and $\hat{s}$ stages, which can be, but do not necessarily have to be, the same, expressed by their Butcher tableaus $(\mathbf{b}, \mathbf{A}, \mathbf{c})$ and $(\hat{\mathbf{b}}, \hat{\mathbf{A}}, \hat{\mathbf{c}})$:

\[ \begin{array}{c|cccc} c_1 & a_{11} & a_{12} & \cdots & a_{1s} \\ c_2 & a_{21} & a_{22} & \cdots & a_{2s} \\ \vdots & \vdots & \vdots & & \vdots \\ c_s & a_{s1} & a_{s2} & \cdots & a_{ss} \\ \hline & b_1 & b_2 & \cdots & b_s \end{array} \qquad \begin{array}{c|cccc} \hat{c}_1 & \hat{a}_{11} & \hat{a}_{12} & \cdots & \hat{a}_{1\hat{s}} \\ \hat{c}_2 & \hat{a}_{21} & \hat{a}_{22} & \cdots & \hat{a}_{2\hat{s}} \\ \vdots & \vdots & \vdots & & \vdots \\ \hat{c}_{\hat{s}} & \hat{a}_{\hat{s}1} & \hat{a}_{\hat{s}2} & \cdots & \hat{a}_{\hat{s}\hat{s}} \\ \hline & \hat{b}_1 & \hat{b}_2 & \cdots & \hat{b}_{\hat{s}} \end{array} \quad (3.3) \]

The resulting multirate Runge-Kutta method for obtaining the numerical solution of the partitioned system of (3.1), using a microstep $h$ for the active components and a macrostep $H$ for the latent components, is defined as follows [11, 12]:

(i) the active (fast-varying) components $\mathbf{y}_A$ are given by

\[ \mathbf{y}_{A,\lambda+1} = \mathbf{y}_{A,\lambda} + h \sum_{i=1}^{s} b_i \mathbf{k}_{A,i}^{\lambda} \approx \mathbf{y}_A\bigl(t_0 + (\lambda + 1)h\bigr), \qquad \lambda = 0, 1, \ldots, m-1, \quad (3.4) \]

where

\[ \mathbf{k}_{A,i}^{\lambda} = \mathbf{f}_A\Bigl( t_0 + \lambda h + c_i h,\; \mathbf{y}_{A,\lambda} + h \sum_{j=1}^{s} a_{ij} \mathbf{k}_{A,j}^{\lambda},\; \mathbf{Y}_{L,i}^{\lambda} \Bigr), \qquad i = 1, 2, \ldots, s, \quad (3.5) \]

with

\[ \mathbf{Y}_{L,i}^{\lambda} \approx \mathbf{y}_L(t_0 + \lambda h + c_i h), \qquad c_i = \sum_{j=1}^{s} a_{ij}; \quad (3.6) \]

(ii) the latent (slowly varying) components $\mathbf{y}_L$ are given by

\[ \mathbf{y}_{L,1} = \mathbf{y}_{L,0} + H \sum_{i=1}^{\hat{s}} \hat{b}_i \mathbf{k}_{L,i} \approx \mathbf{y}_L(t_0 + H), \quad (3.7) \]

where

\[ \mathbf{k}_{L,i} = \mathbf{f}_L\Bigl( t_0 + \hat{c}_i H,\; \hat{\mathbf{Y}}_{A,i},\; \mathbf{y}_{L,0} + H \sum_{j=1}^{\hat{s}} \hat{a}_{ij} \mathbf{k}_{L,j} \Bigr), \qquad i = 1, 2, \ldots, \hat{s}, \quad (3.8) \]

with

\[ \hat{\mathbf{Y}}_{A,i} \approx \mathbf{y}_A(t_0 + \hat{c}_i H), \qquad \hat{c}_i = \sum_{j=1}^{\hat{s}} \hat{a}_{ij}. \quad (3.9) \]

As can be seen, the coupling between the active and latent subsystems is performed through the intermediate stage values $\mathbf{Y}_{L,i}^{\lambda}$ and $\hat{\mathbf{Y}}_{A,i}$. There are several strategies for computing these values, such as the ones suggested in [9, 10], which are based on interpolation and/or extrapolation techniques, or the ones more recently proposed in [11, 12] or [13], which are based on coupling coefficients.
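As a minimal illustration of Definition 3.1, the following sketch implements the simplest possible MRK instance, with both tableaus reduced to the forward Euler method ($s = \hat{s} = 1$) and the latent coupling values supplied by linear interpolation, one of the coupling strategies mentioned above. It is a toy scheme for exposition, not one of the MRK methods of [11–13].

```python
# A minimal sketch of one MRK macrostep: multirate forward Euler with a
# "slowest-first" ordering and linearly interpolated latent coupling values.
import numpy as np

def multirate_euler_macrostep(f_A, f_L, t0, yA, yL, h, m):
    H = m * h
    # latent macrostep first, using y_A(t0) as the coupling value (3.8)-(3.9)
    kL = f_L(t0, yA, yL)
    yL_new = yL + H * kL                    # (3.7) with s_hat = 1
    # m active microsteps (3.4)-(3.5) with s = 1
    for lam in range(m):
        t = t0 + lam * h
        yL_interp = yL + (t - t0) * kL      # interpolated stage value (3.6)
        yA = yA + h * f_A(t, yA, yL_interp)
    return yA, yL_new                       # both advanced to t0 + H
```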

It must be noted that, in general, the partition into fast and slow subsystems, as well as the number of microsteps within a macrostep, may vary throughout the integration process. Technical details of MRK code implementation, such as active-latent partitioning strategies, step size control tools, or even stiffness detection stratagems, are described in detail, for example, in [11] or [15].
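By way of illustration, one very simple (hypothetical) partitioning heuristic is to flag as latent those components whose instantaneous time derivative is small relative to the fastest one; the partitioning strategies described in [11, 15] are considerably more elaborate.

```python
# A minimal sketch of a naive active-latent partitioning heuristic: compare
# each component's rate of change against the fastest component's rate.
import numpy as np

def partition(f, t, y, ratio=1e-2):
    rates = np.abs(f(t, y))
    latent = rates < ratio * rates.max()    # boolean mask per state variable
    active = ~latent
    return active, latent
```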

3.2. Multitime Formulation

Signals handled by wireless communication systems can usually be described as a high-frequency RF carrier modulated by some kind of baseband information signals, such as amplitude and/or phase signals. The general form of an amplitude- and phase-modulated signal can be expressed as

\[ x(t) = e(t) \cos\bigl( \omega_C t + \phi(t) \bigr), \quad (3.10) \]

where $e(t)$ and $\phi(t)$ are, respectively, the slowly varying amplitude and phase baseband signals (the complex envelope in a constellation diagram), modulating the fast-varying carrier $\cos(\omega_C t)$.

Circuits driven by envelope- (amplitude- and/or phase-) modulated signals, or themselves presenting state variables of this type, are common in RF and microwave applications. Since the baseband signals have a spectral content of much lower frequency than the carrier, that is, because they are typically slowly varying signals while the carrier is a fast-varying entity, simulating nonlinear circuits containing this kind of signal in a computationally efficient way is often a very challenging task. Because the aperiodic nature of the envelope modulation signals obviates the use of any steady-state technique, one might think that conventional time-step integration (SPICE-like transient simulation) would be the natural method for simulating such circuits, as long as the time scale of the signals' slowly varying components is comparable to the largest time constants involved. However, the large time constants of the bias networks determine long transient regimes and, as a result, the obligation of simulating a large number of carrier periods. In addition, computing the RF carrier oscillations long enough to obtain information about the envelope properties is, in itself, a colossal task. Time-step integration is thus inadequate for simulating this kind of problem.

In this section, we introduce a powerful strategy for analyzing nonlinear circuits handling multirate signals, that is, signals containing two or more entities running on widely separated time scales. This strategy is suitable for dealing with the above-described amplitude- and/or phase-modulated signals, as well as with any other kind of multirate signals. It uses multiple time variables to describe multirate behavior, and it is based on the fact that multirate signals can be represented much more efficiently if they are defined as functions of two or more time variables, that is, if they are defined as multivariate functions [17, 18]. As we will see, with this multivariate formulation, circuits are no longer described by ordinary differential systems in the one-dimensional time $t$ but, instead, by partial differential systems.

The advantages of using multivariate representations are easily illustrated by considering a bidimensional problem. Thus, let us consider, for example, an amplitude-modulated RF carrier of the form

\[ x(t) = e(t) \sin(2 \pi f_C t), \quad (3.11) \]

where $e(t)$ is an envelope, slowly varying in time, while $\sin(2 \pi f_C t)$ is a fast-varying RF carrier. As explained, simulating a circuit with this kind of stimulus using conventional time-step integration schemes tends to be highly inefficient, because it requires time steps closely spaced in time (for representing each fast RF cycle accurately) over a very long time window determined by the envelope. So, let us now consider a bivariate representation for $x(t)$, defined as

\[ \hat{x}(t_1, t_2) = e(t_1) \sin(2 \pi f_C t_2), \quad (3.12) \]

where $t_1$ is the slow envelope time scale and $t_2$ is the fast carrier time scale. In this particular case, $\hat{x}(t_1, t_2)$ is a periodic function with respect to $t_2$ but not with respect to $t_1$, that is,

\[ \hat{x}(t_1, t_2) = \hat{x}(t_1, t_2 + T_2), \qquad T_2 = \frac{1}{f_C}. \quad (3.13) \]

Figures 3 and 4 depict the univariate and bivariate forms defined in (3.11) and (3.12), respectively, for a $[0, 50\,\mu\mathrm{s}]$ time interval and a rectangular region $[0, 50\,\mu\mathrm{s}] \times [0, T_2]$. An envelope $e(t) = 5\,\mathrm{sinc}\bigl(40000 (t - 35 \times 10^{-6})\bigr) + 2.5$ and a carrier frequency of $f_C = 1\,\mathrm{MHz}$ were considered in this basic illustrative example.

By comparing Figures 3 and 4, it can be seen that $\hat{x}(t_1, t_2)$ does not have as many undulations as $x(t)$, thus allowing a more compact representation with fewer samples. Furthermore, due to the periodicity of $\hat{x}(t_1, t_2)$ in $t_2$, we know that its plot repeats over the rest of this time axis. Thus, the bivariate form plotted in Figure 4 contains all the information necessary to recover the original univariate form depicted in Figure 3. In order to get a realistic idea of the savings that can be achieved by using the bivariate formulation, let us suppose that 20 points were sufficient to represent the slowly varying envelope signal $e(t)$ accurately in the $[0, 50\,\mu\mathrm{s}]$ interval, and that 20 points were also used per carrier cycle. In that case, the total number of samples required to represent the bivariate form $\hat{x}(t_1, t_2)$ will be $20 \times 20 = 400$. On the other hand, since there are 50 carrier cycles in the $[0, 50\,\mu\mathrm{s}]$ interval, the number of samples required to represent the univariate form $x(t)$ will be $50 \times 20 = 1000$. Now, if we consider a realistic RF scenario with a much higher carrier frequency (e.g., $f_C = 1\,\mathrm{GHz}$), then the number of points necessary to represent $x(t)$ increases considerably, to $50000 \times 20 = 10^6$, while the number of points necessary to represent $\hat{x}(t_1, t_2)$ remains exactly the same ($20 \times 20 = 400$).

Let us now consider a general nonlinear RF circuit described by the differential algebraic equations' (DAE) system of (2.1), and let us suppose that this circuit is driven by an envelope-modulated signal of the form (3.11). Taking the above considerations into account, we adopt the following procedure: for the slowly varying parts (envelope time scale) of the expressions of $\mathbf{x}(t)$ and $\mathbf{y}(t)$, $t$ is replaced by $t_1$; for the fast-varying parts (RF carrier time scale), $t$ is replaced by $t_2$. This results in bivariate representations for the excitation, $\hat{\mathbf{x}}(t_1, t_2)$, and the solution, $\hat{\mathbf{y}}(t_1, t_2)$, and the application of this bivariate strategy to the system of (2.1) converts it into the following multirate partial differential algebraic equations' (MPDAE) system [17, 18]:

\[ \mathbf{p}\bigl( \hat{\mathbf{y}}(t_1, t_2) \bigr) + \frac{\partial \mathbf{q}\bigl( \hat{\mathbf{y}}(t_1, t_2) \bigr)}{\partial t_1} + \frac{\partial \mathbf{q}\bigl( \hat{\mathbf{y}}(t_1, t_2) \bigr)}{\partial t_2} = \hat{\mathbf{x}}(t_1, t_2). \quad (3.14) \]

The mathematical relation between (2.1) and (3.14) establishes that if $\hat{\mathbf{x}}(t_1, t_2)$ and $\hat{\mathbf{y}}(t_1, t_2)$ satisfy (3.14), then the univariate forms $\mathbf{x}(t) = \hat{\mathbf{x}}(t, t)$ and $\mathbf{y}(t) = \hat{\mathbf{y}}(t, t)$ satisfy (2.1) [17]. Therefore, univariate solutions of (2.1) are available on the diagonal lines $t_1 = t$, $t_2 = t$, along the bivariate solutions of (3.14), that is, $\mathbf{y}(t)$ may be retrieved from its bivariate form $\hat{\mathbf{y}}(t_1, t_2)$ by simply setting $t_1 = t_2 = t$. Consequently, if we want to obtain the univariate solution in the generic interval $[0, t_{\mathrm{Final}}]$, due to the periodicity of the problem in the $t_2$ dimension, we will have

\[ \mathbf{y}(t) = \hat{\mathbf{y}}(t,\; t \bmod T_2) \quad (3.15) \]

on the rectangular domain $[0, t_{\mathrm{Final}}] \times [0, T_2]$, where $t \bmod T_2$ represents the remainder of the division of $t$ by $T_2$. Equation (3.15) defines the sawtooth path illustrated in Figure 5.
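The following minimal sketch illustrates the retrieval rule (3.15) on the example of (3.12): the bivariate excitation is sampled on a coarse rectangular grid, and the univariate signal is then recovered along the sawtooth path by interpolation. Note that numpy's sinc is the normalized one, hence the division by pi to reproduce the sin(z)/z envelope assumed above.

```python
# A minimal sketch of (3.15): x(t) = x_hat(t, t mod T2) on a coarse grid.
import numpy as np
from scipy.interpolate import RegularGridInterpolator

T2 = 1e-6                                   # carrier period for f_C = 1 MHz
t1 = np.linspace(0.0, 50e-6, 21)            # slow (envelope) grid
t2 = np.linspace(0.0, T2, 21)               # fast (carrier) grid

z = 40000 * (t1 - 35e-6)
env = 5 * np.sinc(z / np.pi) + 2.5          # sin(z)/z envelope of Figure 3
X = env[:, None] * np.sin(2 * np.pi * 1e6 * t2)[None, :]   # samples of (3.12)

x_hat = RegularGridInterpolator((t1, t2), X)
t = np.linspace(0.0, 50e-6, 2001)
x_univ = x_hat(np.column_stack((t, np.mod(t, T2))))        # sawtooth path (3.15)
```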

The multivariate formulation can be employed to solve many types of multirate problems. It can be adopted to deal with either weakly or strongly nonlinear regimes, as well as with a large class of multirate signals. The main advantage of this MPDAE approach is that it can result in significant improvements in simulation speed when compared to DAE-based alternatives [16–22].

3.3. Envelope-Following Methods Using RK Schemes

In order to compute the bivariate solutions, some initial/boundary conditions must be added to (3.14). These are determined by the existence, or not, of periodicity in the signal components. As stated above, a typical case of significant practical interest is the envelope-modulated regime, which leads to an initial-boundary value problem. Indeed, envelope-modulated responses to excitations of the form of (3.10) correspond to a combination of initial and periodic boundary conditions in (3.14). This means that the bivariate forms of these solutions can be obtained by numerically solving the following initial-boundary value problem [17]:

\[ \mathbf{p}\bigl( \hat{\mathbf{y}}(t_1, t_2) \bigr) + \frac{\partial \mathbf{q}\bigl( \hat{\mathbf{y}}(t_1, t_2) \bigr)}{\partial t_1} + \frac{\partial \mathbf{q}\bigl( \hat{\mathbf{y}}(t_1, t_2) \bigr)}{\partial t_2} = \hat{\mathbf{x}}(t_1, t_2), \qquad \hat{\mathbf{y}}(0, t_2) = \mathbf{g}(t_2), \qquad \hat{\mathbf{y}}(t_1, 0) = \hat{\mathbf{y}}(t_1, T_2), \quad (3.16) \]

on the rectangle $[0, t_{\mathrm{Final}}] \times [0, T_2]$. $\mathbf{g}(\cdot)$ is a given initial-condition function defined on $[0, T_2]$, satisfying $\mathbf{g}(0) = \mathbf{g}(T_2) = \mathbf{y}(0)$ [17], and the periodic boundary condition $\hat{\mathbf{y}}(t_1, 0) = \hat{\mathbf{y}}(t_1, T_2)$ is due to the periodicity of the problem in the $t_2$ fast carrier time scale.

The envelope transient over shooting method [16, 17, 19, 23, 24] is an efficient time-domain method that can be used to obtain the numerical solution of circuits described by initial-boundary value problems of the form (3.16). This method is a particular implementation of a general technique often referred to as envelope following [25]. It consists in replacing the derivatives along the slow, aperiodic time scale $t_1$ by finite-difference approximations (e.g., the Backward Euler rule), to then obtain a set of successive boundary value problems with periodic boundary conditions

\[ \mathbf{p}\bigl( \hat{\mathbf{y}}_i(t_2) \bigr) + \frac{\mathbf{q}\bigl( \hat{\mathbf{y}}_i(t_2) \bigr) - \mathbf{q}\bigl( \hat{\mathbf{y}}_{i-1}(t_2) \bigr)}{h_{1,i}} + \frac{d\,\mathbf{q}\bigl( \hat{\mathbf{y}}_i(t_2) \bigr)}{dt_2} = \hat{\mathbf{x}}(t_{1,i}, t_2), \qquad \hat{\mathbf{y}}_i(0) = \hat{\mathbf{y}}_i(T_2), \quad (3.17) \]

where $\hat{\mathbf{y}}_i(t_2) = \hat{\mathbf{y}}(t_{1,i}, t_2)$, $h_{1,i} = t_{1,i} - t_{1,i-1}$, and $T_2$ is the period of the periodic fast time scale $t_2$. This means that once $\hat{\mathbf{y}}_{i-1}(t_2)$ is known, the solution at the next slow time instant, $\hat{\mathbf{y}}_i(t_2)$, is obtained by solving (3.17). Thus, for obtaining the whole solution $\hat{\mathbf{y}}(t_1, t_2)$ in the entire domain $[0, t_{\mathrm{Final}}] \times [0, T_2]$, a total of $K_1$ boundary value problems have to be solved, with $K_1$ being the number of steps in $t_1$. With the envelope transient over shooting technique, each of the periodic boundary value problems of (3.17) is solved using the shooting method described in Section 2.6, where RK methods can be used to perform the successive time-step integrations in the consecutive shooting iterations.
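The following sketch shows the structure of the envelope transient over shooting loop: Backward Euler on the slow scale $t_1$, plus one periodic shooting problem (3.17) per slow instant. Purely to keep the sketch short, it assumes $\mathbf{q}(\mathbf{y}) = \mathbf{y}$, so that each problem (3.17) reduces to an ODE in $t_2$; shooting_newton is the routine sketched in Section 2.6, while p and x_hat are placeholders for the circuit-dependent functions of (2.1).

```python
# A minimal structural sketch of envelope transient over shooting, under the
# simplifying assumption q(y) = y (so each (3.17) is an ODE in t2).
import numpy as np
from scipy.integrate import solve_ivp
from scipy.interpolate import interp1d

def ets(p, x_hat, g_samples, t1_grid, T2, shooting_newton):
    """g_samples = (t2s, ys): the initial-condition function g of (3.16),
    sampled on a fast-scale grid covering [0, T2]; ys has one row per sample."""
    t2s, ys = g_samples
    y_prev = interp1d(t2s, ys, axis=0)          # y_hat_0(t2), i.e. y_hat at t1 = 0
    envelopes = [y_prev]
    for i in range(1, len(t1_grid)):
        h1 = t1_grid[i] - t1_grid[i - 1]

        def f(t2, y, yp=y_prev, t1=t1_grid[i], h1=h1):   # ODE form of (3.17)
            return x_hat(t1, t2) - p(y) - (y - yp(t2 % T2)) / h1

        def integrate(f_, t0, y0, T):           # inner time-step integrator
            return solve_ivp(f_, (t0, t0 + T), y0, rtol=1e-9).y[:, -1]

        y0 = shooting_newton(f, y_prev(0.0), 0.0, T2, integrate)  # periodic BVP
        sol = solve_ivp(f, (0.0, T2), y0, dense_output=True, rtol=1e-9)
        y_prev = lambda t2, s=sol.sol: s(t2 % T2)
        envelopes.append(y_prev)
    return envelopes                # one function y_hat_i(t2) per slow instant
```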

3.4. Envelope-Following Methods Using MRK Schemes

Recently, a very powerful computer-aided design tool especially conceived for the efficient time-domain simulation of highly heterogeneous nonlinear RF circuits has been proposed [16, 19]. This technique is based on an ingenious modification of the above-described envelope transient over shooting technique. It splits the circuit into two distinct parts, according to the time rates of change of its state variables, and, instead of using classical time-step integrators, it uses modern multirate Runge-Kutta (MRK) schemes to perform time-step integration with different step sizes in each of the consecutive shooting iterations needed to solve (3.17). Indeed, the system of (3.17) is partitioned into the following active and latent subsystems:

\[ \mathbf{p}_A\bigl( \hat{\mathbf{y}}_{A,i}(t_2), \hat{\mathbf{y}}_{L,i}(t_2) \bigr) + \frac{\mathbf{q}_A\bigl( \hat{\mathbf{y}}_{A,i}(t_2), \hat{\mathbf{y}}_{L,i}(t_2) \bigr) - \mathbf{q}_A\bigl( \hat{\mathbf{y}}_{A,i-1}(t_2), \hat{\mathbf{y}}_{L,i-1}(t_2) \bigr)}{h_{1,i}} + \frac{d\,\mathbf{q}_A\bigl( \hat{\mathbf{y}}_{A,i}(t_2), \hat{\mathbf{y}}_{L,i}(t_2) \bigr)}{dt_2} = \hat{\mathbf{x}}(t_{1,i}, t_2), \]

\[ \mathbf{p}_L\bigl( \hat{\mathbf{y}}_{A,i}(t_2), \hat{\mathbf{y}}_{L,i}(t_2) \bigr) + \frac{\mathbf{q}_L\bigl( \hat{\mathbf{y}}_{A,i}(t_2), \hat{\mathbf{y}}_{L,i}(t_2) \bigr) - \mathbf{q}_L\bigl( \hat{\mathbf{y}}_{A,i-1}(t_2), \hat{\mathbf{y}}_{L,i-1}(t_2) \bigr)}{h_{1,i}} + \frac{d\,\mathbf{q}_L\bigl( \hat{\mathbf{y}}_{A,i}(t_2), \hat{\mathbf{y}}_{L,i}(t_2) \bigr)}{dt_2} = \hat{\mathbf{x}}(t_{1,i}, t_2), \quad (3.18) \]

where

\[ \hat{\mathbf{y}}_i(t_2) = \begin{bmatrix} \hat{\mathbf{y}}_{A,i}(t_2) \\ \hat{\mathbf{y}}_{L,i}(t_2) \end{bmatrix}, \qquad \hat{\mathbf{y}}_{A,i}(t_2) \in \mathbb{R}^{n_A}, \qquad \hat{\mathbf{y}}_{L,i}(t_2) \in \mathbb{R}^{n_L}, \qquad n_A + n_L = n. \quad (3.19) \]

$\hat{\mathbf{y}}_{A,i}(t_2)$ is the vector of active (fast-varying) state-variable components at the slow time instant $t_{1,i}$, and $\hat{\mathbf{y}}_{L,i}(t_2)$ is the vector of latent (slowly varying) state-variable components at the same slow time instant. The active components are integrated in $t_2$ with a small step size $h_2$ (microstep), while the latent components are integrated with a much larger step size $H_2 = m h_2$ (macrostep).

In summary, this powerful technique (envelope transient over shooting with MRK) can be seen as a multirate scheme (different time-step sizes for state variables that exhibit significantly disparate rates of change) coupled with a multirate excitation regime (multiple time-scale representations). Consequently, it is able to benefit from the circuits' heterogeneities, and also from the stimuli's time-rate disparities, to significantly reduce the computational effort required for simulating the circuits.

3.5. Performance of the Methods

The performance and efficiency of all the numerical algorithms based on MRK schemes described in this paper have already been successfully demonstrated through their application to several illustrative examples of practical relevance. Indeed, electronic circuits with distinct configurations and levels of complexity were especially selected to illustrate the significant gains in computation speed that can be achieved when simulating circuits with these methods.

Until 2006, multirate Runge-Kutta methods were only used to obtain the numerical solution of univariate initial value problems (transient electronic circuit simulation in the one-dimensional time). For example, in [11, 13] or [15], multirate Runge-Kutta methods were used to obtain, in an efficient way, the numerical solution of two digital electronic circuits: a digital inverter chain and a pulse generator. Significant reductions in the computational effort were obtained with these methods when compared to standard RK schemes.

Since 2007, multirate Runge-Kutta methods have also been included in numerical algorithms conceived for operating in multivariate frameworks, that is, for solving multidimensional problems described by partial differential systems. For example, in [16, 19], multirate Runge-Kutta methods were used within a bidimensional envelope-following technique to efficiently compute the numerical solution of two RF circuits operating on two distinct time scales: a resistive field effect transistor (FET) mixer and a polar power amplifier used in wireless transmitters. Later, in [20, 21], multirate Runge-Kutta methods were also used within a three-dimensional envelope-following technique to simulate a polar power amplifier operating on three separated time scales. The efficiency gains provided by the use of different step sizes (MRK schemes) within the classical univariate time-step integration, but mostly within the bivariate formulation, were also evidenced in [22].

In order to provide a realistic idea of the efficiency (in terms of computational speed) of the methods reviewed in this paper, we decided to include in this section a brief comparison between standard RK and multirate MRK schemes, within the univariate and bivariate formulations. For that, we considered the RF polar transmitter PA described in [16], and depicted in Figure 6, as our illustrative application example. The most relevant component values of this circuit are $v_{\mathrm{DD}}$ = 25 V, $L_1$ = 2 μH, $C_1$ = 3.2 nF, $L_2$ = 40 nH, $C_2$ = 16 pF, $L_3$ = 0.4 nH, $C_3$ = 16 pF, and $R$ = 50 Ω. The MOSFETs (metal-oxide semiconductor field effect transistors) are represented by the following simplified nonlinear device model:

\[ i_{DS}(v_{GS}, v_{DS}) = \beta\, \tfrac{1}{2} \bigl[ v + \ln\bigl( e^{-v} + e^{v} \bigr) \bigr] \tanh(\alpha\, v_{DS}), \qquad v = K_T \bigl( v_{GS} - V_T \bigr), \quad (3.20) \]

with $\alpha = 1$ V$^{-1}$, $\beta = 0.25$ A/V, $K_T = 2$, and $V_T = 3$ V. Similarly, the diode current-voltage characteristic is given by

\[ i_D(v_D) = I_S \bigl( e^{v_D / (\eta V_{\mathrm{Temp}})} - 1 \bigr), \quad (3.21) \]

where $I_S = 1$ μA, $\eta = 2$, and $V_{\mathrm{Temp}}$ = 0.026 V.
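For reference, a direct transcription of the device models (3.20) and (3.21), with the parameter values quoted above, reads as follows; logaddexp is used only to evaluate $\ln(e^{-v} + e^{v})$ without overflow for large $|v|$.

```python
# A minimal sketch of the simplified device models (3.20) and (3.21).
import numpy as np

ALPHA, BETA, K_T, V_T = 1.0, 0.25, 2.0, 3.0     # MOSFET parameters of (3.20)
I_S, ETA, V_TEMP = 1e-6, 2.0, 0.026             # diode parameters of (3.21)

def i_ds(v_gs, v_ds):
    """MOSFET channel current of (3.20)."""
    v = K_T * (v_gs - V_T)
    return BETA * 0.5 * (v + np.logaddexp(v, -v)) * np.tanh(ALPHA * v_ds)

def i_d(v_d):
    """Diode current of (3.21)."""
    return I_S * (np.exp(v_d / (ETA * V_TEMP)) - 1.0)
```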

In this circuit, we have a combination of periodic (RF carrier and digital clock) and aperiodic [AM(t) and PM(t)] forcing functions, of very distinct time scales, together with a mixture of heterogeneous state variables with widely disparate rates of variation. For instance, while the voltages and currents in the output band-pass filter are all very fast (active state variables), the voltages and currents in the AM branch are much slower (latent state variables). The circuit was simulated in MATLAB, and the numerical computation times for two different simulation time intervals are presented in Table 1. These results were obtained using (i) a time-step integrator based on an RK scheme (SPICE-like simulation), (ii) a time-step integrator based on an MRK scheme, (iii) an envelope transient over shooting (ETS) algorithm using an RK scheme, and (iv) an ETS algorithm using an MRK scheme. Standard RK and modern MRK schemes of order 3, based on the Bogacki-Shampine formulas [26], were used to perform the necessary numerical time-step integration.

Table 1 evidences the efficiency gains that can be achieved with the use of MRK schemes, in comparison to classical RK ones, within both the univariate and the bivariate formulations. It also shows the advantage of operating with multiple time variables instead of working in the natural one-dimensional time. It must be noted that these gains in computational speed were obtained without compromising the accuracy of the results. Indeed, the maximum discrepancy between the solutions obtained with any of the methods under analysis was of the order of $10^{-8}$ for all of the circuit's state variables.

4. Conclusion

Although the scientific field of numerical simulation of electronic problems appeared in the 1970s with the advent of SPICE, this subject is still a hot topic today. Indeed, serious difficulties arise when we have heterogeneous circuits composed of different types of blocks and/or operating on multiple time scales. In this paper, we have briefly reviewed some ways to circumvent these difficulties, operating strictly in the time domain, which consist in using multirate Runge-Kutta schemes encapsulated in (i) one-dimensional engines and (ii) multidimensional algorithms (by decoupling the components of the signals into different time dimensions). A key aspect that contributes to the success of these numerical methods is the partitioning of the circuits into subcircuits according to the time rates of change of their state variables.

Acknowledgment

The authors would like to acknowledge the financial support provided by the Portuguese Science and Technology Foundation (FCT) under the Project SOPAS-PTDC/EEA-TEL/114530/2009.