Journal of Engineering

Volume 2013, Article ID 414109, 16 pages

http://dx.doi.org/10.1155/2013/414109

## One More Tool for Understanding Resonance and the Way for a New Definition

^{1}Electrical Engineering Department, Faculty of Engineering, Kinneret College on the Sea of Galilee, Jordan Valley 15132, Israel

^{2}Electrical Engineering Department, Faculty of Engineering, Tel-Aviv University, Israel

Received 20 August 2012; Accepted 2 November 2012

Academic Editor: H. P. S. Abdul Khalil

Copyright © 2013 Emanuel Gluskin et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

We propose the application of graphical convolution to the analysis of the resonance phenomenon. This time-domain approach encompasses both the finally attained periodic oscillations and the initial transient period. It also provides an interesting discussion of the analysis of nonsinusoidal waves, based not on frequency analysis but on direct consideration of waveforms, thus presenting an introduction to Fourier series. Further developing the point of view of graphical convolution, we arrive at a new definition of resonance in terms of the time domain.

#### 1. Introduction

##### 1.1. General

The following material fits well into an “Introduction to Linear Systems,” or “Mechanics,” and is relevant to a wide range of technical and physics courses, since the resonance phenomenon has long interested physicists, mathematicians, chemists, engineers, and, nowadays, also biologists.

The complete resonant response of an initially unexcited system has two different, distinguishable parts, and there are, respectively, two basic definitions of resonance, significantly distanced from each other.

In the widely adopted textbook [1] written for physicists, resonance is defined as a *linear increase of the amplitude of oscillations in a lossless oscillatory system*, obtained when the system is pumped with energy by a sinusoidal force at the correct frequency. Figure 1 schematically shows the “envelope” of the resonant oscillations being developed.

Thus, a lossless system under resonant excitation absorbs more and more energy, and a steady state is never reached. In other words, in the lossless system, the amplitude of the steady state and the “quality factor” (having a somewhat semantic meaning in such a system) are infinite at resonance.

However, the slope of the envelope is always finite; it depends on the amplitude of the input function, and not on *Q*. Though the steady-state response will never be reached in an ideal lossless system, the linear increase in amplitude is important in itself. When a realistic physical system absorbs energy resonantly, say in the form of photons of electromagnetic radiation, there indeed is some period (during which we can still ignore power losses, say, some back radiation) in which the system’s energy increases linearly in time. The energy absorption is immediate upon appearance of the influence, and the rate of the absorption directly measures the intensity of the input.

One notes that the energy pumping into the system at the initial stage of the resonance process readily suggests that the sinusoidal waveform of the input function is not necessary for resonance; it is obvious (think, e.g., about swinging a swing by kicking it) that the energy pumping can occur for other input waveforms as well. This is a heuristically important point of the definition of [1].

The physical importance of the initial increase in oscillatory amplitude is associated not only with the energy pumping; the informational meaning is also important. Assume, for instance, that we speak about the *start* of oscillations of the spatial positions of the atoms of a medium, caused by an incoming electromagnetic wave. Since this start is associated with the *appearance* of the wave, it can be also associated with *the registration of a signal*. Later on, the *established* steady-state oscillations (that are associated, because of the radiation of the atoms, with the refraction factor of the medium) influence the velocity of the electromagnetic wave in the medium. As [2] stresses—even if this velocity is larger than the velocity of light (for refraction factor n < 1, i.e., when the frequency of the incoming wave is *slightly* higher than that of the atom oscillators)—this does not contradict the theory of relativity, because there is already no signal. Registration of any signal and its group velocity is associated with a (forced) transient process.

A more pragmatic argument for the importance of analysis of the initial transients is that for any application of a steady-state response, especially in modern electronics, we have to know how much time is needed for it to be attained, and this relates, in particular, to the resonant processes. This is relevant to the frequency range in which the device has to be operated.

Contrary to [1], in textbooks on the theory of electrical circuits (e.g., [3–5]) and mechanical systems, resonance is defined as the *established* sinusoidal response with a relatively high amplitude proportional to *Q*. Only this definition, directly associated with *frequency domain analysis*, is widely accepted in the engineering sciences. According to this definition, the envelope of the resonant oscillations (Figure 2) looks even simpler than in Figure 1; it is given by two horizontal lines. This would be so for *any* steady-state oscillations; the uniqueness lies only in the fact that the oscillation amplitude is proportional to *Q*.

After being attained, the steady-state oscillations continue “forever,” and the parameters of the “frequency response” can be thus relatively easily measured. Nevertheless, the simplicity of Figure 2 is a seeming one, because it is not known *when* the steady amplitude becomes established, and, certainly, the “frequency response” is *not* an immediate response to the input signal.

Thus, via the definition of [1] we do not know when the linear increase of the envelope ends, and via the definition of [3–5] we do not know when the steady state is attained.

We shall call the definition of [1] “the “*Q-t*” definition,” since the value of *Q* can be revealed via the *duration* of the initial/transient process in a real system. The commonly used definition [3–5] of resonance in terms of the parameters of the sustained response will be called “the “*Q*-*a*” definition,” where “*a*” is an abbreviation for “amplitude.”

Figure 3 illustrates the actual development of resonance in a second-order circuit. The damping parameter γ will be defined in Section 3.

The *Q*-*t* and *Q*-*a* parts of the resonant oscillations are well seen. For such a not very high *Q* (i.e., 1/γ not much larger than the period of the oscillations), the period of fair initial linearity of the envelope includes only about half a period of oscillations, but for a really high *Q* it can include many periods. The *whole curve* shown is the resonant response. This response can be obtained, when the external frequency approaches the self-frequency of the system, from the beats of the oscillations (analytically explained by the formulae found in Section 3) shown in Figure 4.

Note that the usual interpretation is somewhat different. It just says that the linear increase of the envelope, shown in Figure 1, can be obtained from the first beat of the *periodic* beats observed in a *lossless* system. Contrary to that, we observe the beats in a system with losses, and after adjustment of the external frequency obtain the *whole* resonant response shown in Figure 3.

Our treatment of the topic of resonance for teaching purposes is composed of three main parts shown in Figure 5. The first part briefly recalls traditional “phasor” material relevant only to the *Q-a* part, which is necessary for introduction of the notations. The next part includes some simple, *though usually omitted,* arguments showing why the phasor analysis is insufficient. Finally, the third part includes the new tool, which is complementary to the classical approach of [1], and leads to a nontrivial generalization of the concept of resonance.

Our notations need minor comments. As is customary in electrical engineering, the notation for √−1 is *j*. The small italic Latin “v,” *v*, is *voltage* in the *time domain* (i.e., a real value); *V* means *phasor*, that is, a complex number in the frequency domain. λ is the *dummy variable* of integration in a definite integral of the convolution type. It is measured in seconds, and the difference t − λ, where t is time, often appears.

#### 2. Some Advice to the Teacher

First, we deal here with a lot of *pedagogical* science—in principle the issues are not new, but they are often missed in the classroom; as far as we know, no such complete scheme of the necessary arguments for teaching resonance exists. Perhaps this is because some issues indeed require serious revisiting, and time is often limited by overloaded teaching plans and schedules. That the results of this “economy” are not bright is seen, first of all, from the already mentioned fact that electrical engineering (EE) students often learn resonance only via phasors and are not concerned with the time needed for the very important steady state *to be established*. The resonance phenomenon is so physically important that it is taught to technical students many times: in mechanics, in EE, in optics, and so forth. *However*, all this repeated teaching is actually equivalent to the use of phasors, that is, relates only to the established steady state.

Furthermore, the teachers (almost all of them) miss the very interesting opportunity to exhibit the power of *the convolution-integral analysis* for studying the development of a resonant state. In our opinion, this demonstration makes the convolution integral a more interesting tool; this really is one of the best applications of the “graphical convolution,” which should not be missed in any program. The convolution outlook nicely *unites* the engineers’ view of resonance as a steady state with the physicists’ view of resonance as energy pumping into a system. The arguments of the graphical convolution also enable one to easily see (before knowing Fourier series) that a nonsinusoidal periodic input wave can cause resonance *just as* the sinusoidal one does. Thus, these arguments can also be used as an explanation of the physical meaning of the Fourier expansion. Our classroom experience shows that the average student can understand this material and finds it interesting.

Thus, regarding the use of the pedagogical material, we would advise the teacher of EE students *to return to the topic of resonance (previously taught via phasors) when the students start with convolution*.

Finally, the present work includes some new science, which can be also related to teaching, but perhaps at graduate level, depending on the level of the students or the university. We mean the generalization of the concept of resonance considered in Section 5. It is logical that if the convolution integral can show resonance (or resonant conditions) *directly*, not via Fourier analysis, then this “showing” exposes a general definition of resonance. Furthermore, since mathematically the convolution integral can be seen—with a proper writing of the impulse response in the integrand—as a *scalar product*, it is just natural to introduce into the consideration the outlook of Euclidean space.

The latter immediately suggests a geometric interpretation of resonance in functional terms, because it is clear what the condition (here, the resonant one) is for optimization of the scalar product of two normed vectors. As a whole, we simply replace the traditional requirement of *equality of some frequencies* by the condition of *correlation of two time functions*, which includes the classical sinusoidal (and simplest-oscillator) case as a particular one.

The geometrical consideration leads to a symmetry argument: since the impulse response h(t) is the only given “vector,” any optimal input “vector” has to be similarly oriented; there simply is no other selected direction. The associated writing f(t) ~ h(t), often used here just for brevity, precisely means the adjustment of the waveform of f(t) to that of h(t) by the following two steps. (1) Set f(t) ~ h(t) in the interval 0 < t < T. (2) Continue this waveform periodically for t > T. It is relevant here that for the weak power losses typical of all resonant systems, the damping of h(t) *in the first period* can be ignored, which should be a simplifying circumstance for the creation of the periodic f(t). This way of adjusting f(t) reflects the fact that the Euclidean space can relate to one period.

Both because of the somewhat higher level of the mathematical discussion and some connection with the theory of “matched filters,” usually related to special courses (and which could not be discussed here), it seems that this final material should rather be given to graduate students. However, we also believe that a teacher will find here some pedagogical motivation and will be able to convey a more lucid treatment than we have succeeded in doing. Thus, the question regarding the possibility of teaching the generalized resonance to undergraduate students remains open.

Some other nontrivial points, deserving pedagogical judgement or analytical treatment, appear already in the use of the convolution. This means the replacement of the weakly damped h(t) of an oscillatory system by the *nondamping but cut* function ĥ(t), shown in Figure 11, and the problem of defining the damping parameter γ for the h(t), tending to zero, of a complicated oscillatory circuit. A possible way to the latter can be by observation (this is not yet worked out) of some averages, for example, of how the integral of h²(t), or of |h(t)|, over a fixed-length interval decreases with increase in t.

#### 3. Elementary Approaches

##### 3.1. The Second-Order Equation

The background formulae for both the *Q*-*t* and *Q*-*a* parts of the resonant response can be given by the Kirchhoff *voltage equation* for the electrical current i(t) in a series RLC (resistor-inductor-capacitor) circuit driven from a source of sinusoidal voltage with amplitude V_m:

$$L\frac{di}{dt} + Ri + \frac{1}{C}\int_0^t i(\lambda)\,d\lambda = V_m \sin \omega t. \tag{1}$$

Differentiating (1) and dividing by L, we obtain

$$\frac{d^2 i}{dt^2} + 2\gamma\frac{di}{dt} + \omega_0^2\, i = \frac{V_m\,\omega}{L}\cos \omega t, \tag{2}$$

with the damping factor γ = R/2L and the resonant frequency ω₀ = 1/√(LC).

For purely resonant excitation, the input sinusoidal function is at frequency ω = ω₀, or at the very close frequency ω_d, as defined below in (6).
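As a numerical sanity check of (2) (not part of the original treatment; the component values below are purely illustrative), one can integrate the equation directly and observe both stages at once: the *Q*-*t* stage, in which the envelope grows during a time of order 1/γ, and the *Q*-*a* stage, in which the amplitude saturates at the value V_m/R predicted by the phasor analysis of Section 3.3:

```python
import numpy as np

# Sketch: integrate d2i/dt2 + 2*gamma*di/dt + w0^2*i = (Vm*w0/L)*cos(w0*t),
# i.e., resonant drive v_in = Vm*sin(w0*t). Component values are illustrative.
R, L, C, Vm = 1.0, 1e-3, 1e-6, 1.0
gamma, w0 = R / (2 * L), 1 / np.sqrt(L * C)
Q = w0 / (2 * gamma)                          # about 31.6 for these values

dt = 2 * np.pi / w0 / 200                     # 200 steps per period
t = np.arange(0, 12 / gamma, dt)              # run well past the ~1/gamma transient
i = np.zeros_like(t)
di = np.zeros_like(t)
for k in range(len(t) - 1):                   # semi-implicit Euler step
    d2i = (Vm * w0 / L) * np.cos(w0 * t[k]) - 2 * gamma * di[k] - w0**2 * i[k]
    di[k + 1] = di[k] + d2i * dt
    i[k + 1] = i[k] + di[k + 1] * dt

# At resonance the impedance is purely R, so the Q-a amplitude must be Vm/R.
steady_amp = np.abs(i[t > 8 / gamma]).max()
early_amp = np.abs(i[t < 1 / (2 * gamma)]).max()
print(round(float(steady_amp * R / Vm), 2))   # close to 1.0
print(bool(early_amp < steady_amp))           # the Q-t stage is still growing
```

The two printed checks correspond exactly to the two definitions discussed above: the transient envelope is still far below its final value at t ~ 1/2γ, while for t ≫ 1/γ the amplitude equals the steady phasor prediction.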

##### 3.2. The Time-Domain Argument

The full solution of (2) can be explicitly composed of *two* terms; the first, denoted as i_h, originates from the *homogeneous* (zero input) equation, and the second, denoted as i_fs, represents the *finally obtained* (t → ∞) periodic oscillations; that is, i_fs is the *simplest* (but not the only possible!) partial solution of the *forced* equation:

$$i(t) = i_h(t) + i_{fs}(t). \tag{3}$$

It is important that the zero initial conditions cannot be fitted by the second term in (3), i_fs(t), continued backward in time to t = 0. (Indeed, no sinusoidal function satisfies both of the conditions i = 0 and di/dt = 0 at any point.) Thus, it is obvious that a nonzero term i_h(t) is needed in (3). This term is

$$i_h(t) = e^{-\gamma t}\left(A\cos \omega_d t + B\sin \omega_d t\right), \tag{4}$$

where at least one of the constants A and B is nonzero.

Furthermore, it is obvious from (4) that the time needed for i_h(t) to decay is of the order of 1/γ (compare to (9)). However, *according to the two-term structure of* (3), *the time needed for* i_fs(t) *to be established, that is, for* i(t) *to become* i_fs(t), *is just the time needed for* i_h(t) *to decay. Thus, the established “frequency response” is attained only after the significant time of order* 1/γ.

Unfortunately, this elementary logic argument following from (3) is missed in [3–5] and many other technical textbooks that ignore the *Q*-*t* part of the resonance and directly deal only with the *Q*-*a* part.

However, form (3) is also not optimal here, because it does not explicitly show that for zero initial conditions not only i_fs(t) but also the decaying i_h(t) is directly proportional to the amplitude (or scaling factor) V_m of the input wave.

*That is, from the general form* (3) *alone it is not obvious that, when choosing zero initial conditions, we make the response function as a whole (including the transient) proportional to* V_m, *appearing in* (1), *that is, a tool for studying the input function*, at least in the scaling sense.

It would be better to have *one* expression/term from which this feature of the response is well seen. Such a formula appears in Section 4.

##### 3.3. The Phasor Analysis of the *Q*-*a* Part

Let us now briefly recall the standard phasor (impedance) treatment of the final *Q-a* (steady-state) part of a system’s response. We focus here only on the results associated with the amplitude; the phase relations follow straightforwardly from the expression for the impedance [3, 4].

In order to characterize the *Q-a* part of the response, we use the common notations of [3, 4]: the damping factor of the response γ = R/2L, the resonant frequency ω₀ = 1/√(LC), the quality factor

$$Q = \frac{\omega_0}{2\gamma} = \frac{1}{R}\sqrt{\frac{L}{C}}, \tag{5}$$

and the frequency at which the system self-oscillates:

$$\omega_d = \sqrt{\omega_0^2 - \gamma^2} \approx \omega_0 - \frac{\gamma^2}{2\omega_0} = \omega_0\left(1 - \frac{1}{8Q^2}\right) \approx \omega_0. \tag{6}$$
Note that it is assumed that Q ≫ 1, and thus ω₀ and ω_d are practically indistinguishable. Thus, although we *never ignore* γ *per se*, the much smaller value γ²/ω₀ ~ γ/Q can be ignored. When speaking about “precise resonant excitation,” we shall mean setting ω with *this* degree of precision, but when writing ω ≠ ω₀, we shall mean that |ω − ω₀| ~ γ, and not ~γ/Q. Deviations of ω from ω₀ larger than γ are irrelevant to resonance.

The impedance of the series circuit is Z(jω) = R + jωL + 1/jωC, and the phasor approach simply gives the *amplitude* of the steady-state solution of (2) as

$$I(\omega) = \frac{V_m}{\left|Z(j\omega)\right|} = \frac{V_m}{\sqrt{R^2 + \left(\omega L - \dfrac{1}{\omega C}\right)^{2}}}. \tag{7}$$

For |ω − ω₀| ≪ ω₀, when ωL − 1/ωC ≈ 2L(ω − ω₀),

$$I(\omega) \approx \frac{V_m}{2L}\,\frac{1}{\sqrt{(\omega - \omega_0)^2 + \gamma^2}}. \tag{8}$$

From (8), the frequencies at the “half-power level,” for which I(ω) = I(ω₀)/√2, are defined by the equality (ω − ω₀)² = γ², from which we obtain ω₁ = ω₀ − γ and ω₂ = ω₀ + γ; that is, for the circuit’s frequency “pass-band” Δω = ω₂ − ω₁ we have, with the precision taken in the derivation of (8), that Δω = 2γ, or Δω/ω₀ = 1/Q.

It is remarkable that *however small* γ *is, it is easy, while working with the steady state, to detect differences of order* γ *between* ω *and* ω₀, *using the resonant curve/response described by* (8).

Figure 6 illustrates the resonance curve. Though this figure is well known, it is usually not stressed that since each point of the curve corresponds to some steady state, a certain time is needed for the system to pass from one point of the curve to another, and the sharper the resonance, the more time is needed. The physical process is such that for a small γ the establishment of this response takes a (long) time of the order of 1/γ, which is not directly seen from the resonance curve.

The relation “transient time ~ 1/γ” should be remembered in *any* application of the resonance curve, in any technical device. The case of a mistake, caused by assuming a quicker performance for measuring input frequency by means of passing from one steady state to another, is mentioned in [2]. This mistake is associated with using only the resonance curve, that is, with thinking only in terms of the frequency response.
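The half-power geometry just derived is easy to verify numerically. The following sketch (component values are illustrative, not from the text) compares the exact phasor amplitude V_m/|Z| of (7) with the near-resonance approximation (8), and checks that the amplitude at ω₀ ± γ is 1/√2 of the peak:

```python
import numpy as np

# Sketch: exact resonance curve Vm/|Z| versus approximation (8);
# the half-power points should sit at w0 +/- gamma. Illustrative values.
R, L, C, Vm = 1.0, 1e-3, 1e-6, 1.0
gamma, w0 = R / (2 * L), 1 / np.sqrt(L * C)

def I_exact(w):
    return Vm / np.sqrt(R**2 + (w * L - 1 / (w * C))**2)

def I_approx(w):
    return (Vm / (2 * L)) / np.sqrt((w - w0)**2 + gamma**2)

print(round(float(I_exact(w0 + gamma) / I_exact(w0)), 2))   # close to 1/sqrt(2)
print(round(float(I_approx(w0) / I_exact(w0)), 2))          # close to 1.0
```

The tiny discrepancy from exactly 1/√2 is precisely the precision sacrificed in the derivation of (8).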

#### 4. The Use of Graphical Convolution

We pass on to the constructive point, the convolution integral presenting the resonant response, and its graphical treatment. It is desirable for a good “system understanding” of the topic that the concepts of *zero input response* (ZIR) and *zero state response* (ZSR), especially the latter one, be known to the reader.

Briefly, ZSR is the *partial response* of the circuit which satisfies the zero initial conditions. As t → ∞ (and only then), it becomes the final steady-state response, that is, becomes the *simplest* partial response (whose waveform can often be guessed).

The appendix illustrates the concepts of ZIR and ZSR in detail, using a first-order system and stressing the distinction between the forms ZIR + ZSR and (3) of the response.

Our system-theory tools are now the *impulse* (*or shock*) *response* h(t) (*or Green’s function*) and the integral response to f(t) for zero initial conditions:

$$i(t) = \int_0^t h(t-\lambda)\, f(\lambda)\, d\lambda. \tag{10}$$

The convolution integral (10) is an example of ZSR, and it is the most suitable tool for understanding the resonant excitation.

It is clear (contrary to (3)) that the total response (10) is directly proportional to the amplitude of the input function.

Figure 7 shows our schematic system.

Of course, the *system-theory outlook* does not relate only to electrical systems; this “block-diagram” can mean influence of a mechanical force on the position of a mass, or a pressure on a piston, or temperature at a point inside a gas, and so forth.

Note that if the initial conditions are zero, they are simply not mentioned. If the input-output map is defined solely by h(t) (e.g., when one works in the domain of the Laplace variable s with the transfer function H(s)), it is always ZSR.

In order to treat the convolution integral, it is useful to briefly recall the simple example [5] of the first-order circuit influenced by a single square pulse. The involved physical functions are shown in Figure 8, and the associated “*integrand* situation” of (10) is shown in Figure 9.

It is *graphically obvious* from Figure 9 that the maximal value of the response is obtained when the rectangular pulse already fully overlaps with h(λ), but still “catches” the initial (highest) part of h(λ). This simple observation shows the strength of graphical convolution for qualitative analysis.
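This overlap argument can be reproduced numerically. In the sketch below (values are illustrative), a first-order impulse response h(t) = (1/τ)e^{−t/τ} is convolved with a single rectangular pulse; the peak of the output lands at the trailing edge of the pulse, where the overlap is full but still catches the highest part of h:

```python
import numpy as np

# Sketch of the Figures 8-9 situation: first-order h(t) = (1/tau)*exp(-t/tau)
# convolved with one square pulse of the given width. Values are illustrative.
tau, width = 1.0, 2.0
dt = 1e-3
t = np.arange(0, 10, dt)
h = np.exp(-t / tau) / tau
f = (t < width).astype(float)            # the single rectangular pulse
y = np.convolve(f, h)[:len(t)] * dt      # running convolution integral (10)

t_peak = t[np.argmax(y)]
print(round(float(t_peak), 2))           # peak at the trailing edge of the pulse
```

For this case the closed form is y(t) = 1 − e^{−t/τ} while the pulse lasts, with exponential decay afterwards, so the maximum indeed sits at t = 2 (the pulse width).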

##### 4.1. The (Resonant) Case of a Sinusoidal Input Function Acting on the Second-Order System

For the second-order system with weak losses, we use for (10)

$$h(t) \approx \frac{1}{L}\, e^{-\gamma t} \cos \omega_0 t, \qquad \gamma \ll \omega_0. \tag{11}$$

As before, we apply the resonant input f(t) = V_m sin ω₀t.

Figure 10 builds the solution (10) step by step; first our h(t − λ) and f(λ) (compare to Figure 9), then the product of these functions, and finally the integral, that is, i(t).

On the upper graph, the “train” h(t − λ) travels to the right, starting at t = 0; on the middle graph we have the integrand of (10). The area under the integrand’s curve appears as the final result, i(t), on the third graph.

The extreme values of i(t) are obtained at the instances t_k = kT/2, k = 1, 2, 3, …, obviously. For odd k these are positive maxima, because the overlaps in the upper drawing are then “+” with “+” and “–” with “–.” For even k these are negative minima, because we multiply the opposite polarities in the overlap each time. Thus, i(T/2) > 0 and i(T) < 0.

In view of the basic role of the overlapping of h(t − λ) with f(λ), it is worthwhile to look forward a little and compare Figure 10 to Figures 14 and 15, which relate to the case of an input *square wave*. For the upper border of integration in (10) taken as t = t_k, and for very weak damping of h(λ), the situations being compared are very similar. The distinction is that, in order to obtain the extremes of i(t), we integrate in Figure 15 the absolute values of several *sinusoidal pieces* (*half-waves*), while in Figure 10 we integrate the *squared sinusoidal pieces*. Since we integrate, in each case, k similar pieces (all positive, giving a maximum of i(t), or all negative, giving a minimum), the result of each such integration is directly proportional to k.

Thus, if γ = 0, when h(λ) is strictly periodic, then from the *periodic nature* of f(λ) as well, it follows that

$$i(t_k) \sim k$$

for any integer k, which is a linear increase of the envelope for the two very different input waves, in the spirit of Figure 1.

For a small but finite γ, the initial linear increase has high precision only for the first few k, for which t_k ~ kT ≪ 1/γ, that is, γt_k ≪ 1, or k ≪ 1/γT ~ Q. (The damping of h(t) may be ignored *for these k*.)

Observe that the finally obtained periodicity of i(t) follows only from that of f(t), while the linear increase requires periodicity of both f(t) and h(t).
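The linear growth of the envelope in the lossless case can be seen directly by computing (10) as a discrete convolution. The sketch below uses illustrative units (period T = 1) and drops all scale factors; with h = cos ω₀t and f = sin ω₀t the closed form is i(t) = (t/2) sin ω₀t, so each period’s peak exceeds the previous one by the same step:

```python
import numpy as np

# Lossless-case (gamma = 0) sketch of (10) as a discrete convolution;
# illustrative units, T = 1, scale factors dropped.
w0 = 2 * np.pi
dt = 1e-3
t = np.arange(0, 10, dt)
h = np.cos(w0 * t)                        # nondamping impulse response
f = np.sin(w0 * t)                        # resonant sinusoidal input
i = np.convolve(f, h)[:len(t)] * dt       # the ZSR, built numerically

# per-period peaks of |i|: they climb by the same amount each period
peaks = np.array([np.abs(i[(t >= n) & (t < n + 1)]).max() for n in range(1, 7)])
print(np.round(np.diff(peaks), 2))        # equal steps: a linear envelope
```

The equal steps between successive peaks are exactly the i(t_k) ~ k behavior discussed above, i.e., the envelope of Figure 1.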

The above discussion suggests the following simplification of the impulse response of the circuit, useful for analysis of the resonant systems. This simplification is a useful preparation for the rest of the analysis.

##### 4.2. A Simplified h(t) and the Associated Envelope of the Oscillations

Considering that the parameter 1/γ appears in the above (and in Figure 3) as a symbolic border for the linearity, let us take a constructive step by suggesting a geometrically clearer situation in which this border is artificially made sharp by introducing an idealization/simplification of h(t), which will be denoted as ĥ(t).

In this idealization—which seems to be no less reasonable and suitable for qualitative analysis than the usual use of the vague expression “*somewhere at t of order 1/γ*”—we replace h(t) by a finite “piece” of nondamping oscillations of total length 1/γ.

We thus consider that, however weak the damping of h(t) is, for sufficiently large t, when γt ≫ 1, we have e^{−γt} ≪ 1; that is, the oscillations become strongly damped with respect to the first oscillation. For t ≫ 1/γ the further “movement” of the function h(t − λ) to the right (see Figure 10 again) becomes less effective; the exponentially decreasing tail of the oscillating h(t) influences (10), via the overlap, more and more weakly, and as t → ∞, i(t) ceases to increase and becomes periodic, obviously.

We simplify this qualitative vision of the process by assuming that up to t = 1/γ there is no damping of h(t) but that, starting from t = 1/γ, h(t) completely disappears. That is, we replace the function h(t) by the function

$$\hat h(t) = \frac{1}{L}\cos(\omega_0 t)\left[u(t) - u\!\left(t - \frac{1}{\gamma}\right)\right],$$

where u(t) is the unit step function. The factor u(t) − u(t − 1/γ) here is a “cutting window” for the cosine. This is the formal writing of the “piece” of the nondamping self-oscillations of the oscillator. See Figure 11.

For t > 1/γ, it is obvious that when the “train” crosses in Figure 10 the point t = 1/γ, the graphical construction of (10), that is, i(t), becomes a periodic procedure. Figuratively speaking, we can compare ĥ with a railway station near which an infinite train passes; some wagons go away, but similar new ones enter, and the total overlapping is repeated periodically.

The same is also analytically obvious: for t > 1/γ, the effective integration interval in (10) is (t − 1/γ, t), so that

$$i(t) = \int_{t-1/\gamma}^{t} \hat h(t-\lambda)\, f(\lambda)\, d\lambda,$$

which, because of the periodicity of f(λ), is a periodic function of t.

As illustrated by Figure 12—which is an approximation to the envelope shown in Figure 3—the envelope of the output oscillations becomes completely saturated for t > 1/γ.

Figure 12 clearly shows that both the amplitude of the finally established steady-state oscillations and the time needed for establishing these oscillations are proportional to Q, while the initial slope is obviously independent of Q.

It is important that ĥ(t) can also be constructed for more complicated impulse responses h(t), and then too the graphical convolution is more easily formulated in terms of ĥ(t). As an example relevant to theoretical investigations, ĥ(t) approximately presents the maximal values of the established oscillations, obtained for t > 1/γ: using the periodicity of f(λ), for any oscillatory h(t) (and ĥ(t)) we can reduce the analysis of the whole interval (t − 1/γ, t) to that of a small interval, as was done for the first extremes in Figure 10.
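The effect of the cut impulse response is also easy to demonstrate numerically: with the window u(t) − u(t − 1/γ), the envelope of the ZSR grows linearly up to t = 1/γ and is exactly periodic (saturated) afterwards. A sketch in illustrative units (T = 1, cutoff 1/γ = 5, scale factors dropped):

```python
import numpy as np

# Sketch of the h-hat idealization: undamped oscillations cut off at 1/gamma.
# Illustrative units: period T = 1, 1/gamma = 5, scale factors dropped.
w0, inv_gamma = 2 * np.pi, 5.0
dt = 1e-3
t = np.arange(0, 20, dt)
h_cut = np.cos(w0 * t) * (t < inv_gamma)   # the "cutting window" applied
f = np.sin(w0 * t)
i = np.convolve(f, h_cut)[:len(t)] * dt    # the ZSR with the cut response

peaks = np.array([np.abs(i[(t >= n) & (t < n + 1)]).max() for n in range(20)])
print(round(float(peaks[15] / peaks[6]), 2))   # saturated: late peaks are equal
print(bool(peaks[2] < peaks[4] < peaks[6]))    # still growing before 1/gamma
```

The per-period peaks reproduce Figure 12: a linear ramp of length 1/γ followed by two horizontal lines.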

##### 4.3. Nonsinusoidal Input Waves

The advantage of graphical convolution lies not so much in the calculation aspect. It is an easy procedure for the imagination (insight), and it is a flexible tool in the qualitative analysis of time processes. The graphical procedure makes it absolutely clear that the really basic point for a resonant response is not sinusoidality, but the *periodicity* of the input function. Not being derived from the spectral (Fourier) approach, this observation heuristically completes that approach and may be used (see the following) in an introduction to Fourier analysis.

Thus, let us now take f(t) to be the rectangular wave shown in Figure 13 and follow the way of Figures 9 and 10 in the sequential Figures 14 and 15.

Here too, the envelope of the resonant oscillations can be well outlined by considering i(t) at the instances t_k = kT/2; first of all T/2, T, and 3T/2, for which we, respectively, have the first maximum, the first minimum, and the second maximum of i(t).

There are absolutely the same qualitative (geometric) reasons for resonance here, and Figure 15 explains that if the damping of h(t) is weak, that is, if the first several sequential half-waves of h(λ) are similar, then the respective extreme values of i(t_k) form a linear increase of the envelope.

Figure 16 shows i(t) at these extreme points.

Though it is not easy to find the precise i(t) everywhere, for the envelope of the oscillations, which passes through the extreme points, the resonant increase in the response amplitude is absolutely clear.

Figures 10, 14, 15, and 16 make it clear that many other waveforms with the correct period would likewise cause resonance in the circuit. Furthermore, for the overlapping to remain good, we can change not only f(t) but also h(t). Making the form of the impulse response more complicated means making the system’s structure more complicated, and thus graphical convolution is also a valuable starting point for studying resonance in complicated systems in terms of the waveforms. This point of view will be realized in Section 5, where we generalize the concept of resonance.

Thus, using the algorithm of the graphical convolution, we make two more methodological steps; a pedagogical one in Section 4.4 and the constructive one in Section 5.

##### 4.4. Let Us Try to “Discover” the Fourier Series in Order to Understand It Better

The conclusion regarding the possibility of obtaining resonance using a nonsinusoidal input reasonably means that when pushing a swing with a child on it, it is unnecessary for the father to develop a sinusoidal force. Moreover, the nonsinusoidal input even has some obvious advantages. While the sinusoidal input wave leads to resonance only when its frequency has the correct value, exciting resonance by means of a nonsinusoidal wave can be done at very different frequencies (one need not kick the swing at every oscillation), which is, of course, associated with the Fourier expansion of the force.

Let us see how, using graphical convolution, we can reveal the harmonic structure of a function, *still not knowing anything about Fourier series*. For that, let us continue with the case of the square-wave input, but now take such a waveform whose period is three times longer than the period of the self-oscillations of the oscillator. Consider Figure 17.

This time, the more distant extreme instances t_k are obviously most suitable for understanding what the envelope of the oscillations looks like.

One sees that for this input, too, *the same* geometric “resonant mechanism” exists, but the transfer from the period T to the period 3T makes the excitation significantly less intensive. Indeed, see Figure 18, comparing the present extreme case to the extreme case of Figure 15.

We see that each extreme overlap is now only *one-third* as effective as the respective maximal overlap in the previous case. That is, at t = 3t_k we now have what we previously had at t = t_k, which means a much slower increase of the amplitude in time.

Since the envelope now increases at a much slower rate, while 1/γ is the same (i.e., the transient lasts the same time), the amplitude of the final periodic oscillations is correspondingly smaller, which means weaker resonance in terms of frequency response.

Let us compare the two cases of the square wave thus studied to the initial case of the sinusoidal function. The case of the “nonstretched” square wave corresponds to the input V_m sin ω₀t, while, according to the conclusions derived from Figure 18, the case of the “stretched” wave corresponds to the input (V_m/3) sin ω₀t. We thus simply (and roughly) reduce the change in the period of the nonsinusoidal function to an equivalent change in the amplitude of the sinusoidal function.

Let us now try—as a tribute to Joseph Fourier—to speak not about the same circuit influenced by different waves, but about the same wave influencing different circuits. Instead of increasing the period of the wave, we could decrease the self-period of the circuit, thus testing the ability of the same square wave to cause resonance in different oscillatory circuits. For the new circuit, the graphical procedure obviously remains the same, and the ratio 1/3 of the resonant amplitudes in the two compared cases remains.

In fact, we are thus testing the square wave using *two* simple oscillatory circuits of different self-frequencies. Namely, connecting in parallel to the source of the square-wave voltage two simple oscillatory circuits with self-frequencies ω₀ and 3ω₀, we reveal for one of them the action of the square wave as that of a sinusoid of frequency ω₀ and for the other as that of a sinusoid of frequency 3ω₀ with one-third the amplitude.

This associates the square wave of height A with the series

$$f(t) = \frac{4A}{\pi}\left(\sin \omega t + \frac{1}{3}\sin 3\omega t + \frac{1}{5}\sin 5\omega t + \cdots\right) \tag{17}$$

(which precisely is the Fourier series of this wave).

Let us check this result by using the arguments in the inverse order. The first sinusoidal term of series (17) roughly corresponds to the square wave with ω ≈ ω₀, and in order to make the *second term* resonant, we have to change the self-frequency of the circuit to 3ω, that is, make ω = ω₀/3, which is our second “experiment,” in which the intensity of the resonant oscillations, reduced to 1/3, is indeed obtained, in agreement with (17).
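The 3 : 1 ratio “discovered” graphically is exactly the ratio of the first two Fourier sine coefficients of the square wave, which can be checked directly (a sketch; the projection integral is evaluated by a simple Riemann sum over one period):

```python
import numpy as np

# Sketch: project the square wave onto sin(n*w*t) to get the Fourier sine
# coefficients b_n = (2/T) * integral of f(t)*sin(n*w*t) over one period.
# Series (17) predicts b1 : b3 = 3 : 1 and b1 = 4A/pi.
A, T = 1.0, 1.0
w = 2 * np.pi / T
N = 100000
t = np.linspace(0, T, N, endpoint=False)
f = A * np.sign(np.sin(w * t))                   # square wave of height A
b = lambda n: (2 / T) * np.sum(f * np.sin(n * w * t)) * (T / N)

print(round(float(b(1) / b(3)), 2))              # the 3:1 ratio of the "experiments"
print(round(float(b(1) * np.pi / (4 * A)), 3))   # confirms b1 = 4A/pi
```

Thus the two “experiments” with the oscillatory circuits measure nothing other than the relative weights of the first two harmonics of the wave.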

It is possible to similarly graphically analyze a triangular wave at the input, or a sequence of periodic pulses of an arbitrary form (more suitable for the father kicking the swing) with a period that is an integer multiple of the self-period of the circuit.

One notes that figures such as Figure 18 are relevant to the standard integral form of the Fourier coefficients. However, along the way of graphical convolution this similarity arises *only* for the extremes of out(t), and this way is independent and visually very clear.

#### 5. A Generalization of the Definition of Resonance in Terms of Mutual Adjustment of f(t) and h(t)

After working out the examples of the graphical convolution, we are now in a position to formulate a wider t-domain definition of resonance.

In terms of the graphical convolution, the analytical symmetry of (10),

∫₀ᵗ f(λ)h(t − λ) dλ = ∫₀ᵗ h(λ)f(t − λ) dλ, (18)

means that besides observing the overlapping of f(t − λ) and h(λ), we can observe the overlapping of h(t − λ) and f(λ). In the latter case, the graph of h(−λ) starts to move to the right at t = 0, as was the case with f(−λ).

Though equality (18) is a very simple mathematical fact, similar to the equalities ab = ba and a + b = b + a, in the context of graphical convolution there is a nontriviality in the *motivation* given by (18), because the possibility to move h(t − λ) also suggests changing the *form* of h(t), that is, starting to deal with a *complicated system* (*or structure*) to be resonantly excited. We thus shall try to define resonance, that is, the optimization of the peaks of out(t) (or of its *r.m.s.* value), in terms of more arbitrary waveforms of h(t), while the case of the sinusoidal h(t), that is, of the simple oscillator, appears as a particular one.

##### 5.1. The Optimization of the Overlapping of f(t − λ) and h(λ) in a Finite Interval and Creation of the Optimal Periodic f(t)

Let us continue to assume that the losses in the system are small, that is, that h(t) decays so slowly that we can speak about at least a few oscillatory spikes (labeled by an index k) through which the envelope of the oscillations passes during its linear increase.
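In the lossless limit this linear increase of the envelope can be seen directly in the convolution. The small sketch below (an added illustration, not from the article; ω = 1 and unit amplitudes are assumed) compares the numerically computed out(t) for f(t) = h(t) = sin ωt with the closed form sin(ωt)/(2ω) − (t/2)cos ωt, whose envelope grows like t/2:

```python
import numpy as np

def integrate(y, x):
    # plain trapezoidal rule, kept explicit for self-containment
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

w = 1.0  # assumed self-frequency (and input frequency)

def out(t, pts=20_000):
    """out(t) = integral_0^t sin(w lam) sin(w (t - lam)) dlam."""
    lam = np.linspace(0.0, t, pts)
    return integrate(np.sin(w*lam) * np.sin(w*(t - lam)), lam)

# closed form: sin(wt)/(2w) - (t/2) cos(wt); the envelope is t/2
for t in (5.0, 20.0, 60.0):
    exact = np.sin(w*t)/(2*w) - (t/2)*np.cos(w*t)
    print(t, out(t), exact)   # numeric and exact values agree
```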

Using the notation of Figures 15 and 16, we speak about the extreme points of the graph of the resulting function out(t), that is, about the points with the coordinates (t_k, out(t_k)). In view of the examples studied, the extreme points of out(t) are obtained when the t_k are the zero-crossings of h(t), because only then can the overlapping of f(t_k − λ) with h(λ) be made maximal.

*Comment*. Assuming that the damping parameters of the different harmonic components of h(t) are different, one sees that for a nonsinusoidal decaying h(t) the distribution of the zero-crossings of h(t) can change as this function decays, and thus for a periodic f(t) the condition that the switching points of f(t) coincide with the zero-crossings of h(t) need not be satisfied in the whole interval of the integration (0 < λ < t) relevant for large t. However, since both the amplitude-type decays and the change in the intervals between the zeros are defined by the same very small damping parameters, the resulting effects of imprecision are of the same smallness. Neither problem is faced when we use the “generating interval” and employ the periodic continuation of the initial segment of h(t) instead of the precise h(t). The fact that any use of this continuation is anyway associated with an error of the same small order points to the expected good precision of the generalized definition of resonance.

Thus, h(t), measured with respect to the time origin, that is, with respect to the moment at which f(t) and out(t) arise, is assumed to be known. Of course, we assume the system to be an oscillatory one, for the zero-crossings and the spikes appearing in our graphical constructions to be meaningful.

Having the linearly increasing sequence of the extreme values belonging to the envelope of the oscillations and wishing to increase the finally established oscillations, we obviously have to increase the slope of this linear increase.

However, since the whole intensity of out(t) can be increased not only by the proper waveform of f(t) but also by an amplitude-type *scaling factor*, for the general discussion some *norm* for f(t) has to be introduced.

For the definitions of the norm and the scalar products of the functions appearing during the adjustment of f(t) to h(t), it is sufficient to consider a *certain* (for a fixed, not too large k) *interval Δ*—the one in which we can calculate out(t). This interval can be simply [0, T/2] or [0, T].

The norm over the chosen interval is taken as

‖f‖ = (∫_Δ f²(t) dt)^{1/2}. (22)

For instance, ‖sin ωt‖ calculated over the interval [0, T/2] is (T/4)^{1/2} = √T/2, and over [0, T] it is (T/2)^{1/2}, as is easy to find by using the equality sin²x = (1 − cos 2x)/2.
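These values are quickly confirmed numerically. The following short check (added here for illustration; ω = 1 is assumed, so T = 2π) evaluates the norm of sin ωt over both intervals:

```python
import numpy as np

def norm(f_vals, t):
    # ||f|| = sqrt( integral of f^2 dt ), plain trapezoidal rule
    y = f_vals**2
    return float(np.sqrt(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0))

T = 2*np.pi                              # omega = 1 assumed, so T = 2*pi
t_half = np.linspace(0.0, T/2, 100_001)
t_full = np.linspace(0.0, T, 100_001)

n_half = norm(np.sin(t_half), t_half)
n_full = norm(np.sin(t_full), t_full)
print(n_half, np.sqrt(T)/2)              # both ≈ 1.2533
print(n_full, np.sqrt(T/2))              # both ≈ 1.7725
```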

Respectively, the scalar product of two functions is taken as

(f₁, f₂) = ∫_Δ f₁(t) f₂(t) dt. (23)

With these definitions, the set of functions defined for the purpose of the optimization in the interval Δ forms an (infinite-dimensional) Euclidean space.

For the quantities that interest us, we have from (23), for the absolute values,

|out(t_k)| = |(f, h)|, (24)

where (see Figures 10, 14, and 15) it is set, for simplicity of writing, h(λ) ≡ h(t_k − λ), λ ∈ Δ.

Not ascribing to “h” the index “k” is justified by the fact that the *particular* interval Δ to be actually used is finally chosen very naturally.

The basic relation (24) means that any local extremum of out(t) is such a scalar product as (23).

Observe that the physical dimensions of the norm and the scalar product are [‖f‖] = [f][t]^{1/2} and [(f₁, f₂)] = [f₁][f₂][t].

Observe also from (22) and (23) that (f, f) = ‖f‖² (27), and that if we take f₂ ∼ f₁, that is,

f₂ = Kf₁, with K a real scalar, (28)

then (27) is generalized to (f₁, f₂) = K‖f₁‖².

Indeed, using (27) and then the obvious equalities K = |K| sign K and ‖Kf‖ = |K|‖f‖, we obtain

(f₁, Kf₁) = K(f₁, f₁) = K‖f₁‖² = sign K · ‖f₁‖ ‖Kf₁‖. (30)

The factor sign K means, in particular, that the excitation of an oscillatory circuit can be equivalently done by either an f(t) or a −f(t). (Consider the concept of “overlapping” in this view.)

It follows from (30) that if (28) is provided, then

|(f₁, f₂)| = ‖f₁‖ ‖f₂‖. (31)

Furthermore, we use the fact that the following general inequality takes place:

|(f₁, f₂)| ≤ ‖f₁‖ ‖f₂‖. (32)

In view of (22) and (23), (32) is just the known Cauchy-Bunyakovsky integral inequality.

Comparing (32) with (31), we see that condition (28) provides the optimization of |(f₁, f₂)|. Applied to f and h, that is, to out(t_k) = (f, h), this conclusion regarding optimization says that the condition f ∼ h optimizes out(t_k). Thus, f = Kh optimizes the extremes of the system’s response.

Thus, we finally have the following two points. (a) We find the proper interval Δ for creating the optimal periodic f(t). (b) The proportionality f ∼ h in this interval is the optimal case of the influence of f(t) on an oscillatory circuit.

Items (a) and (b) are our definition of the generalized resonance. The case of sinusoidal h(t) is obviously included, since the proportionality to h requires f(t) to also be sinusoidal, of the same period.
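The optimality of f ∼ h is easy to see numerically. In the sketch below (an added illustration, not part of the article; the damping factor e^{−0.05t} and the unit-norm normalization are assumed for definiteness), three inputs of equal norm are compared by their scalar product with a weakly damped oscillatory h(t):

```python
import numpy as np

def integrate(y, x):
    # plain trapezoidal rule, kept explicit for self-containment
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

T = 2*np.pi
t = np.linspace(0.0, T, 100_001)
h = np.exp(-0.05*t) * np.sin(t)     # weakly damped oscillatory h(t) (assumed parameters)

def unit(f):
    """Scale f to unit norm over [0, T]."""
    return f / np.sqrt(integrate(f*f, t))

candidates = {
    "f ~ h (generalized resonance)": unit(h.copy()),
    "pure sine":                     unit(np.sin(t)),
    "square wave":                   unit(np.sign(np.sin(t))),
}
scores = {name: integrate(f*h, t) for name, f in candidates.items()}
for name, s in scores.items():
    print(name, s)
```

Only the input proportional to h attains the Cauchy-Bunyakovsky bound ‖f‖‖h‖ = ‖h‖; the equal-norm sine and square wave fall short of it.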

This mathematical situation is the constructive point, but the discussion in Sections 5.3 and 5.5 of the optimization of out(t) from a more physical point of view is useful, leading us to a very compact formulation of the extended resonance condition. However, let us first of all use the simple oscillator to check how essential the direct proportionality of f to h is, that is, what the quantitative miss may be when the waveform of f(t) differs from that of h(t) in the chosen interval Δ.

##### 5.2. An Example for a Simple Oscillator

Let us compare the cases of the *square* (Figures 13, 14, and 15) and *sinusoidal* (Figure 10) input waves of the same period, for f(t) defined in the interval [0, T/2]. Of course, the norms of the input functions have to be equal for the comparison of the respective responses. (Note that in the consideration of the above figures equality of the norms was *not* provided, and thus the following result cannot be derived from the previous discussions.)

Let the height of the square wave be 1. Then |f(t)| = 1 everywhere, and according to (22) the norm is obtained as ‖f‖ = (T/2)^{1/2}. For obtaining the same norm for a sinusoidal input, we write it as A sin ωt and find ‖A sin ωt‖ = A(T/4)^{1/2}, so that A(T/4)^{1/2} = (T/2)^{1/2}, that is, A = √2.

Because of the symmetry of the sinusoidal and square-wave inputs, in both cases f(t)h(t) ≥ 0 in the interval [0, T/2]. For either of the input waveforms the norm of f now equals (T/2)^{1/2}, and for h(t) = sin ωt of the simple oscillator (the damping in this interval is ignored), ‖h‖ = (T/4)^{1/2}; thus we have, according to (24) and (32),

|out(t_k)| ≤ ‖f‖‖h‖ = (T/2)^{1/2}(T/4)^{1/2} = T/(2√2)

as the upper bound.

Thus, while for the response to the square wave we have

(f, h) = ∫₀^{T/2} sin ωt dt = T/π ≈ 0.318 T

*only*, for the response to the input √2 sin ωt we have, for the A found,

(√2 sin ωt, sin ωt) = √2 ∫₀^{T/2} sin²ωt dt = √2 · T/4 = T/(2√2) ≈ 0.354 T, (35)

which is precisely the upper bound.

The “relative *miss* of optimality,” in the sense of the ratio of the attained response to the optimal one, in the case of the square wave, which we wanted to find, is

1 − (T/π)/(T/(2√2)) = 1 − 2√2/π ≈ 0.1 = 10%.
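This ≈10% figure is easy to reproduce numerically. The sketch below (an added illustration; ω = 1 is assumed, and the damping is ignored as in the text) compares the two equal-norm inputs over the half-period [0, T/2]:

```python
import numpy as np

def integrate(y, x):
    # plain trapezoidal rule, kept explicit for self-containment
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

T = 2*np.pi                              # omega = 1 assumed
t = np.linspace(0.0, T/2, 200_001)       # generating half-period, where sin >= 0
h = np.sin(t)                            # oscillator h(t), damping ignored

square = np.ones_like(t)                 # height-1 square wave (its positive half)
sine   = np.sqrt(2.0) * np.sin(t)        # sine scaled to the same norm (T/2)^(1/2)

print(integrate(square*square, t), integrate(sine*sine, t))  # equal squared norms, both T/2

r_square = integrate(square*h, t)        # response extreme for the square wave
r_sine   = integrate(sine*h, t)          # response extreme for the optimal input
ratio = r_square / r_sine
print(ratio, 2*np.sqrt(2)/np.pi)         # both ≈ 0.9003, i.e., a miss of about 10%
```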

##### 5.3. Analogy with the Usual Vectors

In the mathematical sense, the set of functions that can be used for the optimization of out(t) is analogous to the set of usual vectors.

For the scalar product of two usual vectors **F** and **h**, we have (compare to (32))

|**F** · **h**| ≤ |**F**||**h**| (39)

(meaning “|cos θ| ≤ 1”), where the *equality* is obtained only when the vectors are mutually proportional (“**F** ∼ **h**”), that is, similarly (or oppositely) directed:

**F** = K**h**, with K a scalar. (40)
The latter relation is *obvious*, in particular because it is obvious that while rotating a usual vector (say, a pencil) to direct it in parallel to another vector (another pencil), the length of this vector is unchanged. This point is much more delicate regarding the norm of a function being adjusted to h(t), which is the “rotation” of the “vector” in the function space. Since the waveform of the function is being changed, its norm can also be changed.

Thus, the usual physical space *very simply* gives the extreme value of |**F** · **h**| as |**F**||**h**|.

Since our “vectors” are the time functions, and the functional analog of (40) is f(t) = Kh(t) (for simplicity, we sometimes write h instead of h(t_k − λ)), we very simply obtain, *by the mathematical equivalence of the function and the vector spaces*, condition (31); that is, only an f(t) that is directly proportional to h(t) can give an extreme value for out(t_k).

For vectors of the same length (e.g., for unit vectors), |**F**| = |**h**|, and the condition of optimality, **F** = K**h**, becomes **F** = ±**h**. In the functional space, the latter means that if ‖f‖ = ‖h‖, then in order to have maximal |out(t_k)| we should take f = ±h.

##### 5.4. Comments

One can consider f ∼ h to be both a generalization of and a direct *analogy* to the condition ω = ω₀ of the standard definitions of [1, 3–5]. Then both of the equalities, f = Kh and ω = ω₀, appear in the associated theories as sufficient conditions for obtaining resonance in a linear oscillatory system. The norms become important at the next step, namely, regarding the theoretical conditions of the system’s linearity, which always include some limitations on the intensity of the function/process in any application. For applications, the real properties of the physical source of f(t) (e.g., a voltage source), whose *power* will here be proportional to ‖f‖², obviously require ‖f‖ to be limited.

The requirement of preserving the norm of f(t) during the realization of f ∼ h also necessarily originates from the practically useful formulation of the resonance problem as the *optimization problem* that requires *calculation* of the optimized peaks (or the rms value) of out(t).

If f(t) is periodic, then the interval in which the scalar products (i.e., the Euclidean functional space) are defined has to be taken over the whole period of f(t), that is, as [0, T]. (The periodicity of f(t) is a necessary condition here.)

The interval in which we define f(t) can be named the “generating” interval.

We can finally write the optimal f(t) that resulted from the optimal adjustment to h(t) as

f(t) = Kh(t), t ∈ Δ, (43)

where the function is defined in the generating interval Δ and periodically continued outside it.

We turn now to an informal “physical abstraction,” suggested by the comparison of the two Euclidean spaces. This abstraction leads us to a very compact formulation of the generalized definition of resonance.

##### 5.5. A Symmetry Argument for Formulation of the Generalized Definition of Resonance

For the usual vector space, we have well-developed vectorial analysis, in which *symmetry arguments* are widely employed. The mathematical equivalence of the two spaces under consideration suggests that such arguments—as far as they are related to the scalar products—are legitimized also in the functional space.

Recall the simple field problem in which the scalar field (e.g., electrical potential)

φ(**r**) = **c** · **r** (44)

is given by means of a constant vector **c**, and it is asked in what direction one should go in order to have the steepest change of φ(**r**).

As the methodological point, one need *not* know how to calculate a gradient. It is just *obvious* that only **c**, or a vector proportional to it, can show the direction of the gradient, since there is only *one fixed vector given*, and it is simply impossible to “construct” from the given data any other constant vector defining another direction for the gradient.

We thus consider the *axial symmetry* introduced by **c** in the physical space, which can be seen *ad hoc* as the “space of the radius-vectors,” and conclude that while catching the steepest increases of φ(**r**), we must go with some **r** = K**c**, K > 0.

Let us compare this very lucid situation with that of the functional space. In the problem of making the envelope of the *convolution out(t)* (for the whole interval 0 < λ < t) increase as steeply as possible, we have, in view of the relation out(t_k) = (f, h), to optimize the scalar product (f, h). This is quite similar to (44), because h is here the only fixed “vector” involved, that is, no other “directions” in the functional space are given.

Thus, by direct analogy to the fact that the gradient must be proportional to **c**, the optimal f(t) *must be* proportional to h(t).

We thus can say that *in terms of ZSR, that is, in terms of the convolution-integral response, resonance is a use of (or an “obeying of”) the axial symmetry introduced by h(t)* *in the space of the input functions convolving with h(t)*, that is, f ∼ h.

This argument makes the generalized definition compact and easy to remember. One just should not forget that we optimize out(t_k) in a certain interval, say the first period of f(t).

#### 6. Discussion

The traditional teaching of resonance in technical textbooks purely in terms of the steady state, that is, of the frequency response and phasors, without going deeper into the time process, that is, into the *establishment* of the steady state, is seen to be unsatisfactory.

The general tendency of engineering teachers to work only in the frequency domain is explained, but not justified, by the importance of the fields of communication and signal processing. A good understanding of the time processes is needed in physics, chemistry, biology, and also power electronics. We hope that the use of the convolution integral suggested here can, to some extent, close any such logical gap when it appears and can make the topic of resonance more interesting to a student. The described graphical application of convolution is also important for understanding the convolution integral per se. Last but not least, we hope that our generalized definition of resonance in terms of optimization of a scalar product in an interval will be useful.

On the way to the generalized definition, our hero was the father swinging a swing and not the definitions of [1, 3, 4]. Everything relevant (even the Fourier series) can be directly understood from the *freedom* that the father has when enhancing the swing’s oscillations.

In the historical plane, the simplicity of the mathematical treatment of the sinusoidal case once defined the general point of view on resonance and the standard classroom treatment, but we see that the convolution integral has become a sufficiently simple and common tool to make this definition wider.

The present criticism of the usual teaching of resonance correlates well with the “old” pedagogical advice of Guillemin [6] not to hurry with the frequency-domain analysis and to let the physical reality first be well understood in the time domain.

Direct study of waveforms (not necessarily using the graphical convolution) also reveals some specific resonant effects that are *not obtained at all* for a sinusoidal input [7, 8]. Thus, for some rectangular-wave periodic input waves, a *resonant suppression* of the response oscillations of a simple oscillator can occur *at certain, periodically repeated time intervals*, and only a direct analysis of the *waveforms* reveals this suppression [7–9]. It appears that the singularity of the waveform and its symmetry [7–9], and *not* Fourier (spectral) representation, reveal these “pauses” in the oscillatory function. Remarkably, since singularity and symmetry aspects are applied also to a nonlinear oscillatory circuit, these “pauses” in the oscillations can be similarly simply explained [7–9] for such a nonlinear circuit.

The topic of resonance is an important scientific and pedagogical point from which different mathematical and physical interpretations can be developed, and it should be revisited by a teacher.

#### Appendix

#### A. The Representation of the Circuit Response as ZIR(t) + ZSR(t) (Some Basic System-Theory Terminology for Physicists)

Besides the standard mathematical representation (3) of the solution of a linear equation, system theory commonly uses another representation in which the output function is composed of a *Zero Input Response* (ZIR) and a *Zero State Response* (ZSR).

The ZSR is influenced by the generator inputs and satisfies *zero initial conditions* (this is the meaning of the words “zero state”), and the ZIR is defined *only* by nonzero initial conditions, that is, is *not* influenced by the generator’s inputs, which is the meaning of the words “zero-input response”.

Since both the generator-type input functions and the initial conditions can be defined freely, they are both legitimized inputs and altogether form a *generalized input*.

##### A.1. The Superposition with Respect to the Generalized Input

The concept of *generalized input* (Figure 19) fully explains the construction of ZIR and ZSR via the superposition. Indeed, in the classical way of (3), the homogeneous solution is found from the homogeneous equation, which is *not* the given one but is artificially introduced. That is, the determination of the homogeneous solution is an *auxiliary problem* in which the generator’s inputs (which define the right-hand side of the given equation) are zero. The concept of the generalized input requires doing *the same* also for the initial conditions, that is, additionally using the given equation with artificially introduced zero initial conditions. Thus, according to the two different groups of the inputs, we have two parts of the whole solution, obtained from the following auxiliary, *independently solvable*, problems. *For ZIR*: the homogeneous equation (zero generator inputs) plus the needed initial conditions. *For ZSR*: the given equation with zero initial conditions.

Figure 19 schematically illustrates this presentation of the linear response.

Figure 7 reduces Figure 19 to what we actually need for the processes with zero initial conditions. The logical advantage of the presentation ZIR + ZSR over (3) becomes clear in terms of the superposition.

The ZSR includes both the decaying transient needed to satisfy the zero initial conditions and the final steady state given in its general form by the integral (A.8) below. The oscillations shown in Figure 3 are examples of ZSR.

The separation of the solution function into ZIR and ZSR is advantageous, for example, when the circuit is used to analyze the input signal, that is, when we wish to work only with the ZSR and nonzero initial conditions would just be redundant inputs.

The convolution integral (10) is the ZSR. When speaking about a system with constant parameters having one input and one output, the Laplace transform of ZSR(t) equals H(s)F(s), where H(s) is the “transfer function” of the system, that is, the Laplace transform of *h*(*t*), and F(s) is the Laplace transform of the input f(t). *Each time we speak about a transfer function, we speak about the ZSR, that is, about zero initial conditions*.

It is easy to write F(s) for our problem. Using the known formula for the Laplace transform of a periodic function and taking the optimal f(t) of (43), that is, f(t) = Kh(t) in the generating interval [0, T₁], where T₁ is the period (in the sense of the generating interval) of f(t), we have, for the periodically continued f(t),

F(s) = K (1 − e^{−sT₁})^{−1} ∫₀^{T₁} h(t) e^{−st} dt

(the integration being only over *the first* period), which is relevant to different oscillatory h(t).

##### A.2. Example

Consider, for t > 0, the following simplest example of the first-order system/equation:

dx/dt + ax = A, x(0) given,

where a and A are constants. Here, the solution of type (8) is first x(t) = Ce^{−at} + A/a, and when involving the initial condition, finally,

x(t) = (x(0) − A/a)e^{−at} + A/a,

with the initial condition and the generator function “mixed” in the first term.

The ZIR + ZSR representation is obtained by rewriting this expression as

x(t) = x(0)e^{−at} + (A/a)(1 − e^{−at})u(t).

The first term depends on the initial condition, that is, it is the ZIR, and the second term depends on the generator input (u(t) is the unit-step function), that is, it is the ZSR.

It is easy to check that the ZIR can be *independently* found from the equation dx/dt + ax = 0 and the given initial condition, and the ZSR can be *independently* found from the given equation and the zero initial condition.

For x(0) = 0, x(t) = (A/a)(1 − e^{−at})u(t), which can also be written as

x(t) = ∫₀ᵗ e^{−a(t−λ)} A dλ, t > 0,

that is (as in (10)), as the convolution of f(t) = A u(t) with h(t) = e^{−at}u(t), where h(t) is the impulse response of the first-order circuit.
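A quick numerical check of this decomposition (an added sketch; the values a = 0.7, A = 2, x(0) = 1.5 are assumed purely for illustration) confirms that the convolution form of the ZSR reproduces the closed form (A/a)(1 − e^{−at}):

```python
import numpy as np

def integrate(y, x):
    # plain trapezoidal rule, kept explicit for self-containment
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

a, A, x0 = 0.7, 2.0, 1.5                     # assumed example values

def zsr_conv(t, pts=20_000):
    """ZSR as the convolution integral: int_0^t A exp(-a (t - lam)) dlam."""
    lam = np.linspace(0.0, t, pts)
    return integrate(A * np.exp(-a*(t - lam)), lam)

for t in (1.0, 4.0, 9.0):
    zir = x0 * np.exp(-a*t)                  # zero-input response
    zsr = (A/a) * (1.0 - np.exp(-a*t))       # zero-state response, closed form
    print(t, zsr_conv(t), zsr, zir + zsr)    # convolution matches the closed form
```

For large t the ZIR term dies out and the whole response approaches A/a, coming entirely from the ZSR, as the text notes next.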

Considering (A.4), one sees that the ZSR includes (at finite t) decaying components of the same *type* as the ZIR, and that the asymptotic response A/a originates from the ZSR as t → ∞, and *not at all* from the ZIR.

##### A.3. The Response as t → ∞

If the limit of x(t) as t → ∞ (as in the above example) exists, then it is obtained as x(∞) = lim_{t→∞} ∫₀ᵗ f(λ)h(t − λ) dλ, but if this limit does not exist, then the time function of the final state is given by making the upper limit of the integration infinite:

out_asympt(t) = ∫₀^∞ f(t − λ)h(λ) dλ (A.8)

(i.e., the roles of the argument “t” are different in the different places in which it appears).

The integral in (A.8) can be rewritten as

∫_{−∞}^{t} f(λ) h(t − λ) dλ. (A.9)

Dealing with the asymptotic solution (A.9) is typical for *stochastic problems*, where, contrary to our statement of the resonance problem, the initial conditions are not important.

When speaking about convolution only in the form ∫_{−∞}^{t} f(λ)h(t − λ) dλ, one misses the effects of the initial conditions, which are important for our analysis, and it is inevitable that only the spectral approach appears to be relevant.

##### A.4. A Case When ZIR + ZSR Is Directly Obtained

When a differential equation can be directly solved by integration, the solution is directly obtained in the form ZIR + ZSR. Thus, for Newton’s equation written in the usual notations,

m d²x/dt² = F(t), (A.11)

we have

x(t) = x(0) + v(0)t + (1/m) ∫₀ᵗ (t − λ)F(λ) dλ, (A.12)

which obviously is ZIR + ZSR. Superposition *with respect to the force F(t)* is realized only by the ZSR.
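The double integration behind (A.12) can be verified in a few lines. In the sketch below (an added illustration; the constant force F = 3 and the values m = 2, x(0) = 1, v(0) = 0.5 are assumed), the ZSR integral reproduces the elementary result x(0) + v(0)t + Ft²/(2m):

```python
import numpy as np

def integrate(y, x):
    # plain trapezoidal rule, kept explicit for self-containment
    return float(np.sum((y[1:] + y[:-1]) * np.diff(x)) / 2.0)

m, x0, v0 = 2.0, 1.0, 0.5                     # assumed mass and initial state
F = lambda lam: 3.0 * np.ones_like(lam)       # constant force F = 3 (assumed)

t = 4.0
lam = np.linspace(0.0, t, 100_001)
x_zsr  = integrate((t - lam) * F(lam), lam) / m   # ZSR part of (A.12)
x_full = x0 + v0*t + x_zsr                        # ZIR + ZSR

x_exact = x0 + v0*t + 3.0 * t**2 / (2.0*m)        # x0 + v0 t + F t^2 / (2m)
print(x_full, x_exact)                            # both ≈ 15.0
```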

Consider also (A.12) for given x(0) and v(0).

The presentation ZIR + ZSR is generally relevant to *linear time-variant* (LTV, “parametric”) equations, which include the equations with constant parameters as a special case. For instance, if the mass in (A.11) depends on time, the integrand in the ZSR term of (A.12) becomes (t − λ)F(λ)/m(λ). Generally, LTV equations are very difficult, but for *any linear homogeneous equation* (*e.g., the equation of parametric resonance*), for which the ZSR need not be found, *it follows from the linearity* that the solution (which then is just the ZIR) is a linear combination of the initial values with known time functions as the coefficients. Since the initial values are legitimized inputs (Figure 19), this is the usual linear superposition.

#### References

1. L. D. Landau and E. M. Lifschitz, *Mechanics*, Pergamon, New York, NY, USA, 1974.
2. L. I. Mandelstam, *Lectures on the Theory of Oscillations*, Nauka, Moscow, Russia, 1972.
3. W. H. Hayt and J. E. Kemmerly, *Engineering Circuit Analysis*, McGraw-Hill, New York, NY, USA, 1993.
4. J. D. Irwin, *Basic Engineering Circuit Analysis*, Wiley, New York, NY, USA, 1998.
5. C. A. Desoer and E. S. Kuh, *Basic Circuit Theory*, McGraw-Hill, New York, NY, USA, 1969.
6. E. A. Guillemin, “Teaching of system theory and its impact on other disciplines,” *Proceedings of the IRE*, pp. 872–878, 1961.
7. E. Gluskin, “The internal resonance relations in the pause states of a nonlinear LCR circuit,” *Physics Letters A*, vol. 175, no. 2, pp. 121–132, 1993.
8. E. Gluskin, “The asymptotic superposition of steady-state electrical current responses of a nonlinear oscillatory circuit to certain input voltage waves,” *Physics Letters A*, vol. 159, no. 1-2, pp. 38–46, 1991.
9. E. Gluskin, “The symmetry argument in the analysis of oscillatory processes,” *Physics Letters A*, vol. 144, no. 4-5, pp. 206–210, 1990.