Abstract
We aim to pave a smooth road to a proper understanding of control problems in terms of mathematical disciplines, and to show in part how some practical problems can be number-theorized. Our primary concern is linear systems, viewed from the standpoint of our principle of visualization of the state, an interface between the past and the present. We view all systems as embedded in the state equation, thus visualizing the state. We then treat the chain-scattering representation of the plant due to Kimura 1997, which incorporates the feedback connection in a natural way, and we consider the $H^\infty$-control problem in this framework. In particular, the unity feedback system may be viewed as accommodated in the chain-scattering representation, giving better insight into the structure of the system. Its homographic transformation works as the action of the symplectic group on the Siegel upper half-space in the case of constant matrices. Both FOPID- and PID-controllers are applied successfully to EV control by J.-Y. Cao and B.-G. Cao 2006 and Cao et al. 2007, and we may unify them in our framework. Finally, we mention some similarities between control theory and zeta-functions.
1. Introduction and Preliminaries
It turns out that there is a great similarity between control theory and number theory in their treatment of signals in the time domain and the frequency domain: the passage between the two is conducted by the Laplace transform in the case of control theory, while in the theory of zeta-functions this role is played by the Mellin transform, both of which convert signals in the time domain into functions on a right half-plane. For integral transforms, compare Section 11.
Section 5 introduces the Hardy space $H^\infty$, which consists of bounded analytic functions on the right half-plane $\{s : \operatorname{Re} s > 0\}$.
2. State Space Representation and the Visualization Principle
Let $x = x(t)$, $u = u(t)$, and $y = y(t)$ be the state function, input function, and output function, respectively. We write $\dot{x}$ for $\frac{dx}{dt}$. The system of DEs (differential equations)
$$\dot{x}(t) = Ax(t) + Bu(t), \qquad y(t) = Cx(t) + Du(t) \tag{2.1}$$
is called a state equation for a linear system, where $A$, $B$, $C$, $D$ are given constant matrices.
The state $x$ is not visible while the input and output are, and the state may be thought of as an interface between the past and the present information, since it contains all the information that the system has carried from the past. The state being invisible, (2.1) reduces to an input-output relation (2.2), which appears in many places in the literature in disguised form. All subsequent systems, for example (3.1), are variations of (2.2). Whenever we wish to obtain the state equation, we restore the state and make recourse to (2.1), which we call the visualization principle. In the case of a feedback system, it is often the case that (2.2) is given in the form (3.8). It is quite remarkable that this controller acts on the matrix variable in symplectic geometry (compare Section 4).
Using the matrix exponential $e^{At}$, the first equation in (2.1) can be solved in the same way as in the scalar case:
$$x(t) = e^{At}x(0) + \int_0^t e^{A(t-\tau)}Bu(\tau)\,d\tau.$$
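The variation-of-constants formula above can be checked numerically. A minimal sketch for the zero-input response (the matrix $A$ and initial state are our own illustrative choices, not data from the text):

```python
import numpy as np
from scipy.linalg import expm

# Illustrative stable system (our own choice): x' = A x, x(0) = x0.
A = np.array([[0.0, 1.0], [-2.0, -3.0]])   # eigenvalues -1 and -2
x0 = np.array([1.0, 0.0])

# Zero-input solution x(t) = e^{At} x(0) via the matrix exponential.
t = 1.0
x_expm = expm(A * t) @ x0

# Cross-check against the eigendecomposition A = V diag(lam) V^{-1},
# so that e^{At} = V diag(e^{lam t}) V^{-1}.
lam, V = np.linalg.eig(A)
x_eig = (V @ np.diag(np.exp(lam * t)) @ np.linalg.inv(V) @ x0).real
```

For this $A$ the hand computation gives $x_1(t) = 2e^{-t} - e^{-2t}$, which the two numerical routes reproduce.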
Definition 2.1. A linear system with input $u = 0$, called an autonomous system, is said to be asymptotically stable if, for all initial values, the state $x(t)$ approaches $0$ as $t \to \infty$.
Since the solution of (2.4) is given by $x(t) = e^{At}x(0)$, the system is asymptotically stable if and only if
$$e^{At} \to O \quad (t \to \infty). \tag{2.6}$$
A linear system is said to be stable if (2.6) holds, which is the case if and only if all the eigenvalues of $A$ have negative real parts. Compare Section 5 in this regard. It also amounts to saying that the step response of the system approaches a limit as time elapses, where the step response means the response to the unit step function as input, which is $0$ for $t < 0$ and $1$ for $t \ge 0$.
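The eigenvalue criterion is easy to test numerically; a sketch with two matrices of our own choosing, one Hurwitz and one not:

```python
import numpy as np
from scipy.linalg import expm

def is_asymptotically_stable(A):
    """Hurwitz test: every eigenvalue of A must have negative real part."""
    return bool(np.all(np.linalg.eigvals(A).real < 0))

A_stable = np.array([[0.0, 1.0], [-2.0, -3.0]])    # eigenvalues -1, -2
A_unstable = np.array([[0.0, 1.0], [2.0, 1.0]])    # eigenvalues 2, -1

# For the stable matrix, e^{At} decays to the zero matrix; for the
# unstable one it blows up, in accordance with (2.6).
decay = np.linalg.norm(expm(A_stable * 10.0))
growth = np.linalg.norm(expm(A_unstable * 10.0))
```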
Up to here, things have been happening in the time domain. We now move to the frequency domain. For this purpose, we refer to the Laplace transform, discussed in Section 11; it has the effect of shifting from the time domain to the frequency domain and vice versa. For more details see, for example, [1]. Taking the Laplace transform of (2.1) with $x(0) = 0$, we obtain $sX(s) = AX(s) + BU(s)$, $Y(s) = CX(s) + DU(s)$, which we solve as
$$Y(s) = \bigl(C(sI - A)^{-1}B + D\bigr)U(s),$$
where $I$ indicates the identity matrix, which is sometimes denoted by $I_n$ to show its size.
In general, supposing that the initial values of all the signals in a system are $0$, we call the ratio output/input of the (transformed) signals the transfer function, and denote it by $G(s)$, $P(s)$, and so forth. We may suppose so because, if the system is in equilibrium, we may take the values of the parameters at that moment as the standard and suppose the initial values to be $0$.
Equation (2.10), $G(s) = C(sI - A)^{-1}B + D$, is called the state space representation (form, realization, description, characterization) of the transfer function of the system (2.1) and is written as $G(s) = \left[\begin{smallmatrix} A & B \\ C & D \end{smallmatrix}\right]$.
According to the visualization principle above, we have the embedding principle: given a state space representation of a transfer function $G(s)$, it is to be embedded in the state equation (2.1).
Example 2.2. If then it follows from (2.10) that
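The formula (2.10) lends itself to direct numerical evaluation. A minimal sketch (the companion-form realization below is our own choice, not necessarily the example in the text):

```python
import numpy as np

def transfer_function(A, B, C, D, s):
    """G(s) = C (sI - A)^{-1} B + D, evaluated at a complex frequency s."""
    n = A.shape[0]
    return C @ np.linalg.solve(s * np.eye(n) - A, B) + D

# Companion realization (our own example) of G(s) = 1/(s^2 + 3s + 2).
A = np.array([[0.0, 1.0], [-2.0, -3.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

s = 1.0 + 2.0j
G = transfer_function(A, B, C, D, s)[0, 0]
```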
The principle above establishes the most important cascade connection (concatenation rule) [1, (2.13), page 15]. Given two state space representations (2.14), their cascade connection is given by (2.15).
Proof of (2.15). We have the input/output relation (2.10)
which means that
Eliminating , we conclude that
Hence
whence we conclude (2.15).
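The concatenation rule just proved can be verified numerically: the block realization below feeds the output of the first system into the second, and is checked against the product of the transfer functions (the two first-order systems are our own toy examples).

```python
import numpy as np

def tf(A, B, C, D, s):
    return C @ np.linalg.solve(s * np.eye(A.shape[0]) - A, B) + D

def cascade(sys1, sys2):
    """State-space realization of G2(s) G1(s): output of sys1 drives sys2."""
    A1, B1, C1, D1 = sys1
    A2, B2, C2, D2 = sys2
    A = np.block([[A1, np.zeros((A1.shape[0], A2.shape[0]))],
                  [B2 @ C1, A2]])
    B = np.vstack([B1, B2 @ D1])
    C = np.hstack([D2 @ C1, C2])
    D = D2 @ D1
    return A, B, C, D

# Toy examples of our own: G1(s) = 1/(s+1), G2(s) = 1/(s+2).
sys1 = (np.array([[-1.0]]), np.array([[1.0]]), np.array([[1.0]]), np.array([[0.0]]))
sys2 = (np.array([[-2.0]]), np.array([[1.0]]), np.array([[1.0]]), np.array([[0.0]]))

s = 0.5 + 1.0j
G = tf(*cascade(sys1, sys2), s)[0, 0]
```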
Example 2.3. Given two state space representations (2.14), their parallel connection is given by
Indeed, we have (2.17), and for (2.18), we have
Hence for (2.20), we have
whence (2.21) follows.
As an example, combining (2.15) and (2.21) we deduce
Example 2.4. For (2.1), we consider the inversion. Solving the second equality in (2.1) for $u$ (assuming $D$ is square and invertible), we obtain
Substituting this in the first equality in (2.1), we obtain
Whence
Example 2.5. If the transfer function has a state space representation then we are to embed it in the linear system
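The inversion of Example 2.4 yields the realization $(A - BD^{-1}C,\ BD^{-1},\ -D^{-1}C,\ D^{-1})$ of $G^{-1}$, which can be verified on a toy biproper example of our own:

```python
import numpy as np

def tf(A, B, C, D, s):
    return C @ np.linalg.solve(s * np.eye(A.shape[0]) - A, B) + D

def inverse_system(A, B, C, D):
    """Realization of G(s)^{-1}, valid when D is square and invertible."""
    Dinv = np.linalg.inv(D)
    return A - B @ Dinv @ C, B @ Dinv, -Dinv @ C, Dinv

# Biproper example of our own: G(s) = 1 + 1/(s+1) = (s+2)/(s+1).
A = np.array([[-1.0]]); B = np.array([[1.0]])
C = np.array([[1.0]]);  D = np.array([[1.0]])

s = 0.3 + 0.7j
G = tf(A, B, C, D, s)[0, 0]
Ginv = tf(*inverse_system(A, B, C, D), s)[0, 0]
```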
3. Chain-Scattering Representation
Following [1, pages 7 and 67], we first give the definition of a chain-scattering representation of a system.
Suppose $z$, $y$, $w$, and $u$ are related by
$$\begin{pmatrix} z \\ y \end{pmatrix} = P\begin{pmatrix} w \\ u \end{pmatrix}, \tag{3.1}$$
where
$$P = \begin{pmatrix} P_{11} & P_{12} \\ P_{21} & P_{22} \end{pmatrix}. \tag{3.2}$$
According to the embedding principle, this is to be thought of as corresponding to the second equality in (2.1).
Equation (3.1) means that
$$z = P_{11}w + P_{12}u, \qquad y = P_{21}w + P_{22}u. \tag{3.3}$$
Assume that $P_{21}$ is a (square) regular matrix. Then from the second equality of (3.3), we obtain
$$w = P_{21}^{-1}(y - P_{22}u). \tag{3.4}$$
Substituting (3.4) in the first equality of (3.3), we deduce that
$$z = \bigl(P_{12} - P_{11}P_{21}^{-1}P_{22}\bigr)u + P_{11}P_{21}^{-1}y.$$
Hence, putting
$$\Theta = \operatorname{CHAIN}(P) = \begin{pmatrix} P_{12} - P_{11}P_{21}^{-1}P_{22} & P_{11}P_{21}^{-1} \\ -P_{21}^{-1}P_{22} & P_{21}^{-1} \end{pmatrix}, \tag{3.6}$$
which is usually referred to as a chain-scattering representation of $P$, we obtain an equivalent form of (3.1):
$$\begin{pmatrix} z \\ w \end{pmatrix} = \Theta\begin{pmatrix} u \\ y \end{pmatrix}. \tag{3.7}$$
Suppose that $y$ is fed back to $u$ by
$$u = Sy, \tag{3.8}$$
where $S$ is a controller. Multiplying the second equality in (3.3) by $S$ and incorporating (3.8), we find that $u = SP_{21}w + SP_{22}u$,
whence $u = (I - SP_{22})^{-1}SP_{21}w$.
Let the closed-loop transfer function $\Phi$ be defined by $z = \Phi w$. Then $\Phi$ is given by
$$\Phi = P_{11} + P_{12}S(I - P_{22}S)^{-1}P_{21}. \tag{3.11}$$
Equation (3.11) is sometimes referred to as a linear fractional transformation and denoted by $LF(P;S)$. Substituting (3.8), (3.7) becomes
$$\begin{pmatrix} z \\ w \end{pmatrix} = \begin{pmatrix} (\Theta_{11}S + \Theta_{12})y \\ (\Theta_{21}S + \Theta_{22})y \end{pmatrix},$$
whence we deduce that
$$\Phi = (\Theta_{11}S + \Theta_{12})(\Theta_{21}S + \Theta_{22})^{-1} =: HM(\Theta; S), \tag{3.14}$$
the linear fractional transformation (referred to as a homographic transformation and denoted by $HM(\Theta;S)$), where in the last equality we mean the action of $\Theta$ on the variable $S$. We must impose the nonsingularity condition $\det(\Theta_{21}S + \Theta_{22}) \ne 0$. Then $z = HM(\Theta;S)w$.
If $\Phi$ is obtained from $S$ under the action of $\Theta_2$, then its composition with (3.14) yields
$$HM(\Theta_1; HM(\Theta_2; S)) = HM(\Theta_1\Theta_2; S), \tag{3.15}$$
which is referred to as the cascade connection or the cascade structure of $\Theta_1$ and $\Theta_2$.
Thus the chain-scattering representation of a system allows us to treat the feedback connection as a cascade connection.
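The cascade structure of the homographic transformation can be illustrated numerically in the scalar case (the chain matrices and the "controller" value below are our own toy numbers):

```python
import numpy as np

def hm(Theta, S):
    """Homographic transformation HM(Theta; S) = (T11 S + T12) / (T21 S + T22),
    written here for scalar entries of a 2 x 2 chain matrix Theta."""
    t11, t12, t21, t22 = Theta.ravel()
    return (t11 * S + t12) / (t21 * S + t22)

Theta1 = np.array([[1.0, 2.0], [3.0, 4.0]])
Theta2 = np.array([[2.0, -1.0], [1.0, 1.0]])
S = 0.7

# Cascade structure: acting by the product Theta1 @ Theta2
# equals acting twice in succession.
lhs = hm(Theta1 @ Theta2, S)
rhs = hm(Theta1, hm(Theta2, S))
```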
Suppose a closed-loop system is given, with the signals $z$, $y$, $w$, and $u$ and with $P$ given by (3.2).
$H^\infty$-Control Problem. Find a controller $S$ such that the closed-loop system is internally stable and the transfer function $\Phi$ satisfies $\|\Phi\|_\infty < \gamma$ for a positive constant $\gamma$. For the meaning of the norm, compare Section 5.
4. Siegel Upper Space
Let $Z^*$ denote the conjugate transpose of a square matrix $Z$: $Z^* = \bar{Z}^T$, and let the imaginary part of $Z$ be defined by $\operatorname{Im} Z = \frac{1}{2i}(Z - Z^*)$. Let $\mathfrak{H}_n$ be the Siegel upper half-space consisting of all the $n \times n$ complex symmetric matrices $Z$ (recall (3.8)) whose imaginary parts are positive definite (i.e., all eigenvalues of $\operatorname{Im} Z$ are positive):
$$\mathfrak{H}_n = \{Z \in M_n(\mathbb{C}) : Z^T = Z,\ \operatorname{Im} Z > 0\},$$
and let $Sp(n,\mathbb{R})$ denote the symplectic group of order $n$:
$$Sp(n,\mathbb{R}) = \{M \in M_{2n}(\mathbb{R}) : M^T J_n M = J_n\}, \qquad J_n = \begin{pmatrix} O & I_n \\ -I_n & O \end{pmatrix}.$$
The action of $Sp(n,\mathbb{R})$ on $\mathfrak{H}_n$ is defined by (3.14), which we restate as
$$M \cdot Z = (AZ + B)(CZ + D)^{-1}, \qquad M = \begin{pmatrix} A & B \\ C & D \end{pmatrix} \in Sp(n,\mathbb{R}).$$
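In the one-variable case $n = 1$ this is the familiar Möbius action of $SL(2,\mathbb{R})$ on the upper half-plane, and the preservation of the half-plane, $\operatorname{Im}(M\cdot z) = \operatorname{Im} z / |cz + d|^2$, is easy to check numerically (the matrix and point are our own examples):

```python
import numpy as np

def mobius(M, z):
    """Action z -> (a z + b) / (c z + d) of M = [[a, b], [c, d]]."""
    a, b, c, d = M.ravel()
    return (a * z + b) / (c * z + d)

# det M = 1, so M lies in SL(2, R), the n = 1 case of the symplectic group.
M = np.array([[2.0, 1.0], [1.0, 1.0]])
z = 0.3 + 1.2j                      # a point of the upper half-plane

w = mobius(M, z)
# For det M = 1 one has Im(M.z) = Im z / |c z + d|^2 > 0.
predicted_im = z.imag / abs(M[1, 0] * z + M[1, 1])**2
```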
Theorem 4.1. For a controller $S$ living in the Siegel upper half-space, its rotation lies in the right half-space, that is, it is stable, having eigenvalues with positive real parts. For the controller $S$, the feedback connection is accommodated in the cascade connection of the chain-scattering representation (3.15), which is then viewed as the action (3.15) of $Sp(n,\mathbb{R})$ on $\mathfrak{H}_n$, where $\Theta$ is subject to the symplectic condition. An FOPID controller (in Section 6), being a unity feedback connection, is also accommodated in this framework.
Remark 4.2. With this action, we may introduce the orbit decomposition of $\mathfrak{H}_n$ and whence the fundamental domain. We note that, in the special case $n = 1$, we have the complex upper half-plane and $Sp(1,\mathbb{R}) = SL_2(\mathbb{R})$, and the theory of modular forms of one variable is well known. Siegel modular forms are a generalization of the one-variable case to several variables. As in the case of the sushmna principle in [2], there is a need to rotate the upper half-space into the right half-space, which is a counterpart of the right half-plane. In the case of Siegel modular forms, the matrices are constant, while in control theory they are analytic functions (mostly rational functions analytic in the right half-plane). A general theory would be useful for control theory. See Section 7 for physically realizable cases. There are many research problems lying in this direction.
5. Norm of the Function Spaces
The norm of a vector $x \in \mathbb{R}^n$ may be defined to be the Euclidean norm, the sup norm, or anything that satisfies the axioms of a norm; they all introduce the same topology on $\mathbb{R}^n$.
The norm of a matrix may be defined in a similar way by viewing its elements as an $n^2$-dimensional vector, that is, by embedding it in $\mathbb{R}^{n^2}$.
The sup norm is the limit of the $p$-norm as $p \to \infty$: for $x \in \mathbb{R}^n$, $\|x\|_\infty = \lim_{p\to\infty}\|x\|_p$.
Suppose $\|x\|_\infty = M = \max_{1\le i\le n}|x_i|$. Then $\|x\|_p \ge M$ for any $p$.
On the other hand, since $|x_i| \le M$ for every $i$, we obtain $\|x\|_p \le n^{1/p}M$.
For $n^{1/p}$, the Bernoulli inequality gives $n^{1/p} \to 1$ as $p \to \infty$. Hence the right-hand side of (5.5) tends to $M$, and (5.4) follows.
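The convergence $\|x\|_p \to \|x\|_\infty$ can be observed numerically (the vector below is an arbitrary example of ours):

```python
import numpy as np

x = np.array([3.0, -4.0, 1.0, 2.0])

def p_norm(x, p):
    return float(np.sum(np.abs(x)**p)**(1.0 / p))

sup_norm = float(np.max(np.abs(x)))                 # = 4
approximants = [p_norm(x, p) for p in (1, 2, 10, 100)]
# The p-norms decrease toward the sup norm as p grows.
```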
The proof of (5.4) can be readily generalized to give
The $L^p$-norm in (5.6) is defined by
$$\|f\|_p = \left(\int \|f(t)\|^p\,dt\right)^{1/p},$$
where $\|\cdot\|$ is any Euclidean norm. Note that the elements of $L^p$ are not ordinary functions but classes of functions, two functions being regarded as the same if they differ only on a set of measure $0$. $L^p$ is a Banach space (i.e., a complete normed space), and in particular $L^2$ is a Hilbert space. The $2$-norm is induced from the inner product
$$\langle f, g\rangle = \int g^*(t)f(t)\,dt,$$
where $*$ refers to the transposed complex conjugate.
The Parseval identity holds true if and only if the system is complete.
However, the restriction that $f(t) \to 0$ as $t \to \infty$ excludes signals of infinite duration, such as unit step signals or periodic ones, from $L^2$. To circumvent this inconvenience, the notion of an averaged norm is important, and the power norm has been introduced:
$$\|f\|_{\mathrm{pow}} = \left(\lim_{T\to\infty}\frac{1}{2T}\int_{-T}^{T}\|f(t)\|^2\,dt\right)^{1/2}.$$
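A quick numerical sketch of the power norm for a periodic signal: $f(t) = \sin t$ is not in $L^2(\mathbb{R})$, yet its power norm is $1/\sqrt{2}$, which a finite-$T$ average already approximates well.

```python
import numpy as np

# Power norm: ||f||_pow^2 = lim_{T->inf} (1/(2T)) * int_{-T}^{T} |f(t)|^2 dt.
# For f(t) = sin t the limit is 1/2, so the power norm is 1/sqrt(2).
T = 1000.0
t = np.linspace(-T, T, 2_000_001)
power_norm = float(np.sqrt(np.mean(np.sin(t)**2)))
```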
Remark 5.1. In mathematics and in particular in analytic number theory, studying the mean square in the form of a sum or an integral is quite common. Especially, this idea is applied to finding out the true order of magnitude of the error term on average. Such an average result will give a hint on the order of the error term itself.
Example 5.2. Let $\zeta(s)$ denote the Riemann zeta-function, defined for $\sigma = \operatorname{Re} s > 1$ in the first instance, where it is analytic, and then continued meromorphically over the whole complex plane with a simple pole at $s = 1$. It is essential for the prime number theorem (PNT) to hold that $\zeta(s)$ does not vanish on the line $\sigma = 1$. The plausible best bound for the error term in the PNT is equivalent to the celebrated Riemann hypothesis (RH), to the effect that the Riemann zeta-function does not vanish off the critical line $\sigma = \frac{1}{2}$. Since the values on the critical line are expected to be small, the averaged norm, that is, the mean value
$$\int_0^T \bigl|\zeta\bigl(\tfrac{1}{2}+it\bigr)\bigr|^{2k}\,dt$$
is of great interest, and a great deal of research has appeared on the subject. The first result for $k = 2$ is due to Ingham, who used the approximate functional equation for the Riemann zeta-function. See, for example, [3]. The main interest in such estimates as (5.10) lies in the fact that suitable estimates for all $k$ would imply the weak Lindelöf hypothesis (LH) in the form $\zeta(\frac{1}{2}+it) = O(|t|^{\varepsilon})$ for every $\varepsilon > 0$. It is apparent that the RH implies the LH.
The Hardy space $H^\infty$ (cf. e.g., [1, page 39]) is well known. It consists of all $f$ which are analytic in the right half-plane $\operatorname{Re} s > 0$ and satisfy $\sup_{\operatorname{Re} s > 0}|f(s)| < \infty$, equipped with the sup norm. Thus the $H^\infty$-control problem is about those (rational) functions which are analytic in the right half-plane, a fortiori stable, with regard to the sup norm. Thus the above-mentioned mean-value problem for the Riemann zeta-function is related to the $H^2$-control problem with finite Dirichlet series (the main ingredients in the approximate functional equation). Since the $H^\infty$-control problem asks for all individual values, it flows afar from the $H^2$-control problem and goes up to the LH or the RH.
6. (Unity) Feedback System
The synthesis problem for a controller of the unity feedback system, depicted in Figure 1, refers to the sensitivity reduction problem, which asks for an estimate of the sensitivity function
$$S = \frac{1}{1+PC}$$
multiplied by an appropriate frequency weighting function $W$. Here $S$ is the transfer function from the reference input $r$ to the tracking error $e$, $C$ is a compensator, and $P$ is a plant. The problem consists in reducing the magnitude of $WS$ over a specified frequency range, which amounts to finding a compensator $C$ stabilizing the closed-loop system such that $\|WS\|_\infty < \gamma$ for a positive constant $\gamma$.
To accommodate this in the control problem (3.1), we choose the matrix elements of $P$ in (3.2) in such a way that the closed-loop transfer function in (3.11) coincides with $WS$; a suitable choice of the entries then presents itself.
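A toy numerical sketch of sensitivity reduction (the plant, compensator, and gain are our own illustrative choices, not from the text): with $P(s) = 1/(s+1)$ and a constant compensator $C(s) = k$, high loop gain pushes $|S(i\omega)|$ down at low frequency.

```python
import numpy as np

# Unity feedback sketch: S = 1/(1 + PC) maps the reference r to the error e.
k = 9.0
P = lambda s: 1.0 / (s + 1.0)
C = lambda s: k
S = lambda s: 1.0 / (1.0 + P(s) * C(s))

# Evaluate |S| on the imaginary axis s = i w.
w = np.logspace(-2, 3, 2000)
mag = np.abs(S(1j * w))
low_freq_gain = abs(S(0.01j))   # ~ 1/(1+k) = 0.1 at low frequency
```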
Example 6.1. First we treat the case of a general feedback scheme. Denoting the Laplace transforms by the corresponding capital letters, we have the closed-loop relations,
whence the transfer functions follow. Now, in the case of unity feedback, we derive (6.1) directly from Figure 2. We have $E = R - Y$ and $Y = PCE$, so that $E = R - PCE$. Solving in $E$, we deduce that $E = \frac{1}{1+PC}R$.
We take the disturbance $d$ into account, and we obtain the corresponding relations.
It follows that, in the case where $L = PC$, $L$ being the open-loop transfer function, $E = \frac{1}{1+L}R$ is the tracking error for the input $r$. Hence (6.1) holds true.
7. $J$-Lossless Factorization and Dualization
In this section we mostly follow Helton ([4-6]), who uses the unit ball in place of the half-plane; they shift to each other under the complex exponential map. For conventional control theory, the unit ball is to be replaced by the critical line. In practice what appears is the algebra of functions of [5, page 2] (Table 1), or a still larger algebra consisting of those functions which have (pseudo)meromorphic continuations ([5, footnote 6, page 27]). The occurrence of the gamma function [5, Figure 2.5, page 17] justifies our incorporation of more advanced special functions, and ultimately zeta-functions, in control theory (see Section 13).
Along with the algebra , one considers
Then the only mappings acting on it must satisfy the $J$-lossless property, $J$ denoting the corresponding signature matrix.
Then which is interpreted to be the power preservation of the system in the chain-scattering representation (3.6) ([1, page 82]).
We now briefly refer to the dual chain-scattering representation of the plant $P$ in (3.2). We assume that $P_{12}$ is a square invertible matrix. Then the argument goes in parallel to that leading to (3.7). Defining the dual chain-scattering matrix accordingly, we obtain
8. FOPID
"FO" means "fractional order" and "PID" refers to "proportional, integral, differential": "proportional" means a constant multiple $K_P e(t)$ of the deviation $e(t)$, "integral" means fractional-order integration $K_I D^{-\lambda}e(t)$ ($\lambda > 0$), and "differential" means fractional-order differentiation $K_D D^{\mu}e(t)$ ($\mu > 0$).
The FOPID controller (the control signal in the time domain) is one of the most refined feed-forward compensators, defined by
$$u(t) = K_P e(t) + K_I D^{-\lambda}e(t) + K_D D^{\mu}e(t), \tag{8.1}$$
where $u$ is the input function, $e$ is the deviation, and $K_P$, $K_I$, $K_D$, $\lambda$, $\mu$ are constant parameters which are to be specified ($K_P$: the position feedback gain, $K_D$: the velocity feedback gain). DE (8.1) translates into the state equation
$$U(s) = C(s)E(s),$$
where $U(s)$, $E(s)$ indicate the Laplace transforms of $u$, $e$, respectively, and $C(s)$ is the compensator's continuous transfer function
$$C(s) = K_P + K_I s^{-\lambda} + K_D s^{\mu}. \tag{8.3}$$
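A minimal sketch of the FOPID transfer function (8.3); the gains and orders below are illustrative choices of ours, not values from the text. For $\lambda = \mu = 1$ the controller degenerates to the classical PID controller, which gives an easy consistency check.

```python
def fopid(s, Kp, Ki, Kd, lam, mu):
    """Fractional-order PID: C(s) = Kp + Ki * s^(-lam) + Kd * s^mu."""
    return Kp + Ki / s**lam + Kd * s**mu

s = 2.0j  # a point on the imaginary axis

# With lam = mu = 1, FOPID reduces to the classical PID controller.
classical_limit = fopid(s, 1.0, 0.5, 0.1, 1.0, 1.0)
classical_pid = 1.0 + 0.5 / s + 0.1 * s

# A genuinely fractional choice of orders:
fractional = fopid(s, 1.0, 0.5, 0.1, 0.8, 0.6)
```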
The derivation of (8.3) from (8.1) depends on the following. The general fractional calculus operator ${}_aD_t^{\mu}$ is symbolically stated as
$${}_aD_t^{\mu} = \begin{cases} \dfrac{d^{\mu}}{dt^{\mu}}, & \mu > 0,\\ 1, & \mu = 0,\\ \displaystyle\int_a^t (d\tau)^{-\mu}, & \mu < 0, \end{cases}$$
where $a$ and $t$ are the lower and upper limits of integration and $\mu$ is the order of the calculus.
More precisely, the definition of the fractional differintegral is given by the Riemann-Liouville expression
$${}_aD_t^{\mu}f(t) = \frac{1}{\Gamma(1-\{\mu\})}\frac{d^{[\mu]+1}}{dt^{[\mu]+1}}\int_a^t \frac{f(\tau)}{(t-\tau)^{\{\mu\}}}\,d\tau, \tag{8.5}$$
where $\{\mu\}$ indicates the fractional part of $\mu$, with $[\mu]$ the integral part of $\mu$. Thus we are also led to the Riemann-Liouville fractional integral transform:
$$({}_aI_t^{\mu}f)(t) = \frac{1}{\Gamma(\mu)}\int_a^t (t-\tau)^{\mu-1}f(\tau)\,d\tau.$$
For applications, compare Section 13.
When $\mu = m$ is a nonnegative integer, (8.5) reads the $m$th derivative of $f$.
We will see that the definition (8.5) is a natural outcome of the general formula for the difference operator of order $\mu$ with difference $h$:
$$\Delta_h^{\mu}f(t) = \sum_{k=0}^{\infty}(-1)^k\binom{\mu}{k}f(t-kh).$$
If has the -th derivative , then
The special case of (8.9) with $\mu = n$ a positive integer reads
$$f^{(n)}(t) = \lim_{h\to 0}\frac{1}{h^n}\Delta_h^n f(t),$$
whose far right-hand side is the classical limit of difference quotients.
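The difference-operator formula also yields a practical numerical scheme, usually called the Grünwald-Letnikov approximation: truncate the sum at the lower terminal and let $h$ be small. A minimal sketch (the test function and order are our own choices), checked against the known half-derivative $D^{1/2}t = 2\sqrt{t/\pi}$:

```python
from math import pi, sqrt

def gl_fractional_derivative(f, t, mu, h=1e-3):
    """Grunwald-Letnikov approximation (lower terminal 0) of the order-mu
    derivative: D^mu f(t) ~ h^(-mu) * sum_k w_k f(t - k h), where the
    binomial weights obey w_0 = 1, w_k = w_{k-1} * (1 - (mu + 1)/k)."""
    n = int(round(t / h))
    w = 1.0
    acc = f(t)                        # k = 0 term, weight w_0 = 1
    for k in range(1, n + 1):
        w *= 1.0 - (mu + 1.0) / k     # equals (-1)^k * binom(mu, k)
        acc += w * f(t - k * h)
    return acc / h**mu

# Known closed form: the half-derivative of f(t) = t is 2 * sqrt(t / pi).
approx = gl_fractional_derivative(lambda t: t, 1.0, 0.5)
exact = 2.0 / sqrt(pi)
```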
Let $F(s)$ be the Laplace transform of the input function $f$. Then, under zero initial conditions,
$$\mathcal{L}\bigl[{}_0D_t^{\mu}f\bigr](s) = s^{\mu}F(s).$$
9. Fourier, Mellin, and (Two-Sided) Laplace Transforms
We state the Mellin, (two-sided) Laplace, and Fourier transforms. The Mellin transform of $f$ is defined by
$$\mathcal{M}[f](s) = \int_0^{\infty} f(x)\,x^{s-1}\,dx$$
for those $s$ for which the integral converges.
Under the change of variable $x = e^{-t}$, the Mellin transform and the two-sided Laplace transform shift into each other:
$$\mathcal{M}[f](s) = \int_{-\infty}^{\infty} g(t)\,e^{-st}\,dt,$$
where we write $g(t) = f(e^{-t})$.
The ordinary Laplace transform (one-sided Laplace transform) is obtained by multiplying the integrand by the unit step function $H(t)$ (cf. the passage immediately after (2.7)):
$$\mathcal{L}[f](s) = \int_{-\infty}^{\infty} H(t)f(t)\,e^{-st}\,dt = \int_0^{\infty} f(t)\,e^{-st}\,dt;$$
compare Definition 11.1.
If we fix $\sigma = \operatorname{Re} s$ and write $s = \sigma + i\tau$ in (9.2), then it changes into the Fourier transform of $g(t)e^{-\sigma t}$.
We explain Plancherel's theorem for functions in $L^2(\mathbb{R})$. Let
$$F_T(\xi) = \int_{-T}^{T} f(t)\,e^{-2\pi i\xi t}\,dt.$$
Then, as $T \to \infty$, $F_T$ is convergent to a function $\hat{f}$ in $L^2$:
$$\hat{f} = \operatorname*{l.i.m.}_{T\to\infty} F_T,$$
where l.i.m. is a shorthand for "limit in the mean." The Parseval identity reads
$$\int_{-\infty}^{\infty}|f(t)|^2\,dt = \int_{-\infty}^{\infty}|\hat{f}(\xi)|^2\,d\xi. \tag{9.7}$$
If we apply (9.7) to a causal function $f$ (one vanishing for $t < 0$), then it leads to [1, (3.19)].
Hence we see that [1, (3.19)] is indeed the Parseval identity for the Fourier (or Plancherel) transform for .
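The discrete counterpart of the Parseval identity is exact for the DFT and makes a quick numerical illustration (the random signal is our own example):

```python
import numpy as np

# Discrete Parseval identity: for X = DFT(x) of length N,
#   sum |x_n|^2 = (1/N) * sum |X_k|^2.
rng = np.random.default_rng(42)
x = rng.standard_normal(256)
X = np.fft.fft(x)

time_energy = float(np.sum(np.abs(x)**2))
freq_energy = float(np.sum(np.abs(X)**2) / x.size)
```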
10. Examples of Second-Order Systems
10.1. Electrical Circuits
The electric current $I = I(t)$ flowing in an electrical circuit which consists of four ingredients, electromotive force $E$, resistance $R$, coil $L$, and condenser $C$, satisfies
$$L\frac{dI}{dt} + RI + \frac{1}{C}\int_0^t I(\tau)\,d\tau = E(t). \tag{10.1}$$
10.2. Newtonโs Equation of Motion (cf. [7])
One has
$$M\ddot{y}(t) + D\dot{y}(t) + Ky(t) = u(t), \tag{10.2}$$
where $M$ is the inertance of the mass, $D$ is the viscous resistance of the dashpot, and $K$ is the spring stiffness.
Introducing the new parameters
$$\omega_n = \sqrt{\frac{K}{M}}\ \ (\text{the natural angular frequency}), \qquad \zeta = \frac{D}{2\sqrt{MK}}\ \ (\text{the damping ratio}),$$
(10.2) becomes
$$\ddot{y}(t) + 2\zeta\omega_n\dot{y}(t) + \omega_n^2 y(t) = \frac{1}{M}u(t).$$
11. Laplace Transforms
To solve (10.1), we use the Laplace transform, which has been defined by (9.3); we now restate its definition independently.
Definition 11.1. Suppose $f(t) = O(e^{\sigma_0 t})$ for some $\sigma_0 \in \mathbb{R}$. The Laplace transform of $f$ is defined by
$$\mathcal{L}[f](s) = \int_0^{\infty} f(t)\,e^{-st}\,dt.$$
The integral converges absolutely in $\operatorname{Re} s > \sigma_0$ and represents an analytic function there.
Example 11.2. Let $\lambda \in \mathbb{C}$. Then
$$\mathcal{L}\bigl[e^{\lambda t}\bigr](s) = \frac{1}{s-\lambda}, \tag{11.2}$$
valid for $\operatorname{Re} s > \operatorname{Re}\lambda$ in the first instance. The right-hand side of (11.2) gives a meromorphic continuation of the left-hand side to the punctured domain $\mathbb{C}\setminus\{\lambda\}$. Furthermore, (11.2) with $\lambda$ replaced by $\pm i\omega$ reads
$$\mathcal{L}\bigl[e^{\pm i\omega t}\bigr](s) = \frac{1}{s\mp i\omega}.$$
For $\cos\omega t$ and $\sin\omega t$ they reduce to the familiar formulas:
$$\mathcal{L}[\cos\omega t](s) = \frac{s}{s^2+\omega^2}, \qquad \mathcal{L}[\sin\omega t](s) = \frac{\omega}{s^2+\omega^2}. \tag{11.4}$$
Proof. By definition, (11.2) clearly holds true for $\operatorname{Re} s > \operatorname{Re}\lambda$. Since the right-hand side is analytic in $\mathbb{C}\setminus\{\lambda\}$, the consistency theorem establishes the last assertion. Once (11.2) is established, Euler's identity $\cos\omega t = \frac{1}{2}(e^{i\omega t}+e^{-i\omega t})$ gives
$$\mathcal{L}[\cos\omega t](s) = \frac{1}{2}\left(\frac{1}{s-i\omega}+\frac{1}{s+i\omega}\right) = \frac{s}{s^2+\omega^2},$$
that is, (11.4).
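The familiar transform pairs for $\cos\omega t$ and $\sin\omega t$ can be confirmed by direct numerical quadrature of the defining integral (the values of $s$ and $\omega$ are our own choices; the upper limit truncates the negligible exponential tail):

```python
import numpy as np
from scipy.integrate import quad

def laplace(f, s, upper=50.0):
    """Numerical Laplace transform int_0^upper e^(-s t) f(t) dt, real s > 0."""
    val, _ = quad(lambda t: np.exp(-s * t) * f(t), 0.0, upper, limit=200)
    return val

s, omega = 2.0, 3.0
L_cos = laplace(lambda t: np.cos(omega * t), s)
L_sin = laplace(lambda t: np.sin(omega * t), s)
```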
12. Partial Fraction Expansion and Examples
As long as the input function is a sinusoidal function, Example 11.2 will suffice to compute its Laplace transform. To go back to the time domain from the frequency domain, we need to solve the DE and, for most purposes, the following partial fraction expansion will give the answer almost automatically.
The following theorem, which is well known, provides us with the partial fraction expansion.
Theorem 12.1. If the denominator of the rational function $F(s) = \frac{p(s)}{q(s)}$ ($\deg p < \deg q$) is given by
$$q(s) = a\prod_{j=1}^{r}(s-s_j)^{m_j}, \tag{12.1}$$
where the $s_j$ are distinct, then
$$F(s) = \sum_{j=1}^{r}\sum_{k=1}^{m_j}\frac{c_{jk}}{(s-s_j)^k},$$
where the coefficients are given by
$$c_{jk} = \frac{1}{(m_j-k)!}\lim_{s\to s_j}\frac{d^{m_j-k}}{ds^{m_j-k}}\Bigl[(s-s_j)^{m_j}F(s)\Bigr]. \tag{12.3}$$
Proof. By (12.1), for each , , we may write
and has no pole at . We write
where has no pole at . By successively differentiating and setting , we obtain (12.3).
Now, the rational function
$$F(s) - \sum_{j=1}^{r}\sum_{k=1}^{m_j}\frac{c_{jk}}{(s-s_j)^k}$$
has no pole, so that it must be a polynomial. But, since it tends to $0$ as $s \to \infty$ (where we use the assumption $\deg p < \deg q$), it follows that it must be zero.
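The expansion of Theorem 12.1 is implemented in standard numerical libraries; a sketch on a rational function of our own choosing, verified by evaluating both sides at a test point:

```python
import numpy as np
from scipy.signal import residue

# Partial fractions of F(s) = 1/(s^2 + 3s + 2) = 1/(s+1) - 1/(s+2),
# computed from the polynomial coefficients (numerator, denominator).
r, p, k = residue([1.0], [1.0, 3.0, 2.0])

# Reconstruct F at a test point from the expansion sum_j r_j / (s - p_j).
s = 0.5 + 1.5j
F_expansion = np.sum(r / (s - p))
F_direct = 1.0 / (s**2 + 3.0 * s + 2.0)
```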
Now we give examples of (2.2) for second-order systems which do not appear anywhere else save for [2].
Example 12.2. We find the output signal (current) described by the DE where the initial values are assumed to be 0: .
Proof. Let be the Laplace transform of . Then we have and we may obtain the partial fraction expansion where is the first primitive cube root of 1. Hence As a transfer function, the function in (12.8) is stable.
Example 12.3. The following integral may be evaluated by the partial fraction expansion above or by the residue calculus:
Example 12.4. In the same vein as with Example 12.2, we may find the solution of the DE where the initial values are assumed to be 0: .
We have or
Hence
and the transfer function in (12.14)
is unstable.
13. The Product of Zeta-Functions: -Type
In this section, we illustrate the use of fractional integrals by proving a slight generalization of the result of Chandrasekharan and Narasimhan ([8]) involving the -type functional equation, which is the first instance beyond Hecke theory of the functional equation with a single gamma factor. First we state the basic settings.
13.1. Statement of the Situation
Let be increasing sequences of positive numbers tending to , and let be complex sequences. We form the Dirichlet series and suppose that they have finite abscissas of absolute convergence , , respectively.
We suppose the existence of the meromorphic function satisfying the functional equation (of -type) of the form with a real number and having a finite number of poles : We introduce the processing gamma factor and suppose that for any real numbers uniformly in .
In the -plane we take two deformed Bromwich paths
such that they squeeze a compact set with boundary for which and all the poles of
lie to the left of and those of
lie to the right of .
Then we define the -function by
In the special case, where , the -function reduces to -functions and denoted by with other parameters remaining the same. We also define the -function by which is for one of -functions. Hereafter we always assume that , which may be extended to .
Then we have which amounts to the following.
Theorem 13.1 ([9]). One has the modular relation equivalent to (13.2):
In the special case, where , we have the following.
Theorem 13.2. One has
For many important applications, compare [9].
13.2. The Riesz Sum
Formula (13.11) in the special case of the title reads where
We treat the case . Assuming is a nonnegative integer, we put , , , , , . Then (13.12) becomes where indicates , , , , , .
We note that the -functions in (13.14) reduce to
(by the formula in [10]) and
where indicates , , , , , and , , , , , , . Hence it reduces further to
say, where, slightly more generally than Wilton's (1.22) [11], we put (13.18). Hence (13.14) reads (13.19), which gives a more general form of Wilton's Theorem 1 [11].
Rewriting (13.19) slightly, we deduce an analogue of the Chandrasekharan-Narasimhan result [8, Theorem 7.1(a)].
Theorem 13.3. For , the functional equation (13.2) implies the identity where and where where denotes , with being given by (13.18).
Corollary 13.4. For , the functional equation (13.2) implies the identity where
We are now in a position to prove an analogue of [8, Theorem 7.1(b)] (although Theorem 13.3 contains [8, Theorem 7.2], too) by the Riemann-Liouville fractional integral transform.
Lemma 13.5 (Riemann-Liouville integral of Bessel functions). For the well-known Bessel functions and , one has where stands for the Lommel function.
Equation (13.26) is [12, (63), page 194] and (13.27) is [12, page 196]. We only need (13.27), which is for treating the $Y$-Bessel function.
Arguing in the same way as in [8], we may prove the following.
Theorem 13.6. With a -function and a certain constant one has for integral , , , .
Acknowledgment
This work is supported by the SMX SUDA CO. (No. SDJN1001).