Research Article  Open Access
On the Solution of the Eigenvalue Assignment Problem for Discrete-Time Systems
Abstract
The output feedback eigenvalue assignment problem for discrete-time systems is considered. The problem is formulated first as an unconstrained minimization problem, where a three-term nonlinear conjugate gradient method is proposed to find a local solution. In addition, a cut to the objective function is included, yielding an inequality constrained minimization problem, where a logarithmic barrier method is proposed for finding the local solution. The conjugate gradient method is further extended to tackle the eigenvalue assignment problem for the two cases of decentralized control systems and control systems with time delay. The performance of the methods is illustrated through various test examples.
1. Introduction
In this work, we consider the following unconstrained minimization problem:
$$\min_{K} f(K), \qquad f(K) := \rho(A_c(K)), \tag{1}$$
where $f$ is generally semismooth and nonconvex and $\rho$ is the spectral radius of the matrix function $A_c(K)$ that will be defined later on.
It is desirable to have a local solution $K^*$ of (1) such that $f(K^*) < 1$. Therefore, we include such a constraint as a cut to the objective function, implying the following inequality constrained minimization problem:
$$\min_{K} f(K) \quad \text{subject to} \quad c(K) := 1 - \varepsilon - f(K) \ge 0, \tag{2}$$
where $\varepsilon \in (0, 1)$ is a given constant.
It is well known that, for any matrix norm, it holds that $\rho(M) \le \|M\|$, where $M$ is any square matrix. Then, we can replace the eigenvalue constraint of problem (2) by a regular inequality constraint, which yields the following relaxed minimization problem:
$$\min_{K} f(K) \quad \text{subject to} \quad \|A_c(K)\| \le 1 - \varepsilon, \tag{3}$$
where $\varepsilon$ is as defined above. This problem can be tackled easily by any constrained optimization solver. However, there is no guarantee in general that a feasible solution exists for this problem.
The three problems (1)–(3) concern the well-known eigenvalue assignment problem (EAP) for discrete-time systems. These problems are generally nonconvex and semismooth optimization problems. Over the past decades, a considerable amount of attention has been given to the EAP, and a substantial body of research can be found in the systems and control literature, in particular for continuous-time systems (see, e.g., [1–3] and the references therein). In the framework of linear discrete-time systems, a set of eigenvalue assignment algorithms have been developed (see, e.g., [2–11] and the references therein).
A related problem in output feedback control design is the linear-quadratic control problem, in which the goal is to design an output feedback gain matrix that minimizes a certain performance cost function while all eigenvalues of the closed-loop system matrix must lie within the unit circle (see, e.g., [12]). For continuous-time systems, such a controller can be computed by publicly available software packages (e.g., HIFOO) [13]. However, for discrete-time systems, to the best of our knowledge, public software has not yet been developed.
In this work, we focus on the two problems (1) and (2), where the attempt is to minimize the spectral radius of the nonsymmetric real matrix $A_c(K)$. In this regard, we apply a three-term nonlinear conjugate gradient (CG) method [14] to find a local solution to the minimization problem (1), thereby attempting to stabilize the associated control system; see the next section. In addition, a logarithmic barrier interior-point method is proposed to tackle the constrained minimization problem (2) for the same purpose.
Nonlinear conjugate gradient methods are widely studied and comprise a class of unconstrained optimization algorithms characterized by low memory requirements and strong global convergence properties (see the survey [15] and later references [3, 14]). We focus on a three-term nonlinear CG method with favorable practical performance (see [14]). The CG methods are descent direction methods: starting from a given point $x_0$, these methods generate a sequence $\{x_k\}$ according to the following relation:
$$x_{k+1} = x_k + \alpha_k d_k,$$
where the step size $\alpha_k$ satisfies a line search rule and $d_k$ is a descent direction for the objective function at $x_k$. The update of the new search direction varies from one CG method to another.
The logarithmic barrier interior-point method is one of the standard methods for solving constrained optimization problems (see, e.g., [16]). This method is employed to tackle the constrained problem (2) by converting it into a sequence of unconstrained minimization problems.
This article is organized as follows. In the next section, we state the formulation of the eigenvalue assignment problem and introduce some basic concepts which are needed in the subsequent analysis. In Section 3, we evaluate the required derivatives of the objective function. In Section 4, we introduce the proposed three-term CG method that finds a local solution of the unconstrained minimization problem. In Section 5, we extend the CG method to tackle the output feedback EAP for decentralized control systems. In Section 6, we reformulate the discrete-time control system with time delay as an augmented system without any delay so that the EAP can be tackled by the proposed CG method. In Section 7, we introduce a logarithmic barrier method for finding a local solution to problem (2). In Section 8, we test the proposed methods on different test examples from the literature. Then, we end with a conclusion.
Notations. For vectors, $\|\cdot\|$ is the 2-norm, while for matrices $\|\cdot\|$ denotes the Frobenius norm defined by $\|A\| = \sqrt{\langle A, A \rangle}$, where $\langle A, B \rangle = \operatorname{tr}(A^T B)$ is the inner product and $\operatorname{tr}(\cdot)$ is the trace operator. The eigenvalues of a matrix $A$ are denoted by $\lambda_i(A)$, $i = 1, \dots, n$. The Greek letter $\rho$ denotes the spectral radius of a square matrix, and $\Lambda$ is a diagonal matrix with eigenvalues on its main diagonal. Moreover, $\operatorname{Re}(\cdot)$ and $\operatorname{Im}(\cdot)$ denote the real and imaginary parts of a complex number, respectively. Sometimes, for the sake of simplicity, we skip the arguments of the considered functions; for example, we use $f$ to denote $f(K)$.
2. Problem Formulation and Preliminaries
The static output feedback eigenvalue assignment problem for discrete-time systems can be stated as follows (see, e.g., the abovementioned citations and the references therein). Consider the linear time-invariant discrete-time system with the following state space realization:
$$x_{k+1} = A x_k + B u_k, \qquad y_k = C x_k, \tag{5}$$
where $x_k \in \mathbb{R}^n$, $u_k \in \mathbb{R}^m$, and $y_k \in \mathbb{R}^r$ are the state, the control input, and the measured output vectors, respectively. Moreover, $A$, $B$, and $C$ are given constant matrices. Such a system is often closed by the control law $u_k = K y_k$, which yields
$$x_{k+1} = A_c(K) x_k, \tag{6}$$
where $A_c(K) = A + BKC$ and $K \in \mathbb{R}^{m \times r}$ is the output feedback gain matrix, which represents the unknown.
The following definitions are needed for later use.
Definition 1. The spectral radius of a matrix $M$ with eigenvalues $\lambda_1, \dots, \lambda_n$ is defined as $\rho(M) = \max_{1 \le i \le n} |\lambda_i|$.
Definition 2. The discrete-time control system (6) is asymptotically stable (i.e., $x_k \to 0$ as $k \to \infty$ for any initial state $x_0$) if and only if $\rho(A_c(K)) < 1$.
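The two definitions above translate directly into a short numerical check. The following Python sketch (not part of the paper's Matlab implementation; the function names are ours) computes the spectral radius and tests Schur stability:

```python
import numpy as np

def spectral_radius(M):
    """rho(M) = max_i |lambda_i(M)| (Definition 1)."""
    return max(abs(np.linalg.eigvals(M)))

def is_schur_stable(M):
    """x_{k+1} = M x_k is asymptotically stable iff rho(M) < 1 (Definition 2)."""
    return spectral_radius(M) < 1.0

# Upper-triangular examples: the eigenvalues sit on the diagonal.
A_stable = np.array([[0.5, 0.1], [0.0, 0.9]])    # rho = 0.9 < 1
A_unstable = np.array([[1.2, 0.0], [0.0, 0.3]])  # rho = 1.2 >= 1
```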
The eigenvalue assignment problem is to design an output feedback gain matrix $K$ providing a closed-loop system with satisfactory behavior by shifting controllable eigenvalues to desirable locations in the complex plane. In particular, the EAP requires the spectral radius of the closed-loop matrix $A_c(K)$ to be strictly less than one, so that all eigenvalues lie within the unit circle in the complex plane.
A necessary condition for the EAP by constant output feedback is given in [9]. Fu [7] has also shown that the EAP via static output feedback is NP-hard. Systems with a symmetric state space realization, namely, $A = A^T$ and $C = B^T$, occur in different applications such as RC networks [10]. The symmetric EAP might be stated as follows: find a symmetric matrix $K$ such that the closed-loop matrix $A + BKB^T$ is Schur stable. It is well known that the eigenvalues of a real symmetric matrix are not everywhere differentiable. A classical theorem (see, e.g., [17]) states that each eigenvalue of a symmetric matrix is the difference of two convex functions, which implies that the eigenvalues are semismooth functions. This fact allows us to use the theory of semismoothness to establish convergence results for the proposed methods.
Definition 3. A functional $f$ is said to be locally Lipschitz continuous at $x$ with a constant $L > 0$ if and only if there exists a number $\epsilon > 0$ such that $|f(y) - f(z)| \le L \|y - z\|$ for all $y, z \in B(x, \epsilon)$, where $B(x, \epsilon)$ is an open ball with center at $x$ and radius $\epsilon$.
Let $f$ be a locally Lipschitz continuous function. Rademacher's theorem (see, e.g., [18]) implies that such a mapping is differentiable almost everywhere. Let $\mathcal{D}_f$ be the set of all points at which $f$ is differentiable and let $\nabla f$ be its Jacobian whenever it exists. Such a set is open. Therefore, it is convenient to replace $\mathcal{D}_f$ by the following level set:
$$\mathcal{L}(\gamma) = \{K \in \mathcal{D}_f : f(K) \le \gamma\}, \tag{10}$$
where $\gamma$ is a given constant. This set is assumed to be bounded.
3. Derivative of the Objective Function
The eigenvalues of the matrix $A_c(K)$ of (1) are not in general differentiable at a point where $A_c(K)$ has repeated eigenvalues. Therefore, let us consider the following assumption on the eigenvalues of $A_c(K)$.
Assumption 4. Assume that $A_c(K)$ has no multiple eigenvalues for all considered $K$.
Let $A_c$ be diagonalizable and let $U$ and $V$ be a couple of matrices whose columns are the left and right eigenvectors of $A_c$; that is, $U$, $V$, and $\Lambda$ satisfy
$$A_c V = V \Lambda, \qquad U^H A_c = \Lambda U^H.$$
The columns of $U$ and $V$ satisfy $u_i^H v_j = 0$ for $i \neq j$ and can be normalized such that $u_i^H v_i = 1$.
The following lemma provides the first-order derivatives of the objective function of the minimization problem (1) required by the CG methods.
Lemma 5. Suppose that $K$ satisfies Assumption 4 and $f$ is differentiable at $K$. Let $\lambda$ be the largest in magnitude eigenvalue of $A_c(K)$. Then, the entries of the gradient of the objective function are given by
$$\frac{\partial f}{\partial K_{ij}} = \frac{1}{|\lambda|} \operatorname{Re}\!\left( \bar{\lambda}\, \frac{u^H B E_{ij} C v}{u^H v} \right),$$
where $u$ and $v$ are the left and right eigenvectors associated with $\lambda$ and $E_{ij}$ is a matrix with zero entries except at the $(i, j)$ position, where its value is one.
Proof (see [19, Lemma]). The two eigenvectors are normalized such that $u^H v = 1$, which by differentiation yields $(u^H)' v + u^H v' = 0$, where the dash denotes the first-order derivative with respect to the entries of $K$. The eigenvalue also satisfies $A_c v = \lambda v$, which by differentiation gives
$$A_c' v + A_c v' = \lambda' v + \lambda v'.$$
Multiplying from the left by $u^H$ and using $u^H A_c = \lambda u^H$ together with $u^H v = 1$, we obtain $\lambda' = u^H A_c' v$. Suppose that $\lambda$ is the largest in magnitude eigenvalue of $A_c(K)$. Then $f = |\lambda| = \sqrt{\lambda \bar{\lambda}}$, and by the chain rule we have
$$f' = \frac{\operatorname{Re}(\bar{\lambda} \lambda')}{|\lambda|}.$$
Since $\partial A_c / \partial K_{ij} = B E_{ij} C$, the result follows.
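The gradient formula of Lemma 5 can be evaluated for all entries of $K$ at once from one eigendecomposition. The following Python sketch is ours (assuming the closed-loop form $A_c = A + BKC$ and a simple largest-modulus eigenvalue); it is not the paper's implementation:

```python
import numpy as np

def spectral_radius_gradient(A, B, C, K):
    """Gradient of f(K) = rho(A + B K C) w.r.t. the entries of K, assuming
    the largest-modulus eigenvalue lambda of A_c = A + B K C is simple.

    Classical first-order perturbation:
        d lambda / d K_ij = u^H B E_ij C v / (u^H v),
    with u, v the left/right eigenvectors of A_c, so that
        d |lambda| / d K_ij = Re(conj(lambda) * dlambda_ij) / |lambda|.
    """
    Ac = A + B @ K @ C
    w, V = np.linalg.eig(Ac)            # eigenvalues, right eigenvectors
    U = np.linalg.inv(V).conj().T       # columns are left eigenvectors
    i = np.argmax(abs(w))               # largest-modulus eigenvalue
    lam, v, u = w[i], V[:, i], U[:, i]
    # (u^H B)_i (C v)_j assembles u^H B E_ij C v for all (i, j) at once
    dlam = np.outer(u.conj() @ B, C @ v) / (u.conj() @ v)
    return np.real(np.conj(lam) * dlam) / abs(lam)
```

A finite-difference check on a random triple $(A, B, C)$ is a convenient way to validate such a gradient before plugging it into a CG solver.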
4. Three-Term CG Method for the EAP
In this section, we analyze and study a three-term nonlinear CG method for computing a local solution of the minimization problem (1) (see [14]). For a given starting $K_0$, the CG method generates a sequence of iterates according to the relation
$$K_{k+1} = K_k + \alpha_k D_k, \tag{20}$$
where $D_k$ is supposed to be a descent direction for $f$ at $K_k$ and $\alpha_k$ is the step size. The new search direction for the CG method is given by a three-term relation of the form
$$D_{k+1} = -G_{k+1} + \beta_k D_k + \theta_k Y_k, \tag{21}$$
where $G_{k+1}$ is the gradient of $f$ at $K_{k+1}$, $Y_k = G_{k+1} - G_k$, and the scalar coefficients $\beta_k$ and $\theta_k$ are computed by the formulas of [14], referred to as (22).
In order to globalize this CG method, we recall the Wolfe conditions (see, e.g., [16]) used to select a suitable step size for the new iterate (20):
$$f(K_k + \alpha_k D_k) \le f(K_k) + c_1 \alpha_k \langle G_k, D_k \rangle, \tag{23}$$
$$\langle G(K_k + \alpha_k D_k), D_k \rangle \ge c_2 \langle G_k, D_k \rangle, \tag{24}$$
where $0 < c_1 < c_2 < 1$. The strong Wolfe conditions replace (24) by the following condition:
$$|\langle G(K_k + \alpha_k D_k), D_k \rangle| \le c_2 |\langle G_k, D_k \rangle|.$$
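The Wolfe conditions (23)-(24) amount to two inequalities that can be verified numerically for a candidate step size. A minimal Python sketch (our own illustration, written for vectorized unknowns):

```python
import numpy as np

def satisfies_wolfe(f, grad, x, d, alpha, c1=1e-4, c2=0.9):
    """Check the (weak) Wolfe conditions for a step alpha along direction d.

    Sufficient decrease: f(x + a d) <= f(x) + c1 * a * <g(x), d>
    Curvature:           <g(x + a d), d> >= c2 * <g(x), d>
    """
    slope0 = grad(x) @ d                  # directional derivative at x
    decrease = f(x + alpha * d) <= f(x) + c1 * alpha * slope0
    curvature = grad(x + alpha * d) @ d >= c2 * slope0
    return decrease and curvature

# For f(x) = ||x||^2 from x = (1, 0) along d = -grad f(x), the exact
# one-dimensional minimizer alpha = 0.5 satisfies both conditions, while
# a vanishingly small step fails the curvature condition.
f = lambda x: x @ x
grad = lambda x: 2.0 * x
x0 = np.array([1.0, 0.0])
d0 = -grad(x0)
```

The curvature condition is what rules out the arbitrarily small steps that the sufficient decrease condition alone would accept.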
The following theorem shows that the search direction is a descent direction to the objective function.
Theorem 6. Let $\{K_k\}$ be generated by (20) and let the step size $\alpha_k$ satisfy the Wolfe conditions (23)-(24). Then, $D_k$ evaluated by (21)-(22) is a descent direction for $f$ at $K_k$ for all $k$.
Proof (see also [14, Proposition]). From Wolfe's second condition (24), we obtain the curvature condition $\langle Y_k, D_k \rangle > 0$. Moreover, from (21)-(22), it follows that $\langle G_{k+1}, D_{k+1} \rangle < 0$, which completes the proof.
The three-term nonlinear CG algorithm is stated in the following lines.
Algorithm 7 (three-term CG method for the output feedback EAP).
(0) Let $c_1, c_2 \in (0, 1)$ be given constants and let $\epsilon > 0$ be the tolerance. Moreover, let $A$, $B$, $C$ be given constant matrices. Choose $K_0$, compute $f(K_0)$ and $G_0$, and set $D_0 = -G_0$. If $\|G_0\| \le \epsilon$ or $f(K_0) < 1$, stop; otherwise, set $k = 0$ and go to the next step.
While $\|G_k\| > \epsilon$ and $f(K_k) \ge 1$, do
(1) Compute a step size $\alpha_k$ that satisfies the Wolfe conditions (23)-(24); set $K_{k+1} = K_k + \alpha_k D_k$ and then calculate the gradient $G_{k+1}$.
(2) If $\|G_{k+1}\| \le \epsilon$ or $f(K_{k+1}) < 1$, stop; otherwise, go to the next step.
(3) Calculate a new search direction $D_{k+1}$ by (21)-(22).
(4) Set $k = k + 1$ and repeat.
End (do)
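The overall loop can be sketched in a few lines of Python. Since the paper's specific coefficient formulas (21)-(22) from [14] are not reproduced here, the sketch below substitutes the well-known Zhang-Zhou-Li three-term update as a stand-in, and uses SciPy's Wolfe line search; it is an illustration of the algorithmic skeleton, not the authors' implementation:

```python
import numpy as np
from scipy.optimize import line_search

def three_term_cg(f, grad, x0, tol=1e-6, max_iter=500):
    """Three-term CG loop with a Wolfe line search (vectorized unknowns).

    Stand-in direction update (Zhang-Zhou-Li):
        d_{k+1} = -g_{k+1} + beta_k d_k - theta_k y_k,
        beta_k  = (g_{k+1}^T y_k) / (g_k^T g_k),
        theta_k = (g_{k+1}^T d_k) / (g_k^T g_k),   y_k = g_{k+1} - g_k,
    which guarantees d_k^T g_k = -||g_k||^2, i.e., descent.
    """
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g
    for _ in range(max_iter):
        if np.linalg.norm(g) <= tol:
            break
        alpha = line_search(f, grad, x, d, gfk=g)[0]  # Wolfe step
        if alpha is None:           # line search failed: restart along -g
            d, alpha = -g, 1e-4
        x_new = x + alpha * d
        g_new = grad(x_new)
        y, gg = g_new - g, g @ g
        beta = (g_new @ y) / gg
        theta = (g_new @ d) / gg
        d = -g_new + beta * d - theta * y
        x, g = x_new, g_new
    return x
```

To apply this to the EAP, one would flatten $K$ into a vector and supply $f(K) = \rho(A + BKC)$ together with the gradient of Lemma 5, optionally adding the early stop of Remark 8 once $f(K_k) < 1$.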
Remark 8. For the considered linear control system, one of the major tasks is to achieve a stabilizing output feedback controller, that is, to calculate $K$ such that $\rho(A_c(K))$ is strictly less than one. However, there is no relationship in general between achieving such a controller and finding a local minimum of problem (1). Therefore, it is reasonable to stop the CG method as soon as a local solution is achieved or a stabilizing $K$ is reached with a sufficient stability margin.
To prove global convergence of the CG algorithm, we assume that $G_k \neq 0$ for all $k$; otherwise, a stationary point has been found.
Assumption 9. The following assumptions are assumed to hold:
(a) The level set defined in (10) is bounded; that is, there exists a constant $\kappa > 0$ such that $\|K\| \le \kappa$ for all $K$ in the level set.
(b) In some neighborhood of the level set (10), the gradient of the objective function is Lipschitz continuous.
From Assumption 9(a), the iterates $\{K_k\}$ remain bounded, and from Assumption 9(b) we deduce that there exists a constant $L > 0$ such that
$$\|G(K) - G(\widetilde{K})\| \le L \|K - \widetilde{K}\|$$
for all $K$, $\widetilde{K}$ in that neighborhood.
The following lemma provides a lower bound on the step size $\alpha_k$; see [14, Proposition].
Lemma 10. Let $\{K_k\}$ be generated by Algorithm 7 and suppose that $D_k$ is a descent direction for $f$ at $K_k$. In addition, let Assumption 9 hold. Then,
$$\alpha_k \ge \frac{(c_2 - 1) \langle G_k, D_k \rangle}{L \|D_k\|^2}. \tag{28}$$
Proof. By subtracting $\langle G_k, D_k \rangle$ from both sides of (24) and using the Lipschitz condition,
$$(c_2 - 1) \langle G_k, D_k \rangle \le \langle G_{k+1} - G_k, D_k \rangle \le L \alpha_k \|D_k\|^2.$$
Since $D_k$ is a descent direction for $f$ at $K_k$ and $c_2$ is less than one, (28) follows.
The result of the following lemma is used in proving the main global convergence theorem.
Lemma 11. Let $\{K_k\}$ be generated by Algorithm 7 and assume that $D_k$ is a descent direction. Furthermore, let Assumption 9 hold. Then,
$$\sum_{k \ge 0} \frac{\langle G_k, D_k \rangle^2}{\|D_k\|^2} < \infty. \tag{30}$$
Proof. From the Wolfe condition (23) and Lemma 10, one has
$$f(K_{k+1}) \le f(K_k) - \frac{c_1 (1 - c_2)}{L} \frac{\langle G_k, D_k \rangle^2}{\|D_k\|^2}.$$
Summing over $k$ and noting that $f$ is bounded below on the level set, Assumption 9 implies condition (30).
Next, we have the following result for the three-term CG method (see, e.g., [14, Proposition]).
Lemma 12. Let Assumption 9 hold. Consider $\{K_k\}$ to be generated by Algorithm 7, where $D_k$ is a descent direction for $f$ and $\alpha_k$ satisfies the Wolfe conditions (23) and (24). If
$$\sum_{k \ge 0} \frac{1}{\|D_k\|^2} = \infty, \tag{32}$$
then
$$\liminf_{k \to \infty} \|G_k\| = 0. \tag{33}$$
Proof (see also [20, Lemma]). Suppose (33) is not true. Then, there exists a constant $\gamma > 0$ such that $\|G_k\| \ge \gamma$ for all $k$. Since the directions (21)-(22) satisfy a sufficient descent property $\langle G_k, D_k \rangle \le -c \|G_k\|^2$ for some constant $c > 0$, from (32) we have
$$\sum_{k \ge 0} \frac{\langle G_k, D_k \rangle^2}{\|D_k\|^2} \ge c^2 \gamma^4 \sum_{k \ge 0} \frac{1}{\|D_k\|^2} = \infty.$$
This contradicts Lemma 11, which completes the proof.
Under condition (30), the following global convergence result is obtained.
Theorem 13. Let $\{K_k\}$ be generated by Algorithm 7. Assume that Assumption 9 holds. Assume further that there exists a constant $c > 0$ such that $\|D_k\| \le c$ for any $k$. Then, it holds that
$$\liminf_{k \to \infty} \|G_k\| = 0. \tag{36}$$
Proof (see also [14, Theorem]). Since $\|D_k\| \le c$ for all $k$, by direct computation we have
$$\sum_{k \ge 0} \frac{1}{\|D_k\|^2} \ge \sum_{k \ge 0} \frac{1}{c^2} = \infty.$$
Consequently, condition (32) holds. Then, from Lemma 12, (36) follows.
5. The EAP for Decentralized Control Systems
Consider the linear time-invariant decentralized control system with $\nu$ control stations:
$$x_{k+1} = A x_k + \sum_{i=1}^{\nu} B_i u_k^{(i)}, \qquad y_k^{(i)} = C_i x_k, \quad i = 1, \dots, \nu, \tag{40}$$
where $x_k$, $u_k^{(i)}$, and $y_k^{(i)}$ are the state, the control input, and the measured output vectors, respectively. $A$, $B_i$, and $C_i$ are given constant matrices, $i = 1, \dots, \nu$.
The output feedback EAP for the decentralized system (40) is to find output feedback gain matrices $K_1, \dots, K_\nu$ that, by using the control law $u_k^{(i)} = K_i y_k^{(i)}$, place the eigenvalues of the closed-loop system matrix strictly within the unit disk.
By introducing the augmented matrices
$$B = [B_1, \dots, B_\nu], \qquad C = [C_1^T, \dots, C_\nu^T]^T,$$
it is straightforward to rewrite the decentralized control system (40) in the original structure (5). The corresponding closed-loop system matrix is $A_c(K) = A + BKC$, where the output feedback gain matrix is given by the block-diagonal matrix $K = \operatorname{blkdiag}(K_1, \dots, K_\nu)$, and the corresponding unconstrained minimization problem takes the form
$$\min_{K_1, \dots, K_\nu} f(K) = \rho(A + BKC). \tag{44}$$
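The augmentation above is a purely structural rearrangement, which a short Python sketch (our illustration) makes concrete; stacking the $B_i$ horizontally, the $C_i$ vertically, and the $K_i$ block-diagonally reproduces $A + \sum_i B_i K_i C_i$ exactly:

```python
import numpy as np
from scipy.linalg import block_diag

def decentralized_closed_loop(A, B_list, C_list, K_list):
    """Closed-loop matrix A + B K C of a decentralized system, where
    B = [B_1 ... B_nu] (horizontal stack), C stacks the C_i vertically,
    and K = blkdiag(K_1, ..., K_nu). Equivalent to A + sum_i B_i K_i C_i."""
    B = np.hstack(B_list)
    C = np.vstack(C_list)
    K = block_diag(*K_list)
    return A + B @ K @ C
```

Optimizing only over the blocks $K_1, \dots, K_\nu$ (rather than over a full $K$) is exactly what preserves the decentralized information structure in the minimization (44).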
The three-term CG method of the last section can be applied to tackle the minimization problem (44). Let $\mathcal{D}_f$ be the set of all $K$ at which $f$ is differentiable. For a given starting $K_0$, the three-term CG method generates a sequence of the form $K_{k+1} = K_k + \alpha_k D_k$, where $\alpha_k$ is the step size that must satisfy the Wolfe conditions (23)-(24), while the search direction $D_k$ is a descent direction for $f$ at $K_k$. The new search direction is updated by using (21)-(22).
Algorithm 7 is restated in the following lines to tackle the output feedback EAP problem for decentralized control systems.
Algorithm 14 (three-term CG method for the decentralized output feedback EAP).
(0) Let $c_1, c_2 \in (0, 1)$ and the tolerance $\epsilon > 0$ be given constants. Moreover, let $A$, $B_i$, $C_i$, $i = 1, \dots, \nu$, be given constant matrices. Choose $K_0$, compute $f(K_0)$ and $G_0$, and set $D_0 = -G_0$. If $\|G_0\| \le \epsilon$ or $f(K_0) < 1$, stop; otherwise, set $k = 0$ and go to the next step.
While $\|G_k\| > \epsilon$ and $f(K_k) \ge 1$, do
(1) Calculate a step size $\alpha_k$ that satisfies the Wolfe conditions (23)-(24); set $K_{k+1} = K_k + \alpha_k D_k$ and then calculate the gradient $G_{k+1}$.
(2) If $\|G_{k+1}\| \le \epsilon$ or $f(K_{k+1}) < 1$, stop; otherwise, go to the next step.
(3) Calculate the new search direction $D_{k+1}$ according to (21)-(22).
(4) Set $k = k + 1$ and repeat.
End (do)
Remark 15. The way in which the CG method is designed allows us to maintain the block-diagonal structure of the unknown matrix $K$ efficiently without discarding any information of the data matrices of the control system.
6. The EAP for Time-Delay Systems
Consider the following linear discrete-time system with time delay (see, e.g., [21, 22]):
$$x_{k+1} = A_0 x_k + A_1 x_{k-d} + B u_k, \qquad y_k = C x_k, \tag{47}$$
where $A_0$, $A_1$, $B$, and $C$ are given constant matrices and $d \ge 1$ is the time delay in the state vector.
The above time-delay system has the specific feature that it can be converted into an augmented linear system without any delay (see, e.g., [22]). This is achieved by introducing the augmented state vector $z_k = (x_k^T, x_{k-1}^T, \dots, x_{k-d}^T)^T$. The time-delay system (47) is equivalently rewritten as
$$z_{k+1} = \widetilde{A} z_k + \widetilde{B} u_k, \qquad y_k = \widetilde{C} z_k, \tag{49}$$
where
$$\widetilde{A} = \begin{bmatrix} A_0 & 0 & \cdots & 0 & A_1 \\ I & 0 & \cdots & 0 & 0 \\ & \ddots & & & \vdots \\ 0 & \cdots & & I & 0 \end{bmatrix}, \qquad \widetilde{B} = \begin{bmatrix} B \\ 0 \\ \vdots \\ 0 \end{bmatrix}, \qquad \widetilde{C} = [C, 0, \dots, 0].$$
By using the control law $u_k = K y_k$, we close system (49) as
$$z_{k+1} = \widetilde{A}_c(K) z_k,$$
where $\widetilde{A}_c(K) = \widetilde{A} + \widetilde{B} K \widetilde{C}$. For such a system, one can apply Algorithm 7 to compute an output feedback gain matrix $K$, a local solution to the minimization problem (1). It is known that the eigenvalues of the time-delay system coincide with the eigenvalues of the augmented linear system (49).
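The block-companion augmentation can be sketched in Python (our illustration, assuming the delayed-state form $x_{k+1} = A_0 x_k + A_1 x_{k-d} + B u_k$, $y_k = C x_k$ used above):

```python
import numpy as np

def augment_delay(A0, A1, B, C, d):
    """Rewrite x_{k+1} = A0 x_k + A1 x_{k-d} + B u_k, y_k = C x_k as a
    delay-free system in z_k = (x_k, x_{k-1}, ..., x_{k-d})."""
    n = A0.shape[0]
    N = (d + 1) * n
    Aa = np.zeros((N, N))
    Aa[:n, :n] = A0                 # current-state block
    Aa[:n, d * n:] = A1             # delayed-state block
    Aa[n:, :d * n] = np.eye(d * n)  # shift: x_{k-i} -> x_{k-i-1} slot
    Ba = np.vstack([B, np.zeros((d * n, B.shape[1]))])
    Ca = np.hstack([C, np.zeros((C.shape[0], d * n))])
    return Aa, Ba, Ca
```

A quick sanity check is to simulate the delayed recursion and the augmented system side by side from the same initial history: the first $n$ entries of $z_k$ must reproduce $x_k$ step for step.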
We emphasize that the same approach can be considered in the case of control systems with input delay.
7. Logarithmic Barrier Method
The aim of this section is to improve formulation (1), where we seek a stabilizing $K$ while minimizing the spectral radius of the closed-loop system matrix. Therefore, let us consider the inequality constrained minimization problem (2). Following the idea of the logarithmic barrier method, we consider the following unconstrained minimization problem:
$$\min_{K} \phi(K; \mu) = f(K) - \mu \ln(c(K)), \tag{52}$$
where $\mu > 0$ is the barrier parameter and the functions $f$ and $c$ are as defined in (2). According to the theory of barrier methods, the minimizer of $\phi(\cdot; \mu)$ approaches a local solution of the original problem (2) as $\mu \to 0$ under certain conditions.
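Assuming the constraint of (2) takes the cut form $c(K) = 1 - \varepsilon - f(K) > 0$, the barrier objective can be sketched as follows (Python; our illustration, with hypothetical parameter names):

```python
import numpy as np

def barrier_value(f_val, mu, eps=1e-3):
    """Log-barrier objective phi = f(K) - mu * log(c(K)) for the cut
    c(K) = 1 - eps - f(K); returns +inf outside the strictly feasible
    region, where the barrier is undefined."""
    slack = 1.0 - eps - f_val
    if slack <= 0.0:
        return np.inf
    return f_val - mu * np.log(slack)
```

Because $0 < c(K) < 1$ on the feasible set, $-\mu \ln(c(K))$ is a positive penalty that blows up as $f(K)$ approaches $1 - \varepsilon$ and fades away as $\mu \to 0$, which is the mechanism behind the decreasing barrier-parameter schedule of the method.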
Let us define the strictly feasible region $\mathcal{F}^\circ = \{K : c(K) > 0\}$. We assume that $\mathcal{F}^\circ \neq \emptyset$.
The Lagrangian function for problem (2) is defined as
$$\mathcal{L}(K, z) = f(K) - z\, c(K), \tag{54}$$
where $f$ and $c$ are as defined in (2) and $z \ge 0$ is the associated Lagrange multiplier.
The first-order derivative of $\phi$ with respect to the entries of $K$ is obtained in the following lemma.
Lemma 16. Suppose that $K$ satisfies Assumption 4 and $\phi$ is differentiable at $K$. Let $\lambda$ be the largest in magnitude eigenvalue of $A_c(K)$. Then, the first derivative of $\phi$ is given by
$$\frac{\partial \phi}{\partial K_{ij}} = \left(1 + \frac{\mu}{c(K)}\right) \frac{1}{|\lambda|} \operatorname{Re}\!\left( \bar{\lambda}\, \frac{u^H B E_{ij} C v}{u^H v} \right), \tag{55}$$
where $u$ and $v$ are the left and right eigenvectors associated with $\lambda$ and $E_{ij}$ is a matrix with zero entries except at the $(i, j)$ position, where it has a value of one.
Proof. By differentiating $\phi$ with respect to the entries of $K$ and utilizing the derivative of the spectral radius obtained in Lemma 5, the result follows.
As can be seen in (55), all terms of the first derivative of $\phi$ depend on $c(K)$, which tends to zero near the boundary of the feasible region; this leads to an ill-conditioned Hessian matrix of the barrier function at the solution. Therefore, second-order methods are not recommended for computing a local solution of (52); rather, first-order methods such as the nonlinear CG methods of Section 4 are recommended.
The gradient of the Lagrangian (54) takes the form
$$\nabla_K \mathcal{L}(K, z) = \nabla f(K) - z \nabla c(K).$$
From (54) and (55), we see that if $z$ satisfies $z = \mu / c(K)$, then the solution obtained by the proposed log-barrier method satisfies the stationarity requirement of the Karush–Kuhn–Tucker conditions.
The logarithmic barrier interior-point method is stated in the following lines.
Algorithm 17 (logarithmic barrier method for the output feedback EAP).
(0) Choose a starting barrier parameter $\mu_0 > 0$, outer- and inner-loop tolerances $\epsilon_{\mathrm{tol}}, \epsilon_0 > 0$, and a starting feasible point $K_0 \in \mathcal{F}^\circ$. Let the remaining algorithmic parameters be given constants.
For $j = 0, 1, 2, \dots$
(1) Find an approximate local minimizer $K_{j+1}$ of $\phi(\cdot; \mu_j)$ starting from $K_j$; terminate when the inner-loop tolerance $\epsilon_j$ is reached.
(2) If the final stopping test is satisfied or $\mu_j \le \epsilon_{\mathrm{tol}}$, stop; otherwise, go to the next step.
(3) Choose a new barrier parameter $\mu_{j+1} \in (0, \mu_j)$ and a new inner-loop tolerance $\epsilon_{j+1} \in (0, \epsilon_j)$.
(4) Choose a new starting point for the next inner iteration and set $j = j + 1$.
End (do)
Remark 18. The major task of Algorithm 17 lies in Step (1), where an unconstrained minimization method has to be employed to obtain an approximate local solution up to the prescribed accuracy represented by the inner-loop tolerance $\epsilon_j$. In the implementation, we use the three-term CG method of Section 4 to calculate such a local solution.
8. Illustrative Numerical Examples
In this section, various test examples are provided to illustrate the performance of the proposed methods. Among the considered test problems are two examples of decentralized control systems and two examples of time-delay systems. A starting feasible point is required for the log-barrier method, which might be obtained by the CG method applied to problem (1). The log-barrier method aims to obtain a local solution of the constrained problem (2) or at least to achieve a stabilizing output feedback gain for which all closed-loop eigenvalues lie strictly within the unit circle.
The methods are implemented in Matlab, and all computations were carried out on a laptop with a 3.07 GHz processor and 1 GB RAM. Some of the considered test problems are given for continuous-time systems. The Matlab function c2d from the Control System Toolbox is employed to provide the corresponding discrete-time data matrices.
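For readers without the Matlab toolbox, the same zero-order-hold discretization can be reproduced in Python with `scipy.signal.cont2discrete`. The double integrator below is our own illustrative system, not one of the paper's test problems:

```python
import numpy as np
from scipy.signal import cont2discrete

# Continuous-time double integrator, zero-order hold, sampling time T = 0.1 s.
A = np.array([[0.0, 1.0], [0.0, 0.0]])
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.zeros((1, 1))
Ad, Bd, Cd, Dd, _ = cont2discrete((A, B, C, D), dt=0.1, method='zoh')
# Exact ZOH discretization here: Ad = [[1, T], [0, 1]], Bd = [[T^2/2], [T]].
```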
As mentioned in Remark 8, we stop the method as soon as the objective function is strictly less than one or a local solution is achieved. The CG method uses the sufficient decrease condition (23) for the backtracking line search.
Example 1. This test problem is borrowed from [23]. The constant data matrices for the corresponding discrete-time model are where the spectral radius of the system matrix is . Starting from $K_0$, a matrix of ones, the proposed CG method successfully converges to a local minimizer of the unconstrained problem (1). The achieved output feedback gain matrix is and the corresponding objective function value is .
Table 1 shows the convergence behavior of the considered CG method to a local solution of problem (1), which is also a stabilizing output feedback gain to the corresponding control system (5).

Example 2. This test problem is the aircraft model in cruise flight conditions [24, AC1] for a continuous-time control system. The function c2d is used to obtain the discrete-time counterpart with the following data matrices: The spectral radius of the system matrix is . Starting from $K_0$, a matrix of ones, the CG method achieves the following stabilizing output feedback gain matrix after 12 iterations: The corresponding objective function values at both points are and , respectively.
Example 3. This test problem represents a chemical reactor model [24, REA1]. The data matrices of the corresponding discrete-time system are as follows: The spectral radius of the system matrix is . The system is clearly Schur unstable. Starting from $K_0$, the zero matrix, the CG method converges to a local solution after 25 iterations, where and the corresponding objective function value is .
8.1. The CG Method for Decentralized Control Systems
Example 4. This test problem, borrowed from [25], is a decentralized control system with two control stations; each station has one input and one measured output. The given constant data matrices are The uncontrolled system is discrete-time Schur unstable, where . Starting with the following $K_0$, the CG method requires only iterations to converge to a stabilizing output feedback gain matrix, where The objective function values at the two points are and , respectively. Although seems to be relatively small, a stationary point fails to be achieved.
Example 5. This is a fifth-order decentralized control system with two control stations. The data matrices are randomly generated as follows: The uncontrolled system is discrete-time Schur unstable, where . Starting from the following randomly generated $K_0$, the CG method reaches a stabilizing gain after iterations, where The corresponding objective function values at both points are and , respectively.
8.2. The CG Method for Time-Delay Systems
Example 6. This test problem represents a time-delay system (see [22]) with the following constant data matrices: where . The data matrices for this system after conversion from continuous to discrete time are as follows: The spectral radius of the system matrix is . Starting from the following $K_0$, the three-term CG method successfully converges to a local solution of the minimization problem (1). The starting point and the achieved local solution are The objective function values at both points are and , respectively.
Example 7. This test problem represents a time-delay system (see [22]) with the following constant data matrices: where . By converting the system from continuous to discrete time, the corresponding data matrices are