Mathematical Problems in Engineering
Volume 2008, Article ID 513582, 25 pages
http://dx.doi.org/10.1155/2008/513582
Research Article

Explicit Solution of the Inverse Eigenvalue Problem of Real Symmetric Matrices and Its Application to Electrical Network Synthesis

D. B. Kandić and B. D. Reljin

1Department of Physics & Electrical Engineering, Mechanical Engineering Faculty, University of Belgrade, Kraljice Marije 16, 11120 Belgrade, Serbia
2Electrical Engineering Faculty, University of Belgrade, Bulevar Kralja Aleksandra 73, 11000 Belgrade, Serbia

Received 20 January 2008; Accepted 22 May 2008

Academic Editor: Mohammad Younis

Copyright © 2008 D. B. Kandić and B. D. Reljin. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

A novel procedure for the explicit construction of the entries of real symmetric matrices with assigned spectrum, together with the entries of the corresponding orthogonal modal matrices, is presented. The inverse eigenvalue problem of symmetric matrices with certain specific sign patterns (including the hyperdominant one) is also solved explicitly. It is shown that this yields a straightforward solution of the inverse eigenvalue problem of symmetric hyperdominant matrices with assigned nonnegative spectrum. The results obtained are then applied to the synthesis of driving-point immittance functions of transformerless, common-ground, two-element-kind RLC networks and to the generation of their equivalent realizations.

1. Introduction

During the past few decades, many papers [1–16] have studied inverse eigenvalue problems (IEPs) of various types. The existence of solutions of specific IEPs was generally considered in [1, 3–8, 10, 11, 13, 14] without an explicit formulation of the corresponding procedure for constructing a solution, whereas in [2, 9, 12, 15, 16] such constructions were given. The main result of [16] is the proof that the IEP of symmetric hyperdominant (hd) matrices with assigned nonnegative spectrum has at least one solution, which was also constructed there; this settled an old IEP opened in [17]. Hyperdominant matrices have nonnegative diagonal entries, nonpositive off-diagonal entries, and nonnegative hd row margins (the hd margin of a row is the sum of the entries in that row). The tool used in [16] to construct an nth-order hd matrix with assigned spectrum was an nth-order orthogonal Hessenberg matrix formed as a special product of n−1 plane rotations [15]. Hessenberg matrices arise naturally in the study of symmetric tridiagonal, skew-symmetric, and orthogonal matrices [13, 14, 18]. A matrix is upper (lower) Hessenberg if its entry (k, m) vanishes whenever k > m+1 (m > k+1).
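Since hyperdominance is the central structural property used throughout the paper, a minimal self-contained check of the definition just given may be useful. The following Python sketch is illustrative only; the function name, the tolerance, and the sample matrix are assumptions of ours, not taken from the paper.

```python
import numpy as np

def is_hyperdominant(M, tol=1e-12):
    """Check the hd conditions: nonnegative diagonal, nonpositive off-diagonal
    entries, and nonnegative hd row margins (a row's margin is the sum of its entries)."""
    M = np.asarray(M, dtype=float)
    off = M - np.diag(np.diag(M))
    return (np.all(np.diag(M) >= -tol)
            and np.all(off <= tol)
            and np.all(M.sum(axis=1) >= -tol))

# A small matrix chosen (hypothetically) to satisfy all three conditions.
M = [[ 3.0, -1.0, -1.0],
     [-1.0,  2.5, -0.5],
     [-1.0, -0.5,  2.0]]
print(is_hyperdominant(M))   # True
```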

In practical work, it is commonly considered better not to form Hessenberg matrices explicitly, but to keep them as products of plane rotations. On the other hand, the explicit construction of real symmetric matrices with nonnegative spectrum which either have the hd sign pattern or are truly hd proves to be an unavoidable task when considering the synthesis of driving-point immittance functions of passive, transformerless, common-ground, two-element-kind RLC networks and the generation of their equivalent realizations [17–19]. RLC networks are comprised solely of resistors (R), inductors (L), and capacitors (C). The driving-point immittance function of a lumped, time-invariant, linear electrical network is either a driving-point impedance Z(s) or a driving-point admittance Y(s) = Z^{-1}(s), where s = σ + jω is the complex frequency, σ and ω are real numbers, and j = √−1. It is well known that a real rational function in s can be the driving-point immittance function of an RLC network if and only if it is a positive real function in s; similarly, a necessary condition for a stable square matrix W(s) of real rational functions in s to be the driving-point immittance matrix of a passive RLC network is that W(s) be a positive real matrix [20, 21]. A few tests for ascertaining the positive real property of functions and/or matrices can be found in [20, 21]. In [22] the role of hd matrices in the synthesis of both passive and active, transformerless, common-ground multiports was pointed out. Unlike [16], this paper presents the explicit construction of the entries of real symmetric matrices with arbitrarily assigned spectrum and of the entries of the corresponding orthogonal modal matrices. It also presents the explicit construction of real symmetric matrices with assigned spectrum and with specific sign patterns (including the hd one). From this, a solution to the IEP of symmetric, truly hd matrices with assigned nonnegative spectrum is produced. Some of the obtained results are then applied to the synthesis of driving-point immittances of transformerless, common-ground, two-element-kind RLC networks and to the generation of their equivalent realizations. The two proposed realization procedures are illustrated by an example. Throughout the paper, ⊕ denotes the direct sum, x^T denotes the transpose of x, bold capital letters denote matrices, and I_k stands for the kth-order unit (identity) matrix.

2. Explicit Solution to the IEP of Real Symmetric Matrices by Using Canonic Orthogonal Transformations

Let {λ_1, λ_2, …, λ_n} be the assigned spectrum of the sought real symmetric matrices and let G_1 = diag(λ_1, λ_2, …, λ_n) be the n×n spectral matrix. Consider a set of 2×2 orthogonal matrices P_k ∈ {A_k, B_k, C_k, D_k} (k = 1, …, n−1),

$$\mathbf P_k=\begin{bmatrix}a_k&b_k\\ c_k&d_k\end{bmatrix},\quad
\mathbf A_k=\begin{bmatrix}\cos\theta_k&-\sin\theta_k\\ \sin\theta_k&\cos\theta_k\end{bmatrix},\quad
\mathbf B_k=\begin{bmatrix}\cos\theta_k&\sin\theta_k\\ \sin\theta_k&-\cos\theta_k\end{bmatrix},\quad
\mathbf C_k=\begin{bmatrix}\cos\theta_k&\sin\theta_k\\ -\sin\theta_k&\cos\theta_k\end{bmatrix},\quad
\mathbf D_k=\begin{bmatrix}-\cos\theta_k&\sin\theta_k\\ \sin\theta_k&\cos\theta_k\end{bmatrix},\quad
\theta_k\in\Bigl[0,\frac{\pi}{2}\Bigr],\tag{2.1}$$

which are either rotators (A_k and C_k) or reflectors (B_k and D_k). A useful set of n×n orthogonal matrices is

$$\mathbf U_1=\mathbf P_1\oplus\mathbf I_{n-2},\qquad
\mathbf U_k=\mathbf I_{k-1}\oplus\mathbf P_k\oplus\mathbf I_{n-k-1}\;(k=2,\dots,n-2),\qquad
\mathbf U_{n-1}=\mathbf I_{n-2}\oplus\mathbf P_{n-1}.\tag{2.2}$$

From the following two matrix recurrence relations,

$$\mathbf G_{k+1}=\mathbf U_k\mathbf G_k\mathbf U_k^{T},\qquad
\mathbf S_{k+1}=\mathbf U_{n-k}\mathbf S_k\mathbf U_{n-k}^{T}\qquad(k=1,\dots,n-1),\tag{2.3}$$

we readily obtain n×n real symmetric matrices G_n and S_n which are both congruent and similar to G_1:

$$\mathbf G_n=\mathbf U\mathbf G_1\mathbf U^{T},\quad \mathbf U=\mathbf U_{n-1}\mathbf U_{n-2}\cdots\mathbf U_1;\qquad
\mathbf S_n=\mathbf V\mathbf G_1\mathbf V^{T},\quad \mathbf V=\mathbf U_1\mathbf U_2\cdots\mathbf U_{n-1}.\tag{2.4}$$

The columns of the orthogonal modal matrix U (V) are eigenvectors of G_n (S_n). Out of the (n−1)! different possibilities of using (2.3) in the generation of G_n and S_n, only the two selected in (2.4) produce explicit expressions for the entries of G_n and S_n in terms of {λ_1, λ_2, …, λ_n} and the entries of P_k (k = 1, …, n−1). U (V) from (2.4) will be shown later to take lower (upper) Hessenberg form, with its entries expressed explicitly as well. For brevity, we restrict our consideration to the first of relations (2.3), bearing in mind that the second one can be treated similarly. For k = 1 and k = 2 we readily obtain G_2 and G_3 by using (2.1) and (2.3):

$$\mathbf G_2=\bigl(\mathbf P_1\oplus\mathbf I_{n-2}\bigr)\mathbf G_1\bigl(\mathbf P_1^{T}\oplus\mathbf I_{n-2}\bigr)
=\begin{bmatrix}\lambda_1a_1^2+\lambda_2b_1^2&(\lambda_1-\lambda_2)a_1c_1\\ (\lambda_1-\lambda_2)a_1c_1&\lambda_1c_1^2+\lambda_2d_1^2\end{bmatrix}\oplus\operatorname{diag}(\lambda_3,\dots,\lambda_n).$$

More generally, for k = 1, …, n−1 it holds that a_k b_k + c_k d_k = a_k c_k + b_k d_k = 0 and a_k^2 + b_k^2 = c_k^2 + d_k^2 = a_k^2 + c_k^2 = b_k^2 + d_k^2 = 1. Hence

$$\mathbf G_3=\bigl(1\oplus\mathbf P_2\oplus\mathbf I_{n-3}\bigr)\mathbf G_2\bigl(1\oplus\mathbf P_2^{T}\oplus\mathbf I_{n-3}\bigr)
=\begin{bmatrix}
\lambda_1a_1^2+\lambda_2b_1^2&(\lambda_1-\lambda_2)a_1a_2c_1&(\lambda_1-\lambda_2)a_1c_1c_2\\
(\lambda_1-\lambda_2)a_1a_2c_1&(\lambda_1c_1^2+\lambda_2d_1^2)a_2^2+\lambda_3b_2^2&(\lambda_1c_1^2+\lambda_2d_1^2-\lambda_3)a_2c_2\\
(\lambda_1-\lambda_2)a_1c_1c_2&(\lambda_1c_1^2+\lambda_2d_1^2-\lambda_3)a_2c_2&(\lambda_1c_1^2+\lambda_2d_1^2)c_2^2+\lambda_3d_2^2
\end{bmatrix}\oplus\operatorname{diag}(\lambda_4,\dots,\lambda_n).\tag{2.5}$$

Let λ̄_1 = λ_1, ε_1 = (λ_1 − λ_2)a_1, and x_2 = c_1ε_1, and let us first introduce in (2.5) the following notation:

$$\begin{aligned}
&\mathbf M_2=\bigl[\lambda_1a_1^2+\lambda_2b_1^2\bigr],\quad
\bar\lambda_2=\lambda_1c_1^2+\lambda_2d_1^2,\quad
\mathbf D_2=\operatorname{diag}\bigl(\bar\lambda_2,\lambda_3\bigr),\quad
\widehat{\mathbf D}_2=\operatorname{diag}\bigl(\lambda_4,\dots,\lambda_n\bigr),\quad
\mathbf A_2=\begin{bmatrix}x_2\\ 0\end{bmatrix},\\[2pt]
&\mathbf M_3=\begin{bmatrix}\lambda_1a_1^2+\lambda_2b_1^2&(\lambda_1-\lambda_2)a_1a_2c_1\\ (\lambda_1-\lambda_2)a_1a_2c_1&(\lambda_1c_1^2+\lambda_2d_1^2)a_2^2+\lambda_3b_2^2\end{bmatrix},\quad
\bar\lambda_3=(\lambda_1c_1^2+\lambda_2d_1^2)c_2^2+\lambda_3d_2^2,\\[2pt]
&\mathbf D_3=\operatorname{diag}\bigl(\bar\lambda_3,\lambda_4\bigr),\quad
\widehat{\mathbf D}_3=\operatorname{diag}\bigl(\lambda_5,\dots,\lambda_n\bigr),\quad
\mathbf A_3=\begin{bmatrix}(\lambda_1-\lambda_2)a_1c_1c_2&(\lambda_1c_1^2+\lambda_2d_1^2-\lambda_3)a_2c_2\\ 0&0\end{bmatrix}.
\end{aligned}\tag{2.6}$$

Observing the partition of G_2 and G_3 obtained in (2.5),

$$\mathbf G_2=\begin{bmatrix}\mathbf M_2&\mathbf A_2^{T}&\mathbf 0_{1,n-3}\\ \mathbf A_2&\mathbf D_2&\mathbf 0_{2,n-3}\\ \mathbf 0_{1,n-3}^{T}&\mathbf 0_{2,n-3}^{T}&\widehat{\mathbf D}_2\end{bmatrix},\qquad
\mathbf G_3=\begin{bmatrix}\mathbf M_3&\mathbf A_3^{T}&\mathbf 0_{2,n-4}\\ \mathbf A_3&\mathbf D_3&\mathbf 0_{2,n-4}\\ \mathbf 0_{2,n-4}^{T}&\mathbf 0_{2,n-4}^{T}&\widehat{\mathbf D}_3\end{bmatrix},\tag{2.7}$$

the partition of the subsequent matrices G_k (k = 4, …, n−2) can readily be anticipated as

$$\mathbf G_k=\begin{bmatrix}\mathbf M_k&\mathbf A_k^{T}&\mathbf 0_{k-1,n-k-1}\\ \mathbf A_k&\mathbf D_k&\mathbf 0_{2,n-k-1}\\ \mathbf 0_{k-1,n-k-1}^{T}&\mathbf 0_{2,n-k-1}^{T}&\widehat{\mathbf D}_k\end{bmatrix},\qquad
\mathbf x_k=\begin{bmatrix}x_{k,1}&x_{k,2}&\cdots&x_{k,k-1}\end{bmatrix},\qquad
\mathbf A_k=\begin{bmatrix}\mathbf x_k\\ \mathbf 0_{1,k-1}\end{bmatrix},\tag{2.8}$$

where M_k is a symmetric (k−1)×(k−1) matrix, x_k is a 1×(k−1) row vector, A_k is a 2×(k−1) matrix, λ̄_k is the modified eigenvalue obtained from λ_k, D_k = diag(λ̄_k, λ_{k+1}), and D̂_k = diag(λ_{k+2}, …, λ_n). For k = 2, …, n−3 it follows from (2.1)–(2.3) and (2.8) that

$$\mathbf G_{k+1}=\bigl(\mathbf I_{k-1}\oplus\mathbf P_k\oplus\mathbf I_{n-k-1}\bigr)\mathbf G_k\bigl(\mathbf I_{k-1}\oplus\mathbf P_k^{T}\oplus\mathbf I_{n-k-1}\bigr)
=\begin{bmatrix}\mathbf M_k&\mathbf A_k^{T}\mathbf P_k^{T}&\mathbf 0_{k-1,n-k-1}\\ \mathbf P_k\mathbf A_k&\mathbf P_k\mathbf D_k\mathbf P_k^{T}&\mathbf 0_{2,n-k-1}\\ \mathbf 0_{k-1,n-k-1}^{T}&\mathbf 0_{2,n-k-1}^{T}&\widehat{\mathbf D}_k\end{bmatrix},$$
$$\mathbf P_k\mathbf A_k=\begin{bmatrix}a_kx_{k,1}&\cdots&a_kx_{k,k-1}\\ c_kx_{k,1}&\cdots&c_kx_{k,k-1}\end{bmatrix},\qquad
\mathbf P_k\mathbf D_k\mathbf P_k^{T}=\begin{bmatrix}\bar\lambda_ka_k^2+\lambda_{k+1}b_k^2&(\bar\lambda_k-\lambda_{k+1})a_kc_k\\ (\bar\lambda_k-\lambda_{k+1})a_kc_k&\bar\lambda_kc_k^2+\lambda_{k+1}d_k^2\end{bmatrix}.\tag{2.9}$$

For k = 2, …, n−3, let us define λ̄_{k+1} = λ̄_k c_k^2 + λ_{k+1} d_k^2, ε_k = (λ̄_k − λ_{k+1})a_k, ψ_{kk} = λ̄_k a_k^2 + λ_{k+1} b_k^2, and thereafter D̂_{k+1} = diag(λ_{k+3}, …, λ_n) and D_{k+1} = diag(λ̄_{k+1}, λ_{k+2}). Then, from (2.8)-(2.9) there follows the identification

$$\mathbf M_{k+1}=\begin{bmatrix}\mathbf M_k&a_k\mathbf x_k^{T}\\ a_k\mathbf x_k&\psi_{kk}\end{bmatrix},\qquad
\mathbf A_{k+1}=\begin{bmatrix}\mathbf x_{k+1}\\ \mathbf 0_{1,k}\end{bmatrix}
=\begin{bmatrix}c_k\mathbf x_k&c_k\varepsilon_k\\ \mathbf 0_{1,k-1}&0\end{bmatrix}
=\begin{bmatrix}c_kx_{k,1}&\cdots&c_kx_{k,k-1}&c_k\varepsilon_k\\ 0&\cdots&0&0\end{bmatrix},\tag{2.10}$$

which enables the partition of G_{k+1} to be like that of G_k in (2.8) and makes the recursion for x_{k+1} rather simple:

$$\mathbf G_{k+1}=\begin{bmatrix}\mathbf M_{k+1}&\mathbf A_{k+1}^{T}&\mathbf 0_{k,n-k-2}\\ \mathbf A_{k+1}&\mathbf D_{k+1}&\mathbf 0_{2,n-k-2}\\ \mathbf 0_{k,n-k-2}^{T}&\mathbf 0_{2,n-k-2}^{T}&\widehat{\mathbf D}_{k+1}\end{bmatrix},\qquad
\mathbf x_{k+1}=c_k\begin{bmatrix}\mathbf x_k&\varepsilon_k\end{bmatrix},\quad k=2,\dots,n-3.\tag{2.11}$$

Let ψ_{11} = M_2. Having uncovered the partition pattern of M_{k+1} (k = 2, …, n−3), we can pursue the partitioning of M_{n−2} backwards from M_{n−2} to M_2 by using (2.10). Afterwards, we can produce G_{n−2} by using (2.10)-(2.11). The result is that M_{n−2} is the symmetric (n−3)×(n−3) matrix whose diagonal entries are ψ_{11}, ψ_{22}, …, ψ_{n−3,n−3} and whose kth row below the diagonal equals a_k x_k (k = 2, …, n−3), and that

$$\mathbf G_{n-2}=\begin{bmatrix}\mathbf M_{n-2}&\mathbf x_{n-2}^{T}\\ \mathbf x_{n-2}&\bar\lambda_{n-2}\end{bmatrix}\oplus\lambda_{n-1}\oplus\lambda_n.\tag{2.12}$$

Since G_{n−1} = (I_{n−3} ⊕ P_{n−2} ⊕ 1) G_{n−2} (I_{n−3} ⊕ P_{n−2}^T ⊕ 1) and

$$\mathbf P_{n-2}\begin{bmatrix}\bar\lambda_{n-2}&0\\ 0&\lambda_{n-1}\end{bmatrix}\mathbf P_{n-2}^{T}
=\begin{bmatrix}\bar\lambda_{n-2}a_{n-2}^2+\lambda_{n-1}b_{n-2}^2&(\bar\lambda_{n-2}-\lambda_{n-1})a_{n-2}c_{n-2}\\ (\bar\lambda_{n-2}-\lambda_{n-1})a_{n-2}c_{n-2}&\bar\lambda_{n-2}c_{n-2}^2+\lambda_{n-1}d_{n-2}^2\end{bmatrix},\tag{2.13}$$

then after defining ψ_{n−2,n−2} = λ̄_{n−2}a_{n−2}^2 + λ_{n−1}b_{n−2}^2, ε_{n−2} = (λ̄_{n−2} − λ_{n−1})a_{n−2}, λ̄_{n−1} = λ̄_{n−2}c_{n−2}^2 + λ_{n−1}d_{n−2}^2, and x_{n−1} = c_{n−2}[x_{n−2}  ε_{n−2}], it follows from (2.12)-(2.13) that

$$\mathbf G_{n-1}=\begin{bmatrix}\mathbf M_{n-1}&\mathbf x_{n-1}^{T}\\ \mathbf x_{n-1}&\bar\lambda_{n-1}\end{bmatrix}\oplus\lambda_n,\qquad
\mathbf M_{n-1}=\begin{bmatrix}\mathbf M_{n-2}&a_{n-2}\mathbf x_{n-2}^{T}\\ a_{n-2}\mathbf x_{n-2}&\psi_{n-2,n-2}\end{bmatrix}.\tag{2.14}$$

Since G_n = (I_{n−2} ⊕ P_{n−1}) G_{n−1} (I_{n−2} ⊕ P_{n−1}^T) and

$$\mathbf P_{n-1}\begin{bmatrix}\bar\lambda_{n-1}&0\\ 0&\lambda_n\end{bmatrix}\mathbf P_{n-1}^{T}
=\begin{bmatrix}\bar\lambda_{n-1}a_{n-1}^2+\lambda_nb_{n-1}^2&(\bar\lambda_{n-1}-\lambda_n)a_{n-1}c_{n-1}\\ (\bar\lambda_{n-1}-\lambda_n)a_{n-1}c_{n-1}&\bar\lambda_{n-1}c_{n-1}^2+\lambda_nd_{n-1}^2\end{bmatrix},\tag{2.15}$$

then on introducing ψ_{n−1,n−1} = λ̄_{n−1}a_{n−1}^2 + λ_n b_{n−1}^2, ε_{n−1} = (λ̄_{n−1} − λ_n)a_{n−1}, and λ̄_n = λ̄_{n−1}c_{n−1}^2 + λ_n d_{n−1}^2, we obtain from (2.14)-(2.15) the partition of G_n which yields its entries in explicit form and is suitable for the further discussion of solving some specific IEPs:

$$\mathbf G_n=\begin{bmatrix}\mathbf M_{n-1}&a_{n-1}\mathbf x_{n-1}^{T}&c_{n-1}\mathbf x_{n-1}^{T}\\ a_{n-1}\mathbf x_{n-1}&\psi_{n-1,n-1}&c_{n-1}\varepsilon_{n-1}\\ c_{n-1}\mathbf x_{n-1}&c_{n-1}\varepsilon_{n-1}&\bar\lambda_n\end{bmatrix}.\tag{2.16}$$

For k = 2, …, n, starting from λ̄_1 = λ_1 and λ̄_k = λ̄_{k−1}c_{k−1}^2 + λ_k d_{k−1}^2, we consecutively obtain that in general

$$\bar\lambda_k=\bigl(c_1c_2\cdots c_{k-1}\bigr)^2\lambda_1+\bigl(d_1c_2\cdots c_{k-1}\bigr)^2\lambda_2+\cdots+\bigl(d_{k-2}c_{k-1}\bigr)^2\lambda_{k-1}+d_{k-1}^2\lambda_k,\qquad k=2,\dots,n.\tag{2.17}$$

Since ψ_{11} = λ_1a_1^2 + λ_2b_1^2 and ψ_{kk} = λ̄_k a_k^2 + λ_{k+1} b_k^2 (k = 2, …, n−1), it follows from (2.17) that

$$\psi_{kk}=\bigl(c_1c_2\cdots c_{k-1}a_k\bigr)^2\lambda_1+\bigl(d_1c_2\cdots c_{k-1}a_k\bigr)^2\lambda_2+\cdots+\bigl(d_{k-2}c_{k-1}a_k\bigr)^2\lambda_{k-1}+\bigl(d_{k-1}a_k\bigr)^2\lambda_k+b_k^2\lambda_{k+1},\qquad k=2,\dots,n-1.\tag{2.18}$$

Observe that it is not necessary to calculate the ψ's from (2.18), but only the modified eigenvalues from (2.17), since ε_k = (λ̄_k − λ_{k+1})a_k and ψ_{kk} = λ̄_k a_k^2 + λ_{k+1}b_k^2 = (λ̄_k − λ_{k+1})a_k^2 + (a_k^2 + b_k^2)λ_{k+1} = a_kε_k + λ_{k+1} (k = 1, …, n−1). Since x_2 = c_1ε_1, for k = 2, …, n−2 it follows from (2.11) that

$$\mathbf x_{k+1}=c_k\begin{bmatrix}\mathbf x_k&\varepsilon_k\end{bmatrix}
=c_k\begin{bmatrix}c_{k-1}\mathbf x_{k-1}&c_{k-1}\varepsilon_{k-1}&\varepsilon_k\end{bmatrix}
=\cdots
=\begin{bmatrix}c_kc_{k-1}\cdots c_2c_1\varepsilon_1&c_kc_{k-1}\cdots c_2\varepsilon_2&\cdots&c_kc_{k-1}\varepsilon_{k-1}&c_k\varepsilon_k\end{bmatrix},\tag{2.19}$$
$$a_k\mathbf x_k=\begin{bmatrix}a_kc_{k-1}c_{k-2}\cdots c_2c_1\varepsilon_1&a_kc_{k-1}c_{k-2}\cdots c_2\varepsilon_2&\cdots&a_kc_{k-1}c_{k-2}\varepsilon_{k-2}&a_kc_{k-1}\varepsilon_{k-1}\end{bmatrix},\qquad k=2,\dots,n-1.\tag{2.20}$$

The real symmetric matrix G_n with assigned spectrum {λ_1, λ_2, …, λ_n} and explicitly expressed entries now follows from (2.16) and (2.20), bearing in mind that the ψ's and ε's are calculated by using {λ_1, λ_2, …, λ_n}, P_k, the modified eigenvalues (2.17), and ε_k = (λ̄_k − λ_{k+1})a_k (k = 1, …, n−1):

$$\mathbf G_n=\begin{bmatrix}
\psi_{11}&a_2c_1\varepsilon_1&a_3c_2c_1\varepsilon_1&\cdots&a_{n-1}c_{n-2}\cdots c_2c_1\varepsilon_1&c_{n-1}c_{n-2}\cdots c_2c_1\varepsilon_1\\
a_2c_1\varepsilon_1&\psi_{22}&a_3c_2\varepsilon_2&\cdots&a_{n-1}c_{n-2}\cdots c_2\varepsilon_2&c_{n-1}c_{n-2}\cdots c_2\varepsilon_2\\
a_3c_2c_1\varepsilon_1&a_3c_2\varepsilon_2&\psi_{33}&\cdots&\vdots&\vdots\\
\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\
a_{n-1}c_{n-2}\cdots c_2c_1\varepsilon_1&a_{n-1}c_{n-2}\cdots c_2\varepsilon_2&\cdots&\cdots&\psi_{n-1,n-1}&c_{n-1}\varepsilon_{n-1}\\
c_{n-1}c_{n-2}\cdots c_2c_1\varepsilon_1&c_{n-1}c_{n-2}\cdots c_2\varepsilon_2&\cdots&\cdots&c_{n-1}\varepsilon_{n-1}&\bar\lambda_n
\end{bmatrix}.\tag{2.21}$$

The entries of G_n = G_n^T = [g_{km}]_{n×n} are g_{kk} = ψ_{kk} (k = 1, …, n−1), g_{nn} = λ̄_n, g_{km} = a_k c_{k−1}c_{k−2}⋯c_m ε_m (k > m; k = 2, …, n−1), and g_{nm} = c_{n−1}c_{n−2}⋯c_m ε_m (m = 1, …, n−1). They are calculated according to the following steps:

(a) select arbitrarily the entries {a_k, b_k, c_k, d_k} of the 2×2 orthogonal matrices P_k (k = 1, …, n−1) given by (2.1);
(b) with λ̄_1 = λ_1, calculate the modified eigenvalues λ̄_k (k = 2, …, n) by using (2.17);
(c) calculate ε_k = (λ̄_k − λ_{k+1})a_k and ψ_{kk} = a_kε_k + λ_{k+1} (k = 1, …, n−1);
(d) calculate the entries of G_n by using (2.21).
A minimal numerical sketch of steps (a)-(d) is given below.
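The following NumPy sketch is our own illustration of steps (a)-(d) (function names and the sample data are assumptions, not taken from the paper). It builds G_n from the recursive form of (2.17), the ε's and ψ's of step (c), and the entry formulas stated after (2.21), and then confirms numerically that the assigned spectrum is reproduced.

```python
import numpy as np

def build_G(lams, P):
    """Steps (a)-(d): lams = assigned spectrum (lambda_1, ..., lambda_n);
    P = list of n-1 tuples (a_k, b_k, c_k, d_k), each a 2x2 orthogonal matrix (2.1)."""
    lams = np.asarray(lams, dtype=float)
    n = len(lams)
    a = np.array([p[0] for p in P])
    c = np.array([p[2] for p in P])
    d = np.array([p[3] for p in P])
    # (b) modified eigenvalues, eq. (2.17) in recursive form:
    #     lbar_1 = lambda_1,  lbar_{k+1} = lbar_k*c_k^2 + lambda_{k+1}*d_k^2
    lbar = np.empty(n)
    lbar[0] = lams[0]
    for k in range(n - 1):
        lbar[k + 1] = lbar[k] * c[k] ** 2 + lams[k + 1] * d[k] ** 2
    # (c) eps_k = (lbar_k - lambda_{k+1})*a_k,  psi_kk = a_k*eps_k + lambda_{k+1}
    eps = (lbar[:-1] - lams[1:]) * a
    psi = a * eps + lams[1:]
    # (d) entries of G_n, eq. (2.21): g_kk = psi_kk, g_nn = lbar_n,
    #     g_km = a_k c_{k-1}...c_m eps_m (k > m, k < n),  g_nm = c_{n-1}...c_m eps_m
    G = np.diag(np.append(psi, lbar[-1]))
    for i in range(1, n):                      # 0-based row i = paper's row i+1
        coeff = a[i] if i < n - 1 else 1.0     # the last row carries no a-factor
        for j in range(i):                     # 0-based column j = paper's column j+1
            G[i, j] = G[j, i] = coeff * np.prod(c[j:i]) * eps[j]
    return G

# Hypothetical data: an assigned spectrum and rotators A_k (Case 1 of Theorem 3.3).
lams = [1.0, 2.0, 4.0, 7.0]
P = [(np.cos(t), -np.sin(t), np.sin(t), np.cos(t)) for t in (0.3, 0.7, 1.1)]
G = build_G(lams, P)
print(np.allclose(np.linalg.eigvalsh(G), sorted(lams)))   # spectrum is reproduced -> True
```

The same routine accepts the reflectors B_k or the matrices C_k, D_k simply by passing different (a_k, b_k, c_k, d_k) tuples.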

Matrix U in (2.4) is the n×n orthogonal modal matrix assembled from eigenvectors of G_n. We now prove that U is not only orthogonal but also lower Hessenberg, with explicitly expressed entries. Let us first form U_1^T U_2^T and U_1^T U_2^T U_3^T, whose partitions enable us to anticipate the partition of U_1^T U_2^T U_3^T ⋯ U_k^T (k = 4, …, n−1):

$$\mathbf U_1^{T}\mathbf U_2^{T}=\begin{bmatrix}a_1&a_2c_1&c_2c_1\\ b_1&a_2d_1&c_2d_1\\ 0&b_2&d_2\end{bmatrix}\oplus\mathbf I_{n-3},\qquad
\mathbf U_1^{T}\mathbf U_2^{T}\mathbf U_3^{T}=\begin{bmatrix}a_1&a_2c_1&a_3c_2c_1&c_3c_2c_1\\ b_1&a_2d_1&a_3c_2d_1&c_3c_2d_1\\ 0&b_2&a_3d_2&c_3d_2\\ 0&0&b_3&d_3\end{bmatrix}\oplus\mathbf I_{n-4}.\tag{2.22}$$

If we now suppose that U_1^T U_2^T ⋯ U_k^T = H_{k+1} ⊕ I_{n−k−1} (k = 2, …, n−1), where H_{k+1} is the orthogonal (k+1)×(k+1) upper Hessenberg matrix

$$\mathbf H_{k+1}=\begin{bmatrix}
a_1&a_2c_1&a_3c_2c_1&\cdots&a_kc_{k-1}\cdots c_2c_1&c_kc_{k-1}\cdots c_2c_1\\
b_1&a_2d_1&a_3c_2d_1&\cdots&a_kc_{k-1}\cdots c_2d_1&c_kc_{k-1}\cdots c_2d_1\\
0&b_2&a_3d_2&\cdots&a_kc_{k-1}\cdots c_3d_2&c_kc_{k-1}\cdots c_3d_2\\
\vdots&&\ddots&\ddots&\vdots&\vdots\\
0&\cdots&0&b_{k-1}&a_kd_{k-1}&c_kd_{k-1}\\
0&\cdots&0&0&b_k&d_k
\end{bmatrix},\tag{2.23}$$

then, since by (2.2) U_{k+1} = I_k ⊕ P_{k+1} ⊕ I_{n−k−2}, we may write further for k = 2, …, n−2

$$\mathbf U_1^{T}\mathbf U_2^{T}\cdots\mathbf U_k^{T}\mathbf U_{k+1}^{T}
=\bigl(\mathbf H_{k+1}\oplus\mathbf I_{n-k-1}\bigr)\mathbf U_{k+1}^{T}
=\bigl(\mathbf H_{k+1}\oplus1\oplus\mathbf I_{n-k-2}\bigr)\bigl(\mathbf I_k\oplus\mathbf P_{k+1}^{T}\oplus\mathbf I_{n-k-2}\bigr)
=\mathbf H_{k+2}\oplus\mathbf I_{n-k-2},\qquad
\mathbf H_{k+2}=\bigl(\mathbf H_{k+1}\oplus1\bigr)\bigl(\mathbf I_k\oplus\mathbf P_{k+1}^{T}\bigr).\tag{2.24}$$

By using (2.2) and (2.23)-(2.24), it follows that

$$\mathbf H_{k+2}=\begin{bmatrix}
a_1&a_2c_1&\cdots&a_kc_{k-1}\cdots c_2c_1&a_{k+1}c_kc_{k-1}\cdots c_2c_1&c_{k+1}c_kc_{k-1}\cdots c_2c_1\\
b_1&a_2d_1&\cdots&a_kc_{k-1}\cdots c_2d_1&a_{k+1}c_k\cdots c_2d_1&c_{k+1}c_k\cdots c_2d_1\\
0&b_2&\cdots&a_kc_{k-1}\cdots c_3d_2&a_{k+1}c_k\cdots c_3d_2&c_{k+1}c_k\cdots c_3d_2\\
\vdots&&\ddots&\vdots&\vdots&\vdots\\
0&\cdots&b_{k-1}&a_kd_{k-1}&a_{k+1}c_kd_{k-1}&c_{k+1}c_kd_{k-1}\\
0&\cdots&0&b_k&a_{k+1}d_k&c_{k+1}d_k\\
0&\cdots&0&0&b_{k+1}&d_{k+1}
\end{bmatrix},\tag{2.25}$$

which proves our assumption that U_1^T U_2^T ⋯ U_k^T = H_{k+1} ⊕ I_{n−k−1} (k = 2, …, n−1), where H_{k+1} in (2.23) is an orthogonal upper Hessenberg (k+1)×(k+1) matrix with explicitly expressed entries. Finally, for k = n−1 we obtain from U_1^T U_2^T ⋯ U_{n−1}^T = H_n and (2.4), (2.23) that H_n = U^T = U_1^T U_2^T ⋯ U_{n−1}^T and

$$\mathbf U=\mathbf H_n^{T}=\mathbf U_{n-1}\mathbf U_{n-2}\cdots\mathbf U_1=\begin{bmatrix}
a_1&b_1&0&0&\cdots&0&0\\
a_2c_1&a_2d_1&b_2&0&\cdots&0&0\\
a_3c_2c_1&a_3c_2d_1&a_3d_2&b_3&\cdots&0&0\\
a_4c_3c_2c_1&a_4c_3c_2d_1&a_4c_3d_2&a_4d_3&\cdots&0&0\\
\vdots&\vdots&\vdots&\vdots&\ddots&\vdots&\vdots\\
a_{n-1}c_{n-2}\cdots c_2c_1&a_{n-1}c_{n-2}\cdots c_2d_1&a_{n-1}c_{n-2}\cdots c_3d_2&a_{n-1}c_{n-2}\cdots c_4d_3&\cdots&a_{n-1}d_{n-2}&b_{n-1}\\
c_{n-1}c_{n-2}\cdots c_2c_1&c_{n-1}c_{n-2}\cdots c_2d_1&c_{n-1}c_{n-2}\cdots c_3d_2&c_{n-1}c_{n-2}\cdots c_4d_3&\cdots&c_{n-1}d_{n-2}&d_{n-1}
\end{bmatrix}.\tag{2.26}$$

The entries of the orthogonal lower Hessenberg matrix U = [u_{km}] (k, m = 1, …, n) are

$$\begin{aligned}
&u_{km}=0\;(m>k+1;\;k,m=1,\dots,n),\qquad u_{k,k+1}=b_k\;(k=1,\dots,n-1),\qquad u_{11}=a_1,\\
&u_{kk}=a_kd_{k-1}\;(k=2,\dots,n-1),\qquad u_{nn}=d_{n-1},\qquad u_{k,1}=a_kc_{k-1}c_{k-2}\cdots c_2c_1\;(k=2,\dots,n-1),\\
&u_{n,1}=c_{n-1}c_{n-2}\cdots c_1,\qquad u_{n,k}=c_{n-1}c_{n-2}\cdots c_kd_{k-1}\;(k=2,\dots,n-1),\\
&u_{km}=a_kc_{k-1}\cdots c_md_{m-1}\;(m+1\le k\le n-1;\;m=2,\dots,n-1).
\end{aligned}\tag{2.27}$$

By similar arguments to those used in deriving the entries of U, the orthogonal matrix V produced by (2.4) can be shown to take upper Hessenberg form. The proof proceeds along the same lines as for U and is left to the reader. A numerical sketch that cross-checks (2.26)-(2.27) against the product form (2.2), (2.4) is given below.
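The sketch below (illustrative names, angles, and spectrum of our own choosing) assembles U both as the product U_{n−1}⋯U_1 of the embedded P_k blocks from (2.2), (2.4) and directly from the closed-form entries (2.26)-(2.27), and verifies that the two agree, that U is orthogonal, and that U G_1 U^T has the assigned spectrum.

```python
import numpy as np

def modal_matrix_product(P):
    """U = U_{n-1}...U_2 U_1 with U_k = I_{k-1} (+) P_k (+) I_{n-k-1}; eqs. (2.2), (2.4)."""
    n = len(P) + 1
    U = np.eye(n)
    for k, (a, b, c, d) in enumerate(P):       # 0-based k corresponds to the paper's k+1
        Uk = np.eye(n)
        Uk[k:k + 2, k:k + 2] = [[a, b], [c, d]]
        U = Uk @ U
    return U

def modal_matrix_explicit(P):
    """Lower Hessenberg U assembled directly from the entry formulas (2.26)-(2.27)."""
    n = len(P) + 1
    a = np.array([p[0] for p in P]); b = np.array([p[1] for p in P])
    c = np.array([p[2] for p in P]); d = np.array([p[3] for p in P])
    U = np.zeros((n, n))
    U[0, 0] = a[0]
    for k in range(n - 1):
        U[k, k + 1] = b[k]                     # superdiagonal u_{k,k+1} = b_k
    for i in range(1, n):
        coeff = a[i] if i < n - 1 else 1.0     # rows 2..n-1 carry a_k; the last row does not
        U[i, 0] = coeff * np.prod(c[:i])       # first column
        for j in range(1, i + 1):
            U[i, j] = coeff * np.prod(c[j:i]) * d[j - 1]
    return U

thetas = (0.4, 0.9, 0.2, 1.2)                                          # hypothetical angles
P = [(np.cos(t), -np.sin(t), np.sin(t), np.cos(t)) for t in thetas]    # rotators A_k
U1, U2 = modal_matrix_product(P), modal_matrix_explicit(P)
lams = np.array([0.5, 1.0, 3.0, 3.5, 6.0])
G = U1 @ np.diag(lams) @ U1.T                                          # G_n = U G_1 U^T
print(np.allclose(U1, U2),                         # (2.26)-(2.27) agree with the product form
      np.allclose(U1 @ U1.T, np.eye(len(lams))),   # U is orthogonal
      np.allclose(np.linalg.eigvalsh(G), lams))    # G_n has the assigned spectrum
```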

3. The Explicit Solution of the IEP of Real Symmetric Matrices with Some Specific Sign Patterns

Let the real eigenvalues from the spectrum {λ_1, λ_2, …, λ_n} be arbitrarily enumerated, thereby establishing the sequence {λ_k} (k = 1, …, n). A nonnegative sequence will be denoted by {λ_k} ≥ 0 and a nonpositive one by {λ_k} ≤ 0 (k = 1, …, n). First, we prove two lemmas.

Lemma 3.1. If the sequence {λ_k} ≥ 0 (k = 1, …, n) is increasing [decreasing], then in (2.21) λ̄_n ≥ 0, ψ_mm ≥ 0, and the sequence {a_mε_m} ≤ 0 [{a_mε_m} ≥ 0] (m = 1, …, n−1).

Proof. Since {λ_k} ≥ 0 (k = 1, …, n), it is trivial to see from (2.17) and (2.18) that all diagonal entries of G_n are nonnegative, that is, λ̄_n ≥ 0 and ψ_mm ≥ 0 (m = 1, …, n−1), no matter whether the sequence {λ_k} ≥ 0 is increasing or decreasing. By virtue of the orthogonality of P_k, we have c_k^2 + d_k^2 = 1 (k = 1, …, n−1). If {λ_k} (k = 1, …, n) is an increasing sequence, then for m = 1 we have a_1ε_1 = (λ̄_1 − λ_2)a_1^2 = (λ_1 − λ_2)a_1^2 ≤ 0, and for m = 2, …, n−1 we obtain

$$\begin{aligned}
d_{m-1}^2\lambda_m-\lambda_{m+1}&\le d_{m-1}^2\lambda_m-\lambda_m=-\lambda_m\bigl(1-d_{m-1}^2\bigr)=-c_{m-1}^2\lambda_m\le-c_{m-1}^2\lambda_{m-1},\\
\bigl(d_{m-2}c_{m-1}\bigr)^2\lambda_{m-1}+d_{m-1}^2\lambda_m-\lambda_{m+1}&\le\bigl(d_{m-2}c_{m-1}\bigr)^2\lambda_{m-1}-c_{m-1}^2\lambda_{m-1}=-\bigl(c_{m-2}c_{m-1}\bigr)^2\lambda_{m-1}\le-\bigl(c_{m-2}c_{m-1}\bigr)^2\lambda_{m-2},\\
\bigl(d_{m-3}c_{m-2}c_{m-1}\bigr)^2\lambda_{m-2}+\bigl(d_{m-2}c_{m-1}\bigr)^2\lambda_{m-1}+d_{m-1}^2\lambda_m-\lambda_{m+1}&\le-\bigl(c_{m-3}c_{m-2}c_{m-1}\bigr)^2\lambda_{m-2}\le-\bigl(c_{m-3}c_{m-2}c_{m-1}\bigr)^2\lambda_{m-3},\\
&\;\;\vdots\\
\bigl(d_1c_2\cdots c_{m-1}\bigr)^2\lambda_2+\cdots+\bigl(d_{m-2}c_{m-1}\bigr)^2\lambda_{m-1}+d_{m-1}^2\lambda_m-\lambda_{m+1}&\le-\bigl(c_1c_2\cdots c_{m-1}\bigr)^2\lambda_2\le-\bigl(c_1c_2\cdots c_{m-1}\bigr)^2\lambda_1.
\end{aligned}\tag{3.1}$$

From (2.17) and the last of inequalities (3.1) it follows that λ̄_m ≤ λ_{m+1} and a_mε_m = (λ̄_m − λ_{m+1})a_m^2 ≤ 0 (m = 2, …, n−1). If {λ_k} (k = 1, …, n) is a decreasing sequence, then for m = 1 we have a_1ε_1 = (λ̄_1 − λ_2)a_1^2 = (λ_1 − λ_2)a_1^2 ≥ 0, and for m = 2, …, n−1 the same chain of inequalities holds with all inequality signs reversed:

$$\begin{aligned}
d_{m-1}^2\lambda_m-\lambda_{m+1}&\ge-c_{m-1}^2\lambda_m\ge-c_{m-1}^2\lambda_{m-1},\\
\bigl(d_{m-2}c_{m-1}\bigr)^2\lambda_{m-1}+d_{m-1}^2\lambda_m-\lambda_{m+1}&\ge-\bigl(c_{m-2}c_{m-1}\bigr)^2\lambda_{m-1}\ge-\bigl(c_{m-2}c_{m-1}\bigr)^2\lambda_{m-2},\\
&\;\;\vdots\\
\bigl(d_1c_2\cdots c_{m-1}\bigr)^2\lambda_2+\cdots+\bigl(d_{m-2}c_{m-1}\bigr)^2\lambda_{m-1}+d_{m-1}^2\lambda_m-\lambda_{m+1}&\ge-\bigl(c_1c_2\cdots c_{m-1}\bigr)^2\lambda_2\ge-\bigl(c_1c_2\cdots c_{m-1}\bigr)^2\lambda_1.
\end{aligned}\tag{3.2}$$

From (2.17) and the last of inequalities (3.2) it follows that λ̄_m ≥ λ_{m+1} and a_mε_m = (λ̄_m − λ_{m+1})a_m^2 ≥ 0 (m = 2, …, n−1). This completes the proof of the lemma. For a nonpositive sequence, an analogous lemma can be formulated.
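A quick numerical check of the increasing case of the lemma, under the assumption that Case 1 rotators A_k are used; the names and the random data below are ours, for illustration only.

```python
import numpy as np

# Lemma 3.1 (increasing case): an increasing nonnegative spectrum forces
# lbar_m <= lambda_{m+1}, hence a_m * eps_m <= 0.
rng = np.random.default_rng(0)
n = 6
lams = np.sort(rng.uniform(0.0, 10.0, n))          # nonnegative, increasing spectrum
th = rng.uniform(0.0, np.pi / 2, n - 1)            # theta_k in [0, pi/2]
a, c, d = np.cos(th), np.sin(th), np.cos(th)       # A_k: a_k = cos, c_k = sin, d_k = cos

lbar = np.empty(n)
lbar[0] = lams[0]
for k in range(n - 1):                             # eq. (2.17) in recursive form
    lbar[k + 1] = lbar[k] * c[k] ** 2 + lams[k + 1] * d[k] ** 2

eps = (lbar[:-1] - lams[1:]) * a
print(np.all(lbar[:-1] <= lams[1:] + 1e-12))       # lbar_m <= lambda_{m+1}  -> True
print(np.all(a * eps <= 1e-12))                    # a_m * eps_m <= 0        -> True
```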

Lemma 3.2. If the sequence {λ_k} ≤ 0 (k = 1, …, n) is increasing [decreasing], then in (2.21) λ̄_n ≤ 0, ψ_mm ≤ 0, and the sequence {a_mε_m} ≤ 0 [{a_mε_m} ≥ 0] (m = 1, …, n−1).

Proof. It is similar to that of Lemma 3.1, but in this case the diagonal entries of G_n are nonpositive, that is, λ̄_n ≤ 0 and ψ_mm ≤ 0 (m = 1, …, n−1), no matter whether the sequence {λ_k} ≤ 0 is increasing or decreasing (see (2.18)).

We now formulate a new theorem related to the explicit solution of the IEP of real symmetric matrices with some specific sign patterns.

Theorem 3.3. If θ_k (k = 1, …, n−1) are arbitrarily selected angles from the range [0, π/2], then the entries of the real symmetric matrices G_n with assigned spectrum {λ_1, λ_2, …, λ_n}, produced by (2.21), can attain the following twelve sign patterns (zero entries are permitted), depending on the selection of the matrices P_k (k = 1, …, n−1) in (2.1).

Case 1. $\mathbf P_k=\mathbf A_k=\begin{bmatrix}\cos\theta_k&-\sin\theta_k\\ \sin\theta_k&\cos\theta_k\end{bmatrix}$ or $\mathbf P_k=\mathbf B_k=\begin{bmatrix}\cos\theta_k&\sin\theta_k\\ \sin\theta_k&-\cos\theta_k\end{bmatrix}$, so that a_k ≥ 0 and c_k ≥ 0 (k = 1, …, n−1). Then G_n attains the hyperdominant sign pattern (nonnegative diagonal and nonpositive off-diagonal entries) when 0 ≤ λ_1 ≤ λ_2 ≤ ⋯ ≤ λ_n; all entries of G_n are nonnegative when λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_n ≥ 0; all entries are nonpositive when λ_1 ≤ λ_2 ≤ ⋯ ≤ λ_n ≤ 0; and the diagonal is nonpositive with nonnegative off-diagonal entries when 0 ≥ λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_n. (3.3)

Case 2. $\mathbf P_k=\mathbf C_k=\begin{bmatrix}\cos\theta_k&\sin\theta_k\\ -\sin\theta_k&\cos\theta_k\end{bmatrix}$, so that a_k ≥ 0 and c_k ≤ 0. The diagonal entries of G_n have the same signs as in Case 1 (they depend only on the sign of the spectrum), whereas each off-diagonal entry g_km (k > m) acquires the additional factor (−1)^{k−m} through the product c_{k−1}⋯c_m; the four sign patterns obtained for the four sign/monotonicity combinations of the spectrum are therefore the alternating counterparts of those in (3.3), with powers of (−1) up to (−1)^{n−1} appearing in the last row and column. (3.4)

Case 3. $\mathbf P_k=\mathbf D_k=\begin{bmatrix}-\cos\theta_k&\sin\theta_k\\ \sin\theta_k&\cos\theta_k\end{bmatrix}$, so that a_k ≤ 0 and c_k ≥ 0. The diagonal entries again carry the signs dictated by the sign of the spectrum, the off-diagonal entries g_km with k ≤ n−1 have the same signs as in Case 1 (the product a_k a_m is nonnegative), and the off-diagonal entries of the last row and column have the opposite signs; this gives the remaining four sign patterns for the four sign/monotonicity combinations of the spectrum. (3.5)

Proof. If θ_k ∈ [0, π/2], then the signs of a_k and c_k depend solely on the selection of the canonic orthogonal matrices P_k (k = 1, …, n−1). For any sign of the sequence {λ_m} (m = 1, …, n) and any monotonicity realized through the enumeration of its members, one can readily check the sign patterns stated above: by using (2.18) to determine the signs of the diagonal entries of G_n, and by using Lemma 3.1 or Lemma 3.2 to determine the signs of ε_k (k = 1, …, n−1). Observe that only in Case 1, when λ_n ≥ λ_{n−1} ≥ ⋯ ≥ λ_1 ≥ 0, that is, when the sequence {λ_m} (m = 1, …, n) is nonnegative and increasing (but not necessarily strictly), is the matrix G_n produced with the hd sign pattern, including the possible presence of zero entries. G_n may attain a sparse structure if, for example, some eigenvalues are equal. To see this, suppose first that λ_1 = ⋯ = λ_k = λ. Then from (2.17)-(2.18) it follows that λ̄_1 = ⋯ = λ̄_k = λ, ψ_11 = ⋯ = ψ_{k−1,k−1} = λ, and ε_1 = ⋯ = ε_{k−1} = 0, which obviously gives the matrix G_n in (2.21) a sparse structure. By using (2.17)-(2.18), (2.21), and both lemmas, we can readily infer that if θ_k ∈ (0, π/2) (k = 1, …, n−1) and the sequence {λ_m} (m = 1, …, n) is strictly monotone, then the matrix G_n in (2.21) is produced with no zero entries in any of the three considered cases. A numerical illustration of Case 1 is sketched below.
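The following sketch illustrates the hd sign pattern of Case 1 for a nonnegative increasing spectrum; the helper name and the numbers are ours. Only the sign pattern is checked here; nonnegativity of the hd margins is the subject of Theorems 3.5 and 4.1.

```python
import numpy as np

def case1_matrix(lams, thetas):
    """G_n = U G_1 U^T with P_k = A_k (Case 1), U built as a product of plane rotations."""
    n = len(lams)
    U = np.eye(n)
    for k, t in enumerate(thetas):
        Uk = np.eye(n)
        Uk[k:k + 2, k:k + 2] = [[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]
        U = Uk @ U
    return U @ np.diag(lams) @ U.T

lams = [0.0, 1.0, 2.5, 2.5, 7.0]                 # nonnegative, increasing (not strictly)
G = case1_matrix(lams, [0.3, 1.1, 0.6, 0.9])
off = G - np.diag(np.diag(G))
print(np.all(np.diag(G) >= -1e-12),              # nonnegative diagonal      -> True
      np.all(off <= 1e-12))                      # nonpositive off-diagonal  -> True
```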

Remark 3.4. Let λ_1 ≥ λ_2 ≥ ⋯ ≥ λ_n > 0. Then, since G_n = UG_1U^T and U^{−1} = U^T (recall that U is orthogonal), it follows that G_n^{−1} = (U^T)^{−1}G_1^{−1}U^{−1} = UG_1^{−1}U^T. Also, when the sequence {λ_m} (m = 1, …, n) is increasing (decreasing), the sequence {λ_m^{−1}} (m = 1, …, n) is decreasing (increasing). These facts and Theorem 3.3 offer the possibility of determining the sign pattern of G_n^{−1} without actually inverting G_n. Furthermore, by using (2.17)-(2.18), (2.21), G_n^{−1} can be calculated explicitly, likewise without actually inverting G_n.
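A small numerical confirmation of the remark, under the assumption of a positive increasing spectrum and Case 1 rotators (illustrative data of our own):

```python
import numpy as np

# Remark 3.4: with a positive spectrum, G_n^{-1} = U G_1^{-1} U^T,
# so the inverse (and its sign pattern) follows from the same construction.
lams = np.array([0.5, 1.0, 2.0, 4.0])            # positive, increasing spectrum
n = len(lams)
U = np.eye(n)
for k, t in enumerate([0.4, 0.8, 1.2]):          # Case 1 rotators A_k
    Uk = np.eye(n)
    Uk[k:k + 2, k:k + 2] = [[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]
    U = Uk @ U

G = U @ np.diag(lams) @ U.T
G_inv = U @ np.diag(1.0 / lams) @ U.T
print(np.allclose(G_inv, np.linalg.inv(G)))      # True
print(np.all(G_inv >= -1e-12))                   # 1/lams is decreasing, so G_inv >= 0 (Case 1)
```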

Theorem 3.5. Let the positive increasing sequence {λ_m} (m = 1, …, n) be the spectrum of G_n produced by using Case 1 of Theorem 3.3. Then there always exists a diagonal matrix D = diag(d_1, d_2, …, d_n) with positive diagonal entries which makes DG_nD truly hyperdominant.

Proof. If G_1 = diag(λ_1, λ_2, …, λ_n), then by Case 1 of Theorem 3.3 the nonsingular matrix G_n = UG_1U^T has the hd sign pattern, and by Remark 3.4 G_n^{−1} = UG_1^{−1}U^T is a nonnegative matrix. Since d_m > 0 (m = 1, …, n), the nonsingular symmetric matrix DG_nD is produced with the hd sign pattern, but it need not be truly hd unless the hd margin of each of its rows (or columns) is nonnegative (recall that the hd margin of a row or column is the sum of all entries in that row or column). If G_n = [g_{km}] (k, m = 1, …, n), then the hd margin p_k of the kth row (or kth column) of DG_nD is given by

$$p_k=\sum_{m=1}^{n}g_{km}d_md_k=d_k\alpha_k,\qquad\text{where }\alpha_k=\sum_{m=1}^{n}g_{km}d_m,\quad k=1,\dots,n.\tag{3.6}$$

Let us arbitrarily select α_k > 0 (k = 1, …, n) and let a = [α_1 α_2 ⋯ α_n]^T, col(D) = [d_1 d_2 ⋯ d_n]^T, and p = [p_1 p_2 ⋯ p_n]^T. Then, from (3.6) it follows that G_n col(D) = a, that is, col(D) = G_n^{−1}a > 0_{n,1} and p = Da > 0_{n,1}. This not only means that DG_nD has the hd sign pattern, but moreover that it is truly hd. Obviously, the greater the α's are taken, the greater the row (column) hd margins of DG_nD become. This completes the proof of the theorem. A sketch of this scaling step is given below.
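A sketch of the scaling step of the proof, with illustrative data of our own: α is chosen positive, col(D) is obtained by solving G_n col(D) = a, and D G_n D is then verified to be truly hyperdominant.

```python
import numpy as np

# Theorem 3.5: find a positive diagonal scaling D that makes D G_n D truly hd.
lams = np.array([0.5, 1.0, 2.0, 4.0])            # positive, increasing spectrum (Case 1)
n = len(lams)
U = np.eye(n)
for k, t in enumerate([0.4, 0.8, 1.2]):
    Uk = np.eye(n)
    Uk[k:k + 2, k:k + 2] = [[np.cos(t), -np.sin(t)], [np.sin(t), np.cos(t)]]
    U = Uk @ U
G = U @ np.diag(lams) @ U.T                      # hd sign pattern by Case 1 of Theorem 3.3

alpha = np.ones(n)                               # any alpha_k > 0 will do, cf. eq. (3.6)
dcol = np.linalg.solve(G, alpha)                 # col(D) = G_n^{-1} a > 0
D = np.diag(dcol)
H = D @ G @ D
margins = H.sum(axis=1)                          # p_k = d_k * alpha_k
print(np.all(dcol > 0),                          # positive scaling exists          -> True
      np.allclose(margins, dcol * alpha),        # eq. (3.6)                        -> True
      np.all(margins > 0))                       # D G_n D is truly hyperdominant   -> True
```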

4. Explicit Solution of IEP of Hd Matrices with Uncommitted and with Assigned Nonnegative Spectrum

Theorem 4.1. Let θ_k (k = 1, …, n−1) be a set of angles selected from the range [0, π/2] and let {λ_1, λ_2, …, λ_n} be the uncommitted nonnegative spectrum of the real symmetric matrix G_n = UG_1U^T [G_1 = diag(λ_1, λ_2, …, λ_n)] which is to be produced as truly hd. Suppose that, through enumeration of the eigenvalues, the sequence {λ_m} ≥ 0 (m = 1, …, n) is made increasing. Then the matrix G_n given by (2.21) will be truly hyperdominant if λ_1 is sufficiently great.

Proof. Since by assumption the conditions of Theorem 3.3 (Case 1) are satisfied, G_n produced by using (2.21) has the hd sign pattern. As ε_k = (λ̄_k − λ_{k+1})a_k (k = 1, …, n−1), it follows from (2.17)-(2.18) and (2.21) that the hd margin p_m of the mth row (or column) of G_n (m = 1, …, n) can in general be represented as

$$p_m=\alpha_1^{(m)}\lambda_1+\alpha_2^{(m)}\lambda_2+\cdots+\alpha_m^{(m)}\lambda_m+\alpha_{m+1}^{(m)}\lambda_{m+1}\quad(m=1,\dots,n-1),\qquad
p_n=\alpha_1^{(n)}\lambda_1+\alpha_2^{(n)}\lambda_2+\cdots+\alpha_{n-1}^{(n)}\lambda_{n-1}+\alpha_n^{(n)}\lambda_n,\tag{4.1}$$

where the α coefficients are defined as follows:

$$\begin{aligned}
\alpha_1^{(1)}&=a_1\bigl(a_1+a_2c_1+a_3c_2c_1+\cdots+a_{n-1}c_{n-2}\cdots c_2c_1+c_{n-1}c_{n-2}\cdots c_2c_1\bigr),\\
\alpha_2^{(1)}&=b_1\bigl(b_1+a_2d_1+a_3c_2d_1+\cdots+a_{n-1}c_{n-2}\cdots c_2d_1+c_{n-1}c_{n-2}\cdots c_2d_1\bigr),\\
\alpha_1^{(m)}&=a_mc_{m-1}\cdots c_2c_1\bigl(a_1+a_2c_1+a_3c_2c_1+\cdots+a_mc_{m-1}\cdots c_2c_1+\cdots+a_{n-1}c_{n-2}\cdots c_2c_1+c_{n-1}c_{n-2}\cdots c_2c_1\bigr),\\
\alpha_2^{(m)}&=a_mc_{m-1}\cdots c_2d_1\bigl(b_1+a_2d_1+a_3c_2d_1+\cdots+a_mc_{m-1}\cdots c_2d_1+\cdots+a_{n-1}c_{n-2}\cdots c_2d_1+c_{n-1}c_{n-2}\cdots c_2d_1\bigr),\\
\alpha_p^{(m)}&=a_mc_{m-1}\cdots c_pd_{p-1}\bigl(b_{p-1}+a_pd_{p-1}+a_{p+1}c_pd_{p-1}+\cdots+a_{n-1}c_{n-2}\cdots c_pd_{p-1}+c_{n-1}c_{n-2}\cdots c_pd_{p-1}\bigr),\quad m=2,\dots,n-1,\;p=3,\dots,m,\\
\alpha_m^{(m)}&=a_md_{m-1}\bigl(b_{m-1}+a_md_{m-1}+a_{m+1}c_md_{m-1}+\cdots+a_{n-1}c_{n-2}\cdots c_md_{m-1}+c_{n-1}c_{n-2}\cdots c_md_{m-1}\bigr),\\
\alpha_{m+1}^{(m)}&=b_m\bigl(b_m+a_{m+1}d_m+a_{m+2}c_{m+1}d_m+\cdots+a_{n-1}c_{n-2}\cdots c_{m+1}d_m+c_{n-1}c_{n-2}\cdots c_{m+1}d_m\bigr),\quad m=2,\dots,n-2,\\
\alpha_n^{(n-1)}&=b_{n-1}\bigl(b_{n-1}+d_{n-1}\bigr),\\
\alpha_1^{(n)}&=c_{n-1}c_{n-2}\cdots c_2c_1\bigl(a_1+a_2c_1+a_3c_2c_1+\cdots+a_{n-1}c_{n-2}\cdots c_2c_1+c_{n-1}c_{n-2}\cdots c_2c_1\bigr),\\
\alpha_2^{(n)}&=c_{n-1}c_{n-2}\cdots c_2d_1\bigl(b_1+a_2d_1+a_3c_2d_1+\cdots+a_{n-1}c_{n-2}\cdots c_2d_1+c_{n-1}c_{n-2}\cdots c_2d_1\bigr),\\
\alpha_p^{(n)}&=c_{n-1}c_{n-2}\cdots c_pd_{p-1}\bigl(b_{p-1}+a_pd_{p-1}+a_{p+1}c_pd_{p-1}+\cdots+a_{n-1}c_{n-2}\cdots c_pd_{p-1}+c_{n-1}c_{n-2}\cdots c_pd_{p-1}\bigr),\quad p=3,\dots,n-1,\\
\alpha_n^{(n)}&=d_{n-1}\bigl(b_{n-1}+d_{n-1}\bigr).
\end{aligned}\tag{4.2}$$

According to Case 1 of Theorem 3.3, both a_k and c_k are nonnegative when θ_k ∈ [0, π/2] (k = 1, …, n−1). Then, from (4.2) we see that α_1^{(m)} ≥ 0 (m = 1, …, n), whereas the other α's may be nonpositive. Since the α's depend only on the selection of the θ's, by presuming λ_1 = λ_2 = ⋯ = λ_n = λ ≥ 0 we obtain from (2.21) that G_n = λI_n and p_m = λ (m = 1, …, n), and from (4.1) we conclude that in general it holds that ∑_{p=1}^{m+1} α_p^{(m)} = 1 (m = 1, …, n−1)