Shock and Vibration

Volume 2015, Article ID 124932, 10 pages

http://dx.doi.org/10.1155/2015/124932

## Modal Parameter Identification from Output Data Only: Equivalent Approaches

Institute FEMTO-ST, UMR 6174, Department of Applied Mechanics (DMA), University of Franche-Comté, 25 000 Besançon, France

Received 1 February 2015; Revised 15 April 2015; Accepted 27 April 2015

Academic Editor: Marc Thomas

Copyright © 2015 Joseph Lardies. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The problem of modal parameter identification from output data only is presented. Different algorithms are presented to identify the modal parameters: the block Hankel matrix and its shifted version, and the block observability and block controllability matrices and their shifted versions. These algorithms are derived from properties of the subspace approach. It is shown in the paper that these algorithms give the same results even in the noisy data case. Numerical and experimental results are presented showing the effectiveness of the procedure. In particular, a microsystem constituted of a perforated microplate is analysed.

#### 1. Introduction

The traditional modal parameter identification techniques from input and output data have been well developed and are widely used in engineering. However, in operational modal analysis (OMA) the input data, or the excitation force, is not available as a measured signal, and the need to develop techniques capable of accurately extracting the modal parameters from output-only measurements has arisen. In the early 1990s the NExT (natural excitation technique) algorithm [1] was proposed using correlation functions of the random response of the analyzed structure. The correlation functions are expressed as a summation of decaying sinusoids, and each decaying sinusoid has a damped natural frequency, a damping ratio, and a mode shape coefficient that is identical to one of the corresponding structural modes. A general autoregressive moving average (ARMA) model [2–4] has also been employed for operational modal parameter identification. The AR parameters describe the system dynamics while the MA part is related to external disturbances. Corresponding to the multioutput case, the ARMA model has been extended to a multidimensional ARMA model, or ARMAV model, in which all the modal information is contained in the AR part. The AR coefficients can be obtained by the extended instrumental variable method [5], and the modal parameters are identified by the eigenvalue decomposition of the companion matrix of the AR polynomial. The efficiency of ARMAV algorithms has been proved with various excitation models, consisting of white or coloured noise, mixed with harmonics and nonstationarity [6, 7]. In particular, Spiridonakos and Fassois [8] have used a functional series vector time-dependent autoregressive moving average (FS-VTARMA) method, and an experimental laboratory test of a bridge with a heavy passing vehicle has shown that this method is robust to nonstationarity of the excitation.

The dynamic behaviour of a discrete mechanical system can be described by a matrix differential equation, which can be converted into a discrete time state space model, and numerous papers have been presented on system identification, addressing the estimation of parameters from measured data. For a linear time-invariant system Ho and Kalman [9] introduced the minimal state space realization problem, in which a Hankel matrix is constructed from a sequence of impulse response functions called the Markov parameters. Kung [10] proposed a concept combining singular value decomposition and the minimal realization algorithm for the problem of retrieving sinusoidal processes from noisy measurements. Juang and Pappa [11] introduced an eigensystem realization algorithm (ERA) for modal parameter identification and model reduction for dynamical systems from test data. This algorithm is an extension of the Ho-Kalman procedure, where two indicators have been developed to quantitatively identify the system and noise modes. A stochastic subspace identification (SSI) method has been developed by van Overschee and de Moor [12] and by Peeters and de Roeck [13]. The subspace method presented by these authors identifies the state space matrices by using robust numerical techniques such as QR-factorization, singular value decomposition (SVD), and least squares. The state space matrices are related to the modal parameters, and the key concept of SSI is the projection of the row space of the future outputs onto the row space of the past outputs.

The fundamental problem in modal parameter identification by subspace methods is the determination of the state space matrix (or transition matrix) which characterizes the dynamics of the system. The purpose of this paper is to perform a comparison of four algorithms, derived from four properties of the subspace approach, to estimate the transition matrix. The aim is to prove that these algorithms are equivalent even in the noisy data case. The first algorithm uses properties of the shifted block controllability matrix; the second algorithm uses properties of shifted columns of the block Hankel matrix, and it is proved that these two decomposition techniques give the same modal parameters. The third algorithm uses properties of the shifted block observability matrix; the fourth algorithm uses properties of shifted rows of the block Hankel matrix, and it is also proved that these two decomposition techniques give the same modal parameters. The procedures presented in the paper can identify closely spaced eigenfrequencies that cannot be separated by the traditional Fourier transform. In addition, we show that our procedure can be applied to the modal parameter identification of a microsystem constituted of a perforated microplate. To our knowledge, it is the first time that subspace algorithms are used in the microsystem field.

The paper is organized as follows. The second section is devoted to the presentation of four identification algorithms based on shifting properties of the block Hankel, block controllability, and block observability matrices. Validity of the modal parameter identification procedures is demonstrated in the third section with simulated data and experimental laboratory tests. The paper is briefly concluded in Section 4.

#### 2. Subspace Identification Methods

##### 2.1. The Discrete State Space Representation

The subspace identification method assumes that the dynamic behaviour of a vibrating system can be described by a discrete time state space model [14, 15]:
$$x_{k+1} = A x_k + w_k, \qquad y_k = C x_k + v_k$$
where $x_k$ is the unobserved state vector of dimension $n$; $y_k$ is the vector of observations, or measured output vector of dimension $m$, at discrete time instant $k$; $w_k$ contains the external nonmeasured force or the excitation, which can be a random force, an impulse force, a step force, and so on; and $v_k$ is a measurement noise term. $A$ is the $(n \times n)$ transition matrix describing the dynamics of the system and $C$ is the $(m \times n)$ output or observation matrix, translating the internal state of the system into observations. The subspace identification problem deals with the determination of the two state space matrices $A$ and $C$ using output-only data $y_k$.
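As a minimal numerical sketch of this model (all numerical values and matrix choices below are illustrative, not taken from the paper), a single lightly damped mode can be simulated as follows:

```python
import numpy as np

# Sketch of the discrete state space model (illustrative values):
#   x[k+1] = A x[k] + w[k]   (w: unmeasured excitation)
#   y[k]   = C x[k] + v[k]   (v: measurement noise)
rng = np.random.default_rng(0)

f, zeta, dt = 5.0, 0.02, 0.01                 # one mode: 5 Hz, 2% damping, 100 Hz sampling
w0 = 2.0 * np.pi * f
r = np.exp(-zeta * w0 * dt)                   # magnitude of the discrete eigenvalue
th = w0 * np.sqrt(1.0 - zeta**2) * dt         # phase of the discrete eigenvalue
A = r * np.array([[np.cos(th), -np.sin(th)],  # exact discretisation of the mode:
                  [np.sin(th),  np.cos(th)]]) # eigenvalues r * exp(+/- i th)
C = np.array([[1.0, 0.0]])                    # one sensor observing the first state

x = np.array([1.0, 0.0])
y = np.empty(1000)
for k in range(1000):
    y[k] = (C @ x)[0] + 1e-3 * rng.standard_normal()   # y[k] = C x[k] + v[k]
    x = A @ x + 1e-3 * rng.standard_normal(2)          # x[k+1] = A x[k] + w[k]
```

The record `y` plays the role of the output-only data from which $A$ and $C$ must be recovered.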

The modal parameters of a vibrating system are obtained by applying the eigenvalue decomposition of the transition matrix [14, 15]:
$$A = \Psi \Lambda \Psi^{-1}$$
where $\Lambda = \mathrm{diag}(\lambda_i)$, $i = 1, \ldots, n$, is the diagonal matrix containing the complex eigenvalues and $\Psi$ contains the eigenvectors of $A$ as columns. The eigenfrequencies $f_i$ and damping factors $\zeta_i$ are obtained from the eigenvalues, which occur in complex conjugate pairs:
$$\mu_i = \frac{\ln \lambda_i}{\Delta t}, \qquad f_i = \frac{|\mu_i|}{2\pi}, \qquad \zeta_i = -\frac{\operatorname{Re}(\mu_i)}{|\mu_i|}$$
with $\Delta t$ being the sampling period of the analyzed signals.
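These relations translate directly into a short routine (a sketch; the function name is ours). The check at the end rebuilds a transition matrix from a known 5 Hz mode with 2% damping and recovers those values:

```python
import numpy as np

def modal_parameters(A, dt):
    """Eigenfrequencies (Hz), damping ratios, and eigenvectors of a discrete A."""
    lam, psi = np.linalg.eig(A)
    mu = np.log(lam.astype(complex)) / dt   # continuous eigenvalues mu_i = ln(lambda_i)/dt
    f = np.abs(mu) / (2.0 * np.pi)          # eigenfrequencies
    zeta = -mu.real / np.abs(mu)            # damping ratios
    return f, zeta, psi

# Check on a known mode: f0 = 5 Hz, zeta0 = 0.02, sampled at 100 Hz.
dt, f0, z0 = 0.01, 5.0, 0.02
w0 = 2.0 * np.pi * f0
mu0 = -z0 * w0 + 1j * w0 * np.sqrt(1.0 - z0**2)
lam0 = np.exp(mu0 * dt)
A = np.array([[lam0.real, -lam0.imag],      # real 2x2 matrix with eigenvalues
              [lam0.imag,  lam0.real]])     # lam0 and its conjugate
f_id, z_id, _ = modal_parameters(A, dt)
```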

The mode shapes evaluated at the sensor locations are the columns of the matrix $\Phi$ obtained by multiplying the output matrix $C$ with the matrix of eigenvectors $\Psi$:
$$\Phi = C \Psi$$
We propose four algorithms for the estimation of the transition matrix $A$, in order to identify the eigenfrequencies and damping factors of a vibrating system, and we prove that the four algorithms are equivalent even in the noisy data case.

##### 2.2. Determination of the Transition Matrix by Shifting Properties of the Block Controllability Matrix and by Shifting Properties of the Block Hankel Matrix

Define the future data vector as $y_k^+ = [y_k^T \; y_{k+1}^T \; \cdots \; y_{k+p-1}^T]^T$ and the past data vector as $y_k^- = [y_{k-1}^T \; y_{k-2}^T \; \cdots \; y_{k-p}^T]^T$, where the superscript $T$ denotes the transpose operation. The covariance matrix between the future and the past is given by
$$H = E\big[y_k^+ (y_k^-)^T\big] = \begin{bmatrix} R_1 & R_2 & \cdots & R_p \\ R_2 & R_3 & \cdots & R_{p+1} \\ \vdots & \vdots & \ddots & \vdots \\ R_p & R_{p+1} & \cdots & R_{2p-1} \end{bmatrix}$$
where $E$ denotes the expectation operator. $H$ is the block Hankel matrix formed with the individual theoretical autocovariance matrices $R_i = E[y_{k+i} y_k^T]$, $i = 1, 2, \ldots, 2p-1$. The index $p$ corresponds to the number of block lines or block columns needed to form the block Hankel matrix $H$.

In practice, the autocovariance matrices are estimated from $N$ data points and are computed by
$$\hat{R}_i = \frac{1}{N} \sum_{k=1}^{N-i} y_{k+i}\, y_k^T$$
and with these estimated autocovariance matrices we form the estimated block Hankel matrix $\hat{H}$.
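A compact numpy sketch of these estimates (function names are ours): `y` is an `(N, m)` array of `m` simultaneously measured outputs, and block `(r, c)` of the Hankel matrix holds the estimate of $R_{r+c+1}$:

```python
import numpy as np

def autocov(y, i):
    """Estimate R_i = (1/N) * sum_k y[k+i] y[k]^T from an (N, m) output record."""
    N = y.shape[0]
    return y[i:].T @ y[:N - i] / N

def block_hankel(y, p):
    """Estimated block Hankel matrix with p block rows/columns."""
    return np.block([[autocov(y, r + c + 1) for c in range(p)] for r in range(p)])

# Example: two measured outputs, 500 samples of (here unstructured) data.
rng = np.random.default_rng(1)
y = rng.standard_normal((500, 2))
H = block_hankel(y, 3)
```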

The block Hankel matrix can be written as
$$H = O K$$
By identification we obtain the block observability matrix $O$ and the block controllability matrix $K$:
$$O = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{p-1} \end{bmatrix}, \qquad K = \begin{bmatrix} G & AG & A^2G & \cdots & A^{p-1}G \end{bmatrix}$$
with $G = E[x_{k+1} y_k^T]$ the covariance matrix between the state and the output, so that $R_i = C A^{i-1} G$. The controllability matrix can be written as
$$K = \begin{bmatrix} K_1 & A^{p-1}G \end{bmatrix} = \begin{bmatrix} G & K_2 \end{bmatrix}$$
where the block matrices $K_1$ and $K_2$ are
$$K_1 = \begin{bmatrix} G & AG & \cdots & A^{p-2}G \end{bmatrix}, \qquad K_2 = \begin{bmatrix} AG & A^2G & \cdots & A^{p-1}G \end{bmatrix}$$
that is, $K_1$ and $K_2$ are matrices obtained by deleting, respectively, the last and the first block column of the block controllability matrix $K$. It is easy to show that $K_2 = A K_1$, and the transition matrix obtained by the deleted block column of the controllability matrix method can then be calculated via the pseudoinverse $K_1^+$:
$$A = K_2 K_1^+$$
The eigenvalues of the transition matrix can be used to identify the modal parameters, and we have $\operatorname{eig}(A) = \operatorname{eig}(K_2 K_1^+)$. A second method to determine the transition matrix is obtained by deleting block columns of the block Hankel matrix $H$. Let $H_1$ and $H_2$ be the matrices obtained by deleting, respectively, the first and the last block column of the block Hankel matrix:
$$H_1 = O K_2 = O A K_1, \qquad H_2 = O K_1$$
From these factorizations the transition matrix obtained by the deleted block column of the block Hankel matrix method can be calculated via pseudoinverses of the observability and controllability matrices:
$$A = O^+ H_1 K_1^+$$
and the nonzero eigenvalues of the matrix $H_2^+ H_1 = K_1^+ A K_1$ coincide with the eigenvalues of $A$. Firstly, the eigenvalues of the transition matrix obtained by shifting properties of the block controllability matrix are the same as those obtained by shifting properties of block columns of the block Hankel matrix, and secondly the eigenvalues of the transition matrix are also the (nonzero) eigenvalues of the matrix $H_2^+ H_1$. So, in practical conditions it is sufficient to form the block Hankel matrix $H$, to extract the matrices $H_1$ and $H_2$ by deleting a block column from $H$, and to compute the eigenvalues (and eventually the eigenvectors) of the matrix $H_2^+ H_1$.
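The equivalence of the two column-shift routes can be checked numerically on an exact, noise-free factorisation $H = OK$ (the system matrices below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
m, p = 1, 4
lam = 0.95 * np.exp(1j * 0.3)
A = np.array([[lam.real, -lam.imag],           # transition matrix with
              [lam.imag,  lam.real]])          # eigenvalues 0.95 e^{+/- 0.3i}
C = rng.standard_normal((m, 2))
G = rng.standard_normal((2, m))

O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(p)])   # observability
K = np.hstack([np.linalg.matrix_power(A, j) @ G for j in range(p)])   # controllability
H = O @ K                                                             # block Hankel

# Method 1: shifted block columns of K:  K2 = A K1  =>  A1 = K2 K1^+.
K1, K2 = K[:, :-m], K[:, m:]
A1 = K2 @ np.linalg.pinv(K1)

# Method 2: shifted block columns of H:  H1 = O A K1 and H2 = O K1;
# the nonzero eigenvalues of H2^+ H1 are the eigenvalues of A.
H1, H2 = H[:, m:], H[:, :-m]
eig_cols = np.linalg.eigvals(np.linalg.pinv(H2) @ H1)
```

Both routes return the eigenvalue pair $0.95\,e^{\pm 0.3i}$; the Hankel route adds eigenvalues that are numerically zero.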

In the presence of noise, both algorithms use the singular value decomposition (SVD) of the block Hankel matrix to get the same performances. By retaining the $n$ dominant singular values and the corresponding singular vectors, we get
$$\hat{H} \approx U_1 S_1 V_1^T$$
with $U_1$, $V_1$ being matrices of singular vectors and $S_1$ being the diagonal matrix of the dominant singular values. By identification with $H = OK$ we choose $O = U_1 S_1^{1/2}$ and $K = S_1^{1/2} V_1^T$, and we perform the following matrix decomposition:
$$A = K_2 K_1^+$$
where $K_1$ and $K_2$ are matrices formed, respectively, with the first $p-1$ and the last $p-1$ block columns of the matrix $K = S_1^{1/2} V_1^T$. The eigenvalues of the transition matrix are $\operatorname{eig}(A) = \operatorname{eig}(K_2 K_1^+)$, and this relation can also be obtained by shifting properties of the block Hankel matrix. Indeed, we have
$$H_2^+ H_1 = (O K_1)^+ (O A K_1) = K_1^+ A K_1$$
whose nonzero eigenvalues are the eigenvalues of $A$. We have proved that, in the state space approach, the modal parameters obtained by shifting properties of the block Hankel matrix are the same as those obtained by shifting properties of the block controllability matrix. In the next section we analyze the eigenvalues of the transition matrix estimated by shifting properties of the block observability matrix.
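The SVD-based column-shift estimator can be sketched as follows (function name is ours); the check rebuilds an exact rank-2 Hankel matrix from a known one-mode system and recovers its eigenvalue magnitudes:

```python
import numpy as np

def transition_from_svd_cols(H, m, n):
    """Sketch: transition matrix from the truncated SVD of the Hankel matrix.

    H ~ U1 S1 V1^T; take K = S1^(1/2) V1^T as the estimated controllability
    matrix, then A = K2 K1^+ with K1 (resp. K2) obtained from K by deleting
    its last (resp. first) block column.  m outputs per block, model order n.
    """
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    K = np.diag(np.sqrt(s[:n])) @ Vt[:n]
    return K[:, m:] @ np.linalg.pinv(K[:, :-m])

# Check on an exact Hankel matrix of a known one-mode system (illustrative).
rng = np.random.default_rng(2)
lam = 0.9 * np.exp(1j * 0.4)
A_true = np.array([[lam.real, -lam.imag], [lam.imag, lam.real]])
C = rng.standard_normal((1, 2))
G = rng.standard_normal((2, 1))
O = np.vstack([C @ np.linalg.matrix_power(A_true, i) for i in range(5)])
K = np.hstack([np.linalg.matrix_power(A_true, j) @ G for j in range(5)])
A_est = transition_from_svd_cols(O @ K, m=1, n=2)
```

`A_est` is similar (in the matrix sense) to `A_true`, so its eigenvalues, and hence the modal parameters, are identical.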

##### 2.3. Determination of the Transition Matrix by Shifting Properties of the Block Observability Matrix and by Shifting Properties of the Block Hankel Matrix

The observability matrix has the form
$$O = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{p-1} \end{bmatrix} = \begin{bmatrix} O_1 \\ CA^{p-1} \end{bmatrix} = \begin{bmatrix} C \\ O_2 \end{bmatrix}$$
where
$$O_1 = \begin{bmatrix} C \\ CA \\ \vdots \\ CA^{p-2} \end{bmatrix}, \qquad O_2 = \begin{bmatrix} CA \\ CA^2 \\ \vdots \\ CA^{p-1} \end{bmatrix}$$
are matrices obtained by deleting, respectively, the last and the first block row of the block observability matrix $O$. It is easy to show that $O_2 = O_1 A$, and the transition matrix obtained by the deleted block row of the observability matrix method is as follows:
$$A = O_1^+ O_2$$
The eigenvalues of the transition matrix can be used to identify the modal parameters, and we have $\operatorname{eig}(A) = \operatorname{eig}(O_1^+ O_2)$. Another method to determine the transition matrix is obtained by deleting block rows of the block Hankel matrix. Let $H_1'$ and $H_2'$ be the matrices obtained by deleting, respectively, the first and the last block row of the block Hankel matrix:
$$H_1' = O_2 K = O_1 A K, \qquad H_2' = O_1 K$$
From these factorizations the transition matrix obtained by the deleted block row of the block Hankel matrix method is
$$A = O_1^+ H_1' K^+$$
and the nonzero eigenvalues of the matrix $H_1' H_2'^+ = O_1 A O_1^+$ are the eigenvalues of $A$. The eigenvalues of the transition matrix obtained by shifting properties of the block observability matrix are therefore the same as those obtained by shifting properties of block rows of the block Hankel matrix.
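The row-shift equivalence admits the same kind of numerical check on an exact factorisation $H = OK$ (illustrative system values):

```python
import numpy as np

rng = np.random.default_rng(3)
m, p = 1, 4
lam = 0.9 * np.exp(1j * 0.5)
A = np.array([[lam.real, -lam.imag], [lam.imag, lam.real]])
C = rng.standard_normal((m, 2))
G = rng.standard_normal((2, m))
O = np.vstack([C @ np.linalg.matrix_power(A, i) for i in range(p)])
K = np.hstack([np.linalg.matrix_power(A, j) @ G for j in range(p)])
H = O @ K

# Method 3: shifted block rows of O:  O2 = O1 A  =>  A_obs = O1^+ O2.
O1, O2 = O[:-m], O[m:]
A_obs = np.linalg.pinv(O1) @ O2

# Method 4: shifted block rows of H: delete the first (H1) and last (H2)
# block row; the nonzero eigenvalues of H1 H2^+ are the eigenvalues of A.
H1, H2 = H[m:], H[:-m]
eig_rows = np.linalg.eigvals(H1 @ np.linalg.pinv(H2))
```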

In the presence of noise, we use the singular value decomposition of the block Hankel matrix to show that these two methods give the same performances. From the truncated SVD $\hat{H} \approx U_1 S_1 V_1^T$ we choose $O = U_1 S_1^{1/2}$ and we consider the following matrix decomposition:
$$A = O_1^+ O_2$$
where $O_1$ and $O_2$ are matrices formed, respectively, with the first $p-1$ and the last $p-1$ block rows of the matrix $O = U_1 S_1^{1/2}$. The eigenvalues of the transition matrix are $\operatorname{eig}(A) = \operatorname{eig}(O_1^+ O_2)$, and this relation can also be obtained by shifting properties of the block Hankel matrix. Indeed, we have
$$H_1' H_2'^+ = (O_1 A K)(O_1 K)^+ = O_1 A O_1^+$$
whose nonzero eigenvalues are the eigenvalues of $A$. We have shown that, in the state space approach, the modal parameters obtained by shifting properties of the block Hankel matrix are the same as those obtained by shifting properties of the block observability matrix. Finally, we note that the output or observation matrix $C$ can be obtained as the first block row of the matrix $O = U_1 S_1^{1/2}$.
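Gathering these pieces, the observability route from Hankel matrix to modal parameters can be sketched in one routine (function name is ours); the check uses an exact Hankel matrix of one mode at 3 Hz with 5% damping:

```python
import numpy as np

def modal_from_svd_rows(H, m, n, dt):
    """Sketch of the observability route: O = U1 S1^(1/2) from the truncated
    SVD of H; A = O1^+ O2 from the shifted block rows of O; C is the first
    block row of O; mode shapes at the sensors are C Psi."""
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    O = U[:, :n] @ np.diag(np.sqrt(s[:n]))
    A = np.linalg.pinv(O[:-m]) @ O[m:]
    Cmat = O[:m]
    lam, psi = np.linalg.eig(A)
    mu = np.log(lam.astype(complex)) / dt
    return np.abs(mu) / (2.0 * np.pi), -mu.real / np.abs(mu), Cmat @ psi

# Check on an exact Hankel matrix built from R_i = C A^(i-1) G (illustrative).
rng = np.random.default_rng(5)
dt, f0, z0 = 0.01, 3.0, 0.05
w0 = 2.0 * np.pi * f0
r, th = np.exp(-z0 * w0 * dt), w0 * np.sqrt(1.0 - z0**2) * dt
A_true = r * np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
C = rng.standard_normal((1, 2))
G = rng.standard_normal((2, 1))
H = np.block([[C @ np.linalg.matrix_power(A_true, i + j) @ G for j in range(5)]
              for i in range(5)])
f_id, z_id, shapes = modal_from_svd_rows(H, m=1, n=2, dt=dt)
```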

With estimates of the transition matrix $A$ and the observation matrix $C$ in hand, we compute the eigenvalues and eigenvectors of $A$ and identify the eigenfrequencies, damping factors, and mode shapes of the vibrating system from the eigenvalue relations given in Section 2.1. However, all subspace modal identification algorithms face a serious problem of model order determination. When extracting physical or structural modes, subspace algorithms always generate spurious or computational modes to account for unwanted effects such as noise, leakage, residuals, and nonlinearity. For these reasons, the assumed number of modes, or model order, is incremented over a wide range of values and we plot the stability diagram. The stability diagram tracks the estimates of eigenfrequencies and damping factors as a function of model order. As the model order is increased, more and more modal frequencies and damping ratios are estimated; the estimates of the physical modal parameters stabilize and are selected using a criterion based on the modal coherence of measured modes and identified modes [9]. Using this criterion we detect and remove the spurious modes. A numerical example and two experimental laboratory tests are now presented to identify eigenfrequencies and damping factors of vibrating systems.
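The order sweep behind a stability diagram can be sketched as follows (function name is ours; plotting and the modal coherence test are omitted). The check runs the sweep at the true order on an exact one-mode Hankel matrix:

```python
import numpy as np

def stabilization_data(H, m, dt, orders):
    """Sketch: eigenfrequencies identified at each trial model order n.

    Plotting these against n gives the stability diagram; frequencies that
    repeat (with stable damping) across orders are kept as physical modes,
    and the rest are discarded as spurious.
    """
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    diagram = {}
    for n in orders:
        O = U[:, :n] @ np.diag(np.sqrt(s[:n]))       # observability estimate
        A = np.linalg.pinv(O[:-m]) @ O[m:]           # row-shift transition matrix
        mu = np.log(np.linalg.eigvals(A).astype(complex)) / dt
        diagram[n] = np.sort(np.abs(mu) / (2.0 * np.pi))
    return diagram

# Illustrative check: exact Hankel matrix of a single 3 Hz mode, 2% damping.
rng = np.random.default_rng(6)
dt = 0.01
w0 = 2.0 * np.pi * 3.0
r, th = np.exp(-0.02 * w0 * dt), w0 * np.sqrt(1.0 - 0.02**2) * dt
A = r * np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])
C, G = rng.standard_normal((1, 2)), rng.standard_normal((2, 1))
H = np.block([[C @ np.linalg.matrix_power(A, i + j) @ G for j in range(6)]
              for i in range(6)])
diagram = stabilization_data(H, m=1, dt=dt, orders=[2])
```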

#### 3. Applications

##### 3.1. Simulated Data

To prove the effectiveness of the identification procedure based on the subspace analysis, we consider a two-DOF system with very closely spaced modes, defined by its two eigenfrequencies and two damping ratios. Figure 1 shows the free response of the system, to which a Gaussian white noise has been added: the generated data were corrupted by a random noise. The sampling frequency is 100 Hz and 300 time samples are used in the simulation. We convert this time response to the frequency domain by taking the discrete Fourier transform of the noisy signal. It is impossible to identify the two frequency components using the FFT, as shown in Figure 2, where the power spectral density has been plotted. In our identification procedure, we plot the stabilization diagram of eigenfrequencies and damping factors. Figure 3 shows stabilization diagrams using shifting properties of the block controllability matrix with the modal coherence indicator: spurious modes have been eliminated and only physical modes are present. Figure 4 shows stabilization diagrams using shifting properties of the block observability matrix with the modal coherence indicator. Our procedure can separate closely spaced modes, and the identified eigenfrequencies and damping factors, averaged over the orders of the stabilization diagram, closely match the exact values. A very satisfactory estimation of eigenfrequencies and damping factors has thus been obtained using simulated data. These values have been estimated using the observability identification procedure; however, very similar results are obtained if we use the controllability identification procedure. Two experimental tests in laboratory are presented in the following sections.
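The separation of closely spaced modes can be reproduced in a small numerical sketch. The frequencies, damping ratios, and system matrices below are illustrative stand-ins (the paper's exact simulation values are not reproduced here), and the covariances are taken at their theoretical values $R_i = C A^{i-1} G$ so that the example isolates the algebra of the method:

```python
import numpy as np

def mode_block(f, zeta, dt):
    """2x2 discrete block with eigenvalues exp(mu*dt), mu from (f, zeta)."""
    w0 = 2.0 * np.pi * f
    r = np.exp(-zeta * w0 * dt)
    th = w0 * np.sqrt(1.0 - zeta**2) * dt
    return r * np.array([[np.cos(th), -np.sin(th)], [np.sin(th), np.cos(th)]])

rng = np.random.default_rng(4)
dt, p, n, m = 0.01, 40, 4, 1                 # 100 Hz sampling, order 4, 1 output
A = np.zeros((4, 4))
A[:2, :2] = mode_block(2.00, 0.01, dt)       # two very closely spaced modes:
A[2:, 2:] = mode_block(2.05, 0.01, dt)       # 2.00 Hz and 2.05 Hz, 1% damping
C = rng.standard_normal((m, 4))
G = rng.standard_normal((4, m))

# Block Hankel matrix from the theoretical autocovariances R_i = C A^(i-1) G.
Apow = [np.linalg.matrix_power(A, k) for k in range(2 * p)]
H = np.block([[C @ Apow[r + c] @ G for c in range(p)] for r in range(p)])

# Truncated SVD, controllability column shift, and modal parameters.
U, s, Vt = np.linalg.svd(H, full_matrices=False)
K = np.diag(np.sqrt(s[:n])) @ Vt[:n]
A_est = K[:, m:] @ np.linalg.pinv(K[:, :-m])
mu = np.log(np.linalg.eigvals(A_est).astype(complex)) / dt
freqs = np.sort(np.abs(mu) / (2.0 * np.pi))
zetas = -mu.real / np.abs(mu)
```

The subspace estimate separates the two modes (0.05 Hz apart), which a 3-second FFT with its ~0.33 Hz resolution could not resolve.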