Journal of Structures

Volume 2015 (2015), Article ID 236475, 9 pages

http://dx.doi.org/10.1155/2015/236475

## An Improved Bayesian Structural Identification Using the First Two Derivatives of Log-Likelihood Measure

Department of System Design Engineering, Keio University, 3-14-1 Hiyoshi, Kohoku-ku, Yokohama 223-8522, Japan

Received 28 November 2014; Accepted 3 March 2015

Academic Editor: Elio Sacco

Copyright © 2015 Jin Zhou et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

The posterior density of structural parameters conditioned on the measurement is obtained by a differential evolution adaptive Metropolis algorithm (DREAM). The surface of the formal log-likelihood measure is studied, considering the uncertainty of the measurement error, to illustrate the problem of equifinality. To overcome this problem, the first two derivatives of the log-likelihood measure are used to formulate a new informal likelihood measure that improves the accuracy of the estimator. Moreover, the proposed measure also reduces the standard deviation (uncertainty range) of the posterior samples. The benefit of the proposed approach is demonstrated by simulations identifying structural parameters from limited output data and noise-polluted measurements.

#### 1. Introduction

Recent years have witnessed increasing interest in Bayesian estimation of structural parametric systems as a way of quantifying inevitable uncertainties, such as measurement error and structural model error, as reviewed by Simoen et al. [1]. In particular, Beck and Au [2] used Laplace’s method of asymptotic approximation to obtain a posterior PDF with a small-dimensional parameter space. To solve higher dimensional problems, Muto and Beck [3] developed an adaptive Markov chain Monte Carlo (MCMC) simulation for Bayesian model updating. Gibbs sampling and transitional Markov chain Monte Carlo (TMCMC) were used by Ching and Chen [4] to obtain the posterior PDF of parameters. Cheung and Beck [5] used a hybrid Monte Carlo method, known as the Hamiltonian Markov chain, to solve higher dimensional model updating problems. Huhtala and Bossuyt [6] explored a Bayesian inference framework to solve the inverse problem of locating structural damage. An et al. [7] proposed statistical model parameter estimation using Bayesian inference when parameters are correlated and the observed data are noisy. Green [8] used a novel MCMC algorithm, data annealing, which is similar to simulated annealing, for the Bayesian identification of a nonlinear dynamic system.

The difficulty of Bayesian estimation lies in the efficiency with which the posterior samples in the Markov chain converge to the acceptable model set. Moreover, because of the noise-corrupted measurement, the surface of the prediction error lies on a hypersurface of a multidimensional parametric space. This causes the surface of the probability density for the posterior sequences to have multiple regions of attraction and numerous local optima, and it thus inevitably yields a biased estimator (whether the maximum likelihood, ML, estimator or the maximum a posteriori, MAP, estimator). This problem is known as “equifinality” [9–11]. The surfaces of the prediction error, using formal likelihood measures (the maximum log-likelihood, ML), are studied. From the surfaces of the fitness measures, it can be concluded that the formal likelihood measure underestimates or overestimates the uncertainty intervals of the posterior samples, because several candidate models in the neighborhood of the estimator can also give high values of the likelihood.

In this paper, the bias between the ML/MAP estimator and the actual value is deduced by Taylor expansion. It is found that the gradient and Hessian matrix of the likelihood measure can bridge the biased estimator and the actual value; they are therefore used to improve the accuracy of the posterior samples. The parameter estimation problem is posed as a two-step strategy. In the first step, the MAP/ML estimator is obtained from the formal Bayesian likelihood measures using the differential evolution adaptive Metropolis (DREAM) algorithm. In the second step, a new fitness measure is proposed, which can be seen as an informal likelihood measure under the framework of generalized likelihood uncertainty estimation (GLUE) [12–14]. Numerical examples of a linear structural system are presented, with which the effectiveness and efficiency of the proposed method are investigated.

#### 2. Problem Statement

##### 2.1. Least Squares (LS) Estimator for the Inverse Problem

Let $y(t)$ denote the measured response at each time interval $t$ ($t = 1, 2, \ldots, N$) and let $\hat{y}(t \mid \theta)$ denote the output of the candidate models. The difference between the measured response and the model outputs is defined as the residual error, $e(t) = y(t) - \hat{y}(t \mid \theta)$, where $e(t) \in \mathbb{R}^{m}$ and $m$ is the number of outputs. The common approach to the inverse problem is to attempt to force the residual vector as close to zero as possible by tuning the model parameter vector, $\theta$. Thus, the fitness measure can be defined as follows:

$$J(\theta) = \sum_{t=1}^{N} \left\| y(t) - \hat{y}(t \mid \theta) \right\|^{2}. \tag{1}$$

This is an $n$-dimensional optimization problem that maximizes the likelihood measure of the sum of squared residuals, SSR (equivalent to minimizing the LS measure in (1)). However, such a measure provides only a point estimate of the optimal value of $\theta$. To quantify the uncertainty of the estimator, it is desirable to estimate the underlying posterior PDF of the parameters, $p(\theta \mid D)$, which falls within the framework of Bayesian probabilistic estimation.
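The LS formulation above can be sketched in a few lines. This is a minimal illustration, not the paper's structural model: the exponential-decay model, the parameter values, and the candidate list are all hypothetical.

```python
import numpy as np

def ls_fitness(theta, t, y_meas, model):
    """Sum-of-squared-residuals (SSR) fitness for a candidate parameter.

    theta  : candidate model parameter(s)
    y_meas : measured responses at times t
    model  : callable returning the predicted response y_hat(t | theta)
    """
    y_hat = model(t, theta)        # candidate model output
    residual = y_meas - y_hat      # e(t) = y(t) - y_hat(t | theta)
    return np.sum(residual ** 2)   # LS measure to be minimized

# toy example: recover the rate of an exponential decay (noise-free)
t = np.linspace(0.0, 5.0, 200)
true_theta = 1.5
y_meas = np.exp(-true_theta * t)

model = lambda t, th: np.exp(-th * t)
candidates = [1.0, 1.5, 2.0]
best = min(candidates, key=lambda th: ls_fitness(th, t, y_meas, model))
```

As the text notes, this yields only the point estimate `best`; it carries no information about the uncertainty of the estimator.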

##### 2.2. Bayes Estimate Using Formal Log-Likelihood (LL) Measures

In the Bayesian estimation framework, the model set, $M$, is a class of probabilistic models, each of which predicts the response of the actual system. The identification problem is to infer the plausibility of each candidate model with a posterior density conditioned on the measured data, $D$; it is not a quest for the true structural parameters. $\theta$ is a stochastic parameter vector defining each possible model in the model set. The model set, $M$, is defined by the random parameters $\theta \in \mathbb{R}^{n}$ ($\theta$ being random variables in the probability space $(\Omega, \mathcal{F}, P)$), where $n$ is the number of parameters for the model and $N_s$ is the number of stochastic samples. The initial plausibility of each model parameterized by $\theta$ is defined as a prior density function, $p(\theta \mid M)$. The updated plausibility of the I/O model using Bayes’ theorem is as follows:

$$p(\theta \mid D, M) = \frac{p(D \mid \theta, M)\, p(\theta \mid M)}{p(D \mid M)}. \tag{2}$$
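As a minimal numerical illustration of (2), the posterior of a single parameter can be evaluated on a discrete grid, where the evidence becomes a sum of prior times likelihood over the grid. All values here (grid range, observation, noise scale) are hypothetical:

```python
import numpy as np

# discrete grid over a single stiffness-like parameter (hypothetical range)
theta = np.linspace(500.0, 1500.0, 1001)
prior = np.ones_like(theta) / theta.size           # uniform prior p(theta | M)

# Gaussian likelihood of one noisy "observation" of theta (illustrative)
obs, sigma = 1000.0, 50.0
likelihood = np.exp(-0.5 * ((obs - theta) / sigma) ** 2)

# Bayes' theorem: posterior = likelihood * prior / evidence
evidence = np.sum(likelihood * prior)              # p(D | M), here a discrete sum
posterior = likelihood * prior / evidence

theta_map = theta[np.argmax(posterior)]            # MAP estimate
```

On a one-dimensional grid the evidence is trivial to compute; in higher dimensions it is the intractable integral that motivates the Metropolis-Hastings algorithm discussed next.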

$p(D \mid \theta, M)$ is the likelihood measure. If the measurement error is considered to obey a Gaussian distribution with a constant variance, $\sigma^{2}$, the posterior PDF in (2) is thus as follows:

$$p(\theta \mid D, M) = \frac{p(\theta \mid M)}{p(D \mid M)} \prod_{t=1}^{N} \frac{1}{\sqrt{2\pi}\,\sigma} \exp\!\left( -\frac{\left\| y(t) - \hat{y}(t \mid \theta) \right\|^{2}}{2\sigma^{2}} \right), \tag{3}$$

where $p(D \mid M)$ is the evidence of the model class, which is a high-dimensional integral. The difficulty in estimating the posterior PDF is none other than approximating the model evidence; overcoming this challenge is the purpose of the Metropolis-Hastings (MH) algorithm. For simplicity, (2) is rewritten as $p(\theta \mid D) \propto p(D \mid \theta)\, p(\theta)$. The MH algorithm generates the posterior PDF in four steps. Start with an initial sample, $\theta^{0}$, and compute its likelihood measure, $p(D \mid \theta^{0})$. The updated candidate, $\theta^{*}$, is produced by the jumping distribution, $q(\theta^{*} \mid \theta^{t-1})$, which is the probability of returning a value $\theta^{*}$ given a previous value $\theta^{t-1}$. The restriction on the jumping is that the transition probability is symmetric, $q(\theta^{*} \mid \theta^{t-1}) = q(\theta^{t-1} \mid \theta^{*})$. The acceptance ratio between the updated candidate ($\theta^{*}$) and the current posterior sample ($\theta^{t-1}$) is $\alpha = p(\theta^{*} \mid D)/p(\theta^{t-1} \mid D)$. If $\alpha \geq 1$, the candidate sample is accepted, $\theta^{t} = \theta^{*}$; if the jump decreases the density ($\alpha < 1$), the candidate is accepted only with probability $\alpha$, and otherwise the update is rejected and the current sample is kept, $\theta^{t} = \theta^{t-1}$. The acceptance ratio is as follows:

$$\alpha = \min\!\left( 1,\ \frac{p(D \mid \theta^{*})\, p(\theta^{*})}{p(D \mid \theta^{t-1})\, p(\theta^{t-1})} \right). \tag{4}$$

It is clear that the advantage of the MH algorithm lies in the fact that, when computing the acceptance ratio, there is no need to obtain the model evidence, since that constant cancels out. The transition of samples generates a Markov chain ($\theta^{0}, \theta^{1}, \ldots, \theta^{t}$). Following a burn-in period, the Markov chain approaches its stationary distribution, and the samples after the burn-in period converge to the posterior PDF, $p(\theta \mid D, M)$, as in (2). From (4), it can be found that the Bayesian estimate relies on the likelihood measure, $p(D \mid \theta, M)$. It is more convenient to use the logarithm of the likelihood measure rather than the likelihood function itself:

$$\ln p(D \mid \theta, M) = -\frac{N}{2} \ln\!\left( 2\pi\sigma^{2} \right) - \frac{1}{2\sigma^{2}} \sum_{t=1}^{N} \left\| y(t) - \hat{y}(t \mid \theta) \right\|^{2}. \tag{5}$$
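The MH steps above can be sketched with a random-walk sampler and the Gaussian log-likelihood of (5). This is a generic illustration on a hypothetical scalar problem (constant-mean data with known noise), not the paper's DREAM implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def log_likelihood(theta, y_meas, model, sigma):
    """Gaussian log-likelihood: -N/2 ln(2 pi sigma^2) - SSR / (2 sigma^2)."""
    residual = y_meas - model(theta)
    n = residual.size
    return (-0.5 * n * np.log(2 * np.pi * sigma**2)
            - np.sum(residual**2) / (2 * sigma**2))

def metropolis(log_post, theta0, n_steps, step=0.1):
    """Random-walk Metropolis with a symmetric Gaussian jumping distribution."""
    chain = [theta0]
    lp = log_post(theta0)
    for _ in range(n_steps):
        cand = chain[-1] + step * rng.standard_normal()   # symmetric proposal
        lp_cand = log_post(cand)
        # accept with probability min(1, ratio); the evidence cancels out
        if np.log(rng.random()) < lp_cand - lp:
            chain.append(cand)
            lp = lp_cand
        else:
            chain.append(chain[-1])                       # reject: keep sample
    return np.array(chain)

# toy problem: scalar parameter with data y = theta + noise (theta_true = 2.0)
y = 2.0 + 0.1 * rng.standard_normal(100)
model = lambda th: np.full(100, th)
chain = metropolis(lambda th: log_likelihood(th, y, model, 0.1), 0.0, 5000)
posterior = chain[1000:]   # discard the burn-in period
```

With a flat prior the acceptance ratio reduces to the likelihood ratio, so only the log-likelihood difference is needed at each step, exactly as (4) and (5) suggest.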

Either the log-likelihood measure in (5) or the least squares measure in (1) obeys the rule of “goodness of fit”: only models with high probabilistic values of the likelihood are accepted in the MH method.

##### 2.3. The Surface of the Likelihood Measures

To illustrate the problem of “equifinality,” the surface of the commonly used likelihood measure in (5) is simulated in identifying the stiffness parameters of a 2-DOF linear dynamic system. The state space of the system is written as follows:

$$\begin{bmatrix} \dot{x} \\ \ddot{x} \end{bmatrix} = \begin{bmatrix} 0 & I \\ -M^{-1}K & -M^{-1}C \end{bmatrix} \begin{bmatrix} x \\ \dot{x} \end{bmatrix} + \begin{bmatrix} 0 \\ M^{-1}\Gamma \end{bmatrix} u, \tag{6}$$

where $M$, $C$, and $K$ are the mass, damping, and stiffness matrices, $I$ is the identity matrix, and $\Gamma$ is the position vector. $x$ and $\dot{x}$ are the state-space vectors, respectively representing the displacement and velocity, and $u$ is the input of the system. The system output is the acceleration, which is assumed to be contaminated by Gaussian white noise, $v$. The measured output vector is thus

$$y = \ddot{x} + v = -M^{-1}Kx - M^{-1}C\dot{x} + M^{-1}\Gamma u + v. \tag{7}$$

The mass and stiffness of each DOF are defined as 100 kg and 1000 N/m, respectively. Equation (7) includes Rayleigh damping (Mita [15]), where the damping ratios of the first two modes are set to 5%:

$$C = a_{0}M + a_{1}K, \qquad \zeta_{i} = \frac{a_{0}}{2\omega_{i}} + \frac{a_{1}\omega_{i}}{2}, \quad i = 1, 2. \tag{8}$$
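The Rayleigh damping construction can be sketched numerically with the stated masses and stiffnesses. The shear-building layout of $K$ and the state-space assembly are modeling assumptions for illustration:

```python
import numpy as np

# 2-DOF shear model: m_i = 100 kg, k_i = 1000 N/m (values from the text);
# the tridiagonal stiffness layout assumes springs in series (assumption)
m, k = 100.0, 1000.0
M = np.diag([m, m])
K = np.array([[2 * k, -k],
              [-k,     k]])

# undamped natural frequencies from the eigenproblem K phi = w^2 M phi
w2 = np.sort(np.linalg.eigvals(np.linalg.solve(M, K)).real)
w = np.sqrt(w2)

# Rayleigh damping C = a0 M + a1 K with 5% damping in the first two modes:
# zeta_i = a0 / (2 w_i) + a1 w_i / 2  ->  solve a 2x2 system for (a0, a1)
zeta = 0.05
A = np.array([[1 / (2 * w[0]), w[0] / 2],
              [1 / (2 * w[1]), w[1] / 2]])
a0, a1 = np.linalg.solve(A, np.array([zeta, zeta]))
C = a0 * M + a1 * K

# continuous-time state matrix of (6): x_state = [x; x_dot]
A_c = np.block([[np.zeros((2, 2)), np.eye(2)],
                [-np.linalg.solve(M, K), -np.linalg.solve(M, C)]])
```

Solving the small linear system for $(a_0, a_1)$ reproduces the target 5% damping in both modes by construction.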

The parametric domain is meshed by 5% deviations of the true value. The output acceleration (acc.) with different noise levels in (7) was used in the simulation, in which the noise level (nl.) was defined as the ratio of the standard deviation of the noise to that of the acceleration. The contour plots of the likelihood measure in (5) for the noise-free and different noise-level scenarios are exhibited in Figure 1.
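The surface computation can be sketched as follows. This is a toy stand-in rather than the paper's 2-DOF simulation: the scalar response function and seed are hypothetical, but the 5% parameter mesh, the noise-level definition, and the Gaussian log-likelihood of (5) follow the text. Because the toy response is symmetric in the two stiffness-like parameters, the resulting surface has (deliberately) more than one region of high likelihood, mimicking equifinality:

```python
import numpy as np

rng = np.random.default_rng(1)

# synthetic "measurement": a scalar response y(t; k1, k2), polluted by noise
t = np.linspace(0.0, 1.0, 500)
def response(k1, k2):
    # hypothetical response standing in for the measured acceleration
    return np.sin(np.sqrt(k1 / 100.0) * t) + np.sin(np.sqrt(k2 / 100.0) * t)

y_clean = response(1000.0, 1000.0)
nl = 0.10                                   # noise level: sigma_noise / sigma_acc
sigma = nl * np.std(y_clean)
y_meas = y_clean + sigma * rng.standard_normal(t.size)

# mesh the parametric domain in 5% steps of the true value (80% .. 120%)
k_grid = 1000.0 * (1.0 + 0.05 * np.arange(-4, 5))
LL = np.empty((k_grid.size, k_grid.size))
for i, k1 in enumerate(k_grid):
    for j, k2 in enumerate(k_grid):
        r = y_meas - response(k1, k2)       # residual for this candidate
        LL[i, j] = (-0.5 * t.size * np.log(2 * np.pi * sigma**2)
                    - np.sum(r**2) / (2 * sigma**2))
```

A contour plot of `LL` over `k_grid` x `k_grid` is the analogue of the surfaces shown in Figure 1; raising `nl` flattens the peak and widens the high-likelihood region.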