#### Abstract

We present and analyze a new hybrid stochastic finite element method for solving eigenmodes of structures with random geometry and random elastic modulus. The fundamental assumption is that the smallest eigenpair is well defined over the whole stochastic parameter space. The geometric uncertainty is resolved using collocation, and the random material model using the Galerkin method at each collocation point. The response statistics, the expectation and variance of the smallest eigenvalue, are computed in numerical experiments. The hybrid approach is superior to alternatives in practical cases where the number of random parameters used to describe the geometric uncertainty is much smaller than the number used to describe the material models.

#### 1. Introduction

In standard engineering models many physical quantities, such as material parameters, are taken to be constant even though their statistical nature is well understood. Similarly, assuming geometric constants, such as the thickness of a structure, is not realistic due to manufacturing imperfections. In a detailed report on a state-of-the-art verification and validation process comparing modern simulations with a set of experiments performed at the Oak Ridge National Laboratory in the early 1970s, Szabó and Muntges report discrepancies of over 20% in some quantities of interest [1]. These discrepancies are attributed to machining imperfections not accounted for in the computations. Also, in important nonlinear problems such as buckling of a shell, it is known that variation between manufactured specimens has a profound effect on the actual performance [2]. This suggests that a stochastic dimension should be added to the models.

The modern era of uncertainty quantification starts with the works of Babuška et al. [3, 4] and the ETH group led by Schwab and Gittelson (e.g., [5]), with provably faster convergence rates than those of standard Monte Carlo methods. Although the so-called stochastic finite elements had been studied for a relatively long time before (the classic reference is Ghanem and Spanos [6]), their application was until then limited to highly specific cases.

The solution methods can, broadly speaking, be divided into *intrusive* and *nonintrusive* ones. The same division applies to stochastic eigenvalue problems (SEVPs) as well. SEVPs have attracted a lot of attention recently and various algorithms have been suggested for computing approximate eigenpairs [7–10], especially the power iteration [11] and the inverse iteration [12].

In this paper our focus is on the effects of material models and manufacturing imperfections of a geometric nature. It should be noted that in the context of this paper it is assumed that the problems are positive definite and the eigenpair of interest is the ground state, that is, the one with the smallest eigenvalue, which, in theory, can be a double eigenvalue. Our experimental setup is nonsymmetric and, thus, a spectral gap exists and the inverse iteration converges to the desired eigenmode.

In stochastic eigenvalue problems one must address two central issues that do not arise in stochastic source problems: first, the eigenmodes are defined only up to a sign, and, second, the eigenmodes must be normalized over the whole parameter space; that is, every realization must be normalized in the same way. Here our quantity of interest is the eigenvalue and, therefore, we do not necessarily have to fix the signs. The normalization is handled by solving a nonlinear system of equations as in [12].

The main result of this paper is the new hybrid algorithm which combines a nonintrusive outer loop (collocation) with an intrusive inner one (Galerkin). The randomness in the geometry is resolved using collocation and that of the materials with Galerkin *at every collocation point*. The model problem is an idealized engine bed vibration problem, where a 2D hollow square is clamped from one side (see Figure 1). The inner cavity is assumed to be of fixed shape but random orientation, and Young's modulus is taken to be random in the normal direction into the body. The choice of the geometric uncertainty models the case of sand casting with moulds of fixed shape. The hybrid approach combines the geometric flexibility of collocation with the much faster computation times of the Galerkin approach used in the inner loop and, thus, combines the best of both worlds.

**Figure 1:** (a) Nominal domain; (b) perturbed domain.

The rest of the paper is organized as follows: first the abstract problem is introduced in Section 2. The representation of the random input is given in Section 3. Higher order finite elements are described in Section 4. Section 5 is central to our discussion; both solution methods, collocation and spectral inverse iteration, are presented. Numerical experiments are discussed in Section 6 and finally conclusions are drawn in Section 7.

#### 2. Problem Statement

We start by presenting the deterministic system of Navier’s equations of elasticity and the corresponding weak form. We then extend this to the stochastic setting in order to cover the case of uncertain domain and modulus of elasticity. We aim to compute statistics of the smallest eigenvalue of the resulting stochastic system. The geometry of the computational domain is illustrated in Figure 1.

##### 2.1. Deterministic Formulation

We use the Navier equations of elasticity to model the system of interest. Find the eigenvalue $\lambda \in \mathbb{R}$, the displacement field $u \colon D \to \mathbb{R}^2$, and the symmetric stress tensor $\sigma$, such that

$$\begin{aligned} -\nabla \cdot \sigma &= \lambda u &&\text{in } D, \\ u &= 0 &&\text{on } \Gamma_{\mathrm{C}}, \\ \sigma \cdot n &= 0 &&\text{on } \partial D \setminus \Gamma_{\mathrm{C}}, \end{aligned} \tag{1}$$

where $\partial D = \Gamma_{\mathrm{C}} \cup (\partial D \setminus \Gamma_{\mathrm{C}})$ is a partitioned boundary of $D$. The Lamé constants are

$$\lambda_{\mathrm{L}} = \frac{E \nu}{(1+\nu)(1-2\nu)}, \qquad \mu = \frac{E}{2(1+\nu)}, \tag{2}$$

with $E$ and $\nu$ being Young's modulus and Poisson's ratio, respectively. Further, $I$ is the identity tensor, $n$ denotes the outward unit normal to $\partial D$, and the strain tensor is

$$\epsilon(u) = \frac{1}{2} \left( \nabla u + (\nabla u)^{\mathsf{T}} \right). \tag{3}$$

The vector-valued tensor divergence is

$$(\nabla \cdot \sigma)_i = \sum_{j} \frac{\partial \sigma_{ij}}{\partial x_j}. \tag{4}$$

This formulation assumes a constitutive relation corresponding to linear isotropic elasticity with stresses and strains related by Hooke's generalized law:

$$\sigma = 2\mu\, \epsilon(u) + \lambda_{\mathrm{L}} \operatorname{tr} \epsilon(u)\, I. \tag{5}$$

##### 2.2. Weak Formulation

Let us introduce a function space $V = \{ v \in [H^1(D)]^2 : v|_{\Gamma_{\mathrm{C}}} = 0 \}$ and define the eigenproblem: find the eigenpair $(\lambda, u) \in \mathbb{R} \times V$ such that

$$a(u, v) = \lambda\, m(u, v) \quad \forall v \in V, \tag{6}$$

where the bilinear form

$$a(u, v) = \int_{D} \sigma(u) : \epsilon(v)\, \mathrm{d}x \tag{7}$$

is the integrated tensor contraction

$$\sigma : \epsilon = \sum_{i,j} \sigma_{ij} \epsilon_{ij}, \tag{8}$$

and $m$ denotes the mass bilinear form

$$m(u, v) = \int_{D} u \cdot v\, \mathrm{d}x. \tag{9}$$

As usual, we define the kinematic relation

$$\vec{\epsilon}(u) = \left( \frac{\partial u_1}{\partial x_1},\; \frac{\partial u_2}{\partial x_2},\; \frac{\partial u_1}{\partial x_2} + \frac{\partial u_2}{\partial x_1} \right)^{\mathsf{T}} \tag{10}$$

and specify the constitutive matrix

$$\mathbf{E} = \begin{pmatrix} 2\mu + \lambda_{\mathrm{L}} & \lambda_{\mathrm{L}} & 0 \\ \lambda_{\mathrm{L}} & 2\mu + \lambda_{\mathrm{L}} & 0 \\ 0 & 0 & \mu \end{pmatrix} \tag{11}$$

for the purpose of rewriting the bilinear form as

$$a(u, v) = \int_{D} \vec{\epsilon}(v)^{\mathsf{T}} \mathbf{E}\, \vec{\epsilon}(u)\, \mathrm{d}x. \tag{12}$$

##### 2.3. Stochastic Extension of the Eigenproblem

We introduce two probability spaces $(\Omega_1, \Sigma_1, P_1)$ and $(\Omega_2, \Sigma_2, P_2)$ to model the randomness of the domain and the elastic modulus, respectively. Here $\Omega_i$ denotes the set of outcomes, $\Sigma_i$ is a $\sigma$-algebra of events, and $P_i$ is a probability measure on $\Sigma_i$ for each $i = 1, 2$. Furthermore we set

$$(\Omega, \Sigma, P) = \left( \Omega_1 \times \Omega_2,\; \Sigma_1 \otimes \Sigma_2,\; P_1 \otimes P_2 \right) \tag{13}$$

to be the underlying product probability space.

The geometry of the computational domain is illustrated in Figure 1. We assume that the radius of the cavity $r = r(\omega_1)$ and the angle $\theta = \theta(\omega_1)$ are random variables satisfying $r_{\min} \le r \le r_{\max}$ and $\theta_{\min} \le \theta \le \theta_{\max}$ for some suitable constants $r_{\min}$, $r_{\max}$, $\theta_{\min}$, $\theta_{\max}$. Thus we may write $D = D(\omega_1)$. In the numerical examples we set $r$ and $\theta$ to be uniformly distributed random variables.

Suppose that for each $\omega_1 \in \Omega_1$ the Young modulus $E = E(x, \omega_2)$ is a random field on the domain $D(\omega_1)$. By this we mean that $E(x, \cdot)$ is a random variable on $(\Omega_2, \Sigma_2, P_2)$ for every $x \in D(\omega_1)$. We assume that $E$ is strictly positive and bounded; that is, for some constants $E_{\min}$ and $E_{\max}$ it holds that

$$0 < E_{\min} \le E(x, \omega_2) \le E_{\max} < \infty. \tag{14}$$

The stochastic extension of eigenproblem (6) is now given by the following: find the eigenvalue $\lambda \colon \Omega \to \mathbb{R}$ and the eigenfunction $u \colon \Omega \to V$ such that

$$a(\omega; u(\omega), v) = \lambda(\omega)\, m(u(\omega), v) \quad \forall v \in V \tag{15}$$

holds for $P$-almost every $\omega \in \Omega$. Here $a(\omega; \cdot, \cdot)$ is the stochastic equivalent of the bilinear form and is given by

$$a(\omega; u, v) = \int_{D(\omega_1)} \vec{\epsilon}(v)^{\mathsf{T}}\, \mathbf{E}(x, \omega_2)\, \vec{\epsilon}(u)\, \mathrm{d}x. \tag{16}$$

The Lamé constants $\lambda_{\mathrm{L}}$ and $\mu$ are simply random fields defined by (2) with $E = E(x, \omega_2)$.

In most cases we are mainly interested in computing statistical moments of the solution. Suppose that we have a random variable $X \colon \Omega \to \mathbb{R}$. The expected value or mean of $X$ is defined to be

$$\mathbb{E}[X] = \int_{\Omega} X(\omega)\, \mathrm{d}P(\omega), \tag{17}$$

and its variance is

$$\operatorname{Var}[X] = \mathbb{E}\!\left[ (X - \mathbb{E}[X])^2 \right] = \mathbb{E}[X^2] - \mathbb{E}[X]^2. \tag{18}$$

In this paper our aim is to compute the expected value and variance of the smallest eigenvalue of system (15).

#### 3. Spectral Representations

For purposes of numerical treatment we need to approximate the random Young modulus using only a finite number of random variables. A natural way of achieving this is to use the truncated Karhunen-Loève expansion. Once a finite-dimensional approximation has been obtained we may express our solution in a generalized polynomial chaos basis and apply solution schemes based on the stochastic Galerkin finite element method; see [6] for a thorough introduction to this approach. Instead of using the Karhunen-Loève expansion one could also fix a spatial basis and then estimate the polynomial chaos coefficients straight from measurement data; see, for instance, [13, 14].

##### 3.1. Karhunen-Loève Expansion

The Karhunen-Loève expansion is a representation of a random field as a linear combination of the eigenfunctions of the associated covariance operator. Truncating the resulting series we obtain a finite-dimensional approximation of the original random field. The Karhunen-Loève expansion is the optimal choice among linear expansions in the sense that it minimizes the mean square truncation error [6].

For a fixed $\omega_1 \in \Omega_1$ let $a = a(x, \omega_2)$ be a random field on $D = D(\omega_1)$ with mean field

$$\bar{a}(x) = \mathbb{E}[a(x, \cdot)] \tag{19}$$

and a covariance function

$$C_a(x_1, x_2) = \mathbb{E}\!\left[ \left( a(x_1, \cdot) - \bar{a}(x_1) \right) \left( a(x_2, \cdot) - \bar{a}(x_2) \right) \right] \tag{20}$$

that is symmetric and positive definite. The associated covariance operator

$$(\mathcal{C}_a \varphi)(x_1) = \int_{D} C_a(x_1, x_2)\, \varphi(x_2)\, \mathrm{d}x_2 \tag{21}$$

is compact as an operator $\mathcal{C}_a \colon L^2(D) \to L^2(D)$ and therefore admits a countable number of positive eigenvalues $\{\lambda_m\}_{m=1}^{\infty}$ and associated eigenfunctions $\{\varphi_m\}_{m=1}^{\infty}$. The eigenvalues tend to zero as $m$ tends to infinity and the eigenfunctions form an orthonormal basis of $L^2(D)$. The eigenpairs may be computed numerically using, for example, fast multipole methods [15].

The Karhunen-Loève expansion of the random field $a$ is now given by

$$a(x, \omega_2) = \bar{a}(x) + \sum_{m=1}^{\infty} \sqrt{\lambda_m}\, \varphi_m(x)\, \xi_m(\omega_2), \tag{22}$$

where $\{\xi_m\}_{m=1}^{\infty}$ are centered and orthogonal but not necessarily independent random variables.

In numerical computations we replace the random field $a$ with an approximation obtained by truncating series (22) after $M$ terms. The truncation error depends upon the decay of the Karhunen-Loève eigenvalues and eigenfunctions. See [16] for bounds on this decay.
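As a concrete illustration, the truncated expansion can be sampled numerically by discretizing the covariance operator on a grid and solving the resulting matrix eigenproblem (a Nyström-type approach). The exponential kernel, correlation length, and grid below are illustrative assumptions of ours, not the paper's model:

```python
import numpy as np

def truncated_kl(x, cov, M, rng):
    """Truncated Karhunen-Loeve expansion on a uniform grid x.

    cov(x1, x2) is the covariance function and M the number of
    retained terms.  Returns the KL eigenvalues, the L2-normalized
    eigenfunctions, and one centered realization driven by uniform
    random variables on [-1, 1].
    """
    h = x[1] - x[0]                      # uniform grid spacing
    C = cov(x[:, None], x[None, :])      # covariance matrix
    # Nystrom discretization: eigenpairs of h*C approximate those of
    # the integral covariance operator.
    vals, vecs = np.linalg.eigh(h * C)
    idx = np.argsort(vals)[::-1][:M]     # keep the M largest
    vals = vals[idx]
    vecs = vecs[:, idx] / np.sqrt(h)     # orthonormal in L2(0, 1)
    xi = rng.uniform(-1.0, 1.0, M)       # centered, orthogonal driver
    field = vecs @ (np.sqrt(vals) * xi)  # centered realization
    return vals, vecs, field

# Illustrative kernel and grid (not the paper's model).
x = np.linspace(0.0, 1.0, 200)
cov = lambda s, t: np.exp(-np.abs(s - t) / 0.5)
vals, vecs, field = truncated_kl(x, cov, 10, np.random.default_rng(0))
```

The decay of `vals` controls the truncation error discussed above; a faster-decaying kernel permits a smaller $M$.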

*Remark 1. *It is essential for numerical algorithms that the random variables are mutually independent, even though this is not in general implied by their orthogonality. A solution to this issue has been proposed in [3], where suitable auxiliary density functions have been introduced. In the numerical examples we assume a representation of the random field with respect to a set of independent uniformly distributed random variables. Other types of random variables (e.g., Gaussian) could be considered as well.

##### 3.2. Legendre Chaos

We employ the generalized polynomial chaos framework which essentially means representing our solution on a basis of orthogonal polynomials. In our case we assume the input random variables to be uniformly distributed and the polynomial chaos basis is therefore given by tensorized Legendre polynomials. The use of orthogonal polynomials as a basis allows us to apply stochastic Galerkin based methods and ensures optimal convergence of these methods; see [17]. In this paper we aim to solve the eigenmodes using a method of spectral inverse iteration introduced in [12].

Assume that $\xi = (\xi_1, \ldots, \xi_M)$ is a vector of mutually independent random variables that are uniformly distributed in the interval $[-1, 1]$. The expected value or mean of a random variable $v = v(\xi)$ is now given by

$$\mathbb{E}_{\xi}[v] = \int_{[-1,1]^M} v(y)\, \frac{\mathrm{d}y}{2^M}. \tag{23}$$

Here we have used a subscript to indicate that the expected value is taken over the sample space $\Omega_2$ only. We associate each multi-index $\alpha = (\alpha_1, \ldots, \alpha_M) \in \mathbb{N}_0^M$ with the multivariate Legendre polynomial

$$\Lambda_{\alpha}(\xi) = \prod_{m=1}^{M} L_{\alpha_m}(\xi_m), \tag{24}$$

where $L_k$ denotes the univariate Legendre polynomial of degree $k$. We assume that the polynomials are normalized so that $\mathbb{E}_{\xi}[\Lambda_{\alpha}^2] = 1$ for all $\alpha \in \mathbb{N}_0^M$.

The system $\{\Lambda_{\alpha}\}_{\alpha \in \mathbb{N}_0^M}$ forms an orthonormal basis of $L^2([-1,1]^M)$. This allows us to express any square integrable random variable $v = v(\xi)$ in a polynomial chaos expansion

$$v(\xi) = \sum_{\alpha \in \mathbb{N}_0^M} v_{\alpha} \Lambda_{\alpha}(\xi), \tag{25}$$

where the coefficients are given by $v_{\alpha} = \mathbb{E}_{\xi}[v \Lambda_{\alpha}]$.

For numerical computations we truncate expansion (25). To this end we choose a finite set of multi-indices $\mathcal{A} \subset \mathbb{N}_0^M$ and approximate $v$ with the series

$$v(\xi) \approx \sum_{\alpha \in \mathcal{A}} v_{\alpha} \Lambda_{\alpha}(\xi). \tag{26}$$

Here the choice of the set $\mathcal{A}$ is key to obtaining accurate approximations and is currently a topic of active research. In our numerical examples we consider sets that, for a fixed level, are determined by the decay of the Karhunen-Loève eigenvalues. More sophisticated adaptive schemes for selecting the multi-index sets have been presented in [18–20].

The most important reason for using the polynomial chaos basis is that it allows fast computation of any expectations involved. This is because we may evaluate integrals over the stochastic domain analytically and are thus able to avoid numerical integration in high dimensions. Orthogonality of the Legendre polynomials yields

$$\mathbb{E}_{\xi}[\Lambda_{\alpha} \Lambda_{\beta}] = \delta_{\alpha\beta} \tag{27}$$

for all multi-indices $\alpha, \beta \in \mathbb{N}_0^M$. For notational convenience we set

$$c_{\alpha\beta\gamma} = \mathbb{E}_{\xi}[\Lambda_{\alpha} \Lambda_{\beta} \Lambda_{\gamma}], \qquad c^{m}_{\alpha\beta} = \mathbb{E}_{\xi}[\xi_m \Lambda_{\alpha} \Lambda_{\beta}]. \tag{28}$$

These coefficients are easy to evaluate analytically; see the appendix in [12]. It is also worth noting that most of the coefficients are evaluated to zero. This fact can be exploited in order to speed up the solution process.
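These moment coefficients can be checked numerically. The following sketch computes the triple products $\mathbb{E}[\Lambda_a \Lambda_b \Lambda_c]$ for a univariate normalized Legendre basis by Gauss-Legendre quadrature; in the paper the coefficients are evaluated analytically, so the quadrature here is for illustration only:

```python
import numpy as np
from numpy.polynomial import legendre as leg

def normalized_legendre(k, x):
    """Legendre polynomial of degree k, scaled so that its second
    moment under the uniform density 1/2 on [-1, 1] equals one."""
    coeff = np.zeros(k + 1)
    coeff[k] = 1.0
    return np.sqrt(2 * k + 1) * leg.legval(x, coeff)

def triple_products(d):
    """c[a, b, g] = E[psi_a psi_b psi_g] for degrees 0..d, computed
    with a Gauss-Legendre rule exact for all integrands involved."""
    nodes, weights = leg.leggauss(2 * d + 1)
    w = weights / 2.0                    # uniform density on [-1, 1]
    P = np.array([normalized_legendre(k, nodes) for k in range(d + 1)])
    return np.einsum('an,bn,gn,n->abg', P, P, P, w)

c = triple_products(4)
```

Most entries vanish: the coefficient is nonzero only when the three degrees satisfy a triangle inequality and sum to an even number, which is the sparsity mentioned above.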

#### 4. Higher Order Finite Elements

Here we give a short overview of the $p$-FEM following closely the one in [21]. In the $p$-version of the FEM the polynomial degree $p$ is used to control the accuracy of the solution while keeping the mesh fixed, in contrast to the $h$-version or standard finite element method, where the polynomial degree is constant but the mesh size varies. The $hp$-method simply combines the $h$- and $p$-refinements.

In our setting we can use topologically fixed meshes with high-order elements, with the polynomial order chosen high enough to ensure sufficient resolution of the deterministic part of the solution.

In the following, one way to construct a $p$-type quadrilateral element is given. The construction of triangles follows similar lines. First of all, the choice of shape functions is not unique. We use the so-called hierarchic integrated Legendre shape functions.

Legendre polynomials of degree $n$ can be defined by the recursion formula

$$(n+1)\, P_{n+1}(x) = (2n+1)\, x\, P_n(x) - n\, P_{n-1}(x), \tag{29}$$

where $P_0(x) = 1$ and $P_1(x) = x$.

The derivatives can similarly be computed by using the recursion

$$P'_{n+1}(x) = (n+1)\, P_n(x) + x\, P'_n(x). \tag{30}$$

The integrated Legendre polynomials are defined for $x \in [-1, 1]$ as

$$N_n(x) = \sqrt{\frac{2n-1}{2}} \int_{-1}^{x} P_{n-1}(t)\, \mathrm{d}t, \quad n = 2, 3, \ldots, \tag{31}$$

and can be rewritten as linear combinations of Legendre polynomials

$$N_n(x) = \frac{1}{\sqrt{2(2n-1)}} \left( P_n(x) - P_{n-2}(x) \right), \quad n = 2, 3, \ldots. \tag{32}$$

The normalizing coefficients are chosen so that

$$\int_{-1}^{1} \frac{\mathrm{d}N_n(x)}{\mathrm{d}x}\, \frac{\mathrm{d}N_m(x)}{\mathrm{d}x}\, \mathrm{d}x = \delta_{nm}, \quad n, m \ge 2. \tag{33}$$
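A minimal sketch of these definitions, using the three-term recursion and the identity $P_n - P_{n-2} = (2n-1)\int_{-1}^{x} P_{n-1}(t)\,\mathrm{d}t$; the function names are ours:

```python
import numpy as np

def legendre_all(n, x):
    """P_0, ..., P_n at x via the three-term recursion
    (k+1) P_{k+1} = (2k+1) x P_k - k P_{k-1}."""
    P = [np.ones_like(x), np.asarray(x, dtype=float)]
    for k in range(1, n):
        P.append(((2 * k + 1) * x * P[k] - k * P[k - 1]) / (k + 1))
    return P

def integrated_legendre(n, x):
    """N_n(x) = sqrt((2n-1)/2) * int_{-1}^x P_{n-1}(t) dt for n >= 2,
    evaluated through the identity P_n - P_{n-2} = (2n-1) * integral."""
    P = legendre_all(n, x)
    return (P[n] - P[n - 2]) / np.sqrt(2 * (2 * n - 1))
```

With this scaling the derivatives are orthonormal in $L^2(-1, 1)$, and $N_n(\pm 1) = 0$, so these functions vanish at the element vertices as required of side and internal modes.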

Using these polynomials we can now define the shape functions for a quadrilateral reference element over the domain $[-1, 1] \times [-1, 1]$. The shape functions are divided into three categories: nodal shape functions, side modes, and internal modes.

There are four nodal shape functions:

$$N_1(\xi, \eta) = \tfrac{1}{4}(1-\xi)(1-\eta), \quad N_2(\xi, \eta) = \tfrac{1}{4}(1+\xi)(1-\eta),$$
$$N_3(\xi, \eta) = \tfrac{1}{4}(1+\xi)(1+\eta), \quad N_4(\xi, \eta) = \tfrac{1}{4}(1-\xi)(1+\eta), \tag{34}$$

which taken alone define the standard four-node quadrilateral finite element. There are $4(p-1)$ side modes associated with the sides of a quadrilateral, with $i = 2, \ldots, p$:

$$N_i^{(1)}(\xi, \eta) = \tfrac{1}{2}(1-\eta)\, N_i(\xi), \quad N_i^{(2)}(\xi, \eta) = \tfrac{1}{2}(1+\xi)\, N_i(\eta),$$
$$N_i^{(3)}(\xi, \eta) = \tfrac{1}{2}(1+\eta)\, N_i(\xi), \quad N_i^{(4)}(\xi, \eta) = \tfrac{1}{2}(1-\xi)\, N_i(\eta), \tag{35}$$

where $N_i(\cdot)$, $i \ge 2$, denotes the integrated Legendre polynomial of degree $i$. For the internal modes we choose the shape functions

$$N_{i,j}^{0}(\xi, \eta) = N_i(\xi)\, N_j(\eta), \quad i, j \ge 2. \tag{36}$$

The internal shape functions are often referred to as bubble functions.

Note that some additional bookkeeping is necessary. The Legendre polynomials have the property $P_n(-x) = (-1)^n P_n(x)$, so the odd-degree side modes are antisymmetric. This means that every edge must be globally parameterized in the same way in both elements to which it belongs.

##### 4.1. Curved Boundaries

Since we want to use fixed mesh topologies even with perturbed domains, it is important to represent curved boundary segments accurately. The linear blending function method of Gordon and Hall [22] is our choice for this purpose.

In the general case all sides of an element can be curved, but in our case only one side is, as in Figure 1. We assume that every side is parameterized:

$$x = x_i(t), \qquad y = y_i(t), \qquad i = 1, \ldots, 4, \tag{37}$$

where $t \in [-1, 1]$. Using capital letters as coordinates of the corner points $(X_i, Y_i)$, we can write the mapping for the global $x$-coordinates of a quadrilateral as

$$\begin{aligned} x(\xi, \eta) ={}& \tfrac{1}{2}(1-\eta)\, x_1(\xi) + \tfrac{1}{2}(1+\xi)\, x_2(\eta) + \tfrac{1}{2}(1+\eta)\, x_3(\xi) + \tfrac{1}{2}(1-\xi)\, x_4(\eta) \\ & - \tfrac{1}{4}\left[ (1-\xi)(1-\eta) X_1 + (1+\xi)(1-\eta) X_2 + (1+\xi)(1+\eta) X_3 + (1-\xi)(1+\eta) X_4 \right], \end{aligned} \tag{38}$$

and symmetrically for the $y$-coordinate. Note that if the side parameterizations represent straight edges, the mapping simplifies to the standard bilinear mapping of quadrilaterals.
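The blending map can be sketched as follows, under our own assumptions on the ordering and orientation of the sides (bottom, right, top, left, each parameterized over $[-1, 1]$); for straight sides it reduces to the bilinear map, which the usage below exercises on the unit square:

```python
import numpy as np

def gordon_hall(xi, eta, sides, corners):
    """Linear blending map from the reference square [-1, 1]^2.

    sides = (s1, s2, s3, s4): bottom, right, top, left, each a
    function of t in [-1, 1] returning a point, oriented so that
    s1(-1) = X1, s1(1) = s2(-1) = X2, s2(1) = s3(1) = X3,
    s3(-1) = s4(1) = X4.  corners is the 4x2 array of the X_i.
    This ordering is our assumption for the sketch.
    """
    s1, s2, s3, s4 = sides
    X = np.asarray(corners, dtype=float)
    blend = (0.5 * (1 - eta) * s1(xi) + 0.5 * (1 + xi) * s2(eta)
             + 0.5 * (1 + eta) * s3(xi) + 0.5 * (1 - xi) * s4(eta))
    bilin = 0.25 * ((1 - xi) * (1 - eta) * X[0]
                    + (1 + xi) * (1 - eta) * X[1]
                    + (1 + xi) * (1 + eta) * X[2]
                    + (1 - xi) * (1 + eta) * X[3])
    return blend - bilin

def line(A, B):
    """Straight-side parameterization from point A to point B."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    return lambda t: ((1 - t) * A + (1 + t) * B) / 2

# Unit square with straight sides: the map reduces to the bilinear one.
corners = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
sides = (line(corners[0], corners[1]), line(corners[1], corners[2]),
         line(corners[3], corners[2]), line(corners[0], corners[3]))
center = gordon_hall(0.0, 0.0, sides, corners)
```

Replacing one side function with a curved parameterization (e.g., a circular arc bounding the cavity) gives the curved element geometry used here.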

#### 5. The Solution Method

We will present a hybrid method of stochastic finite elements for solving the stochastic eigenvalue problem (15). More precisely, we apply stochastic collocation to resolve the dependency on the geometry and the stochastic Galerkin scheme for computing the effects of the random field. The method of stochastic collocation allows for an easy implementation, since the solution statistics are sampled from an ensemble of deterministic solutions. This makes the method attractive from the point of view of random geometry. On the other hand, if a sufficiently accurate parametric representation for the input random field exists, then the use of stochastic Galerkin based methods is well motivated by their high rate of convergence—especially if the number of parameters is large. In this paper we apply a method of spectral inverse iteration introduced in [12] to compute the eigenpair of interest at each collocation point.

##### 5.1. Discretization

Let us consider the stochastic eigenproblem (15) for a fixed $\omega_1 \in \Omega_1$. We replace the Young modulus with the expression

$$E_M(x, \omega_2) = \bar{E}(x) + \sum_{m=1}^{M} \sqrt{\lambda_m}\, \varphi_m(x)\, \xi_m(\omega_2), \tag{39}$$

obtained by truncating the Karhunen-Loève expansion (22) after $M$ terms. Plugging (39) into (2) we may write the constitutive matrix (11) in the form

$$\mathbf{E}(x, \xi) = \mathbf{E}^{(0)}(x) + \sum_{m=1}^{M} \mathbf{E}^{(m)}(x)\, \xi_m. \tag{40}$$

We assume that $\{\xi_m\}_{m=1}^{M}$ is a set of mutually independent and uniformly distributed random variables with supports scaled to the interval $[-1, 1]$. The sample space $\Omega_2$ is thus parameterized by the vector $\xi = (\xi_1, \ldots, \xi_M) \in [-1, 1]^M$. With $\omega_1$ being fixed we may now interpret the eigenvalue $\lambda$ as a function of $\xi$ and the eigenfunction $u$ as a function of $x$ and $\xi$.

Next we address the spatial discretization of (15) on the fixed domain $D = D(\omega_1)$. Let $\{v_k\}_{k=1}^{N}$ be a set of global basis functions for the approximation space of $V$. Here we employ the $p$-version of the finite element method so that the components of each $v_k$ are some shape functions (34), (35), or (36) from Section 4. From (15) we obtain the deterministic Galerkin projection

$$\int_{D} \vec{\epsilon}(v_k)^{\mathsf{T}}\, \mathbf{E}(x, \xi)\, \vec{\epsilon}(u)\, \mathrm{d}x = \lambda(\xi) \int_{D} u \cdot v_k\, \mathrm{d}x, \quad k = 1, \ldots, N. \tag{41}$$

We define the stiffness matrix $\mathbf{A}^{(m)}$ for each $m = 0, 1, \ldots, M$ by setting

$$\mathbf{A}^{(m)}_{kl} = \int_{D} \vec{\epsilon}(v_k)^{\mathsf{T}}\, \mathbf{E}^{(m)}(x)\, \vec{\epsilon}(v_l)\, \mathrm{d}x \tag{42}$$

and the mass matrix $\mathbf{M}$ via

$$\mathbf{M}_{kl} = \int_{D} v_k \cdot v_l\, \mathrm{d}x. \tag{43}$$

For a fixed $\omega_1 \in \Omega_1$ we have now reduced (15) to a stochastic matrix eigenvalue problem: find a random variable $\lambda \colon [-1,1]^M \to \mathbb{R}$ and a random vector $\mathbf{u} \colon [-1,1]^M \to \mathbb{R}^N$ such that

$$\left( \mathbf{A}^{(0)} + \sum_{m=1}^{M} \mathbf{A}^{(m)} \xi_m \right) \mathbf{u}(\xi) = \lambda(\xi)\, \mathbf{M}\, \mathbf{u}(\xi). \tag{44}$$

##### 5.2. Stochastic Collocation

In our problem of interest the geometry of the computational domain depends on the random variables $r$ and $\theta$. Thus, these variables give us a parametrization of the sample space $\Omega_1$. We present an anisotropic collocation operator and apply it for computing the dependency of our solution on the parameters. Implementation of a sparse grid collocation operator (see [4, 23]) would also be possible, but since we only consider collocation in two dimensions we are restricted to full tensor collocation in the following. In our case we rely on the Gauss-Legendre quadrature, since the random variables $r$ and $\theta$ are uniformly distributed. Furthermore, for notational simplicity we assume these to be scaled so that they take values in $[-1, 1]$.

Denote by $\{\chi_k^{(p)}\}_{k=1}^{p}$ the abscissae of the Gauss-Legendre quadrature of order $p$ and by $\{w_k^{(p)}\}_{k=1}^{p}$ the associated quadrature weights. The quadrature points are the zeros of the univariate Legendre polynomial of degree $p$. The one-dimensional Lagrange interpolation operators $\mathcal{I}_p$ with respect to the interpolation points $\{\chi_k^{(p)}\}_{k=1}^{p}$ are defined via

$$(\mathcal{I}_p v)(\chi) = \sum_{k=1}^{p} v\!\left( \chi_k^{(p)} \right) \ell_k^{(p)}(\chi). \tag{45}$$

Here $\{\ell_k^{(p)}\}_{k=1}^{p}$ are the Lagrange basis polynomials of degree $p - 1$. For information on orthogonal polynomials and computation of the quadrature weights see [24].

The full tensor Lagrange interpolation operator is defined as the tensor product of the univariate operators. In our two-dimensional case this is

$$\mathcal{I}_{p_1 p_2} = \mathcal{I}_{p_1}^{(r)} \otimes \mathcal{I}_{p_2}^{(\theta)}, \tag{46}$$

where $\mathcal{I}_{p_1}^{(r)}$ and $\mathcal{I}_{p_2}^{(\theta)}$ denote the one-dimensional interpolation operators in the variables $r$ and $\theta$, respectively, and $p_1, p_2$ are the orders of quadrature. Applied on a random variable $v = v(r, \theta)$ the full tensor interpolation gives

$$(\mathcal{I}_{p_1 p_2} v)(r, \theta) = \sum_{k=1}^{p_1} \sum_{l=1}^{p_2} v\!\left( \chi_k^{(p_1)}, \chi_l^{(p_2)} \right) \ell_k^{(p_1)}(r)\, \ell_l^{(p_2)}(\theta). \tag{47}$$

The expected value of a random variable $v = v(r, \theta)$ is given by

$$\mathbb{E}_{\omega_1}[v] = \int_{[-1,1]^2} v(r, \theta)\, \frac{\mathrm{d}r\, \mathrm{d}\theta}{4}. \tag{48}$$

Here we have again used a subscript to designate that the expectation is taken over the sample space $\Omega_1$. From (47) we obtain

$$\mathbb{E}_{\omega_1}[\mathcal{I}_{p_1 p_2} v] = \frac{1}{4} \sum_{k=1}^{p_1} \sum_{l=1}^{p_2} w_k^{(p_1)} w_l^{(p_2)}\, v\!\left( \chi_k^{(p_1)}, \chi_l^{(p_2)} \right). \tag{49}$$

Hence we may compute the expected value of the solution once we have solved the underlying problem at each collocation point.
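The collocated expectation is thus a weighted sum of point values. A small sketch for two uniform parameters on $[-1, 1]$, with the weights normalized by the measure of the square; the test integrand is our own:

```python
import numpy as np

def tensor_expectation(f, p1, p2):
    """Expectation of f(r, theta) for r, theta i.i.d. uniform on
    [-1, 1], using a full tensor Gauss-Legendre rule of orders p1, p2."""
    x1, w1 = np.polynomial.legendre.leggauss(p1)
    x2, w2 = np.polynomial.legendre.leggauss(p2)
    W = np.outer(w1, w2) / 4.0           # uniform density (1/2)^2
    F = np.array([[f(a, b) for b in x2] for a in x1])
    return float(np.sum(W * F))

# The rule integrates polynomials of degree <= 2p-1 per variable
# exactly; here E[r^2 t^2 + r t + 1] = 1/9 + 0 + 1 = 10/9.
val = tensor_expectation(lambda r, t: r**2 * t**2 + r * t + 1.0, 3, 3)
```

In the hybrid method, `f` would be the per-collocation-point statistic returned by the inner Galerkin solver rather than a closed-form integrand.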

##### 5.3. Spectral Inverse Iteration

We choose to solve (44) at each collocation point using a method of spectral inverse iteration. This method has been introduced in [12] and there its convergence towards the eigenpair with the smallest eigenvalue has been verified in the case of an elliptic operator with random coefficients. However, assuming that the eigenpair of interest is of multiplicity one, the method is in fact applicable for a much wider range of problems.

We may consider the spectral inverse iteration as a stochastic extension of the deterministic inverse iteration, which has been given in Algorithm 2 for reference. For the generalized eigenvalue problem of a stiffness matrix $\mathbf{A}$ and a mass matrix $\mathbf{M}$, the deterministic inverse iteration converges to the eigenpair $(\lambda, \mathbf{u})$ for which the eigenvalue is closest to the given parameter $\mu$.

*Algorithm 2 (deterministic inverse iteration). *Fix $\mu \in \mathbb{R}$ and $\epsilon > 0$. Let $\mathbf{u}^{(0)}$ be a normalized initial guess for the eigenvector and set $k = 1$. For $k = 1, 2, \ldots$ do the following:

(1) Solve $(\mathbf{A} - \mu \mathbf{M})\, \mathbf{v} = \mathbf{M} \mathbf{u}^{(k-1)}$ for $\mathbf{v}$.
(2) Set $\mathbf{u}^{(k)} = \mathbf{v} / \| \mathbf{v} \|_{\mathbf{M}}$, where $\| \mathbf{v} \|_{\mathbf{M}} = ( \mathbf{v}^{\mathsf{T}} \mathbf{M} \mathbf{v} )^{1/2}$.
(3) Set $\lambda^{(k)} = ( \mathbf{u}^{(k)} )^{\mathsf{T}} \mathbf{A}\, \mathbf{u}^{(k)} / ( \mathbf{u}^{(k)} )^{\mathsf{T}} \mathbf{M}\, \mathbf{u}^{(k)}$.
(4) Stop if $\| \mathbf{u}^{(k)} - \mathbf{u}^{(k-1)} \| < \epsilon$ and return $(\lambda^{(k)}, \mathbf{u}^{(k)})$.
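A minimal NumPy version of the deterministic iteration, assuming dense matrices, normalization in the $M$-inner product, and a Rayleigh-quotient eigenvalue update; details such as the stopping criterion are our choices:

```python
import numpy as np

def inverse_iteration(K, M, mu=0.0, tol=1e-10, maxit=200):
    """Shifted inverse iteration for K u = lam M u.  Converges to the
    eigenpair whose eigenvalue lies closest to the shift mu."""
    u = np.ones(K.shape[0])
    u /= np.sqrt(u @ M @ u)                  # normalize in the M-norm
    lam = u @ K @ u
    for _ in range(maxit):
        v = np.linalg.solve(K - mu * M, M @ u)
        v /= np.sqrt(v @ M @ v)
        lam = (v @ K @ v) / (v @ M @ v)      # Rayleigh quotient
        # The sign of v is arbitrary; compare both orientations.
        if min(np.linalg.norm(v - u), np.linalg.norm(v + u)) < tol:
            return lam, v
        u = v
    return lam, v

# Small SPD test pencil (our example, not the paper's matrices).
K = np.array([[2.0, -1.0, 0.0], [-1.0, 2.0, -1.0], [0.0, -1.0, 2.0]])
lam, v = inverse_iteration(K, np.eye(3))
```

With $\mu = 0$ the iteration converges to the smallest eigenvalue of the pencil, mirroring the choice made later for the spectral variant.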

The idea in the spectral inverse iteration is to interpret each equation involved in Algorithm 2 in a stochastic sense using Galerkin projections with respect to a given spectral basis. To this end we start by fixing a finite polynomial chaos basis and proceed by formulating the respective Galerkin projections. Here we only recap the essentials of the process as it has already been thoroughly explained in [12].

For some $\mu \in \mathbb{R}$ let us consider the equation

$$\left( \mathbf{A}(\xi) - \mu \mathbf{M} \right) \mathbf{v}(\xi) = \mathbf{M}\, \mathbf{u}(\xi) \tag{50}$$

that arises from the first step of Algorithm 2. Here we assume the matrix $\mathbf{A}(\xi)$ to take the form

$$\mathbf{A}(\xi) = \mathbf{A}^{(0)} + \sum_{m=1}^{M} \mathbf{A}^{(m)} \xi_m \tag{51}$$

as in (44). We replace the random vectors $\mathbf{u}(\xi)$ and $\mathbf{v}(\xi)$ with the truncated polynomial chaos expansions

$$\mathbf{u}(\xi) = \sum_{\alpha \in \mathcal{A}} \mathbf{u}_{\alpha} \Lambda_{\alpha}(\xi), \qquad \mathbf{v}(\xi) = \sum_{\alpha \in \mathcal{A}} \mathbf{v}_{\alpha} \Lambda_{\alpha}(\xi). \tag{52}$$

The Galerkin projection of (50) on the basis $\{\Lambda_{\alpha}\}_{\alpha \in \mathcal{A}}$ is now defined as

$$\mathbb{E}_{\xi}\!\left[ \left( \mathbf{A}(\xi) - \mu \mathbf{M} \right) \mathbf{v}(\xi)\, \Lambda_{\alpha}(\xi) \right] = \mathbb{E}_{\xi}\!\left[ \mathbf{M}\, \mathbf{u}(\xi)\, \Lambda_{\alpha}(\xi) \right], \quad \alpha \in \mathcal{A}. \tag{53}$$

Using the orthogonality relations (27) this reduces to the linear system

$$\boldsymbol{\Psi}\, \hat{\mathbf{v}} = \hat{\mathbf{u}}, \tag{54}$$

where $\hat{\mathbf{v}} = (\mathbf{v}_{\alpha})_{\alpha \in \mathcal{A}}$ and $\hat{\mathbf{u}} = (\mathbf{M} \mathbf{u}_{\alpha})_{\alpha \in \mathcal{A}}$ are block vectors and $\boldsymbol{\Psi}$ is a block matrix characterized by

$$\boldsymbol{\Psi}_{\alpha\beta} = \left( \mathbf{A}^{(0)} - \mu \mathbf{M} \right) \delta_{\alpha\beta} + \sum_{m=1}^{M} \mathbb{E}_{\xi}\!\left[ \xi_m \Lambda_{\alpha} \Lambda_{\beta} \right] \mathbf{A}^{(m)}. \tag{55}$$

Let us next consider the normalization step in Algorithm 2. For a vector $\mathbf{v}(\xi)$ we aim to compute $\mathbf{w}(\xi)$ such that

$$\mathbf{w}(\xi) = \frac{\mathbf{v}(\xi)}{\| \mathbf{v}(\xi) \|_{\mathbf{M}}}, \qquad \| \mathbf{v}(\xi) \|_{\mathbf{M}} = \left( \mathbf{v}(\xi)^{\mathsf{T}} \mathbf{M}\, \mathbf{v}(\xi) \right)^{1/2}. \tag{56}$$

If we set $n(\xi) = \| \mathbf{v}(\xi) \|_{\mathbf{M}}$ then the respective Galerkin projection on the basis $\{\Lambda_{\alpha}\}_{\alpha \in \mathcal{A}}$ is defined as

$$\mathbb{E}_{\xi}\!\left[ n^2 \Lambda_{\alpha} \right] = \mathbb{E}_{\xi}\!\left[ \mathbf{v}^{\mathsf{T}} \mathbf{M}\, \mathbf{v}\, \Lambda_{\alpha} \right], \quad \alpha \in \mathcal{A}, \tag{57}$$

and reduces to

$$\sum_{\beta \in \mathcal{A}} \sum_{\gamma \in \mathcal{A}} c_{\alpha\beta\gamma}\, n_{\beta} n_{\gamma} = \sum_{\beta \in \mathcal{A}} \sum_{\gamma \in \mathcal{A}} c_{\alpha\beta\gamma}\, \mathbf{v}_{\beta}^{\mathsf{T}} \mathbf{M}\, \mathbf{v}_{\gamma}, \quad \alpha \in \mathcal{A}, \tag{58}$$

where $c_{\alpha\beta\gamma} = \mathbb{E}_{\xi}[\Lambda_{\alpha} \Lambda_{\beta} \Lambda_{\gamma}]$. This is a nonlinear system of equations for the coefficients $\{n_{\alpha}\}_{\alpha \in \mathcal{A}}$ and can be solved using, for example, Newton's method with the initial guess $n_0 = \| \mathbf{v}_0 \|_{\mathbf{M}}$, $n_{\alpha} = 0$ for $\alpha \neq 0$. The Jacobian is given explicitly by

$$\mathbf{J}_{\alpha\beta} = 2 \sum_{\gamma \in \mathcal{A}} c_{\alpha\beta\gamma}\, n_{\gamma}. \tag{59}$$

The Galerkin projection of (56) is now defined as

$$\mathbb{E}_{\xi}\!\left[ \mathbf{w}\, n\, \Lambda_{\alpha} \right] = \mathbb{E}_{\xi}\!\left[ \mathbf{v}\, \Lambda_{\alpha} \right], \quad \alpha \in \mathcal{A}, \tag{60}$$

and reduces to the block linear system

$$\boldsymbol{\Phi}\, \hat{\mathbf{w}} = \hat{\mathbf{v}}, \tag{61}$$

where $\hat{\mathbf{w}} = (\mathbf{w}_{\alpha})_{\alpha \in \mathcal{A}}$, $\hat{\mathbf{v}} = (\mathbf{v}_{\alpha})_{\alpha \in \mathcal{A}}$, and

$$\boldsymbol{\Phi}_{\alpha\beta} = \sum_{\gamma \in \mathcal{A}} c_{\alpha\beta\gamma}\, n_{\gamma}\, \mathbf{I}. \tag{62}$$
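To make the normalization step concrete, the following sketch solves the quadratic Galerkin system for the coefficients of $n(\xi)$ in a single stochastic dimension. The normalized Legendre basis, the quadrature evaluation of the triple products, and the choice of target function are our assumptions for illustration:

```python
import numpy as np
from numpy.polynomial import legendre as leg

d = 4                                    # maximal polynomial degree
nodes, weights = leg.leggauss(10)        # exact up to degree 19
w = weights / 2.0                        # uniform density on [-1, 1]
P = np.array([np.sqrt(2 * k + 1) * leg.legval(nodes, np.eye(d + 1)[k])
              for k in range(d + 1)])    # normalized Legendre basis
c = np.einsum('an,bn,gn,n->abg', P, P, P, w)  # E[psi_a psi_b psi_g]

# Right-hand side: Galerkin coefficients s_a of n(xi)^2 for the
# (known, positive) target n(xi) = 2 + 0.5*xi.
n_true = 2.0 + 0.5 * nodes
s = P @ (w * n_true**2)

# Newton iteration for sum_{b,g} c[a,b,g] n_b n_g = s_a with
# Jacobian J[a,b] = 2 * sum_g c[a,b,g] n_g.
n = np.zeros(d + 1)
n[0] = np.sqrt(s[0])                     # positive initial guess
for _ in range(50):
    F = np.einsum('abg,b,g->a', c, n, n) - s
    if np.linalg.norm(F) < 1e-13:
        break
    J = 2.0 * np.einsum('abg,g->ab', c, n)
    n -= np.linalg.solve(J, F)
```

Since the target is positive over the whole parameter range, Newton's method recovers its chaos coefficients; starting from the positive constant guess steers the iteration away from the spurious sign-flipped roots.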

For computing the eigenvalue we need to be able to evaluate the Rayleigh quotient

$$\lambda(\xi) = \frac{\mathbf{w}(\xi)^{\mathsf{T}}\, \mathbf{A}(\xi)\, \mathbf{w}(\xi)}{\mathbf{w}(\xi)^{\mathsf{T}}\, \mathbf{M}\, \mathbf{w}(\xi)} \tag{63}$$

for a vector $\mathbf{w}(\xi)$. As explained in [12] we may approximate $\lambda(\xi)$ in the Galerkin sense to obtain the formula

$$\lambda_{\alpha} = \sum_{\beta \in \mathcal{A}} \sum_{\gamma \in \mathcal{A}} c_{\alpha\beta\gamma}\, \mathbf{w}_{\beta}^{\mathsf{T}} \mathbf{A}^{(0)} \mathbf{w}_{\gamma} + \sum_{m=1}^{M} \sum_{\beta \in \mathcal{A}} \sum_{\gamma \in \mathcal{A}} \mathbb{E}_{\xi}\!\left[ \xi_m \Lambda_{\alpha} \Lambda_{\beta} \Lambda_{\gamma} \right] \mathbf{w}_{\beta}^{\mathsf{T}} \mathbf{A}^{(m)} \mathbf{w}_{\gamma} \tag{64}$$

for the Galerkin coefficients $\{\lambda_{\alpha}\}_{\alpha \in \mathcal{A}}$, valid for $\mathbf{w}(\xi)$ normalized so that the denominator of (63) equals one.

Algorithm 3 now formulates the method of spectral inverse iteration. Similar to the deterministic case, the choice of the parameter $\mu$ affects the speed of convergence. However, if $\mu$ is chosen too close to the actual eigenvalue, the iteration might not converge, as (50) becomes singular for some $\xi$. We refer to [12] for more information. In our numerical examples we set $\mu = 0$, which ensures convergence to the eigenpair with the smallest eigenvalue as long as the matrix $\mathbf{A}(\xi)$ is positive definite for all $\xi \in [-1, 1]^M$.

*Algorithm 3 (spectral inverse iteration for stochastic eigenvalue problems). *Fix $\mu \in \mathbb{R}$ and $\epsilon > 0$. Let $\hat{\mathbf{u}}^{(0)}$ be a normalized initial guess for the eigenvector and set $k = 1$. For $k = 1, 2, \ldots$ do the following:

(1) With $\hat{\mathbf{u}} = \hat{\mathbf{u}}^{(k-1)}$ solve system (54) for $\hat{\mathbf{v}}$.
(2) Solve the coefficients $\{n_{\alpha}\}_{\alpha \in \mathcal{A}}$ from the nonlinear system (58). Solve system (61) and set $\hat{\mathbf{u}}^{(k)} = \hat{\mathbf{w}}$.
(3) Set $\lambda^{(k)}$ to be the Galerkin approximation of the Rayleigh quotient of $\hat{\mathbf{u}}^{(k)}$.
(4) Stop if $\| \hat{\mathbf{u}}^{(k)} - \hat{\mathbf{u}}^{(k-1)} \| < \epsilon$ and return $(\lambda^{(k)}, \hat{\mathbf{u}}^{(k)})$.

##### 5.4. Response Statistics

We may easily calculate statistical moments of a random variable $X(\xi)$ once we know its polynomial chaos expansion

$$X(\xi) = \sum_{\alpha \in \mathcal{A}} X_{\alpha} \Lambda_{\alpha}(\xi) \tag{65}$$

at each collocation point. By using the orthogonality relations (27) we obtain

$$\mathbb{E}_{\xi}[X] = X_{0}, \qquad \mathbb{E}_{\xi}[X^2] = \sum_{\alpha \in \mathcal{A}} X_{\alpha}^2, \tag{66}$$

where $X_0$ denotes the coefficient of the constant polynomial. The first and second moments of $X$ are now given by

$$\mathbb{E}[X] = \mathbb{E}_{\omega_1}\!\left[ \mathbb{E}_{\xi}[X] \right], \qquad \mathbb{E}[X^2] = \mathbb{E}_{\omega_1}\!\left[ \mathbb{E}_{\xi}[X^2] \right]. \tag{67}$$

These can be computed by applying formula (49) on (66). From these we obtain approximations for the expected value and variance of $X$.
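Assuming the chaos coefficients are stored per collocation point, the two-stage moment computation can be sketched as follows; the array layout and function name are ours:

```python
import numpy as np

def eigenvalue_statistics(coeffs, weights):
    """Mean and variance of a quantity from its chaos coefficients.

    coeffs[k, a]: coefficient of basis polynomial a at collocation
    point k, with index a = 0 the constant polynomial; weights[k]:
    normalized collocation weights summing to one.  (Our layout.)
    """
    mean_xi = coeffs[:, 0]                  # E_xi per collocation point
    second_xi = np.sum(coeffs**2, axis=1)   # E_xi of the square
    mean = weights @ mean_xi                # collocated outer expectation
    second = weights @ second_xi
    return mean, second - mean**2

stats = eigenvalue_statistics(np.array([[1.0, 0.0], [3.0, 0.0]]),
                              np.array([0.5, 0.5]))
```

Note that the variance combines both sources of randomness: the spread of the pointwise means over the collocation grid and the pointwise chaos variances.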

#### 6. Numerical Experiments

Let us consider a problem with uncertain domain and uncertain field. The computational domain is the hollow square of Figure 1, where the inner cavity of radius $r$ is rotated by the angle $\theta$ about the origin. Here $r$ and $\theta$ are random with uniform distribution. The bottom edge is clamped; that is, all displacements are inhibited.

In Figure 2 the effect of domain perturbation on the modes is illustrated. Notice that if one wanted to compute the statistics of the modes, it would be necessary to map every realisation onto the nominal domain via conformal mappings, for instance.

**Figure 2:** Contour plots of the smallest eigenmode for four realizations of the perturbed domain.

The (constant) material parameters are the Poisson ratio $\nu$ and the mean field of the Young modulus $\bar{E}$ (in MPa). We have used the convention, however, where the reported results are normalized by the value of $\bar{E}$.

We will proceed in two steps by first letting only the geometry vary before adding uncertainty to the field as well. The cases are:

(A) uncertain domain with deterministic field, solved with collocation;
(B) uncertain domain with uncertain field, solved with the new hybrid method.

In both cases the nominal domain is the same (see Figure 1). In the absence of exact solutions, overkill solutions have been used in the convergence graphs unless otherwise specified. We have also verified the results by comparing to a full Monte Carlo simulation of 620000 draws.

##### 6.1. Case A

The values of the smallest eigenvalue over the parameter space are shown in Figure 3 using both contour and surface plots.

**Figure 3:** The smallest eigenvalue over the $(r, \theta)$-parameter space: (a) contour plot; (b) surface plot.

We consider a collocation sequence, where the collocation points are taken to span all 16 cases from $1 \times 1$ to $4 \times 4$ points over the $(r, \theta)$-parameter space. The relative convergence of the smallest eigenvalue over the whole collocation sequence is shown in Figure 4. We have used a constant polynomial order (1728 degrees of freedom) throughout, and a higher order (6528 degrees of freedom) for the reference computation.

**(a) Expected value**

**(b) Variance**

The dashed line shows the convergence in the case when the deterministic discretization of the reference solution is also used in the collocation. The exponential convergence in expectation is due to convergence in the parameter space only. The poorly performing grids in Figure 4 are the anisotropic ones, with more points in one parameter than in the other. This is due to the relative effects of $r$ and $\theta$ not being exactly balanced (see Figure 3). The solid line indicates that already at a moderate rule the spatial discretization used is not sufficient when the reference has been computed with a much higher resolution. The convergence stalls completely since the deterministic error dominates. Convergence in variance is slower, as suggested by theory.

##### 6.2. Case B

We proceed to consider a fixed collocation grid with collocation points for the parameters $r$ and $\theta$. Assuming normalization of the mean field, we set the coefficients in the Karhunen-Loève expansion of the Young modulus (39) to decay with the distance to the inner boundary, so that the series can be considered to converge at an algebraic rate. With the chosen truncation parameter $M$ and 25 multi-indices, every linear system has 12000 degrees of freedom.

At each collocation point we employ the spectral inverse iteration given in Algorithm 3 with $\mu = 0$. For the polynomial chaos basis we consider multi-index sets defined by a fixed level; varying the level results in multi-index sets of different size. The overkill solution used as a reference has been computed with 64 multi-indices. Convergence of the relative error of the smallest eigenvalue towards the overkill solution as a function of the size of the multi-index set is illustrated in Figure 5. We observe algebraic rates of convergence for both the expected value and the variance.

**(a) Expected value**

**(b) Variance**

The dashed horizontal line in Figure 5 illustrates the relative error between the overkill solution and the Monte Carlo solution. The statistics for the eigenvalue were computed for both the overkill solution and the Monte Carlo solution. The central limit theorem suggests that the standard deviation of the Monte Carlo mean is of the same size range as the observed error between the overkill solution and the Monte Carlo solution. Comparison with standard collocation is more complicated, since the hybrid method includes a collocation part. For an isotropic grid aligned with the geometric uncertainty dimensions, that is, 8 points per every term in the Karhunen-Loève expansion, we obtain statistics within the expected accuracy as well.

Algorithm 3 also gives us the eigenvector associated with the smallest eigenvalue. More precisely, we obtain the spectral components $\{\hat{\mathbf{u}}_{\alpha}\}_{\alpha \in \mathcal{A}}$ at each collocation point. In Figure 6 we have illustrated the means of the sorted norms and seminorms of these spectral components. At each of the 16 collocation points we would observe results similar to those presented in Figure 6. Theory predicts asymptotic convergence rates for these energies in the norms and seminorms, respectively [25, 26]. In our example we observe much faster convergence than these asymptotic rates would predict. However, the observed convergence rates differ by one half, which is in accordance with the theory.

**Figure 6:** Mean energies of the sorted spectral components: (a) norms; (b) seminorms.

The Monte Carlo reference results have been computed from 620000 draws using 432 single-core CPU hours on an Intel Xeon (2009, 2.8 GHz). The anisotropic collocation took two, and the hybrid algorithm for the highest reported number of multi-indices (64) exactly one single-core CPU hour on the same machine. Naturally, in iterative algorithms there are many tolerances with which to tune the performance, but the reported times can be taken as representative.

*Remark 4. *Notice that in more elaborate models for geometric uncertainty with much higher dimensions of the parameter space the asymptotic convergence rates will apply to the collocation method. For lower dimensional cases it is possible to converge at faster rates, of course, but in real-life applications one should consider the modeling error as well. Similarly, for the Galerkin scheme the size of multi-index sets is too small for the energies of the spectral components to exhibit asymptotic behaviour.

#### 7. Conclusions

We have demonstrated the practicality of the new hybrid algorithm in a realistic, yet idealized, application. The combination of collocation for the geometric uncertainty and Galerkin for that of the materials allows us to take advantage of the higher computational efficiency of the Galerkin approach whilst keeping mesh generation simple. Indeed, in this paper we have kept the mesh topologically fixed. The observed convergence rates for the smallest eigenvalue are in line with theoretical predictions.

The main two remaining issues for future research are the mapping of the eigenmodes to the nominal domain and handling of double eigenvalues in the general case. In 2D we are confident that this extension can be readily done, but in 3D there are still open mathematical questions on how to connect the nominal domain and the perturbed ones.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.