#### Abstract

The element-free Galerkin (EFG) method is one of the most widely used meshfree methods for solving partial differential equations. In the EFG method, shape functions are derived from a moving least-squares (MLS) approximation, which involves the inversion of a small matrix at every point of interest. To avoid this matrix inversion in the formulation of the shape functions, an improved MLS approximation has been presented, in which an orthogonal function system with a weight function is used. However, the improved formulation can still lead to an ill-conditioned or even singular system of equations. In this paper, aspects of the IMLS approximation are analyzed in detail. The reason why the singularity problem occurs is studied. A novel technique based on a matrix triangularization process is proposed to solve this problem. It is shown that the EFG method with the present technique is very effective in constructing shape functions. Numerical examples illustrate the efficiency and accuracy of the proposed method. Although our study relies on monomial basis functions, it is more general than existing methods and can be extended to arbitrary basis functions.

#### 1. Introduction

In recent years, meshfree (meshless) methods have developed rapidly as computational techniques for solving partial differential equations. In a meshfree method, only a set of nodes and a boundary description are needed to develop the discrete equations. This makes it much more flexible in solving problems with moving discontinuities (e.g., fracture of solids) and/or moving boundaries (e.g., shape optimization problems), which cannot be solved easily by conventional numerical methods such as the finite element method (FEM), the finite difference method (FDM), and the boundary element method (BEM).

A number of meshfree methods have been proposed and developed, such as the diffuse element method (DEM) [1], the element-free Galerkin (EFG) method [2, 3], the meshfree point interpolation method [4], meshless methods based on radial basis functions [5–7], the meshless local Petrov-Galerkin (MLPG) method [8], and the meshfree weak-strong (MWS) method [9]. A detailed account of meshfree methods is given in [10]. Among these, the EFG method is very promising and is currently widely used in computational mechanics and other areas.

In the EFG method, the moving least-squares (MLS) approximation is used to construct shape functions at each computational point. One disadvantage of the MLS approximation is that a set of linear algebraic equations must be solved for every computational point of interest [11]. When basis functions are used to construct the shape functions, a moment matrix must be inverted at every computational point as the discrete equations are assembled. With an inappropriate node distribution or choice of basis functions, this moment matrix may be ill-conditioned or even singular. Moreover, in the postprocessing, the moment matrix must also be inverted at each node when the displacements, strains, and stresses are computed. The associated computational cost is quite burdensome. Furthermore, to retain the high accuracy of the MLS approximation, the moment matrix must be inverted accurately; if its inverse is obtained inaccurately, the accuracy of the EFG method is compromised. To ameliorate this shortcoming, an improved MLS (IMLS) approximation is described and studied in [12–17], where the authors constructed weighted orthogonal basis functions for the MLS approximation by a Schmidt orthogonalization process. With weighted orthogonal basis functions, the moment matrix becomes diagonal, and the burden of inverting it at each computational point is eliminated entirely. This idea has been further studied and applied to elasticity problems [12, 16], dynamic analysis [12], fracture problems [13], and so on. Weighted orthogonal basis functions can also be applied in other meshfree methods based on the MLS approximation, such as the MLPG method [17].

In [16], some aspects of the MLS approximation with orthogonal basis functions are revisited. The authors found that this technique avoids computing the matrix inversion but cannot overcome the singularity problems that may occur in the MLS approximation. They used a perturbation method to avoid the inversion, but it did not overcome the singularity problem either. In [18], the reason for the singularity problem was discussed, and a method to solve it by finding the optimal radius of the support domain was proposed. This method is useful for regular node distributions and monomial basis functions; for irregular node distributions and general basis functions, it cannot avoid singularity problems. Sunilkumar and Roy proposed another class of techniques in [19], where the support domain is constructed so that the moment matrix is invertible in all cases. In this paper, aspects of the MLS approximation with orthogonal basis functions are analyzed in detail. The reason why the singularity problem occurs is studied, with a proof different from that in [18]. A novel technique, similar to a matrix triangularization process [20], is proposed to solve this problem. It is shown that the EFG method with the present technique is very effective in constructing shape functions. Some numerical examples illustrate its efficiency and accuracy. Our study also relies on monomial basis functions, but it is more general than the existing methods and can be extended to arbitrary basis functions.

The remainder of this paper is arranged as follows. In Section 2, the MLS approximation with orthogonal functions is reviewed. In Section 3, the MLS approximation with orthogonal basis functions is discussed comprehensively; the cause of the singularity problem is derived, and a new technique is proposed to solve this problem. The standard EFG formulation for the 2D linear elastic problem is presented in Section 4. In Section 5, some numerical examples are presented to show the effectiveness of the proposed method. Finally, the paper closes with some conclusions and an outlook in Section 6.

#### 2. The MLS Approximation with Orthogonal Functions

Consider a subdomain, the neighborhood of a point, which serves as the domain of definition of the MLS approximation for the trial function at that point and lies inside the problem domain. To approximate the distribution of a function in this subdomain, over a number of randomly located nodes, the moving least-squares (MLS) approximation of the function can be defined at any point as a linear combination of basis functions whose coefficients are themselves functions of position. The basis functions can be chosen widely. In general, they are chosen as the monomial bases, which are defined as follows:

(i) the linear basis;
(ii) the quadratic basis.

The coefficient vector can be obtained at any point by minimizing the weighted discrete norm defined in (4), where the weight function associated with a node is positive for all points in the support domain of that node, the nodal parameter denotes the value of the approximation at the node, and the sum runs over the nodes in the subdomain. In this paper, the cubic spline weight function is used, whose argument is the distance from the node to the point of interest, normalized by the size of the support domain of the weight function. The matrices in (4) collect the basis values at the nodes and the nodal weights. Because the number of nodes used in the MLS approximation is usually much larger than the number of unknown coefficients, the approximated function does not pass through the nodal values. Thus the nodal parameters are fictitious nodal values, not the true nodal values in general.
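
A common form of the cubic spline weight used in the EFG literature can be sketched as follows; the piecewise coefficients below are the form most often quoted and are an assumption here, to be checked against the paper's equation (5):

```python
def cubic_spline_weight(r, d):
    """Cubic spline weight function (one common EFG form, assumed here).
    r is the distance from the node to the point, d the support radius."""
    s = r / d  # normalized distance
    if s <= 0.5:
        return 2.0 / 3.0 - 4.0 * s**2 + 4.0 * s**3
    elif s <= 1.0:
        return 4.0 / 3.0 - 4.0 * s + 4.0 * s**2 - (4.0 / 3.0) * s**3
    else:
        return 0.0  # zero outside the support domain
```

With these coefficients the weight is continuous across the break at half the support radius and vanishes, together with its first derivative, at the edge of the support, which keeps the resulting shape functions smooth.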

To find the coefficients, we take the extremum of the weighted norm in (4) [2, 10], which leads to the normal equations (7). If the moment matrix is nonsingular, then from (7) we obtain the coefficient vector.

Substituting the coefficient vector into (1) gives the expression of the local approximation in terms of shape functions. The derivatives of these shape functions can also be obtained, where the index following a comma denotes a spatial derivative. Since the basis values at the point of interest are known, we need only compute the derivatives of the moment matrix terms to obtain the derivatives of the shape functions. It should be noted that the above expression is the standard way to compute the derivatives. Some novel and numerically accurate schemes for computing derivatives of shape functions have also been studied; see [21].
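
A minimal one-dimensional sketch of this construction, with the linear basis p(x) = [1, x]; the node layout, support radius, weight coefficients, and function names below are our own illustration, not the paper's:

```python
import numpy as np

def weight(r, d):
    # cubic spline weight (one common EFG form, assumed here)
    s = abs(r) / d
    if s <= 0.5:
        return 2/3 - 4*s**2 + 4*s**3
    if s <= 1.0:
        return 4/3 - 4*s + 4*s**2 - (4/3)*s**3
    return 0.0

def mls_shape_functions(x, nodes, d):
    """Standard MLS: Phi(x) = p(x)^T A(x)^{-1} B(x), linear basis [1, x]."""
    p = lambda t: np.array([1.0, t])
    w = np.array([weight(x - xi, d) for xi in nodes])
    P = np.array([p(xi) for xi in nodes]).T   # 2 x n basis-value matrix
    A = (P * w) @ P.T                         # moment matrix A = P W P^T
    B = P * w                                 # B = P W
    return p(x) @ np.linalg.solve(A, B)       # shape-function row vector

nodes = np.linspace(0.0, 1.0, 6)
phi = mls_shape_functions(0.37, nodes, d=0.5)
print(phi.sum())      # partition of unity: sums to 1
print(phi @ nodes)    # linear reproduction: recovers 0.37
```

For any point at which the moment matrix is nonsingular, the shape functions sum to one and reproduce every basis function exactly, which is the consistency property the MLS construction relies on.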

As mentioned above, (7) must be solved accurately at every point to retain the accuracy of the MLS approximation. In meshfree methods, this may be time consuming, and the desired accuracy may not be attainable when the moment matrix is ill-conditioned. To overcome these shortcomings, weighted orthogonal basis functions have been proposed to derive the improved moving least-squares (IMLS) approximation. Using weighted orthogonal basis functions in the MLS approximation, the moment matrix in (7) becomes diagonal, and the need to solve (7) is eliminated. Moreover, the condition number of the matrix can be improved to some extent, but a well-conditioned matrix cannot be obtained when the original moment matrix is ill-conditioned or singular; see the discussion in [16] and Section 3.

To diagonalize the moment matrix for an arbitrary computational point, the following orthogonality condition is imposed at any point where the approximation is to be computed: the function set is then termed a weighted orthogonal function set with weight functions about the given points.

For the given arbitrary basis functions , the orthogonal basis functions can be obtained by using the Schmidt orthogonalization procedure as follows: where

If the weighted orthogonal basis function set is used, then (7) becomes (16). From (16), we can see that the moment matrix is diagonal. If its diagonal entries are nonzero, then the coefficients can be obtained directly. Equations (17) and (1) then give the expression of the approximation function, in which the shape functions appear explicitly. The derivatives of these shape functions can also be obtained, where the index following a comma denotes a spatial derivative.
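
A numerical sketch of the weighted Schmidt process (the node positions and weights below are illustrative assumptions): orthogonalizing the monomial rows under the inner product ⟨f, g⟩ = Σᵢ wᵢ f(xᵢ) g(xᵢ) makes the moment matrix diagonal.

```python
import numpy as np

def weighted_gram_schmidt(P, w):
    """Rows of P hold basis values at the nodes; w holds positive weights.
    Returns Q whose rows are orthogonal under <f, g> = sum_i w_i f_i g_i."""
    Q = P.astype(float).copy()
    for k in range(Q.shape[0]):
        for j in range(k):
            denom = np.sum(w * Q[j] * Q[j])
            Q[k] -= (np.sum(w * Q[k] * Q[j]) / denom) * Q[j]
    return Q

x = np.array([0.0, 0.3, 0.5, 0.7, 1.0])    # node positions (illustrative)
w = np.array([0.2, 0.8, 1.0, 0.8, 0.2])    # positive weights (illustrative)
P = np.vstack([np.ones_like(x), x, x**2])  # monomial basis 1, x, x^2
Q = weighted_gram_schmidt(P, w)
A = (Q * w) @ Q.T                          # moment matrix in the new basis
print(np.round(A, 12))                     # off-diagonal entries vanish
```

Because the moment matrix in the orthogonalized basis is diagonal, inverting it reduces to dividing by its diagonal entries, which is the computational saving the IMLS approximation exploits.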

#### 3. A Modified Moving Least-Squares Approximation

In this section, a more comprehensive analysis of the use of orthogonal basis functions in the MLS approximation is given, although some aspects are described in [16]. The MLS approximation with orthogonal basis functions possesses some distinguished advantages, but a singularity problem of the moment matrix may still arise. A typical example in which the MLS approximation with orthogonal basis functions fails is presented, and the main reason for the singularity problem is analyzed. A modified MLS approximation is then proposed to overcome this problem.

##### 3.1. Aspects of Use of Orthogonalization

As discussed in [12, 13, 16, 17], the aim of the orthogonalization process is to enhance computational efficiency and remove inaccuracies in the MLS approximation by avoiding the inversion of an ill-conditioned matrix. With orthogonal basis functions, the moment matrix is diagonal; the matrix inversion at each computational point is thus avoided, which accelerates the computation, especially for large problems. Moreover, the condition number of the diagonal moment matrix can be improved greatly for many problems. But this process fails when the original moment matrix is singular (or severely ill-conditioned). That is to say, the source of the inaccuracy causing the ill-conditioning is not removed by this procedure.

It is shown in [12] that orthogonal basis functions can be obtained from the Pascal basis by the Schmidt orthogonalization. Thus, there is a linear relation between them; that is, each orthogonal basis function can be written as a combination of the original ones, or in matrix form with a transformation matrix mapping one basis to the other. The Schmidt orthogonalization makes this transformation matrix lower triangular with all diagonal entries equal to one, so its determinant equals one. Equation (24) makes the equality (25) hold, where the two moment matrices are defined in (6) and (17), respectively. Then we have (26). From (26), we can find the following.

(1) The two moment matrices are congruent and therefore have the same inertia; that is, they have the same numbers of positive, negative, and zero eigenvalues.

(2) Solving (16) is equivalent to solving (7) with the transformation matrix acting as a split preconditioner [22]. With a preconditioner, the condition number of the system matrix may be reduced; that is, the condition number of the moment matrix may be improved, from the viewpoint of matrix theory. This has been observed in many applications in which the original moment matrix is ill-conditioned.

(3) Since the transformation matrix is lower triangular with unit diagonal, its determinant equals one; therefore the determinants of the two moment matrices coincide. Consequently, if the original moment matrix is ill-conditioned, its determinant is close to zero, and the determinant of the diagonal moment matrix is close to zero as well.

(4) If both moment matrices are well defined, then the shape functions derived using either basis are identical; that is,

Moreover, the derivatives of these shape functions are also identical. From these aspects, we know that if the original moment matrix is not well defined (ill-conditioned or even singular), then a well-conditioned diagonal moment matrix cannot be obtained. The accuracy may nevertheless be improved in many applications, because the transformation matrix, viewed as a preconditioner, can reduce the condition number to some extent (see also Table 1). That is why many researchers believe that using orthogonal basis functions in the MLS approximation improves the accuracy of the solution [13–15].

The following example illustrates the above discussion. Consider a computational point whose influence domain contains eight nodes; see Figure 1. These eight nodes lie on parallel lines in both coordinate directions. A rectangular domain is chosen as the support domain of the computational point, with given radii in the two coordinate directions. The particular values of these radii are not significant; they simply ensure that all eight nodes lie in the influence domain of the point.

When the quadratic basis and the cubic spline weight function are used to construct the shape functions, we find that the moment matrix has rank 5. If the orthogonalization process is further applied, the orthogonalized moment matrix is obtained.

It is easy to see that the orthogonalized moment matrix is also rank deficient. This example shows that an ill-conditioned or singular linear system of equations can arise even when the orthogonalization procedure is used. If the moment matrix is singular, or there is a zero on its diagonal, then no shape functions can be derived. If some nodes are perturbed, for example, node 8 is perturbed to node 8′ (see Figure 1), then the moment matrix becomes nonsingular but may be ill-conditioned. It should be noted that node 8′ in Figure 1 is just one case; the node can in fact be perturbed in any direction. In the perturbed case, using the orthogonalization process may be more effective than the traditional MLS approximation, since the condition number decreases and no matrix inversion needs to be computed. Table 1 presents the minimum condition numbers of the two moment matrices, together with the values of the shape functions of nodes 1–8 and their sum.
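
The failure can be reproduced numerically. The exact coordinates are those of Figure 1, which we do not reproduce; the layout below, eight nodes on two parallel horizontal lines, is a plausible configuration that yields the reported rank of 5 for the quadratic basis:

```python
import numpy as np

# Eight nodes on two parallel horizontal lines (illustrative coordinates;
# the paper's exact layout is in its Figure 1).
xs = np.array([0.0, 1.0, 2.0, 3.0, 0.0, 1.0, 2.0, 3.0])
ys = np.array([0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0])

# Quadratic basis in 2D: 1, x, y, x^2, xy, y^2  (6 monomials, 8 nodes)
P = np.vstack([np.ones(8), xs, ys, xs**2, xs * ys, ys**2])

# On y in {0, 1} the row y^2 coincides with the row y, so P loses one rank.
print(np.linalg.matrix_rank(P))   # 5, not 6

# No positive diagonal weight matrix can repair this: A = P W P^T stays singular.
w = np.full(8, 0.4)               # illustrative positive weights
A = (P * w) @ P.T
print(np.linalg.matrix_rank(A))   # also 5
```

The rank deficiency is a property of the node geometry relative to the basis, not of the weights, which is why orthogonalization (a change of basis) cannot remove it.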

##### 3.2. The Cause of Singularity Problem

From the above discussion, we know that the MLS approximation with orthogonal basis functions may still fail even though the number of nodes in the support domain of a computational point is larger than the number of basis functions. The cause of the singularity problem was discussed in the original paper on the MLS approximation [11] and in a recent paper [18], where the authors also presented the optimal radius of the support domain of a computational point for polynomial bases. In this subsection, we discuss the cause of the singularity problem from the viewpoint of matrix theory.

Close examination shows that the singularity arises from rank deficiency of the basis matrix. In fact, the weight matrix is diagonal and positive definite for a given set of nodes in the support domain. If we further assume that the number of these nodes is larger than the number of monomial basis functions and that the basis matrix has full row rank, then the moment matrix is nonsingular from the viewpoint of matrix theory. The following proposition gives a condition for the nonsingularity of the moment matrix; its proof is given in the appendix.

Proposition 1. *The moment matrix is symmetric. Assume that a set of nodes is located in the support domain of a computational node; then the moment matrix is nonsingular provided that the basis matrix has full row rank, where the number of nodes in the support domain is no smaller than the number of basis functions.*
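
Proposition 1 can be spot-checked numerically: with a diagonal positive definite weight matrix W, the moment matrix A = P W Pᵀ is nonsingular exactly when P has full row rank (the matrices below are randomly generated illustrations):

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 4, 9                              # m basis functions, n nodes, n > m
W = np.diag(rng.uniform(0.1, 1.0, n))    # diagonal positive definite weights

# Full-row-rank P: A = P W P^T is nonsingular (in fact positive definite).
P_full = rng.standard_normal((m, n))
A_full = P_full @ W @ P_full.T
print(np.linalg.matrix_rank(A_full))     # m

# Rank-deficient P (last row duplicates the first): A becomes singular.
P_def = P_full.copy()
P_def[-1] = P_def[0]
A_def = P_def @ W @ P_def.T
print(np.linalg.matrix_rank(A_def))      # m - 1
```

This mirrors the proof in the appendix: the rank of the moment matrix equals the rank of the basis matrix whenever the weights are strictly positive.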

##### 3.3. Techniques to Avoid the Singularity

To avoid a singular moment matrix, some techniques have been proposed, such as the perturbation method [16] and choosing an appropriate support domain for each computational point [18]. With a perturbation method, the size of the disturbance cannot be determined beforehand: if the disturbance is too small, the condition number will be very large (see Table 1 for a degraded example); if it is too large, the moment matrix may be well defined, but the nodes are no longer the original nodes, which introduces large errors into the discrete linear systems. Choosing an appropriate support domain for a computational point is a good way to avoid the singularity problem, but it is efficient only for regular node distributions and special basis functions, such as monomial bases. For irregular node distributions and general basis functions, this approach is not very efficient. In this subsection, we discuss a new technique, which is similar to the matrix triangularization algorithm (MTA) [20].

As discussed above, the singularity of the moment matrix is caused by the rank deficiency of the basis matrix, which arises from an improper combination of nodes and basis functions. For a given set of nodes, the choice of basis functions is crucial. The main idea of this technique is therefore to automatically find the monomials that are responsible for the rank deficiency for a given set of nodes; these monomials must be treated specially to ensure a full-rank moment matrix. The technique should be automatic, should not add excessive computational cost, and should not be overly complex.

The details of the algorithm are as follows.

(1) Suppose that a set of monomials is chosen to form the basis functions and a set of nodes is selected in the support domain of an interpolation point. Using (6), the basis matrix can be obtained. Note that the rows of this matrix correspond to the monomials in the basis, and its columns correspond to the nodes in the support domain.

(2) The basis matrix is triangularized to determine its row rank.

(i) If the row rank equals the number of basis functions, the basis matrix has full row rank. From Proposition 1, the moment matrices are then generally well defined. In this case, the orthogonal basis functions can be obtained by the Schmidt orthogonalization process, giving a diagonal moment matrix whose inversion is immediate, so the shape functions are easily obtained. Even when the original moment matrix is ill-conditioned, the Schmidt orthogonalization process remains a good choice, and the shape functions can still be obtained easily (see the discussion in Section 3.1).

(ii) If the row rank is smaller, the basis matrix is rank deficient. The moment matrix is then singular, and the shape functions cannot be obtained directly. The reason for the rank deficiency is that some rows of the basis matrix are linearly dependent on the other rows, so these rows must be handled with a special technique, as follows.

(a) In the row triangularization process, the permutations of rows are recorded. From the diagonal elements of the triangularized matrix, we can identify which rows are dependent on the others. These rows, and the monomials they correspond to, have no effect in forming the shape functions.

(b) The Schmidt orthogonalization process is applied to the remaining basis functions. Since the basis functions correspond to the rows of the basis matrix, the dependent rows and the corresponding columns of the moment matrix are set to zero, except for the diagonal entries, which are set to one, and the corresponding rows of the right-hand-side matrix are set to zero.

(c) Through the above operations, we obtain a diagonal moment matrix that now has full rank.

Thus the shape functions can be obtained easily.

##### 3.4. Flowchart of the Technique

The flowchart of the technique is briefly given as follows.

(1) Determine the support domain of the evaluation point to obtain the basis matrix and the weight matrix.

(2) Transform the basis matrix to row trapezoidal form to obtain its row rank, recording the row exchanges.

(3) If the row rank equals the number of basis functions, go to the next step. Otherwise, determine which rows are dependent on the others; the remaining basis functions are used to construct the orthogonal basis functions. Set the dependent rows and corresponding columns of the moment matrix to zero, except for the diagonal entries, which are set to one, and set the corresponding rows of the right-hand-side matrix to zero.

(4) Compute the shape functions from (16)–(18).
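
The steps above can be sketched as follows. For brevity, the row triangularization and its bookkeeping are replaced by an incremental rank test with `numpy.linalg.matrix_rank`, and the reduced basis is used directly instead of zeroing rows and columns; all names are our own, so this is a sketch of the idea rather than the authors' implementation:

```python
import numpy as np

def modified_mls_shape_functions(px, nodes_P, w):
    """Modified MLS sketch: drop basis rows of nodes_P that are linearly
    dependent on earlier rows, then build shape functions as usual.
    px      : basis values p(x) at the evaluation point, shape (m,)
    nodes_P : basis values at the nodes, shape (m, n)
    w       : positive weights, shape (n,)"""
    keep = []
    for k in range(nodes_P.shape[0]):        # stands in for the row
        trial = nodes_P[keep + [k]]          # triangularization rank test
        if np.linalg.matrix_rank(trial) == len(keep) + 1:
            keep.append(k)                   # row k is independent: keep it
    Pk = nodes_P[keep]                       # reduced, full-row-rank basis
    A = (Pk * w) @ Pk.T                      # nonsingular by Proposition 1
    return px[keep] @ np.linalg.solve(A, Pk * w)

# Degenerate eight-node layout (illustrative): y^2 is dependent on y.
xs = np.array([0.0, 1.0, 2.0, 3.0, 0.0, 1.0, 2.0, 3.0])
ys = np.array([0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 1.0])
basis = lambda x, y: np.array([1.0, x, y, x**2, x*y, y**2])
P = np.array([basis(x, y) for x, y in zip(xs, ys)]).T
w = np.full(8, 0.5)                          # illustrative positive weights
phi = modified_mls_shape_functions(basis(1.5, 0.5), P, w)
print(phi.sum())                             # partition of unity: 1
```

On this degenerate layout the dependent monomial y² is dropped automatically, and the resulting shape functions still reproduce every retained basis function exactly.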

##### 3.5. Technique for an Eight-Node Approximation

To show how this technique works, the eight-node approximation of Figure 1 is presented here in detail. The eight nodes are included in the support domain of an interpolation point. The complete quadratic basis and the cubic spline weight function (5) are used, so there are six basis functions and eight nodes.

(1) Construct the basis matrix: using (6), the basis matrix is constructed. It is easy to see that its rows correspond to the monomials in the basis and its columns correspond to the eight nodes in the support domain.

(2) Row rank and row determination: using permutation matrices, the basis matrix is transformed to a row trapezoidal matrix to determine its row rank. The sixth row of the trapezoidal matrix is zero and the row rank is 5, so the basis matrix is rank deficient; by Proposition 1, the moment matrix is rank deficient as well. The sixth row is linearly dependent on the other five rows. Note that no rows are exchanged in the triangularization process, so the sixth row of the trapezoidal matrix corresponds to the sixth row of the original basis matrix, which is likewise dependent on the other five rows.

(3) Since the sixth row corresponds to the sixth basis function, the remaining basis functions are used to construct orthogonal basis functions by the Schmidt orthogonalization process. The sixth row and sixth column of the moment matrix are then set to zero, except for the diagonal entry, which is set to one, and the sixth row of the right-hand-side matrix is set to zero.

(4) Compute the MLS shape functions from (16)–(18): after the above steps, the moment matrix has full rank, and the shape functions are obtained easily. They are listed in Table 2.

#### 4. Element-Free Galerkin Formulation

Consider the 2D problem of the deformation of a linear elastic medium over an undeformed domain enclosed by its boundary, where the stress tensor corresponds to the displacement field and a body force vector acts in the domain, with the following boundary conditions: prescribed tractions on the traction boundary and prescribed displacements on the displacement boundary, the unit outward normal to the boundary appearing in the traction condition.

Using the standard principle of minimum potential energy for (37)-(38), that is, finding the displacement field such that the potential energy
is stationary, where the trial functions belong to the appropriate Sobolev space and the strain and stress vectors are related through the stress-strain matrix, the MLS approximation (10) or (21) is used to approximate the displacements in the Galerkin procedure. Then we get
Then substituting (40) into (39) leads to the following total potential energy in the matrix form, as
and invoking stationarity results in the following linear system:
in which the stiffness matrix is assembled from nodal submatrices and the right-hand-side vector from nodal subvectors. These matrices and vectors are defined by
where
In the sequel, and unless mentioned otherwise, the EFG method indicates the element-free Galerkin method with the original MLS approximation (10), and the MEFG method stands for the element-free Galerkin method with the modified MLS approximation (21). These methods employ the moving least-squares (MLS) approximation, or its modified form, to approximate the trial functions. In general, the resulting shape functions do not have the nodal interpolation property of the FEM; that is, the shape function of one node evaluated at another node does not equal the Kronecker delta, unless the weight functions used in the MLS approximation are singular at the nodal points [11]. Therefore, essential boundary conditions must be imposed with special techniques [23, 24]. In this paper, we use the penalty method [23].
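
The penalty enforcement of an essential boundary condition can be sketched on a small generic system (the stiffness matrix, load vector, and penalty value below are illustrative assumptions, not taken from the paper):

```python
import numpy as np

# A small SPD "stiffness" matrix and load vector (illustrative).
K = np.array([[ 4.0, -1.0,  0.0],
              [-1.0,  4.0, -1.0],
              [ 0.0, -1.0,  4.0]])
f = np.array([1.0, 2.0, 3.0])

# Enforce u[0] = 2.0 by the penalty method: K <- K + alpha * e0 e0^T,
# f <- f + alpha * 2.0 * e0, for a large penalty parameter alpha.
alpha = 1.0e8
e0 = np.zeros(3); e0[0] = 1.0
K_pen = K + alpha * np.outer(e0, e0)
f_pen = f + alpha * 2.0 * e0

u = np.linalg.solve(K_pen, f_pen)
print(u[0])   # approximately 2.0
```

The constraint is satisfied only approximately: too small a penalty under-enforces the boundary condition, while too large a penalty degrades the conditioning of the system, which is the well-known trade-off of this approach.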

#### 5. Numerical Experiments

In this section, two numerical examples are presented to show the efficiency of the EFG method with the modified MLS approximation studied in this paper. In both examples, the quadratic basis and the cubic spline weight function are used. All runs are performed in MATLAB 7.0 on an Intel Pentium 4 (2 GB RAM) Windows XP system.

##### 5.1. Cantilever Beam

The first example is a cantilever beam subjected to a parabolic traction at the free end (see Figure 2). The beam has unit thickness, so a plane stress problem is considered. The closed-form solution is available for the parabolic end traction, in which the moment of inertia of the beam enters the displacement expressions. The stresses corresponding to the above displacements follow from the closed-form solution. Fixed values of the beam parameters are used in the computation.
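
The closed-form fields referenced above are the standard plane-stress cantilever solution of Timoshenko and Goodier. Under one common sign convention (an assumption here, as conventions vary), the stresses can be written down and spot-checked for equilibrium, ∂σₓₓ/∂x + ∂τₓᵧ/∂y = 0; the parameter values below are illustrative, not the paper's:

```python
import numpy as np

P, L, D = 1000.0, 48.0, 12.0     # load, length, height (illustrative)
I = D**3 / 12.0                  # second moment of area, unit thickness

sigma_xx = lambda x, y: -P * (L - x) * y / I                 # bending stress
sigma_yy = lambda x, y: 0.0 * x * y                          # zero in this solution
tau_xy   = lambda x, y: P / (2.0 * I) * (D**2 / 4.0 - y**2)  # parabolic shear

# Equilibrium check by central finite differences at interior points.
h = 1e-5
xg, yg = np.meshgrid(np.linspace(5, 40, 8), np.linspace(-5, 5, 8))
res1 = (sigma_xx(xg + h, yg) - sigma_xx(xg - h, yg)) / (2*h) \
     + (tau_xy(xg, yg + h) - tau_xy(xg, yg - h)) / (2*h)
res2 = (tau_xy(xg + h, yg) - tau_xy(xg - h, yg)) / (2*h) \
     + (sigma_yy(xg, yg + h) - sigma_yy(xg, yg - h)) / (2*h)
print(np.max(np.abs(res1)), np.max(np.abs(res2)))  # both near zero
```

The shear traction is parabolic over the end section and vanishes on the top and bottom faces, which is why this benchmark is so widely used for meshfree convergence studies.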

To evaluate the accuracy of the proposed method, the following error norms are used, where the 2-norm of vectors is employed, the displacement errors compare the approximate and exact displacements, and the stress errors compare the approximate and exact stresses.
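
One common discrete form of such a relative error norm (assumed here; the paper's exact definition is given in its equations) can be sketched as:

```python
import numpy as np

def relative_l2_error(u_num, u_exact):
    """Relative discrete 2-norm error ||u_num - u_exact|| / ||u_exact||
    (one common form; check against the paper's own definition)."""
    return np.linalg.norm(u_num - u_exact) / np.linalg.norm(u_exact)

# Illustrative check: a small perturbation gives a small relative error.
x = np.linspace(0.0, np.pi, 50)
u_ex = np.sin(x)
u_h = u_ex + 1e-3 * np.cos(x)
print(relative_l2_error(u_h, u_ex))   # on the order of 1e-3
```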

To show the effectiveness of the EFG method with the modified MLS approximation studied in this paper, a typical regular node distribution is shown in Figure 3. Background cells are used to evaluate the integrals, with Gauss points employed in each cell. A rectangular domain is chosen as the support domain of each computational point, and two choices of support-domain radius are adopted. In the first case, a singularity problem of the moment matrix occurs; the EFG method therefore fails, while the MEFG method still obtains excellent agreement with the analytical solution; see Figures 4 and 5. In the second case, no singularity problem appears; that is, the moment matrix is always well defined. Thus both the EFG method and the MEFG method obtain excellent agreement with the analytical solution; see also Figures 4 and 5. It should also be noted that, in the second case, the results of the EFG method and the MEFG method are almost the same, which confirms the theoretical analysis in Section 3.1.


The convergence of the MEFG method and its accuracy for irregular node distributions are also studied. Four regular node distributions with 52, 175, 481, and 784 nodes and four irregular node distributions are considered. Figure 3 also plots a typical irregular node distribution (with 481 nodes). For integration, background cells are used for each problem. The radii of the support domain are fixed in the two coordinate directions. The computational results are listed in Table 3, and the convergence curves for displacements and stresses with different node distributions are plotted in Figure 6, where the abscissa is the maximum size of the node arrangement. From Table 3 and Figure 6, we can see that the MEFG method converges, and that convergence for regular node distributions is better than for irregular ones.

##### 5.2. Poisson Equation

Consider the Poisson equation with boundary conditions, whose analytical solution is given by

The EFG formulation of the Poisson equation is easily obtained by the same procedure as (39)–(41). A typical regular node discretization with background cells is shown in Figure 7. Two choices of support-domain radius are used to show the effectiveness of the MEFG method. Figures 8 and 9 plot the numerical and analytical solutions along two cross sections. From these figures, we can see that the EFG method fails in the first case, because a singularity problem of the moment matrix occurs when the original MLS approximation is used, while the MEFG method still obtains good numerical results. In the second case, both the EFG method and the MEFG method obtain excellent agreement with the analytical solution, and their results are almost the same.

For the convergence studies, four regular node distributions with 121, 441, 961, and 1681 nodes are employed. The two choices of support-domain radius studied above are also considered here. The computational results are listed in Table 4, which reports the relative errors for the nonsingular case and for the singular case. The convergence curves are depicted in Figure 10. From Table 4 and Figure 10, we can see that the MEFG method again converges.

#### 6. Conclusion and Outlook

In this paper, aspects of the MLS approximation with orthogonal basis functions, which is used to construct shape functions in the element-free Galerkin method, are analyzed in detail. It is shown that this approach has some advantages: it avoids computing the inversion of the moment matrix, and the condition number of the moment matrix constructed with orthogonal basis functions can be improved greatly for many problems. However, it does not by itself overcome the singularity problem of the moment matrix. The cause of the singularity problem is studied, and a novel technique based on a matrix triangularization process is proposed to solve it in detail. Our study relies on monomial basis functions, but it can be extended to arbitrary basis functions. Numerical examples illustrate the effectiveness of the new approach. Although the proposed method has been investigated within the EFG method, it is also suitable for other meshfree methods based on the MLS approximation.

#### Appendix

In this appendix, we prove Proposition 1.

*Proof. *The first part of Proposition 1 is easy to verify, since the weight matrix is diagonal. Moreover, from the definition of the weight function, we know that if a set of nodes lies in the support domain of a computational node, then the corresponding weights are positive. Thus, the weight matrix is positive definite. Because
we obtain
If the number of nodes is smaller than the number of basis functions, then the rank of the moment matrix is at most the number of nodes, which is less than its dimension; thus the moment matrix is singular. Otherwise, we show that the moment matrix is positive semidefinite, and if we further assume that the basis matrix has full row rank, then the moment matrix is positive definite and hence nonsingular. In fact, for a given nonzero vector
since the weight matrix is diagonal and positive definite. The equality in (A.3) holds if and only if the vector is annihilated by the transposed basis matrix; this cannot happen when the transposed basis matrix has full column rank and the vector is nonzero. Thus the quadratic form is strictly positive for every nonzero vector, and the moment matrix is positive definite.

Thus, we complete the proof.

#### Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

#### Acknowledgments

The authors sincerely thank the referees for their valuable comments and constructive suggestions, which greatly improved the quality of this paper. This work is supported by the National Natural Science Foundation of China under Grants nos. 11172192 and 11301290. The research of Jun-Liang Dong is supported by the Basic Development Foundation for Mathematics and Statistics (no. 006000542213501), Beijing University of Technology, China.