Abstract

A gradient recovery operator based on projecting the discrete gradient onto the standard finite element space is considered. We use an oblique projection, where the test and trial spaces are different, and the bases of these two spaces form a biorthogonal system. Biorthogonality allows efficient computation of the recovery operator. We analyze the approximation properties of the gradient recovery operator. Numerical results are presented in the two-dimensional case.

1. Introduction

Gradient reconstruction is a popular technique for developing reliable a posteriori error estimators when approximating the solution of partial differential equations using adaptive finite element methods [16]. The main idea of gradient recovery error estimators is to postprocess the computed gradient and to show that the postprocessed gradient converges to the true gradient at a better rate than the computed gradient. There are many approaches to gradient postprocessing. The most popular techniques are local least-squares fitting or patch recovery [1, 2, 4], weighted averaging [4], local or global $L^2$-projection [4, 5, 7], and polynomial preserving recovery [8].

Recently we presented a gradient reconstruction operator based on an oblique projection [9], where the global $L^2$-projection is replaced by an oblique projection in order to gain computational efficiency. The oblique projection operator is constructed by using a biorthogonal system. In fact, for linear finite elements on simplicial meshes, this approach reproduces the well-known gradient reconstruction scheme based on weighted averaging [4, 6, 10]. We proved that the error estimator based on the oblique projection is asymptotically exact on mildly unstructured meshes, using the fact that the error estimator based on the $L^2$-projection is also asymptotically exact for such meshes [4, 5, 7, 11].

In this paper, we aim to analyze the approximation properties of the gradient recovered in one dimension using an oblique projection. Since the oblique projection approach reproduces the weighted average gradient recovery of linear finite elements [4, 6], this construction is also useful for extending weighted average gradient recovery to quadrilateral and hexahedral meshes.

Let $I = [a, b]$ with $a, b \in \mathbb{R}$ and $a < b$. Let $a = x_0 < x_1 < \cdots < x_n = b$ be a partition of the interval $I$. We define the interior of the grid, denoted by $X_h^0$, as $X_h^0 = \{x_1, \ldots, x_{n-1}\}$. We also define the set of intervals in the partition as $\mathcal{T}_h = \{I_1, \ldots, I_n\}$, where $I_k = [x_{k-1}, x_k]$ has length $h_k = x_k - x_{k-1}$. Two sets $\mathcal{N} = \{0, 1, \ldots, n\}$ and $\mathcal{N}_0 = \{1, \ldots, n-1\}$ of indices are also defined, corresponding to all grid points and to the interior grid points, respectively. A piecewise linear interpolant of a continuous function $u$ is written as $I_h u = \sum_{i=0}^{n} u_i\,\phi_i$ with $u_i = u(x_i)$, where $\phi_i$ is the standard hat function associated with the point $x_i$, $i \in \mathcal{N}$. We define a discrete space $S_h = \operatorname{span}\{\phi_0, \ldots, \phi_n\}$. The linear interpolant of $u$ is the continuous function $I_h u$ defined above. However, if we compute the derivative $(I_h u)'$ of this interpolant, the resulting function will not be continuous. To make the derivative continuous we project the derivative of the interpolant, $(I_h u)'$, onto the discrete space $S_h$.

There are two different types of projection. One is an orthogonal projection and the other is an oblique projection. The orthogonal projection operator, $P_h$, that projects $(I_h u)'$ onto $S_h$ is to find a $P_h (I_h u)' \in S_h$ that satisfies
$$\int_a^b P_h (I_h u)'\,\phi_j\,dx = \int_a^b (I_h u)'\,\phi_j\,dx, \quad j \in \mathcal{N}. \quad (5)$$
Since $P_h (I_h u)' \in S_h$, we can represent it as an $(n+1)$-dimensional vector $\vec{g} = (g_0, \ldots, g_n)^T$ of coefficients with respect to the basis $\{\phi_i\}$: $P_h (I_h u)' = \sum_{i=0}^{n} g_i\,\phi_i$. Now the requirement given in (5) is equivalent to a linear system $M\vec{g} = \vec{b}$, where $M$ is a mass matrix with entries $M_{ji} = \int_a^b \phi_i\,\phi_j\,dx$, and $b_j = \int_a^b (I_h u)'\,\phi_j\,dx$. Here the mass matrix $M$ is tridiagonal. We can reduce computation time greatly if we have a diagonal mass matrix. This can be done if we use a suitable oblique projection instead of an orthogonal projection.

We consider the projection $Q_h$ which is defined as the problem of finding $Q_h (I_h u)' \in S_h$ such that
$$\int_a^b Q_h (I_h u)'\,\mu_j\,dx = \int_a^b (I_h u)'\,\mu_j\,dx, \quad j \in \mathcal{N},$$
where $M_h = \operatorname{span}\{\mu_0, \ldots, \mu_n\}$ is another piecewise polynomial space, not orthogonal to $S_h$, with $\dim M_h = \dim S_h$; see [12]. In fact, the projection operator $Q_h$ is well-defined due to the following stability condition. There is a constant $C$ independent of the mesh-size such that [9, 13]
$$\|Q_h w\|_{L^2(I)} \le C\,\|w\|_{L^2(I)} \quad \text{for all } w \in L^2(I).$$
In order that the mass matrix be diagonal we need to define a new set of basis functions $\{\mu_0, \ldots, \mu_n\}$ for $M_h$ that are biorthogonal to the standard hat basis functions (Figure 1) we used previously. This biorthogonality relation is defined as
$$\int_a^b \mu_i\,\phi_j\,dx = c_j\,\delta_{ij}, \quad i, j \in \mathcal{N},$$
where $\delta_{ij}$ is the Kronecker delta function, $\delta_{ij} = 1$ if $i = j$ and $\delta_{ij} = 0$ otherwise, and $c_j > 0$ is a positive scaling factor. The basis functions for $M_h$ are piecewise linear with respect to the same grid and are constructed locally: on the reference element $[0, 1]$ the local basis functions dual to $\{1 - \hat{x}, \hat{x}\}$ are $\{2 - 3\hat{x}, 3\hat{x} - 1\}$. The endpoint basis functions $\mu_0$ and $\mu_n$ are simply given by the local dual basis functions on $I_1$ and $I_n$, respectively, and, for $i \in \mathcal{N}_0$, $\mu_i$ is obtained by gluing the local dual basis functions associated with $x_i$ on the two adjacent elements $I_i$ and $I_{i+1}$. With this choice, $c_0 = \frac{h_1}{2}$, $c_n = \frac{h_n}{2}$, and $c_i = \frac{h_i + h_{i+1}}{2}$ for $i \in \mathcal{N}_0$.

By using an oblique projection the mass matrix will be diagonal. We let the diagonal mass matrix be $D = \operatorname{diag}(c_0, \ldots, c_n)$, so that our system is $D\vec{g} = \vec{b}$ with $b_j = \int_a^b (I_h u)'\,\mu_j\,dx$. The values $g_i$ are our estimates of the gradient of $u$ at the point $x_i$. So, we estimate the gradient by finding $G_h u \in S_h$, where
$$G_h u = \sum_{i=0}^{n} g_i\,\phi_i, \qquad g_i = \frac{1}{c_i}\int_a^b (I_h u)'\,\mu_i\,dx.$$
We want to calculate the error in this approximation and find out when $g_i$ approximates $u'(x_i)$ exactly for each $i \in \mathcal{N}$. As in [14, 15] we want to see if $g_i$ approximates $u'(x_i)$ exactly when $u$ is a quadratic polynomial.
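For concreteness, the following is a minimal Python sketch of the recovery just described. It assumes the element-wise dual basis above, for which the integral of $\mu_i$ over an adjacent element $I_k$ equals $h_k/2$; the function name recover_gradient_1d is ours and not part of the original implementation.

```python
import numpy as np

def recover_gradient_1d(x, u):
    """Nodal values g_i of the recovered gradient G_h u (a sketch).

    Assumes the element-wise biorthogonal (dual) basis described above, so
    that the mass matrix is the diagonal D = diag(c_0, ..., c_n).
    x : increasing array of grid points x_0 < ... < x_n
    u : nodal values u(x_i)
    """
    x = np.asarray(x, dtype=float)
    u = np.asarray(u, dtype=float)
    h = np.diff(x)              # element lengths h_k
    s = np.diff(u) / h          # slopes of (I_h u)' on each element

    n = len(x)
    b = np.zeros(n)             # b_i = int (I_h u)' mu_i dx
    c = np.zeros(n)             # c_i = int phi_i mu_i dx (diagonal of D)
    b[0], c[0] = 0.5 * s[0] * h[0], 0.5 * h[0]            # mu_0 lives on I_1 only
    b[-1], c[-1] = 0.5 * s[-1] * h[-1], 0.5 * h[-1]       # mu_n lives on I_n only
    b[1:-1] = 0.5 * (s[:-1] * h[:-1] + s[1:] * h[1:])     # two adjacent elements
    c[1:-1] = 0.5 * (h[:-1] + h[1:])
    return b / c                # g_i = b_i / c_i
```

At an interior node the quotient simplifies to $(u_{i+1} - u_{i-1})/(x_{i+1} - x_{i-1})$, the central difference appearing in Theorem 1 below, and at the two endpoints it reduces to the one-sided slopes of the adjacent elements.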

2. Superconvergence

Theorem 1. Let $u$ be a continuous function on $I$. Then one has
$$g_i = \frac{u_{i+1} - u_{i-1}}{h_i + h_{i+1}} \quad \text{for } i \in \mathcal{N}_0, \qquad g_0 = \frac{u_1 - u_0}{h_1}, \qquad g_n = \frac{u_n - u_{n-1}}{h_n}.$$

Proof. We note that $(I_h u)'|_{I_k} = \frac{u_k - u_{k-1}}{h_k}$ for $k \in \{1, \ldots, n\}$. Now, we calculate $\int_a^b (I_h u)'\,\mu_i\,dx$ for $i \in \mathcal{N}_0$:
$$\int_a^b (I_h u)'\,\mu_i\,dx = \frac{u_i - u_{i-1}}{h_i}\int_{I_i}\mu_i\,dx + \frac{u_{i+1} - u_i}{h_{i+1}}\int_{I_{i+1}}\mu_i\,dx = \frac{u_{i+1} - u_{i-1}}{2},$$
where $\int_{I_i}\mu_i\,dx = \frac{h_i}{2}$ and $\int_{I_{i+1}}\mu_i\,dx = \frac{h_{i+1}}{2}$.
Therefore, $g_i = \frac{1}{c_i}\int_a^b (I_h u)'\,\mu_i\,dx = \frac{u_{i+1} - u_{i-1}}{h_i + h_{i+1}}$, since $c_i = \frac{h_i + h_{i+1}}{2}$. Now we look at the endpoints. We note that $\mu_0$ is supported on $I_1$ and $\mu_n$ on $I_n$, with $\int_{I_1}\mu_0\,dx = c_0 = \frac{h_1}{2}$ and $\int_{I_n}\mu_n\,dx = c_n = \frac{h_n}{2}$. Computing as before we get $g_0 = \frac{u_1 - u_0}{h_1}$ and $g_n = \frac{u_n - u_{n-1}}{h_n}$.

We have the following superconvergence result in the $L^2$-norm. It is proved as in [4, 16].

Theorem 2. Let $u \in W^{3,\infty}(I)$, $h_k = x_k - x_{k-1}$ for $k \in \{1, \ldots, n\}$, $h = \max_{1 \le k \le n} h_k$, and $\alpha > 0$. If the point distribution satisfies $|h_{k+1} - h_k| \le C\,h^{1+\alpha}$ for $k \in \mathcal{N}_0$, then one has the estimate
$$\|u' - G_h u\|_{L^2(I_h^0)} \le C\,h^{1 + \min(\alpha, 1)}\,\|u\|_{W^{3,\infty}(I)}, \quad (23)$$
where, for the interior part of the grid, $I_h^0 = \bigcup_{k=2}^{n-1} I_k$ denotes the union of the elements not touching the boundary of $I$. If one measures the error in the whole domain $I$, one has
$$\|u' - G_h u\|_{L^2(I)} \le C\,h^{3/2}\,\|u\|_{W^{3,\infty}(I)}.$$

Proof. Since the gradient of a quadratic function is exactly reproduced in the interior of the domain, the first estimate (23) follows from the Bramble-Hilbert lemma. The second estimate is proved as in [4, 16].
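The two estimates of Theorem 2 can be illustrated numerically in one dimension with the hypothetical recover_gradient_1d sketch from Section 1. The quadrature rule, the uniform grids, and the test function $u(x) = \sin x$ below are our illustrative choices: the error over the elements away from the boundary decays quadratically, while the error over the whole interval is dominated by the one-sided endpoint values and decays like $h^{3/2}$.

```python
import numpy as np
# reuses recover_gradient_1d from the sketch in Section 1

def l2_gradient_error(x, u, du, elements=None):
    """||u' - G_h u||_{L^2} over the given elements (all elements if None),
    using a 2-point Gauss rule on each element."""
    g = recover_gradient_1d(x, u(np.asarray(x)))
    h = np.diff(x)
    if elements is None:
        elements = range(len(h))
    q = np.array([-1.0, 1.0]) / np.sqrt(3.0)           # Gauss points on [-1, 1]
    err2 = 0.0
    for k in elements:
        mid, half = 0.5 * (x[k] + x[k + 1]), 0.5 * h[k]
        for t in mid + half * q:
            lam = (t - x[k]) / h[k]                    # G_h u is linear on each element
            Gh = (1.0 - lam) * g[k] + lam * g[k + 1]
            err2 += half * (du(t) - Gh) ** 2           # Gauss weights are 1 on [-1, 1]
    return np.sqrt(err2)

for n in (16, 32, 64, 128):                            # uniform grids on [0, 1]
    x = np.linspace(0.0, 1.0, n + 1)
    e_all = l2_gradient_error(x, np.sin, np.cos)                    # whole interval
    e_int = l2_gradient_error(x, np.sin, np.cos, range(1, n - 1))   # away from the boundary
    print(n, e_all, e_int)   # e_all decays like h^(3/2), e_int like h^2
```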

For tensor product meshes in two or three dimensions satisfying the above mesh condition, this theorem has an easy extension. We discuss the simple extension to the two-dimensional case in Section 3.

2.1. Application to Quadratic Functions

Corollary 3. Let $u(x) = a x^2 + b x + c$ be a quadratic polynomial on $I$. Then $g_i$ reproduces $u'(x_i) + a\,e_i$ exactly for all $i \in \mathcal{N}_0$, where $e_i = h_{i+1} - h_i$.

Proof. We use the result of Theorem 1 to get
$$g_i = \frac{u(x_{i+1}) - u(x_{i-1})}{x_{i+1} - x_{i-1}} = a\,(x_{i+1} + x_{i-1}) + b = 2 a x_i + b + a\,(h_{i+1} - h_i).$$
On the other hand, $u'(x_i) = 2 a x_i + b$. So, $g_i$ reproduces $u'(x_i) + a\,e_i$ exactly for $i \in \mathcal{N}_0$. Now, for $i = 0$ and $i = n$, we have $g_0 = a\,(x_0 + x_1) + b$ and $g_n = a\,(x_{n-1} + x_n) + b$. Since $u'\!\left(\frac{x_0 + x_1}{2}\right) = a\,(x_0 + x_1) + b$ and $u'\!\left(\frac{x_{n-1} + x_n}{2}\right) = a\,(x_{n-1} + x_n) + b$, we have $g_0$ and $g_n$ reproduce $u'\!\left(\frac{x_0 + x_1}{2}\right)$ and $u'\!\left(\frac{x_{n-1} + x_n}{2}\right)$, respectively, exactly.
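As a quick symbolic sanity check of this computation, one can verify with SymPy that the central-difference form of $g_i$ from Theorem 1, applied to a general quadratic, produces exactly the correction term $a\,e_i$; the variable names below are ours.

```python
import sympy as sp

a, b, c = sp.symbols("a b c", real=True)
xi, hi, hip1 = sp.symbols("x_i h_i h_ip1", real=True)       # h_ip1 stands for h_{i+1}
u = lambda t: a * t**2 + b * t + c
g_i = sp.cancel((u(xi + hip1) - u(xi - hi)) / (hi + hip1))   # central difference of Theorem 1
print(sp.simplify(g_i - (2 * a * xi + b)))                   # a*(h_ip1 - h_i), i.e. the term a*e_i
```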

Remark 4 (uniform grid). Let $x_0 < x_1 < \cdots < x_n$ be a uniform grid on the interval $I$ so that $h_i = h$ for all $i$, where $h$ is some constant, called the stepsize. We note that if our grid is uniform, then $e_i = h_{i+1} - h_i = 0$ for $i \in \mathcal{N}_0$. So, our gradient recovery operator will reproduce the exact gradient of any quadratic function on the interior of a uniform grid. We cannot recover the gradients at the endpoints exactly, however, since $g_0 = u'\!\left(x_0 + \frac{h}{2}\right) \ne u'(x_0)$ and $g_n = u'\!\left(x_n - \frac{h}{2}\right) \ne u'(x_n)$.

Corollary 5. Let $u(x) = x^2$ on $I$, and let the grid be uniform with step-size $h$. Then $|u'(x_i) - g_i| = h$ for $i \in \{0, n\}$ (i.e., for the endpoints of the grid).

Proof. We will start with the case $i = 0$ (i.e., the left endpoint). We know from Corollary 3 that $g_0 = x_0 + x_1$. Since our grid is uniform with stepsize $h$, this simplifies to $g_0 = 2 x_0 + h$. We also have $u'(x_0) = 2 x_0$. Thus $|u'(x_0) - g_0| = h$. The case $i = n$ (i.e., the right endpoint) is proven similarly.
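A small numerical check of Remark 4 and Corollary 5, again using the hypothetical recover_gradient_1d sketch from Section 1: on a uniform grid the interior values of a quadratic are recovered exactly (up to rounding), while both endpoint errors equal $h$.

```python
import numpy as np
# reuses recover_gradient_1d from the sketch in Section 1

x = np.linspace(0.0, 1.0, 11)       # uniform grid with h = 0.1
g = recover_gradient_1d(x, x**2)    # recover the gradient of u(x) = x^2

print(np.max(np.abs(g[1:-1] - 2.0 * x[1:-1])))            # ~1e-16: exact at interior nodes
print(abs(g[0] - 2.0 * x[0]), abs(g[-1] - 2.0 * x[-1]))   # both equal h = 0.1 at the endpoints
```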

For a nonuniform grid (Figure 2), we cannot simplify our approximations using the stepsize $h$, since the spacing between adjacent nodes is not always equal. We did not make any assumption about the uniformity of the grid in Corollary 3. Thus $e_i$, $i \in \mathcal{N}_0$, is not zero for a nonuniform grid. This is estimated in the following corollary.

Corollary 6. Let $u(x) = a x^2 + b x + c$ be a quadratic polynomial on $I$. Then,
$$|u'(x_i) - g_i| = |a|\,|h_{i+1} - h_i| \quad \text{for } i \in \mathcal{N}_0.$$

Remark 7. For $i \in \mathcal{N}_0$ let $|h_{i+1} - h_i| \le C\,h^{1+\alpha}$ with $\alpha > 0$. Then we have $|u'(x_i) - g_i| \le C\,|a|\,h^{1+\alpha}$. We still get superapproximation of the gradient recovery when the grid is not uniform and $\alpha > 0$.
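The behaviour described in Corollary 6 and Remark 7 can be checked on a nonuniform grid; the grid and the quadratic coefficients below are illustrative choices of ours, and recover_gradient_1d is the sketch from Section 1.

```python
import numpy as np
# reuses recover_gradient_1d from the sketch in Section 1

x = np.array([0.0, 0.15, 0.3, 0.55, 0.7, 1.0])    # an arbitrary nonuniform grid
a, b, c = 3.0, -1.0, 0.5                          # illustrative quadratic coefficients
g = recover_gradient_1d(x, a * x**2 + b * x + c)

h = np.diff(x)
err = np.abs(g[1:-1] - (2.0 * a * x[1:-1] + b))   # interior nodal errors
print(np.max(np.abs(err - abs(a) * np.abs(h[1:] - h[:-1]))))   # ~0: matches |a|*|h_{i+1} - h_i|
```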

2.2. Application to Cubic Functions

Corollary 8. Let $u(x) = x^3$ on $I$. Then, for all $i \in \mathcal{N}_0$,
$$g_i = u'(x_i) + 3 x_i\,e_i + \left(h_i^2 - h_i h_{i+1} + h_{i+1}^2\right),$$
where $e_i$ is defined as in Corollary 3. Similarly, for $i \in \{0, n\}$ one has
$$g_0 = u'(x_0) + 3 x_0 h_1 + h_1^2, \qquad g_n = u'(x_n) - 3 x_n h_n + h_n^2.$$

Proof. The proof of this corollary is similar to that of Corollary 3.
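The same symbolic check as in Section 2.1 confirms the interior formula for the cubic case; again the variable names are ours.

```python
import sympy as sp

xi, hi, hip1 = sp.symbols("x_i h_i h_ip1", real=True)            # h_ip1 stands for h_{i+1}
g_i = sp.cancel(((xi + hip1)**3 - (xi - hi)**3) / (hi + hip1))   # interior value for u(x) = x^3
print(sp.expand(g_i - 3 * xi**2))   # 3*x_i*(h_ip1 - h_i) + h_i**2 - h_i*h_ip1 + h_ip1**2
```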

3. Extension to the Two-Dimensional Case

Let $\mathcal{T}_h$ be a tensor product mesh of the two-dimensional rectangular domain $\Omega$ having the mesh-size $h$, where the elements of $\mathcal{T}_h$ are rectangles. Here we have $\bar{\Omega} = \bigcup_{T \in \mathcal{T}_h} T$. Let $\mathcal{T}_h^0$ be the collection of all those rectangles not touching the boundary of $\Omega$, and $\bar{\Omega}_h = \bigcup_{T \in \mathcal{T}_h^0} T$. Let $Q_1(T)$ be the space of bilinear polynomials on $T \in \mathcal{T}_h$. Then the standard tensor product finite element space is defined as
$$S_h = \{v \in C(\bar{\Omega}) : v|_T \in Q_1(T) \text{ for all } T \in \mathcal{T}_h\}.$$
We now use the property of the gradient recovery operator defined above by means of a biorthogonal system as in Theorem 2.

Let $I_h u \in S_h$ be the Lagrange interpolant of $u$. We consider a patch as shown in Figure 3 consisting of four elements $T_1$, $T_2$, $T_3$, and $T_4$ sharing an inner vertex, where the nodal values of $u$ on the patch are indicated. Let $h_1$ and $h_2$ be the lengths of the elements $T_1$ and $T_2$, and $k_1$ and $k_2$ the heights of $T_1$ and $T_3$, respectively, as shown in Figure 3.

Then for the rectangular mesh as shown in Figure 3 we have, for the inner vertex $z = (x_i, y_j)$ of the patch,
$$(G_h u)_x(x_i, y_j) = \frac{u(x_{i+1}, y_j) - u(x_{i-1}, y_j)}{h_1 + h_2}, \qquad (G_h u)_y(x_i, y_j) = \frac{u(x_i, y_{j+1}) - u(x_i, y_{j-1})}{k_1 + k_2},$$
where $h_1 + h_2 = x_{i+1} - x_{i-1}$ and $k_1 + k_2 = y_{j+1} - y_{j-1}$. Thus using Taylor's expansion as in [4, 16] we obtain
$$\left|\nabla u(x_i, y_j) - (G_h u)(x_i, y_j)\right| \le C\,h^{1 + \min(\alpha, 1)}\,\|u\|_{W^{3,\infty}(\Omega)}$$
if the tensor product mesh satisfies
$$|h_1 - h_2| \le C\,h^{1+\alpha}, \qquad |k_1 - k_2| \le C\,h^{1+\alpha}, \qquad \alpha > 0.$$
Hence if the mesh satisfies the above condition, we have
$$\|\nabla u - G_h u\|_{L^2(\Omega_h)} \le C\,h^{1 + \min(\alpha, 1)}\,\|u\|_{W^{3,\infty}(\Omega)}.$$
If we compute the $L^2$-norm of the error on $\Omega$ we have the estimate
$$\|\nabla u - G_h u\|_{L^2(\Omega)} \le C\,h^{3/2}\,\|u\|_{W^{3,\infty}(\Omega)}.$$
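A minimal sketch of the two-dimensional recovery under the tensor-product construction described above: because the dual basis factorizes as $\mu_i(x)\,\nu_j(y)$, biorthogonality in one variable decouples the oblique projection, so each partial derivative is recovered by the one-dimensional procedure applied along grid lines. The function reuses the hypothetical recover_gradient_1d from Section 1 and is our illustration, not the original implementation.

```python
import numpy as np
# reuses recover_gradient_1d from the sketch in Section 1

def recover_gradient_2d(x, y, U):
    """Nodal values of the recovered gradient on a tensor product grid (a sketch).

    With a tensor-product dual basis mu_i(x) * nu_j(y), the x-derivative is
    recovered by the 1D procedure along every grid line y = y_j, and the
    y-derivative along every grid line x = x_i.
    x, y : 1D arrays of grid lines; U[i, j] = u(x_i, y_j)
    """
    U = np.asarray(U, dtype=float)
    Gx = np.empty_like(U)
    Gy = np.empty_like(U)
    for j in range(len(y)):
        Gx[:, j] = recover_gradient_1d(x, U[:, j])   # x-derivative, row by row
    for i in range(len(x)):
        Gy[i, :] = recover_gradient_1d(y, U[i, :])   # y-derivative, column by column
    return Gx, Gy
```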

4. Numerical Examples

We consider two examples. In each example, we compute the $L^2$-norm of the error between the exact gradient $\nabla u$ and the recovered gradient $G_h(I_h u)$, where $I_h u$ is the Lagrange interpolant of $u$ with respect to the underlying mesh $\mathcal{T}_h$. The recovery operator is based on the two-dimensional biorthogonal system, which is constructed by taking the tensor product of the one-dimensional construction. We also compute the errors in the whole domain $\Omega$ and in $\Omega_h$, where $\Omega_h$ consists of the elements not touching the boundary of $\Omega$ in the coarsest mesh. We have also verified that the recovered gradient is exact, up to rounding error, except on the boundary when the exact solution is a quadratic polynomial.
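This quadratic exactness can be reproduced with the sketches from Sections 1 and 3; the grid and the quadratic $u(x, y) = x^2 + y^2$ below are illustrative choices of ours.

```python
import numpy as np
# reuses recover_gradient_2d from the sketch in Section 3

x = np.linspace(0.0, 1.0, 17)
y = np.linspace(0.0, 1.0, 17)
X, Y = np.meshgrid(x, y, indexing="ij")
Gx, Gy = recover_gradient_2d(x, y, X**2 + Y**2)    # an illustrative quadratic solution

# exact up to rounding away from the boundary grid lines
print(np.max(np.abs(Gx[1:-1, :] - 2.0 * X[1:-1, :])),
      np.max(np.abs(Gy[:, 1:-1] - 2.0 * Y[:, 1:-1])))
```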

Example 1. For Example 1 we choose a smooth exact solution $u$. We compute the $L^2$-norm of the difference between the exact gradient and the recovered gradient in $\Omega$ and in $\Omega_h$. The numerical results are tabulated in Table 1. We note that we fix $\Omega_h$ from the beginning. From Table 1 we can see the superconvergence of the recovered gradient. As predicted by the theory, the $L^2$-errors in $\Omega$ converge with order $3/2$, whereas the $L^2$-errors in $\Omega_h$ have quadratic convergence.

Example 2. For Example 2 we choose a different smooth exact solution $u$. The errors in the $L^2$-norm are tabulated in Table 2, where we can observe the same superconvergence as in Example 1. We note that the numerical results support the theoretical prediction in both examples.

5. Conclusion

We have presented an analysis of the approximation properties of the gradient reconstructed using an oblique projection. The reconstruction of the gradient is numerically efficient due to the use of a biorthogonal system. Numerical results demonstrate the optimality of the approach. It would be interesting to investigate the extension to higher-order finite elements.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

The authors are grateful to the anonymous referees for their valuable suggestions to improve the quality of the earlier version of this work.