#### Abstract

A quadrature-based mixed Petrov-Galerkin finite element method is applied to a fourth-order linear ordinary differential equation. After employing a splitting technique, the method uses a cubic spline trial space and a piecewise linear test space. The integrals are then replaced by the Gauss quadrature rule in the formulation itself. Optimal-order a priori error estimates are obtained without any restriction on the mesh.

#### 1. Introduction

In this paper, we develop a quadrature-based Petrov-Galerkin mixed finite element method for the following fourth-order boundary value problem: subject to the boundary conditions where . Let . Hereafter, we suppress the dependence of the functions , , and on the independent variable, and simply write , and .

Let us define the splitting of the above fourth-order equation as follows.

Set Then the differential equation (1.1) with the boundary conditions (1.2) can be written as a coupled system of equations as follows: In this paper, the error analysis takes place in the usual Sobolev space defined on the domain , with denoting . The Sobolev norms are given below. For an open interval and a nonnegative integer , We suppress the dependence of the norms on when . Further, denotes the function space

#### 2. Continuous and Discrete 𝐻1-Galerkin Formulation

Given , let be an arbitrary partition of with the property that as , where and . Let denote the inner product, and let the discrete inner product of any two functions and be defined as follows: where is the fourth-order Gaussian quadrature rule: Here, , are the two Gaussian points in the subinterval , with , .
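As a concrete illustration, the discrete inner product above can be sketched numerically. The following Python snippet is a minimal sketch (the function names and the sample partition are ours, not from the paper): it applies the two-point Gauss rule on each subinterval of an arbitrary, possibly non-uniform, partition.

```python
import math

def gauss2_nodes(a, b):
    """The two Gaussian points of [a, b] (two-point Gauss-Legendre rule)."""
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    off = half / math.sqrt(3.0)
    return mid - off, mid + off

def discrete_inner_product(f, g, grid):
    """Approximate the inner product of f and g by the composite two-point
    Gauss rule, subinterval by subinterval over the given partition."""
    total = 0.0
    for a, b in zip(grid[:-1], grid[1:]):
        w = 0.5 * (b - a)  # both Gaussian points carry equal weight (b - a)/2
        for x in gauss2_nodes(a, b):
            total += w * f(x) * g(x)
    return total
```

Since the two-point rule is exact for polynomials of degree up to three, the discrete inner product agrees exactly with the continuous one whenever the product of the two functions is a cubic on each subinterval.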

Let us now consider the following cubic spline space as trial space: where is the space of polynomials of degree defined over the subinterval .

The corresponding space with zero Dirichlet boundary condition is denoted by Further, let us consider the following piecewise linear space as the test space.

##### 2.1. Weak Formulation

The weak formulation corresponding to the split equations (1.4) and (1.5) is defined, respectively, as follows.

Find such that

##### 2.2. The Petrov-Galerkin Formulation

The Petrov-Galerkin formulation corresponding to the above weak formulation (2.7) and (2.8) is defined, respectively, as follows.

Find such that The integrals in the above Petrov-Galerkin formulation are not evaluated exactly at the implementation level. We therefore define the following discrete Petrov-Galerkin procedure, in which the integrals are replaced by Gaussian quadrature in the scheme itself.

##### 2.3. Discrete Petrov-Galerkin Formulation

The discrete Petrov-Galerkin formulation corresponding to (2.7) and (2.8) is defined, respectively, as follows.

Find such that The approximate solutions and without any conditions on boundary points are expressed as a linear combination of the B-splines as follows: where the basis of the cubic B-splines space for is given below:
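For concreteness, the cardinal cubic B-spline on a uniform mesh of width h can be evaluated with the standard piecewise formula below; this is a sketch (the function name and normalisation are ours), assuming the usual normalisation with knot values 1/6, 2/3, 1/6.

```python
def cubic_bspline(x, xi, h):
    """Cardinal cubic B-spline centred at node xi on a uniform mesh of width h.
    Support is [xi - 2h, xi + 2h]; values at the three interior knots are
    1/6, 2/3, 1/6, and the basis forms a partition of unity."""
    s = abs(x - xi) / h
    if s >= 2.0:
        return 0.0
    if s >= 1.0:
        return (2.0 - s) ** 3 / 6.0
    return 2.0 / 3.0 - s * s + s ** 3 / 2.0
```

Extending the partition by two fictitious nodes on each side, as described below, supplies the extra B-splines whose supports overlap the boundary.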

For and , the basis functions are defined as in the above form, after extending the partition by introducing fictitious nodal points on the left-hand side and on the right-hand side, respectively. Further, the basis of the piecewise linear “hat” splines space for is given below:
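The piecewise linear "hat" basis admits an even simpler sketch on a uniform mesh (again, the name and signature are ours, for illustration only):

```python
def hat(x, xi, h):
    """Piecewise linear "hat" basis function centred at node xi on a uniform
    mesh of width h: equal to 1 at xi, 0 at the neighbouring nodes, and
    linear in between."""
    s = abs(x - xi) / h
    return max(0.0, 1.0 - s)
```

On the interior of the mesh the hat functions also form a partition of unity, and each has support over only two subintervals, which is what keeps the resulting matrices banded.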

In a similar manner, for and , the basis functions are defined as in the above form, after extending the partition by introducing fictitious nodal points on the left-hand side and on the right-hand side, respectively. The mixed discrete Petrov-Galerkin method for (2.10) and (2.11) without assuming boundary conditions in the trial space is given as follows: with the corresponding equations: referring to the zero-boundary conditions: The above set of equations (2.15)–(2.16) can be written as a set of equations in unknowns. Here, we study the effect of the quadrature rule on the error analysis. Since we compute the approximations for the solution as well as for its second derivative with the integrals replaced by the Gaussian quadrature rule in the formulation, this work may be considered a quadrature-based mixed Petrov-Galerkin method.

#### 3. Overview of Discrete Petrov-Galerkin Method

Here, the integrals are replaced by the composite two-point Gauss rule. The resulting method may therefore be described as a "qualocation" approximation, that is, a quadrature-based modification of the collocation method. Further, since the test space and the trial space are different, it may be considered a Petrov-Galerkin method with a quadrature rule; hence we refer to it as a discrete Petrov-Galerkin method. One practical advantage of this procedure over the orthogonal spline collocation method described in Douglas Jr. and Dupont [1, 2] is that, for a given partition, there are only half as many unknowns, which reduces the size of the matrix.

The qualocation method was first introduced and analysed by Sloan  for boundary integral equations on smooth curves. Later, Sloan et al.  extended this method to a class of linear second-order two-point boundary value problems and derived optimal error estimates without a quasi-uniformity assumption on the finite element mesh. Jones Doss and Pani  then discussed the qualocation method for a second-order semilinear two-point boundary value problem. Further, Pani  expanded its scope by adapting the analysis to a semilinear parabolic initial and boundary value problem in a single space variable. Jones Doss and Pani  extended the method to a free boundary problem, namely the one-dimensional single-phase Stefan problem, in which part of the boundary must be determined along with the solution. A quadrature-based Petrov-Galerkin method applied to higher-dimensional boundary value problems is studied in Bialecki et al. [8, 9] and Ganesh and Mustapha .

The main contribution of this paper is the analysis of a quadrature-based approximation for a fourth-order problem in a mixed Galerkin setting. The organization of this paper is as follows. In Sections 1 and 2, the problem is introduced and the weak and Galerkin formulations are defined. An overview of the discrete Petrov-Galerkin method is given in Section 3. Preliminaries required for our analysis are collected in Section 4. The error analysis is carried out in Section 5. Throughout this paper, denotes a generic positive constant, whose dependence on the smoothness of the exact solution can easily be determined from the proofs.

#### 4. Preliminaries

We assume that and are such that where . We assume that the problem consisting of the coupled equations (1.4) and (1.5) is uniquely solvable for a given sufficiently smooth function . It can be proved that the quadrature rule in (2.3) has an error bound of the form This follows from Peano’s kernel theorem (see ).
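The error bound delivered by Peano's kernel theorem for the two-point Gauss rule is of fourth order in the mesh width. The following sketch checks this behaviour numerically on a hypothetical smooth integrand of our choosing (cosine on the unit interval, not from the paper): halving the mesh width should reduce the error by roughly a factor of 16.

```python
import math

def composite_gauss2(f, a, b, n):
    """Composite two-point Gauss rule on n equal subintervals of [a, b]."""
    h = (b - a) / n
    total = 0.0
    for k in range(n):
        mid = a + (k + 0.5) * h
        off = h / (2.0 * math.sqrt(3.0))
        total += 0.5 * h * (f(mid - off) + f(mid + off))
    return total

# The integral of cos over [0, 1] is sin(1); compare errors on two meshes.
exact = math.sin(1.0)
e1 = abs(composite_gauss2(math.cos, 0.0, 1.0, 8) - exact)
e2 = abs(composite_gauss2(math.cos, 0.0, 1.0, 16) - exact)
rate = math.log2(e1 / e2)  # observed convergence rate; should be close to 4
```

The observed rate is approximately 4, consistent with a fourth-order error bound.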

The following inequality is frequently used in our analysis. If with , then there exists a positive constant depending only on such that, for any satisfying , where denotes the length of . For a detailed proof, one may refer to the appendix of Sloan et al.  or Chapter 4 of Adams . Let us use the following notation: The adjoint operator with the corresponding adjoint boundary condition is defined as follows: Since is a self-adjoint operator, we state below the regularity of (equal to ) in the norm. We make a stronger assumption, as in Sloan et al. , that for arbitrary there exists a positive constant such that We also have the following inequality due to the Sobolev embedding theorem, the proof of which can be found on page 97 of Adams :

#### 5. Convergence Analysis

Hereafter, throughout this section, for and with 1  and , we use the following notation: Let us denote the error between and by and the error between and by , that is, and . Using (2.11) and (1.5), we obtain the following error equations: and therefore we get Further, using (2.10) and (1.4), and therefore we have The following lemma gives estimates for the error in the quadrature rule for the terms () and () for . These estimates are required in our error analysis later. The proof of the lemma is similar to the proof of Lemma 4.2 of Sloan et al. .

Lemma 5.1. For all and h sufficiently small, (a), (b), (c), (d).

The following result gives an estimate for , where is an arbitrary point in . This estimate is crucial for our error analysis.

Lemma 5.2. Let be the weak solution of (1.4) defined through (2.7). Further, let be the corresponding discrete Petrov-Galerkin solution defined through (2.10). Then, the error satisfies where is an arbitrary point in .

Proof. For a given , let be an element of satisfying the following auxiliary problem: The above problem has a solution. For example, satisfies the above differential equation, the boundary conditions, and the jump condition.
Let us define as follows: Then,   a.e. on . We first multiply with and then integrate over . On applying integration by parts, using the fact that and the jump condition for , we obtain Applying integration by parts once again, using boundary condition for and the continuity of , we obtain that is, . Let be the linear interpolant of . Then, we have We know that We now compute the estimates for the terms , , and as follows: Using Lemma 5.1(c) and (5.13), we obtain Using (5.5), (2.3), and the Sobolev embedding theorem (4.7) locally on for both and , we have Using Hölder's inequality for sums and (5.13), we have For satisfying the auxiliary problem, it is easy to verify that , where is a constant not depending on .
Using , , and in (5.12), we have This completes the proof.

In the following lemma, we first bound the error in terms of , and then establish an optimal estimate of the error independent of .

Lemma 5.3. Let and be the weak solutions of the coupled equations (1.4) and (1.5) defined through (2.7) and (2.8), respectively. Further, let and be the corresponding discrete Petrov-Galerkin solutions defined through (2.10) and (2.11), respectively. Then the estimates of the errors in , , and norms are given as follows:

Proof. Let be an arbitrary element of , and let be the solution of the auxiliary problem We now have where is the linear interpolant of .
We know that We shall compute the estimates for the terms , , and as follows: Using (5.3), (2.3), and the Sobolev embedding theorem (4.7) locally on for , we have Using Hölder's inequality for sums, Lemma 5.2, and (5.22), we obtain Substituting , , and in (5.21), we have Using (4.6) and the regularity of the auxiliary problem, we have . Since is arbitrary, we have We now estimate via a projection argument. Let be the orthogonal projection onto with respect to inner product defined by The domain of may be taken to be . From Crouzeix and Thomée  and de Boor , it is seen that the projection is stable. Thus, Then the error can be interpreted in terms of the error of the above projection: From the stability property (5.29), the error in the projection follows as in de Boor , that is, Then the remaining task is to compute the estimate of .
For , We shall compute the estimates for the terms and by Lemma 5.1(b).
Following the steps of computation involved in the term , we obtain the estimate of as where we have used the inverse inequality locally. Using and in (5.32), we get We now show the above inequality for to obtain .
Now let be an arbitrary element of . Then since , it follows from the definition of , (5.35), and (5.29) with replaced by , that Now, from (5.30), (5.31), and (5.36), we conclude that Now, using the fact and the above estimate, we have Now using (4.3) with and , we have Substituting (5.39) in the above expression, we obtain For sufficiently small , we have Using (5.41) in (5.27), For sufficiently small , we get Using (5.43) in (5.41), we have Using (5.43) and (5.44) in (5.39), we have Equations (5.43), (5.44), and (5.45) give the required result.

We now compute the error estimate of in and norms as has been done in the previous case.

Lemma 5.4. Let and be the weak solutions of the coupled equations (1.4) and (1.5) defined through (2.7) and (2.8), respectively. Further, let and be the corresponding discrete Petrov-Galerkin solutions defined through (2.10) and (2.11), respectively. Then the estimates of the errors in and norms are given as follows:

Proof. Let be an arbitrary element of , and let be the unique solution of the auxiliary problem Then we have where is a linear interpolant of , Following the steps involved in the computation of and , we obtain the estimates of and as follows: by Lemma 5.1(c) and (5.22).
Using (5.5) and (2.3) first, then the Sobolev embedding theorem (4.7) locally on for and to estimate , we have Further, using Hölder's inequality for sums and (5.22), we obtain Substituting the estimates , , and in (5.49), we obtain Using (4.6) and regularity of the auxiliary problem, we have . Since is arbitrary, we have The estimate of can be obtained through a projection argument as mentioned in Lemma 5.3 as where we have used Lemma 5.1(d). In a similar manner we can compute the estimates for , and as Using all the estimates from Lemmas 5.3 and 5.4, we have the following main error estimates.

Theorem 5.5. Assume that and satisfy (1.4) and (1.5), respectively, with (4.1). Assume also that and , where . Then (2.10) and (2.11) have unique solutions and , respectively, and for sufficiently small, one has

Proof. Assume temporarily that solutions and of (2.10) and (2.11), respectively, exist. Using (5.46) in (5.45), we obtain For sufficiently small , we have Applying the above in (5.46), we get Applying (5.59) in (5.56) gives Using (5.60) in (5.43) gives Using (5.60) in (5.44), we obtain Using (5.61) and (5.60) in (5.39) with replaced by , we have The required result follows from estimates (5.59) to (5.64).

So far we have assumed temporarily that solutions and exist. We now discuss the existence and uniqueness of discrete Petrov-Galerkin approximation. Since the matrix corresponding to (2.10) and (2.11) with zero boundary conditions for and is square, existence of and for any will follow from uniqueness, that is, from the property that the corresponding homogeneous equations have only trivial solutions.

Suppose that and corresponding to and satisfy It follows from (5.61) and (5.62) (with replaced by 0 and eventually ) that, for sufficiently small , and hence and . Thus uniqueness is proved, and existence follows from uniqueness.