Abstract

A boundary value problem is posed for an integro-differential beam equation. An approximate solution is found using the Galerkin method and the Jacobi nonlinear iteration process. A theorem on the algorithm error is proved.

1. Introduction

1.1. Statement of the Problem

We consider the equation

u''''(x) - \Bigl( \alpha + \beta \int_0^L u'^2(\xi)\, d\xi \Bigr) u''(x) = f(x), \quad 0 < x < L,    (1.1)

with the conditions

u(0) = u(L) = 0, \qquad u''(0) = u''(L) = 0.    (1.2)

Here \alpha, \beta, and L are some positive constants, f(x) is a given function, and u(x) is the function we want to determine.

1.2. Background of the Problem

Equation (1.1) is the stationary problem associated with the equation

\rho A \frac{\partial^2 w}{\partial t^2} + EI \frac{\partial^4 w}{\partial x^4} - \Bigl( H + \frac{EA}{2L} \int_0^L \Bigl( \frac{\partial w}{\partial \xi} \Bigr)^2 d\xi \Bigr) \frac{\partial^2 w}{\partial x^2} = 0,    (1.3)

which was proposed by Woinowsky-Krieger [1] as a model for the deflection w of an extensible beam with hinged ends. Here H, E, \rho, I, A, and L denote, respectively, the tension at rest, Young's elasticity modulus, the density, the cross-sectional moment of inertia, the cross-section area, and the length of the beam. The nonlinear term in brackets is the correction to the classical Euler-Bernoulli equation, in which the tension changes induced by the vibration of the beam during deflection are not taken into account. This nonlinear term was first proposed by Kirchhoff [2], who generalized D'Alembert's classical model. Therefore (1.3) is often called a Kirchhoff-type equation for a dynamic beam. Note that Arosio [3] calls a function of the integral \int_0^L (\partial w / \partial \xi)^2\, d\xi the Kirchhoff correction (briefly, the K-correction) and makes the reasonable observation that the K-correction is inherent in a lot of physical phenomena.

The works dealing with the mathematical aspects of (1.3) and its generalizations, as well as some modifications of (1.3) and (1.5), belong to Ball [4, 5], Biler [6], Henriques de Brito [7], Dickey [8], B.-Z. Guo and W. Guo [9], Kouémou-Patcheu [10], Medeiros [11], Menezes et al. [12], Panizzi [13], Pereira [14], and others. The subjects of investigation concerned the questions of the existence and uniqueness of a solution [4, 5, 9–14], its asymptotic behavior [6–8, 10], stabilization and control problems [9], and so on.

As for the static Kirchhoff-type equation for a beam, a form more general than (1.1) was considered by Ma [15, 16], where the solvability under nonlinear boundary conditions is studied.

The topic of the approximate solution of Kirchhoff equations, with which the present paper is concerned, was treated by Choo and Chung [17], Choo et al. [18], Clark et al. [19], and Geveci and Christie [20] for a dynamic beam, while Ma [16] and Tsai [21] studied the problem for the static case. More precisely, finite difference and finite element Galerkin approximate solutions are investigated and the corresponding error estimates derived in [17, 18]. Numerical analysis of solutions for a beam with a moving boundary is carried out in [19]. The stability and convergence of semidiscrete and fully discrete Galerkin approximations are dealt with in [20]. To solve the problem with nonlinear boundary conditions, Ma [16] applies the difference method and the Gauss-Seidel iteration process. Finally, in [21] the problem is discretized by, in particular, finite difference, finite element, and spectral methods, while the resulting nonlinear systems of equations are solved by Newton iteration and other methods.

In the present paper, a numerical algorithm is constructed for (1.1) and its total error is estimated. Formulas are given that allow us to calculate an upper bound of the error from the initial data of the problem. The algorithm includes the Galerkin approximation, which reduces the problem to a system of cubic algebraic equations; the system is solved by means of the nonlinear Jacobi iteration process. We also use the Cardano formula, owing to which the current iteration approximation is expressed in explicit form through the approximation already found.

1.3. Assumptions

Let for each there exist an integral, and let the inequality be fulfilled, with and being some known positive constants.
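To fix ideas, with the hinged-end sine basis the quantities involved in this assumption are presumably the Fourier sine coefficients of the right-hand side; a plausible reading, with f_i, c, and p as assumed notation rather than symbols taken from the original, is

f_i = \frac{2}{L} \int_0^L f(x) \sin\frac{i\pi x}{L}\, dx, \qquad |f_i| \le \frac{c}{i^p}, \quad i = 1, 2, \ldots,

where c and p are known positive constants.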

Assume that there exists a solution of problem (1.1)-(1.2) representable as a series whose coefficients satisfy the system of equations
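Concretely, with the sine basis adapted to the conditions (1.2), the series (1.9) and the system (1.10) presumably take the form (the coefficients u_i are assumed notation)

u(x) = \sum_{i=1}^{\infty} u_i \sin\frac{i\pi x}{L},

\Bigl(\frac{i\pi}{L}\Bigr)^4 u_i + \Bigl( \alpha + \frac{\beta L}{2} \sum_{j=1}^{\infty} \Bigl(\frac{j\pi}{L}\Bigr)^2 u_j^2 \Bigr) \Bigl(\frac{i\pi}{L}\Bigr)^2 u_i = f_i, \quad i = 1, 2, \ldots,

the latter being obtained by substituting the series into (1.1) and using the orthogonality of the sines, together with \int_0^L u'^2(\xi)\, d\xi = \frac{L}{2} \sum_{j=1}^{\infty} \bigl(\frac{j\pi}{L}\bigr)^2 u_j^2.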

2. The Algorithm

2.1. Galerkin Method

An approximate solution of problem (1.1)-(1.2) will be sought for in the form of a finite series where the coefficient is defined by the Galerkin method from the system
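With the same sine basis as above, the truncated series (2.1) and the Galerkin system (2.2) presumably read (u_{ni} is assumed notation for the coefficients of the n-term approximation)

u_n(x) = \sum_{i=1}^{n} u_{ni} \sin\frac{i\pi x}{L},

\Bigl(\frac{i\pi}{L}\Bigr)^4 u_{ni} + \Bigl( \alpha + \frac{\beta L}{2} \sum_{j=1}^{n} \Bigl(\frac{j\pi}{L}\Bigr)^2 u_{nj}^2 \Bigr) \Bigl(\frac{i\pi}{L}\Bigr)^2 u_{ni} = f_i, \quad i = 1, \ldots, n,

that is, the first n equations of system (1.10) with the infinite sum truncated at j = n. Each equation is cubic in the unknowns u_{n1}, \ldots, u_{nn}.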

Here, incidentally, we note that a vast literature is available (see, e.g., [22–25]) on the application of the Galerkin method to differential equations of the second and fourth order.

2.2. Jacobi Iteration Process

To solve the nonlinear system (2.2) we use the Jacobi iteration process [26] where denotes the th iteration approximation of , .
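In the spirit of the nonlinear Jacobi method, the i-th equation of (2.2) is solved for the i-th unknown while the remaining unknowns are frozen at the previous iterate; in the notation assumed above, the scheme (2.3) presumably has the form (k being the iteration index)

\Bigl(\frac{i\pi}{L}\Bigr)^4 u_{ni}^{k+1} + \Bigl( \alpha + \frac{\beta L}{2} \Bigl(\frac{i\pi}{L}\Bigr)^2 \bigl(u_{ni}^{k+1}\bigr)^2 + \frac{\beta L}{2} \sum_{j=1,\, j \ne i}^{n} \Bigl(\frac{j\pi}{L}\Bigr)^2 \bigl(u_{nj}^{k}\bigr)^2 \Bigr) \Bigl(\frac{i\pi}{L}\Bigr)^2 u_{ni}^{k+1} = f_i, \quad i = 1, \ldots, n,

so that for each fixed i this is a scalar equation in the single unknown u_{ni}^{k+1}, as discussed next.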

For fixed , (2.3) is a cubic equation with respect to (here is taken with weight just for convenience). Using the Cardano formula [27], we express through the th iteration approximation where

The algorithm we have considered should be understood as the computation carried out by formula (2.4). Having , , we construct the approximate solution of the problem
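A minimal computational sketch of the whole algorithm, under the assumptions made above (sine basis, Fourier coefficients f_i of the right-hand side, one Jacobi sweep per iteration, and each scalar cubic solved for its single real root, which exists because the cubic and linear coefficients are positive, by Cardano's formula); all identifiers are illustrative, and the exact formulas (2.3)-(2.5) of the paper may differ in detail:

import numpy as np

def depressed_cubic_root(a, b, c):
    """Unique real root of a*t^3 + b*t - c = 0 with a, b > 0 (Cardano's formula)."""
    p, q = b / a, -c / a
    d = np.sqrt((q / 2.0) ** 2 + (p / 3.0) ** 3)   # positive, since p > 0
    return np.cbrt(-q / 2.0 + d) + np.cbrt(-q / 2.0 - d)

def solve_beam(f, L, alpha, beta, n, iters=50):
    """Galerkin + nonlinear Jacobi sketch for the assumed model
    u'''' - (alpha + beta * int_0^L u'^2 dx) u'' = f,  u = u'' = 0 at x = 0, L."""
    x = np.linspace(0.0, L, 2001)
    lam = np.array([(i * np.pi / L) ** 2 for i in range(1, n + 1)])     # (i*pi/L)^2
    # Fourier sine coefficients f_i = (2/L) int_0^L f(x) sin(i*pi*x/L) dx (trapezoidal rule)
    fi = np.array([2.0 / L * np.trapz(f(x) * np.sin(np.sqrt(l) * x), x) for l in lam])
    u = np.zeros(n)                                                     # Jacobi iterate u_{ni}^k
    for _ in range(iters):
        s = 0.5 * beta * L * np.sum(lam * u ** 2)                       # frozen Kirchhoff term
        new = np.empty(n)
        for i in range(n):
            # cubic a*t^3 + b*t = f_i, with the i-th summand removed from the frozen sum
            a = 0.5 * beta * L * lam[i] ** 2
            b = lam[i] * (lam[i] + alpha + s - 0.5 * beta * L * lam[i] * u[i] ** 2)
            new[i] = depressed_cubic_root(a, b, fi[i])
        u = new
    return lambda xx: sum(u[i] * np.sin((i + 1) * np.pi * xx / L) for i in range(n))

For instance, solve_beam(lambda x: np.sin(np.pi * x), 1.0, 1.0, 1.0, 10) returns a callable evaluating the resulting approximate solution at any point of [0, L].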

2.3. Algorithm Error Definition

Let us compare the approximate solution (2.6) with the th truncation of the exact solution (1.9). This means that the algorithm error is defined as a difference, which we write as a sum, where is the Galerkin method error and is the Jacobi process error; these are equal, respectively, to
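Schematically, writing u for the exact solution (1.9), u_n for the Galerkin solution defined by (2.1)-(2.2), and u_n^k for the computed iterate (2.6), the decomposition presumably reads (the notation here is assumed)

\Delta = \sum_{i=1}^{n} u_i \sin\frac{i\pi x}{L} - u_n^k(x) = \Bigl( \sum_{i=1}^{n} u_i \sin\frac{i\pi x}{L} - u_n(x) \Bigr) + \bigl( u_n(x) - u_n^k(x) \bigr),

the first bracket being the Galerkin method error and the second the Jacobi process error.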

3. The Algorithm Error

We set ourselves the task of estimating the -norm of the algorithm error. For this we have to estimate the errors of the Galerkin method and the Jacobi process.

3.1. Galerkin Method Error

Let us expand into a series. Taking (2.10), (2.7), and (2.1) into account we write where By virtue of (3.1) we have We will come back to (3.3) later, while now we denote and rewrite (1.10) and (2.2) in the form and . Since by virtue of (3.4), (3.2), and (3.6) we have and therefore and . Subtracting the last two equalities from each other and taking (3.2) into account, we obtain which we multiply by and sum over . Using (3.4), (3.5), and the inequality following from (3.6), we see that

By the Cauchy-Bunyakowsky-Schwarz inequality, we therefore have

Let us estimate the right-hand side of inequality (3.8). After multiplying (1.10) by and summing the resulting relation over in one case and over in the other, we come to the formula common for both cases where , or , . Thus

Let us put , , in (3.10) and use the fact that . We obtain where

Now assuming , , in (3.10) and using in addition to this the inequality , we get where

The use of (3.11) and (3.13) in (3.8) brings us to the inequality which together with (3.3) gives

Let us substitute (3.12) and (3.14) into (3.16) and apply condition (1.8) and also the integral test for series convergence. As a result, if , for the Galerkin method error we obtain the estimate where the coefficients , , and do not depend on and are defined by

3.2. Jacobi Process Error

Taking (2.10), (2.1), and (2.6) into account, we represent as a series where

Series (3.19) implies the formula to be used later.

Let us rewrite (2.4) in the form and introduce the Jacobian (in this paper, this is the second notion associated with the name of C. Jacobi, 1804–1851).

To establish the convergence condition for process (3.22) we have to estimate the norm of the matrix . By virtue of (2.4), (2.9), and (3.22) there are zeros on the principal diagonal of this matrix. As to the nondiagonal elements, , they are defined by the formula Using the relations which follow from (2.5), we rewrite (3.25) as the equality Apply to the latter equality the estimate , which is obtained from the first relation in (3.26) and (2.5). Also use the fact that the maximal value of the function , , is equal to . Thus we obtain the inequalities which are fulfilled for the nondiagonal elements of the matrix .

Let us use the vector and matrix norms equal, respectively, to and for the vector and the matrix . Assume that for an arbitrary set of values , , , the elements of the matrix satisfy the condition . For this, by virtue of (3.28), (3.24), and (1.8) it is sufficient that Then, according to the contraction mapping principle, system (2.2) has a unique solution , , the iteration process (2.4) converges, , , with the rate which in view of notation (3.20) is defined by the inequality . From this and (3.21) we obtain the estimate for the Jacobi process error
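For reference, the standard contraction-mapping bound underlying this step is the following: if the iteration map \Phi (so that the process reads v^{k+1} = \Phi(v^k)) satisfies \|\Phi'(v)\|_\infty \le q < 1 on the relevant set, then

\|v^{k} - v\|_\infty \le q^{k} \|v^{0} - v\|_\infty \le \frac{q^{k}}{1-q}\, \|v^{1} - v^{0}\|_\infty,

where v is the fixed point, that is, the solution of (2.2); the symbols \Phi, v, and q are assumed notation, not taken from the original.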

To conclude this section, we would like to touch upon one auxiliary question. Let us see how condition (3.29) will change if we apply to it the integral test for the convergence of series and ignore under the summation sign. Besides, we restrict ourselves to the case where is an integer and apply the inequality . Then, using the formula for the integral , , [28], we obtain instead of (3.29)

3.3. Algorithm Error

Let us estimate error (2.8). By (2.9) we have and therefore the application of (3.17) and (3.30) gives the inequality
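The step from (2.9) to the final bound is simply the triangle inequality applied to the decomposition of Section 2.3,

\|\Delta\| \le \|\Delta_{\mathrm{Galerkin}}\| + \|\Delta_{\mathrm{Jacobi}}\|,

with the two terms bounded by (3.17) and (3.30), respectively (the subscripted notation is assumed here for illustration).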

The obtained result can be summarized as follows.

Theorem 3.1. Let and be some number from the interval . Assume that the conditions of Section 1.3 and restriction (3.29) (or (3.31) in the case of an integer ) are fulfilled. Then the algorithm error is estimated by inequality (3.33), where the coefficients , , and are calculated by formulas (3.18).