## Recent Development in Partial Differential Equations and Their Applications

Research Article | Open Access

A. V. Krysko, J. Awrejcewicz, S. P. Pavlov, M. V. Zhigalov, V. A. Krysko, "On the Iterative Methods of Linearization, Decrease of Order and Dimension of the Karman-Type PDEs", *The Scientific World Journal*, vol. 2014, Article ID 792829, 15 pages, 2014. https://doi.org/10.1155/2014/792829

# On the Iterative Methods of Linearization, Decrease of Order and Dimension of the Karman-Type PDEs

**Academic Editor:** H. Jafari

#### Abstract

Iterative methods are proposed that reduce nonlinear partial differential equations of the eighth order to biharmonic and Poisson-type differential equations, decreasing both the order and the dimension of the problem while simultaneously linearizing it. The validity and reliability of the obtained results are discussed using computer programs developed by the authors.

#### 1. Introduction

Mathematical models of continuous mechanical structures are described by nonlinear partial differential equations which may be solved analytically only in a few rare cases. However, a direct application of numerical methods is also associated with considerable difficulties arising from the high dimension and the high order of the differential operator, as well as from the nonlinearity of the PDEs studied.

This is why it is tempting to develop approaches that offer a reduction of the input differential equations. The mentioned methods can be divided into three groups: linearization; order decrease of the PDEs; order decrease of a differential operator.

The so far existing methods of solutions of nonlinear problems, depending on the introduced linearization level, can be divided into two groups. The first one deals with the linearization of PDEs, whereas the second one is dedicated to the linearization of algebraic equations obtained through the discretization procedures applied to the input PDEs. Below, we consider the methods associated with the first group. This group contains the Newton and Newton-Kantorovich methods [1].

One of the linearization methods is the method of quasilinearization, widely illustrated in monograph [2]. It presents a further development of Newton's method, and it generalizes the method proposed by Kantorovich.

On the other hand, there is a seminal approach known as the Agmon-Douglis-Nirenberg (ADN) theory for elliptic PDEs, which still attracts a large number of followers [3, 4]. In particular, an abstract least squares theory has been developed satisfying the assumptions of the ADN elliptic theory [5–7].

Furthermore, in the case of corners in plane domains the ADN system exhibits singularities, which imply a need for construction of singular exponents and angular functions [8]. Our approach does not have this disadvantage and it is simple in direct applications to the real world systems.

The so far briefly addressed approaches linearize the input problem; that is, they reduce it to the solution of linear problems. However, there is one more important question to be solved, that is, a reduction of the space dimension of the initial problem.

One of the methods to solve the stated problem is based on averaging (integration) along the coordinate in which the object dimension is smaller than in the two remaining coordinates. On the other hand, it is well known that mathematical problems related to the theory of material strength can be formulated as variational problems, that is, problems of finding extrema of a certain functional. Variational statements create a foundation for the construction of direct difference and variational methods, as widely described in monograph [9].

We mention only a few works [10–12] devoted to the third group, that is, aiming at a decrease of the PDE order.

Note that the so far presented state of the art of the proposed and applied methods allows us to solve each of the mentioned problems separately: either a decrease of the system order or its linearization. However, we show how all these problems can be solved simultaneously.

Our paper is focused mainly on the method of dimension decrease and linearization of the Karman-type PDEs. However, the presented approach can be successfully applied to other nonlinear PDEs. In particular, two variants of the proposed method are presented:

(i) the first iterative method reduces the eighth order nonlinear PDEs to the successive solution of linear fourth order biharmonic equations; that is, the system order is halved with simultaneous linearization of the problem;

(ii) the second iterative procedure further reduces the order of the linear system of fourth order biharmonic PDEs obtained by the first iterative method to the successive solution of a system of second order Poisson-type equations.

In other words, the application of these two iterative procedures implies a fourfold reduction of the PDEs order with the linearization procedure carried out simultaneously.

The proposed iterative procedures for the order decrease and linearization of nonlinear PDEs can also be applied to PDEs with a curvilinear boundary. The application of FDM (finite difference method) to solve biharmonic equations and PDEs of the Poisson-type requires a solution to the so-called Sapondzhyan-Babuška problem. The paradox of Sapondzhyan-Babuška (see [13–15]) was discovered when studying the asymptotic behavior of solutions to an elasticity system in a thin polygonal plate (inscribed in a plate with smooth boundary) as the length of the side of the polygon tends to zero and the number of sides goes to infinity. In Section 2 of our work we prove that the proposed iterative procedure removes this paradox (this problem concerns smoothness of the curvilinear boundary).

In Section 3 of our work the reliability and validity of the method of variational iterations for solving PDEs described by positive definite operators are illustrated and discussed. Namely, the convergence of the method of variational iterations generalizes the Kantorovich-Vlasov method [16] aimed at the reduction of PDEs to ODEs. On the other hand, as pointed out by Vorovich [17], the Kantorovich-Vlasov method generalizes the Galerkin method. It should be emphasized that, unlike the Galerkin method (approximating functions of two variables) or the Kantorovich-Vlasov method (approximating functions of one variable), the method of variational iterations does not require an a priori choice of approximating functions: the system of functions being sought is provided by the solution of the PDEs with respect to both variables, assuming that we deal with the 2D problem. Furthermore, the proposed approach can be applied to 3D elliptic equations.

Section 6 of the paper deals with a comparison of the solutions to the Karman equations obtained via our proposed iterative procedures with those offered by FEM and FDM, as well as with experimental results. Good agreement of the results is achieved.

#### 2. Mathematical Model of a Flexible Karman-Type Plate (Hypotheses, Differential Equations, and Boundary Conditions)

The objects of our investigation are plates of different shapes (in particular, rectangular ones), representing a closed 3D part of space (Figure 1). The following hypotheses are introduced: (i) plate material is elastic and isotropic; (ii) the following Karman relations between deformations and displacements are introduced:

Equations governing the deflection and stress function have the following form [15]:

The following operators are introduced:

Here and further on the following nondimensional quantities are introduced: ; ; ; ; ; ; ; ; , where are the maximal plate dimensions regarding and , respectively; is thickness; is acceleration due to gravity; ; is specific gravity of volume plate material; is Poisson's coefficient; is the Young modulus; are deflection and stress functions, respectively.

Let us add boundary conditions of the support on flexible nonstretched (noncompressed) ribs to the system of plates [18, 19]: where stands for the space boundary occupied by the plate. The following initial conditions are attached to (2):

System (2) is composed of nonlinear PDEs of the eighth order. Finding a reliable solution remains a serious challenge in spite of the achievements of numerical methods. It should be emphasized that the solution to the mentioned problem was found earlier via either FDM (finite difference method) or FEM (finite element method), or by the Bubnov-Galerkin method. Below, we propose a novel method of order reduction and linearization of PDEs (2).

#### 3. Methods of Order Decrease and Linearization of the Karman Equation

There are two ways to construct the fundamental iterative procedure for solving system (2): (i) reduction of the system to a successive solution of Germain-Lagrange type equations (in this case the system order is decreased twice); (ii) reduction of the system to Poisson-type equations (in this case the system order is reduced four times). In both cases the linearization of the input PDEs is carried out simultaneously.

##### 3.1. Iterative Linearization Procedure and Reduction of the Karman Equation into Germain-Lagrange Equations

We keep the biharmonic operator in each of (2), and we shift nonlinear terms into their right-hand sides. Assuming that functions on the right-hand sides are computed with respect to their previous step and that the equations are solved successively, the following iterative procedure is proposed:

On the first step of the iterative procedure the following biharmonic equation for a given load is solved:

The value of is substituted into the right-hand side of equation system (6), and as a result a biharmonic equation for with a known right-hand side is obtained. The value of the stress function found in this way is substituted into the first equation of the system. The process is continued until the required accuracy is achieved.

Let us note that as a result of the application of the iterative procedure, a Germain-Lagrange type system of equations is obtained.
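The structure of this iterative scheme (keep the linear biharmonic operator on the left-hand side, evaluate the nonlinear coupling terms at the previous iterate on the right-hand side, and solve the two equations in turn) can be illustrated on a toy scalar analog. The sketch below is purely illustrative: the scalar "operators" `a1`, `a2` and the coupling coefficient `c` are hypothetical stand-ins for the biharmonic operator and the nonlinear Karman terms, not quantities from the paper.

```python
# Toy scalar analog of the iterative linearization: the linear "operator"
# stays on the left-hand side, while the nonlinear coupling is evaluated
# at the previous iterate (all coefficients here are illustrative).
def karman_like_iteration(q, a1=1.0, a2=1.0, c=0.1, steps=50, tol=1e-12):
    w, F = 0.0, 0.0                         # initial approximation
    for _ in range(steps):
        w_new = (q + c * w * F) / a1        # analog of the deflection equation
        F_new = (-0.5 * c * w_new**2) / a2  # analog of the stress-function equation
        if abs(w_new - w) < tol and abs(F_new - F) < tol:
            w, F = w_new, F_new
            break
        w, F = w_new, F_new
    return w, F

w, F = karman_like_iteration(q=0.5)
# at convergence the pair (w, F) satisfies the coupled fixed-point relations
```

Each pass solves only *linear* problems in the new unknown; the convergence of the analogous procedure for the actual PDE system is what the remainder of this section proves.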

Let us prove convergence of the constructed iterative procedure. Let refer to a Sobolev space of functions such that where denotes the space of square-integrable functions in .

Let denote the closure of functions from (the space of functions of class in having compact support in ) in the norm :

Since the space is bounded and its boundary is sufficiently regular, the map defines a norm in equivalent to the norm generated by the spaces .

Assume that ( denotes the conjugate of ). It is known [17] that in this case problems (2) and (4) have a solution (which may be nonunique).

*Novel Variational Formulation of the Problem*. Let us denote by a scalar product in : , and by a trilinear form defined on :

Let us define the set and the quadratic functional

Theorem 1. *The problem of minimizing (12) on set (11) has, at least, one solution.*

*Proof. *Let be the minimizing sequence; that is, we have
which exists, since is a quadratic functional.

For arbitrary the following inequality holds:
where denotes the norm in and are certain positive constants. Then, (13) yields , where are the arbitrary functions (initial approximation).

Then, the following estimation holds:

Therefore, the sequence is bounded in . Consequently, one may choose a subsequence such that , weakly in . Since is compact, then , strongly in .

We show that the limit of the minimizing sequence belongs to ; that is, , for all .

Since , weakly in , and strongly in , we get and consequently, for all . This means that

However, is lower semicontinuous in the weak topology on , and therefore the following inequality holds: .

Then (13) and (16) imply that . Therefore, the following equation holds: , which means that is a solution to the minimization problem.

Let us explain how points of the minimum of functional (12) are linked with solutions to problems (6) and (4). For this purpose the notion of a weak solution shall be introduced.

A weak solution to problems (6) and (4) is defined by the pair of functions , satisfying the following:

Theorem 2. *Points of the functional minimum (12) are weak solutions to problems (6) and (4).*

*Proof. *Let be one of the minimum points of functional (12). Let us take and let us choose such that , that is, in such a way that for all . Then . This yields
and by taking , condition yields

Substituting (19) into (18), dividing the obtained expression by , and passing to the limit as , the following inequality is obtained:

Substituting by in (20), one obtains equality, that is, (17).

Let us use the following notation .

Equation (17) can be given in the following form:
and it is clear that .

Therefore, each point of the minimum of functional (17) on satisfies (21), and hence it is a weak solution to problems (6) and (4).

Therefore, it has been shown that finding a solution to problems (6) and (4) is equivalent to solving the minimization problem (13) subject to the constraints . The reduced problem can be solved by various minimization methods taking the mentioned constraints into account. Once a method of finding an extremum is chosen, various algorithms to solve problems (6) and (4) can be applied.

Below, we focus on the method of gradient projection with a restoring constraint [18], which for linear constraints allows for essential simplification of finding a solution to the stated problem.

Let us construct an iteration process of minimizing on using the following scheme:

(a) element is taken arbitrarily;

(b) after computation of , and is defined successively by solutions to the following problems:

(c) coefficient on step (b) is defined by the condition where stands for a parameter of the method.
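The scheme above is a gradient projection method: take a descent step on the functional, then restore the (linear) constraint by projection. A minimal self-contained sketch of this idea, using a hypothetical finite-dimensional quadratic functional and the constraint hyperplane `u1 + u2 = 0` (these data are illustrative, not from the paper), is:

```python
# Gradient projection with a restoring (linear) constraint, sketched on
# a toy quadratic functional J(u) = |u - p|^2 in the plane.
def project(u):
    # orthogonal projection onto the constraint set {u : u1 + u2 = 0}
    s = (u[0] + u[1]) / 2.0
    return (u[0] - s, u[1] - s)

def gradient_projection(p, eps=0.25, steps=200):
    u = project((0.0, 0.0))                       # (a) admissible starting point
    for _ in range(steps):
        grad = (2*(u[0]-p[0]), 2*(u[1]-p[1]))     # (b) gradient of J at u
        trial = (u[0]-eps*grad[0], u[1]-eps*grad[1])
        u = project(trial)                        # restore the constraint
    return u

u = gradient_projection(p=(1.0, 2.0))
# the minimizer of J on the hyperplane is the projection of p, i.e. (-0.5, 0.5)
```

Because the constraint is linear, the projection step is itself a simple linear operation, which is exactly the simplification the method of [18] exploits.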

Theorem 3. *For the iteration process (22) to (25) and an arbitrary initial point , the sequence obtained through this procedure contains a subsequence convergent to the weak solution of problem ((6) and (4)).*

*Proof. *The possibility of constructing the sequence follows from the observation that for all and, consequently, , [19]. It means that the coupling equation is solvable. Consider the following difference:

Owing to , , (25) gives
where , . Taking (23) into account, one observes that serves as a generalized solution to the boundary value problem:

Further on it means that
where stands for the linear bounded operator inverse to operator . Therefore

Let us proceed to the second order terms. Taking in (11) and applying (28), one gets

Let us estimate the last term in (29). Since and , then for the following equation should be satisfied:

This, in particular, yields

However, belongs to the bounded set in for arbitrary . It implies that or equivalently

Substituting (30) and (33) into (29), and taking into account both the positive definiteness (in the sense of ) and the boundedness of operator , one gets

The latter estimation shows that the value of can be chosen so that inequality (24) is satisfied. For this purpose should be chosen in the following way:

It can always be done, since .

Taking in accordance with the algorithm applied so far, the following estimations are obtained on each step:
which means that for the arbitrarily taken we have . Since functional is bounded from below, the last inequality yields for . Besides, (36) gives

Let us emphasize that the so far introduced algorithm of the choice of guarantees that for arbitrary we have . In fact, because , then

Owing to (38), the norms are bounded. Therefore, the norm is also bounded. In addition, taking (37) into account, we have for , and consequently also for for all . The existence of a convergent subsequence now follows from the boundedness of the norms (see proof of Theorem 1).

We have shown above the convergence of the procedure reducing system (2) to the successive solution of the biharmonic Germain-Lagrange type equation. The applied procedure linearizes and decreases the order of the input equations. We propose a further development of this approach based on the reduction of the biharmonic equation to Poisson-type equations. The latter approach allows us to reduce the order of system (2) fourfold.

##### 3.2. Iterative Procedure of Reduction of the Germain-Lagrange Equations Type to the Poisson Equations Type

The following original iterative procedure is proposed.

We consider a biharmonic equation given in the bounded convex space :

On the space boundary the following boundary conditions are given: where denotes the curvature of boundary .

Let us introduce the following new function . Substituting this function into (39) the following system of two Poisson-type equations is obtained:

Boundary conditions have the following form:

Therefore, the solution of the biharmonic equation is split into the solution of two Poisson-type equations. Below, we prove convergence of the proposed procedure.
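The splitting can be tried out directly on a one-dimensional analog: the fourth order problem w'''' = f on (0,1) with w = w'' = 0 at the ends is solved as two successive Dirichlet problems u'' = f and then w'' = u. The sketch below uses a standard second order finite-difference scheme and a Thomas (tridiagonal) solve; the 1D setting, the load f = 1, and the grid size are illustrative assumptions, not data from the paper.

```python
# 1D analog of the biharmonic-to-Poisson splitting: solve w'''' = f as
# two successive second order Dirichlet problems, u'' = f and w'' = u.
def thomas(a, b, c, d):
    # Thomas algorithm for a tridiagonal system
    # (a: sub-diagonal, b: main diagonal, c: super-diagonal, d: right-hand side)
    n = len(b)
    cp, dp = [0.0]*n, [0.0]*n
    cp[0] = c[0]/b[0] if n > 1 else 0.0
    dp[0] = d[0]/b[0]
    for i in range(1, n):
        m = b[i] - a[i-1]*cp[i-1]
        cp[i] = c[i]/m if i < n-1 else 0.0
        dp[i] = (d[i] - a[i-1]*dp[i-1])/m
    x = [0.0]*n
    x[-1] = dp[-1]
    for i in range(n-2, -1, -1):
        x[i] = dp[i] - cp[i]*x[i+1]
    return x

def solve_poisson_1d(rhs, h):
    # u'' = rhs on the interior nodes, u = 0 at both boundaries
    n = len(rhs)
    return thomas([1.0]*(n-1), [-2.0]*n, [1.0]*(n-1), [h*h*r for r in rhs])

n = 99                       # interior nodes, h = 1/(n+1)
h = 1.0/(n + 1)
f = [1.0]*n                  # w'''' = 1
u = solve_poisson_1d(f, h)   # first Poisson problem:  u'' = f  (u plays the role of w'')
w = solve_poisson_1d(u, h)   # second Poisson problem: w'' = u
# exact solution: w(x) = (x^4 - 2x^3 + x)/24, so w(0.5) = 0.3125/24
```

Note that the "simply supported" conditions w = w'' = 0 translate into plain Dirichlet data for *both* second order problems, which is precisely why this splitting is so convenient; for clamped-type conditions the two problems would couple through the boundary, which is what the hybrid variational formulation below handles.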

Let us define the following set for the function : where is the set of infinitely differentiable functions on . The closure of set (43) in the norm is a subspace of , denoted by . It is clear that .

It is known (see [13]) that a solution to problems (39) and (40) is equivalent to minimization on of the following functional:

*Hybrid Variation Problem Formulation*. We assume that instead of functional (44) the following one is minimized:
on such triads that their elements are coupled through equalities .

Let us define the space of the following functions: where the bilinear form is defined as

Theorem 4. *If the space is convex and has a Lipschitz continuous boundary , then, first, the map (here and in the sequel ) is a norm on space equivalent to the norm generated by the scalar product, transforming into a Hilbert space; second, if , then
**
and if (48) is satisfied, then .*

*Proof. *We first prove the second statement. Since has a continuous boundary, the following Green formula holds:
Let . Then , , , and for all .

The last condition, in particular for yields

It follows from (50) that appears as a solution to the Dirichlet problem for the operator for . Since space is convex, therefore ([13], Section 7.1, page 373), and consequently . Using now (50) for , we get . Using the Green formula for , we find that . Assume that (48) holds. We show that . Since and , , then (49) yields for all ; that is, for all . Besides and the second statement is proved.

Let us prove the first statement. Endowed with the product norm, is a Hilbert space. Let . Then, as has been shown, . For , condition yields

Let us introduce space such that . Besides, let us introduce the following operator defined in the following way: for all we have a unique solution to the following equation:

Under the condition that satisfies the following:

It is not difficult to verify that under the theorem conditions ; that is, stands for the operator of the outward normal derivative for . This operator is bounded; that is, . We denote its norm by . Then , where denotes the norm associated with the scalar product . Therefore, for all . Hence, taking (52) into account, we get , and the theorem is proved.

Results of this theorem allow us to transit from minimization of functional (45) on space to minimization of functional (45) on space .

Theorem 5. *Let be a solution to problem (44), then
**
In this case the triad is the unique solution to the problem of minimization of (54).*

*Proof. *We prove that a symmetric bilinear form , , is continuous and elliptic on .

Owing to Theorem 4, if , then ; ; ; . Then we have

For , , from (55) we obtain ([13], Section 1.2, page 38)

Then, the elliptic property has been proved. Continuity of the bilinear form is evident. It means that the problem of minimization
has a solution which is unique. Let us find a link between a solution to problem (57) and problems (41) and (42). If is a solution to problem (57), then the following relations should hold:

Since , then , , and . Therefore, taking (58) into account we get . Therefore, coincides with the solution to problems (39) and (40), and , .

*Remark 6. *Since the space is convex and its boundary is regular, then for a solution to problems (39) and (40),

*Solution to the Minimization Problem (44).* We show that a solution to problem (44) can be reduced to a solution of successive Dirichlet problems for the operator .

For further analysis it is suitable to introduce a linear transformation in the following way: if is a given function, then the function is a unique solution to the equation , for . This means that space , defined by (43), can be presented in the following form:

Problem (44) is equivalent to the following problem of optimal control: where the state function and are coupled via control through the following state equations:

As it follows from Remark 6, although the optimal control is sought on , its regularity is higher for . In this case the following trace is defined: . Furthermore, besides (62), we require that for all and . Then, if and , (61) implies that , where

The fundamental idea consists now in the application of a gradient method to the problem of minimization (63).

Let us take as a dual space for space , and let denote the relation of duality between spaces and . We denote by a derivative of the functional . Let us introduce a map in the following way: for is a unique function from satisfying the condition

Theorem 7. *For an arbitrary defined in (63), the functional is differentiable and its derivative is defined by the relation
*

*Proof. *Differentiating (63) yields
where .

From (66) and taking (52) into account we get

The first term in (67) is equal to zero due to (64). Let us introduce the function , where is defined by (66), and let . Then, (66) implies .

However, the last term in this equality is equal to zero due to (64), which ends the proof. The gradient method applied to minimization (63) now consists in the determination of a sequence of functions via the following iteration scheme:
where is the arbitrary scalar product in space , is the positive parameter, and is the arbitrary function of .

Therefore, one iteration (68) corresponds to successive solutions of the following problems: (a)find for a given function a unique function , satisfying the following relations:
(b)find a function , satisfying the following relation:
(c)find a function , satisfying the following relation:
(d)find a function , satisfying the following relation:
where , , and .

We show that by a proper choice of parameter the iteration process ((69)–(73)) is convergent for an arbitrarily taken initial approximation.

Let us first define the map in the following way: for any function function is unique satisfying the following condition:

Let us take , where denotes the norm associated with the scalar product . It is clear that this norm exists, since the map is bounded.

Theorem 8. *If parameter satisfies the following condition:
**
then the iteration process (69)–(73) is convergent in the sense that
*

*Proof. *It is sufficient to show that in in the particular case when . Using the definition (74) of map , the recurrence formula (73) gives
and therefore

Consider the term :

Let us estimate the norm :

The latter inequalities and (78) imply the following estimation:

Hence, in particular, we get
if satisfies inequalities (75). Besides, we have
which finishes the proof.

Since convergence of the considered method is guaranteed, any choice of subspace satisfying the condition and any choice of scalar product on space are allowed. However, this choice influences parameter , as well as the computation time at each iteration. Finally, we point out a few remarks regarding the practical computation of .

If the scalar product in is defined via the following formula: then as one may take any function from , assuming that the following condition is satisfied:

Equality (85) can be understood in the sense of trace equality on a boundary. In fact, if (85) is satisfied, then which means that conditions of Theorem 7 are satisfied.

We may choose also the following scalar product:

However, in the latter case one needs to compute gradient on each step, which extends the computational time.

*Final Remarks*. (1) The proof has been carried out for equations in the hybrid form (2). It can be relatively easily extended to equations in terms of displacements. (2) The results can be extended to other types of differential equations, including nonlinear ones, containing a biharmonic operator.

##### 3.3. Iterative Procedure for the Reduction of the Karman Equation into the Poisson Equation

In the preceding sections we have proved convergence of the iterative procedures for linearization of (2) by reducing the solution of the eighth order system of nonlinear differential equations to the solution of a biharmonic equation, as well as by the reduction of the biharmonic equation to Poisson-type equations in the case of a curvilinear boundary using the finite element method (FEM).

While considering a space with a rectangular boundary, we may extend the procedure reported in Section 3.1 by introducing new variables into the iterative procedure of solution of the Poisson-type equations without difficulty.

In the case of spaces with the curvilinear boundary, the procedure described in Section 3.2 can be applied to solve (2) using the iterative procedure, whose convergence has been proved in Section 3.1.

For this purpose new variables and are introduced

Then each of differential equations (2) is divided into two Poisson-type equations. The iterative procedure of solution of the obtained system of four Poisson-type equations has the following form:

Boundary conditions (4) are transformed to the following form:

Procedure (89) has an advantage over procedure (6): at each step a second order equation is solved instead of a fourth order one. Because the equations are solved by numerical methods (FDM, FEM), and approximation of the biharmonic operator places high demands on the approximating functions, for the Poisson-type equations the solution procedure (89) may be simplified by choosing simple approximating functions.
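The relation between the two discrete operators can be checked directly: in 1D the three-point Laplacian stencil [1, -2, 1]/h² composed with itself yields the five-point biharmonic stencil [1, -4, 6, -4, 1]/h⁴. The sketch below verifies this on sampled values of an illustrative test function w(x) = x⁴ (for which the continuous biharmonic operator gives the constant 24); the grid and function are assumptions for the demonstration only.

```python
# In 1D, applying the three-point Laplacian twice reproduces the five-point
# biharmonic stencil, which is why the split procedure can work with much
# simpler (lower order) approximating functions than a direct biharmonic solve.
def apply_laplacian(v, h):
    # second central difference on interior nodes (ends left at zero)
    n = len(v)
    return [(v[i-1] - 2*v[i] + v[i+1])/h**2 if 0 < i < n-1 else 0.0
            for i in range(n)]

h = 0.1
v = [(i*h)**4 for i in range(11)]          # samples of w(x) = x^4 on [0, 1]
lap2 = apply_laplacian(apply_laplacian(v, h), h)

# five-point biharmonic stencil applied directly at an interior node
i = 5
direct = (v[i-2] - 4*v[i-1] + 6*v[i] - 4*v[i+1] + v[i+2]) / h**4
# both evaluate the discrete fourth derivative; for w = x^4 this equals 24
```

The composed operator also exposes the cost remark above: the five-point stencil couples five unknowns per row (and its 2D analog thirteen), whereas each Poisson solve couples only three (five in 2D), yielding much narrower-banded algebraic systems.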

In the FDM case, the order of the system of algebraic equations obtained after discretization of the biharmonic equation is higher than that obtained for the second order equation, and hence greater demands are placed on computational resources when solving the problem numerically.

#### 4. The Method of Variational Iterations (MVI) of PDEs Solutions

##### 4.1. Validation of Convergence

The method of variational iterations (MVI) was first applied in 1933 by Shunok, who considered a deflection of cylindrical panels. However, this work did not attract attention at the time, and the method was rediscovered in the 1960s by Kantorovich and Krylov [20], who applied it to the investigation of rectangular plates. The MVI has since found wide application in solving various problems of plates and shells (see the list of references reported in [21]).

Here we prove validity and reliability of the mentioned method for a class of equations with positively defined operators, that is, biharmonic and harmonic ones. In other words, we prove a theorem on convergence of the MVI for iterative procedures (6) and (89).

Formally, the MVI scheme is as follows. Assume that our aim is to find a solution to the following: where stands for a certain operator defined on set of the Hilbert space ; is the function given for two variables and ; is the function of these two variables being sought; is the space of changes of variables and .

If ( is the certain bounded set of variables , is the bounded set of ), then a solution to (91) can be given in the following form: where functions and are defined by the following system of equations:

It is found in the following way: we take a certain system of functions of one variable, for instance, ; then from the first equations of system (93) the system of functions is defined. The functions obtained in this way represent a new choice of the functions of the variable , and the latter serve to obtain a new set of functions of the variable , and so forth.
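A discrete analog of this alternation is easy to sketch: seek a separable approximation w(x, y) ≈ X(x)·Y(y) of a function sampled on a grid, alternately updating Y for fixed X and X for fixed Y, each update being a least-squares solve. The sampled data below (a separable product) and the grid sizes are illustrative assumptions.

```python
# Discrete analog of one MVI sweep: alternating least-squares updates of the
# "functions of x" (X) and "functions of y" (Y) in a separable approximation.
def mvi_rank_one(F, steps=30):
    m, n = len(F), len(F[0])
    X = [1.0]*m                       # arbitrary initial system of functions of x
    Y = [0.0]*n
    for _ in range(steps):
        sx = sum(xi*xi for xi in X)
        # best Y for the current X (least squares, column by column)
        Y = [sum(F[i][j]*X[i] for i in range(m))/sx for j in range(n)]
        sy = sum(yj*yj for yj in Y)
        # best X for the new Y (least squares, row by row)
        X = [sum(F[i][j]*Y[j] for j in range(n))/sy for i in range(m)]
    return X, Y

# separable data F[i][j] = g(x_i) * q(y_j); the alternation recovers the product
F = [[(1.0 + i) * (2.0 + j) for j in range(4)] for i in range(3)]
X, Y = mvi_rank_one(F)
err = max(abs(F[i][j] - X[i]*Y[j]) for i in range(3) for j in range(4))
```

As in the continuous MVI, no approximating functions are chosen in advance: the starting X is arbitrary, and each half-step produces the currently best system of functions of the other variable.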

*Definition 9. *We say that the process of computation in which one given system of functions is replaced by the second system is one MVI step. The number of steps needed to define a certain choice of functions corresponds to the superscript (number) of the functions being considered. Truncating the process of finding functions and at the th step, which, for example, corresponds to the choice of functions , we define the function
taken as the approximating solution of (91) obtained by MVI.

*Remark 10. *Here and further on, we shall take as operator a certain differential operator defined on set of the Hilbert space . Then, on each step system (93) shall be transformed to a system of ODEs which can be solved further.

*Remark 11. *We call function the th approximation to (91) if the number of series terms in (92) is equal to .

Let us study the case of first approximation; that is, the following solution of (91) is sought: where functions and are defined through the illustrated way from the following system of equations:

Let the operator in (91) be positive definite. Let us introduce the following notation: is the energy space of the operator ; is the scalar product of elements in ; is the exact solution to (91).

Theorem 12. *If is a positive definite operator with the space of action , then the sequence of elements is monotonically decreasing; that is, for arbitrary and if , then
*

*Proof. *We consider a subset of the space which has the following form:

It is clear that set represents a subspace of space (generally, of infinite dimension). Therefore, one may define the projection onto space . As is known, element stands for the projection of onto if the following condition is satisfied:
for arbitrary elements . It is clear that if , then (99) coincides with the first equation of system (97).

Since the element obtained through the first step of MVI is a projection of element onto the subspace , hence the following inequality holds:
for arbitrary elements . An analogous construction allows us to get a similar inequality for the subspaces; that is, we have

In the case corresponding to the second MVI step,