Abstract and Applied Analysis, Volume 2013 (2013), Article ID 524514, 9 pages. http://dx.doi.org/10.1155/2013/524514
Research Article

## On Exact Series Solution of Strongly Coupled Mixed Parabolic Problems

1Departamento de Matemática Aplicada, Universitat Politècnica de València, Camino de Vera S/N, 46022 Valencia, Spain
2Instituto de Matemática Multidisciplinar, Universitat Politècnica de València, Camino de Vera S/N, 46022 Valencia, Spain
3Departamento de Matemática e Informática, Universitat Jaume I de Castellón, Avenida de Vicent Sos Baynat S/N, 12071 Castellón de la Plana, Spain

Received 25 March 2013; Accepted 24 June 2013

Academic Editor: Juan Carlos Cortés López

Copyright © 2013 Vicente Soler et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

#### Abstract

This paper studies the construction of the exact solution for parabolic coupled systems of the type $u_t = A u_{xx}$, $0 < x < 1$, $t > 0$, $A_1 u(0,t) + B_1 u_x(0,t) = 0$, $A_2 u(1,t) + B_2 u_x(1,t) = 0$, and $u(x,0) = f(x)$, where $A_1$, $A_2$, $B_1$, and $B_2$ are arbitrary matrices for which the block matrix $\left(\begin{smallmatrix} A_1 & B_1 \\ A_2 & B_2 \end{smallmatrix}\right)$ is nonsingular, and $A$ is a positive stable matrix.

#### 1. Introduction

Coupled partial differential systems with coupled boundary-value conditions are frequent in quantum mechanical scattering problems [1–3], chemical physics [4–6], thermoelastoplastic modelling [7], coupled diffusion problems [8–10], and other fields. In this paper, we consider systems of the type

$$u_t(x,t) = A\,u_{xx}(x,t), \quad 0 < x < 1,\; t > 0, \quad (1)$$
$$A_1 u(0,t) + B_1 u_x(0,t) = 0, \quad t > 0, \quad (2)$$
$$A_2 u(1,t) + B_2 u_x(1,t) = 0, \quad t > 0, \quad (3)$$
$$u(x,0) = f(x), \quad 0 \le x \le 1, \quad (4)$$

where the unknown $u(x,t)$ and the initial condition $f(x)$ are $m$-dimensional vectors, $A_1$, $A_2$, $B_1$, $B_2$ are complex matrices, elements of $\mathbb{C}^{m\times m}$, and $A \in \mathbb{C}^{m\times m}$ is a matrix which satisfies the condition

$$\operatorname{Re}(z) > 0 \quad \text{for all } z \in \sigma(A), \quad (5)$$

and we say that $A$ is a positive stable matrix (where $\operatorname{Re}(z)$ denotes the real part of $z$). We assume that the block matrix

$$\begin{pmatrix} A_1 & B_1 \\ A_2 & B_2 \end{pmatrix} \text{ is nonsingular,} \quad (6)$$

and also that

$$\text{the matrix pencil } A_1 + \lambda B_1 \text{ is regular.} \quad (7)$$

Condition (7) is well known from the literature on singular systems of differential equations, and it amounts to the existence of some $\lambda \in \mathbb{C}$ such that the matrix $A_1 + \lambda B_1$ is invertible [11].

Problem (1)–(4) with the less restrictive condition (7) was solved in [12], but not in the case where all of its blocks $A_1$, $B_1$, $A_2$, $B_2$ are singular (in particular $A_1$). Mixed problems of the previously mentioned type, but with the Dirichlet conditions $u(0,t) = 0$, $u(1,t) = 0$ instead of (2) and (3), have been treated in [13, 14].

Throughout this paper, and as usual, the matrix $I$ denotes the identity matrix. The set of all the eigenvalues of a matrix $M$ in $\mathbb{C}^{m\times m}$ is denoted by $\sigma(M)$, and its 2-norm is defined by [15, page 56]

$$\|M\| = \sup_{y \neq 0} \frac{\|My\|_2}{\|y\|_2},$$

where, for a vector $y \in \mathbb{C}^m$, the Euclidean norm of $y$ is $\|y\|_2 = (y^{*}y)^{1/2}$. By [15, page 556], it follows that

$$\|e^{Mt}\| \le e^{\alpha(M)t} \sum_{k=0}^{m-1} \frac{\left(\|M\|\, t\,\sqrt{m}\right)^k}{k!}, \quad t \ge 0, \quad (9)$$

where $\alpha(M) = \max\{\operatorname{Re}(z) : z \in \sigma(M)\}$. We say that a subspace $E$ of $\mathbb{C}^m$ is invariant by the matrix $M \in \mathbb{C}^{m\times m}$ if $M(E) \subset E$. If $M$ is a matrix, we denote by $M^{\dagger}$ its Moore-Penrose pseudoinverse. A collection of examples, properties, and applications of this concept may be found in [11, 16], and it can be efficiently computed with the MATLAB and Mathematica computer algebra systems.
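As an aside on the last remark, the Moore-Penrose pseudoinverse is also available directly in NumPy. The sketch below is a minimal illustration, not part of the paper: the singular matrix `M` is a hypothetical example, and the four Penrose conditions (which characterize $M^{\dagger}$ uniquely) are verified numerically.

```python
import numpy as np

# Hypothetical rank-1 (hence singular) matrix; not taken from the paper.
M = np.array([[1.0, 2.0],
              [2.0, 4.0]])

Mp = np.linalg.pinv(M)  # Moore-Penrose pseudoinverse

# The four Penrose conditions characterize the pseudoinverse uniquely.
assert np.allclose(M @ Mp @ M, M)               # M M+ M = M
assert np.allclose(Mp @ M @ Mp, Mp)             # M+ M M+ = M+
assert np.allclose((M @ Mp).conj().T, M @ Mp)   # M M+ is Hermitian
assert np.allclose((Mp @ M).conj().T, Mp @ M)   # M+ M is Hermitian
```

For an invertible matrix the pseudoinverse coincides with the ordinary inverse, which is a quick sanity check on any implementation.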

#### 2. Preliminaries and Notation

In [17], eigenfunctions of problem (1)–(3) were constructed assuming additional conditions besides (6) and (7). In this section we recall the notation and results needed. Let and be matrices defined by fulfilling the relation . Under hypothesis (6), the matrix is regular (see [17, page 431]), and let and be the matrices defined by so that they satisfy the relationships . Assuming the following condition , and that the values , of condition (13) satisfy , we can define the function . Note that hypothesis (14) guarantees the existence of the solutions of (16). Equation (16) has a unique solution in each interval for , as seen in Figure 1. Also, it is straightforward to prove the following lemma.

Figure 1: Graphical representation of and determination of the eigenvalues .

Lemma 1. Under hypothesis (14), the roots of (16) satisfy . Also, if , then Otherwise, if , then However, in all cases it is

Proof. Function has vertical asymptotes at the points , , and has zeros at the points , . Thus, as we have stated, the real coefficient function intersects the graph of the function in each interval , where is the point of intersection. Thus, the sequence is monotonically increasing with . We have to consider two possibilities.
(i) . Function is therefore decreasing, and, as seen in Figure 1, for large enough , then .
(ii) . Function is therefore increasing, and, as seen in Figure 1, for large enough , then .
Thus, observe that if , then for sufficiently large . For , substituting into (16), one gets ; dividing by and taking limits as : . This shows that the sequences and are equivalent infinities, and , where . Moreover, as is bounded, one gets that and . Taking into account that , and considering limits as , one gets ; and with , then .
If , then one obtains two possibilities.
(i) If , as one can see in Figure 1, for large enough , .
(ii) If , as one can see in Figure 1, for large enough , .
Thus, observe that if , then also for sufficiently large . For , substituting into (16), dividing by , and taking limits as , one gets that ; as the sequence is bounded, one gets that and . Moreover, considering limits as , one gets ; as the sequence is bounded, we have that , and with , one gets that .
If and , (16) reduces to , whose roots are , , and trivially . Then .
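The "one root per interval" picture for (16), and the monotone sequence of roots described in Lemma 1, can be reproduced numerically. Since the exact form of (16) is not reproduced here, the sketch below assumes, purely for illustration, a classical Sturm-Liouville characteristic equation of the form $\tan\lambda + \lambda/h = 0$; one root is bracketed between consecutive asymptotes by bisection.

```python
import math

def f(lam, h=1.0):
    # Illustrative characteristic equation tan(lam) + lam/h = 0;
    # this is an assumed stand-in for equation (16) of the paper.
    return math.tan(lam) + lam / h

def bisect(func, a, b, tol=1e-12):
    # Plain bisection: func must change sign on [a, b].
    fa = func(a)
    for _ in range(200):
        m = 0.5 * (a + b)
        fm = func(m)
        if fa * fm <= 0.0:
            b = m
        else:
            a, fa = m, fm
        if b - a < tol:
            break
    return 0.5 * (a + b)

# One root in each interval ((n - 1/2)pi, (n + 1/2)pi), n = 1, 2, ...
eps = 1e-9
roots = []
for n in range(1, 6):
    a = (n - 0.5) * math.pi + eps   # just right of one asymptote
    b = (n + 0.5) * math.pi - eps   # just left of the next
    roots.append(bisect(f, a, b))
```

Each computed root lies strictly between two consecutive asymptotes, and the sequence of roots is increasing, mirroring the behaviour of the eigenvalue sequence in Lemma 1.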

Under hypothesis , there is a root , and we can define the set of eigenvalues of problem (1)–(3) as , where . Thus, by [17, page 433], a set of solutions of problem (1) is given by , where satisfies . Observe that if is the degree of the minimal polynomial of , the matrix is defined by . In order to ensure that satisfies (30), we have ; and under condition (32), the solution of (30) is given by . The eigenfunctions associated with problem (1) are then given by . Also, is an eigenvalue of problem (1) if . Under hypothesis (35), if , then, denoting by , one gets that the function is an eigenfunction of problem (1) associated with the eigenvalue .
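The matrix functions entering such eigenfunctions can be evaluated from the spectrum when the minimal polynomial of the matrix has simple roots; this is the Lagrange-Sylvester form of the "minimal theorem" of [21] that the example in Section 4 also uses. The sketch below is an illustration with a hypothetical matrix `A` (not the one from the paper), evaluating an exponential factor of the kind $e^{-\lambda^2 A t}$.

```python
import numpy as np

def matfun_distinct(A, f):
    """Evaluate f(A) for a matrix A with distinct eigenvalues via
    Lagrange-Sylvester interpolation (a special case of the minimal
    theorem cited in the paper as [21, page 571])."""
    mu = np.linalg.eigvals(A)
    n = len(mu)
    I = np.eye(n)
    F = np.zeros_like(A, dtype=complex)
    for i in range(n):
        # Lagrange polynomial attached to eigenvalue mu[i], evaluated at A.
        P = I.astype(complex)
        for j in range(n):
            if j != i:
                P = P @ (A - mu[j] * I) / (mu[i] - mu[j])
        F = F + f(mu[i]) * P
    return F

# Hypothetical positive stable matrix with distinct eigenvalues 2 and 3.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
lam, t = np.pi, 0.1
E = matfun_distinct(A, lambda z: np.exp(-lam**2 * z * t))
```

For this triangular `A` the result can be checked entry by entry against the closed form $f(A) = \left(\begin{smallmatrix} f(2) & f(3)-f(2) \\ 0 & f(3)\end{smallmatrix}\right)$, which follows from the same interpolation formula.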

All these results are summarized in Theorem 2.1 of [17, page 434]. Our goal is to find the exact solution of problem (1)–(4). We provide conditions on the function and the matrix coefficients in order to ensure the existence of a series solution of the problem. The paper is organized as follows. In Section 3 a series solution for the problem is presented. In Section 4 we give an algorithm and an illustrative example.

#### 3. A Series Solution

By the superposition principle, a possible candidate for the series solution of problem (1)–(4) is given by , where and are defined by (34) and (37), respectively, for suitable vectors and .

Assuming that series (38) and the corresponding derivatives , , and are convergent (we will demonstrate this later), (38) will be a solution of (1)–(3). Now, we need to determine vectors and so that (38) satisfies (4).

Note that, taking to satisfy (13), from (12) one gets Under condition (39), we will consider the scalar Sturm-Liouville problem: which provides a family of eigenvalues given in (27). Then, the associated eigenfunctions are

By the convergence theorem for Sturm-Liouville functional series [18, chapter 11], with the initial condition given in (4) satisfying the following properties, each component of , for , has a series expansion which converges absolutely and uniformly on the interval ; namely, , where . Thus, , where and . On the other hand, from (38), and taking into account (34) and (37), one gets .
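The coefficients in such an expansion are inner products of the data against each eigenfunction. As a scalar illustration only (the eigenfunctions of the actual problem (40) depend on its boundary data; here the Dirichlet eigenfunctions $\sqrt{2}\sin(n\pi x)$ and the data $f(x) = x(1-x)$ are assumptions), the coefficients can be approximated by the trapezoidal rule and the partial sums checked against the data.

```python
import numpy as np

# Hypothetical smooth scalar data on [0, 1]; the paper's f is
# vector-valued, here one component is expanded for illustration.
f = lambda x: x * (1.0 - x)

x = np.linspace(0.0, 1.0, 2001)
dx = x[1] - x[0]
N = 25

def trap(vals):
    # Composite trapezoidal rule on the uniform grid x.
    return dx * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

# Coefficients against the orthonormal Dirichlet eigenfunctions
# sqrt(2) sin(n pi x), n = 1, ..., N.
coeffs = [trap(f(x) * np.sqrt(2.0) * np.sin(n * np.pi * x))
          for n in range(1, N + 1)]

def partial_sum(xx):
    # Truncated eigenfunction expansion of f.
    s = np.zeros_like(xx)
    for n, a in enumerate(coeffs, start=1):
        s += a * np.sqrt(2.0) * np.sin(n * np.pi * xx)
    return s
```

For this data the even-indexed coefficients vanish by symmetry and the partial sum already approximates the data uniformly to a few parts in ten thousand, consistent with the absolute and uniform convergence invoked above.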

We can equate the two expressions if and , apart from conditions (33) and (36), satisfy . Then, we have . Note that and , if . Then, defined by , where and are defined by (44) and (47), satisfies the initial condition (4). Note that conditions (30)–(32) hold if , and then . It is easy to check that conditions (48) and (51) are equivalent to the condition . Condition (52) holds if . Now we study the convergence of the solution given by (49), with defined by (44) and by (47). Using Parseval's identity for scalar Sturm-Liouville problems [19], there exists a positive constant so that . Taking formal derivatives in (49), one gets . These series are all bounded in their respective norms: . To check that the series is uniformly convergent in each domain , it is sufficient to verify that the series is uniformly convergent in this domain. This follows because, using (9), one gets , and from the d'Alembert ratio test applied to each summand, taking into account (5) and the relation (19), , given in Lemma 1, one gets, for , . Thus, the series (56) is convergent.
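The d'Alembert step can be spelled out. Under the assumption, consistent with the bound (9) and the positive stability of $A$, that the general term of the series (56) is dominated by $C\,e^{-\lambda_n^2 \rho t}$ for some constants $C, \rho > 0$ and all $t \ge t_0 > 0$ (the constants are placeholders for the bounds obtained from (9); the paper's exact dominating term is not reproduced here), and using the asymptotics $\lambda_n \sim n\pi$ from Lemma 1, one has

$$\frac{C\,e^{-\lambda_{n+1}^{2}\rho t}}{C\,e^{-\lambda_{n}^{2}\rho t}}
= e^{-\left(\lambda_{n+1}^{2}-\lambda_{n}^{2}\right)\rho t}
\le e^{-\left(\lambda_{n+1}^{2}-\lambda_{n}^{2}\right)\rho t_{0}}
\longrightarrow 0,
\qquad \lambda_{n+1}^{2}-\lambda_{n}^{2} \sim 2n\pi^{2} \to \infty,$$

so the ratio of consecutive dominating terms tends to zero and the dominating series, hence (56), converges uniformly on $[0,1] \times [t_0, \infty)$.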

Independence of the series solution (49) with respect to the chosen can be demonstrated using the same technique as given in [20].

We can summarize the results in the following theorem.

Theorem 2. Consider the homogeneous problem with homogeneous conditions (1)–(4) under hypotheses (5), (6), and (7), satisfying conditions (13) and (14). Let be a vector function satisfying (42). Let be the set defined by (27), and let be the matrix defined by (31), taking as eigenvalues of the problems satisfying , including the eigenvalue if , and taking as the eigenfunctions defined by (34). Let be given by (44) and the vectors be defined by (47). Then , as defined in (49), is a series solution of problem (1)–(4).

#### 4. Algorithm and Example

We can summarize the process to calculate the solution of the homogeneous problem with homogeneous conditions (1)–(4) in Algorithm 1.

Algorithm 1: Solution of the homogeneous problem with homogeneous conditions (1)–(4).
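The verification steps that open the worked example below (positive stability of the coefficient matrix, singularity of the individual blocks, regularity of the block matrix, and regularity of a matrix pencil) are mechanical, and can be sketched in NumPy. The matrices here are hypothetical placeholders, not the data of Example 1, and the pencil form $A_1 + cB_1$ is an assumption about how condition (7) is checked.

```python
import numpy as np

def is_positive_stable(A, tol=1e-12):
    # Condition (5): every eigenvalue of A has positive real part.
    return bool(np.all(np.linalg.eigvals(A).real > tol))

def is_singular(M, tol=1e-12):
    return np.linalg.matrix_rank(M, tol=tol) < M.shape[0]

def pencil_regular(A1, B1, trials=(0.5, 1.0, 2.0, 3.0)):
    # Condition (7): there exists some c with A1 + c*B1 invertible.
    # Only a few trial values are checked; a regular pencil fails
    # for at most finitely many values of c.
    return any(not is_singular(A1 + c * B1) for c in trials)

# Hypothetical data, not the matrices of the paper's Example 1.
A  = np.array([[2.0, 0.0], [1.0, 1.0]])   # eigenvalues 2 and 1
A1 = np.array([[1.0, 0.0], [0.0, 0.0]])   # singular
B1 = np.array([[0.0, 0.0], [0.0, 1.0]])   # singular
A2 = np.array([[0.0, 1.0], [0.0, 0.0]])   # singular
B2 = np.array([[0.0, 0.0], [1.0, 0.0]])   # singular

block = np.block([[A1, B1], [A2, B2]])    # condition (6): must be regular
```

With these placeholders every individual block is singular while the block matrix is a permutation matrix, hence regular, and $A_1 + cB_1 = \mathrm{diag}(1, c)$ is invertible for any nonzero trial $c$.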

Example 1. We consider the homogeneous parabolic problem with homogeneous conditions (1)–(4), where the matrix is chosen as and the matrices , , , are . Also, the vector-valued function is defined as . Observe that the method proposed in [12] cannot be applied to solve this problem.

We will follow Algorithm 1 step by step.
(1) Matrix satisfies condition (5), because . That is, is positive stable.
(2) Each of the matrices , , is singular, and the block matrix is regular.
(3) Note that although is singular, taking , the matrix pencil is regular. Therefore, we take .
(4) By (10) we have .
(5) By (11) we have .
(6) We have and . Note that in this case condition (13) holds because, with and , there exists a common eigenvector , , and thus . We are therefore in case 1 of Algorithm 1.
(7) We take the values and and check the conditions given in step 7 of the algorithm.
(1.1) One gets that . Let . Then , . In this case one gets , and then the subspace is invariant by matrix .
(1.2) It is trivial to check that .
(1.3) With these values , , and , one gets that . With these values and , one gets .
(1.4) It is trivial to check that .
(1.5) It is trivial to check that .
(1.6) It is trivial to check that .
(8) Equation (16) is of the form . We can solve (72) exactly, , with an additional solution , because and then . Thus, we have a countable family of solutions of (72), which we denote by , given by .
(9) The minimal polynomial of matrix is given by . Then .
(10) If is a positive solution of (72), the matrix given by (31) takes the form .
(11) Since the second column is zero, we have that . Thus, each one of the positive solutions given by (74) is an eigenvalue.
(12) It is trivial to check that , because . Then we do not include as an eigenvalue.
(13) Taking into account that , one gets .
(14) The vectors defined by (47) take the values .
(15) Using the minimal theorem [21, page 571], one gets that . Next, by considering (78) with and simplifying, we obtain the value of . Taking into account that all the eigenvalues are positive, the associated eigenfunctions are .
(16) We replace the values of given by (77) in (79) and take into account the value of the matrix . After simplification, we finally obtain the solution of (1)–(4), given by .

#### Acknowledgments

This research has been supported by the Universitat Politècnica de València Grant PAID-06-11-2020. The third listed author has been partially supported by the Universitat Jaume I, Grant P1.1B2012-05.

#### References

1. M. H. Alexander and D. E. Manolopoulos, “A stable linear reference potential algorithm for solution of the quantum close-coupled equations in molecular scattering theory,” Journal of Chemical Physics, vol. 86, pp. 2044–2050, 1987.
2. V. S. Melezhik, I. V. Puzynin, T. P. Puzynina, and L. N. Somov, “Numerical solution of a system of integro-differential equations arising from the quantum mechanical three-body problem with Coulomb interaction,” Journal of Computational Physics, vol. 54, no. 2, pp. 221–236, 1984.
3. W. T. Reid, Ordinary Differential Equations, John Wiley & Sons, New York, NY, USA, 1971.
4. R. D. Levine, M. Shapiro, and B. Johnson, “Transition probabilities in molecular collisions: computational studies of rotational excitation,” Journal of Chemical Physics, vol. 53, pp. 1755–1766, 1970.
5. J. V. Lill, T. G. Schmalz, and J. C. Light, “Imbedded matrix Green's functions in atomic and molecular scattering theory,” The Journal of Chemical Physics, vol. 78, no. 7, pp. 4456–4463, 1983.
6. F. Mrugala and D. Secrest, “The generalized log-derivative method for inelastic and reactive collisions,” Journal of Chemical Physics, vol. 78, pp. 5954–5961, 1983.
7. T. Hueckel, M. Borsetto, and A. Peano, Modelling of Coupled Thermo-Elastoplastic Hydraulic Response of Clays Subjected to Nuclear Waste Heat, John Wiley & Sons, New York, NY, USA, 1987.
8. J. Crank, The Mathematics of Diffusion, Oxford University Press, 2nd edition, 1995.
9. M. D. Mikhailov and M. N. Ozisik, Unified Analysis and Solutions of Heat and Mass Diffusion, John Wiley & Sons, New York, NY, USA, 1984.
10. I. Stakgold, Green's Functions and Boundary Value Problems, John Wiley & Sons, New York, NY, USA, 1979.
11. S. L. Campbell and C. D. Meyer Jr., Generalized Inverses of Linear Transformations, Pitman, London, UK, 1979.
12. L. Jódar, E. Navarro, and J. A. Martin, “Exact and analytic-numerical solutions of strongly coupled mixed diffusion problems,” Proceedings of the Edinburgh Mathematical Society II, vol. 43, no. 2, pp. 269–293, 2000.
13. L. Jódar and E. Ponsoda, “Continuous numerical solutions and error bounds for time dependent systems of partial differential equations: mixed problems,” Computers & Mathematics with Applications, vol. 29, no. 8, pp. 63–71, 1995.
14. E. Navarro, E. Ponsoda, and L. Jódar, “A matrix approach to the analytic-numerical solution of mixed partial differential systems,” Computers & Mathematics with Applications, vol. 30, no. 1, pp. 99–109, 1995.
15. G. H. Golub and C. F. Van Loan, Matrix Computations, The Johns Hopkins University Press, Baltimore, Md, USA, 1989.
16. C. R. Rao and S. K. Mitra, Generalized Inverse of Matrices and Its Applications, John Wiley & Sons, New York, NY, USA, 1971.
17. E. Navarro, L. Jódar, and M. V. Ferrer, “Constructing eigenfunctions of strongly coupled parabolic boundary value systems,” Applied Mathematics Letters, vol. 15, no. 4, pp. 429–434, 2002.
18. E. L. Ince, Ordinary Differential Equations, Dover, New York, NY, USA, 1962.
19. E. A. Coddington and N. Levinson, Theory of Ordinary Differential Equations, McGraw-Hill, New York, NY, USA, 1967.
20. V. Soler, E. Navarro, and M. V. Ferrer, “Invariant properties of eigenfunctions for multicondition boundary value problems,” Applied Mathematics Letters, vol. 19, no. 12, pp. 1308–1312, 2006.
21. N. Dunford and J. Schwartz, Linear Operators, Part I, Interscience, New York, NY, USA, 1977.