Abstract and Applied Analysis
Volume 2014, Article ID 759427, 9 pages
http://dx.doi.org/10.1155/2014/759427
Research Article

On Exact Series Solution for Strongly Coupled Mixed Parabolic Boundary Value Problems

1 Departamento de Matemática Aplicada, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain
2 Instituto de Matemática Multidisciplinar, Universitat Politècnica de València, Camino de Vera s/n, 46022 Valencia, Spain

Received 27 November 2013; Revised 6 February 2014; Accepted 9 February 2014; Published 3 April 2014

Academic Editor: Shengqiang Liu

Copyright © 2014 Vicente Soler et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

This paper continues the construction of the exact solution for parabolic coupled systems of the type , , , , , and , where , , , and are arbitrary matrices for which the block matrix is nonsingular and is a positive stable matrix. Although this problem has been solved in the literature (Soler et al., 2013), in this work we use completely new conditions.

1. Introduction

Coupled partial differential systems with coupled boundary-value conditions are frequent in different areas of science and technology, such as chemical physics [1–3], scattering problems in quantum mechanics [4–6], thermoelastoplastic modelling [7], coupled diffusion problems [8–10], and so forth.

In [11], eigenfunctions of problems of the type , where the unknown is a -dimensional vector, are constructed under the following hypotheses.
(i) The matrix coefficient is a matrix which satisfies the condition , and thus is a positive stable matrix (where denotes the real part of ).
(ii) The matrices , , are complex matrices, and we assume that the block matrix is nonsingular and that the matrix pencil is regular.
Observe that condition (4) involves the existence of some for which the matrix is invertible [12].

In order to construct the eigenfunctions in [11], the following matrices and were defined by thus satisfying the condition where the matrix denotes, as usual, the identity matrix. Under hypothesis (3), the matrix is regular (see [11, page 431]), and and are the matrices defined by which satisfy the following conditions: In [11] the authors also assumed the following essential hypothesis: where denotes the set of all the eigenvalues of a matrix in . These eigenfunctions introduced in [11] were also used in [13] to construct a series solution of the initial-value problem: where is an -dimensional vector, under the additional hypothesis: where a subspace of is invariant under the matrix if .
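Since the displayed formulas for the hypotheses are not reproduced above, the following minimal Python sketch shows, with hypothetical placeholder matrices, the kind of numerical checks that conditions of type (2)–(4) amount to: positive stability of the diffusion coefficient matrix, nonsingularity of the block boundary matrix, and regularity of the matrix pencil for some chosen scalar.

import numpy as np

def is_positive_stable(A, tol=1e-12):
    # All eigenvalues of A have strictly positive real part.
    return bool(np.all(np.linalg.eigvals(A).real > tol))

def block_nonsingular(A1, B1, A2, B2, tol=1e-12):
    # The block matrix [[A1, B1], [A2, B2]] is nonsingular.
    M = np.block([[A1, B1], [A2, B2]])
    return np.linalg.matrix_rank(M, tol=tol) == M.shape[0]

def pencil_regular(A2, B2, rho, tol=1e-12):
    # The matrix pencil A2 + rho*B2 is invertible for the chosen rho.
    P = A2 + rho * B2
    return np.linalg.matrix_rank(P, tol=tol) == P.shape[0]

# Hypothetical 2x2 data, used only to exercise the checks.
A = np.array([[2.0, 1.0], [0.0, 3.0]])        # eigenvalues 2 and 3
A1, B1 = np.eye(2), np.zeros((2, 2))
A2, B2 = np.zeros((2, 2)), np.eye(2)

print(is_positive_stable(A))                  # check of type (2)
print(block_nonsingular(A1, B1, A2, B2))      # check of type (3)
print(pencil_regular(A2, B2, rho=1.0))        # check of type (4)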

It is not difficult to find examples where assumption (9) holds but (14) does not. Let us consider the following example.

Example 1. We consider the homogeneous parabolic problem with homogeneous conditions (10)–(13), where the matrix is chosen as and the matrices , , are
Due to (5)–(7) we obtain
It is easy to verify that and . If we take , one gets
Taking into account that , we will have three possible values for .
(i) For we obtain
and thus condition (9) is not fulfilled.
(ii) For , one gets
Thus, condition (9) is satisfied with
Next, let us verify whether the subspace is invariant under . Let ; then takes the form , . In this case we obtain
Thus, condition (14) is not fulfilled.
(iii) For we have
Thus, condition (9) is fulfilled with .
Now, we verify whether the subspace is invariant under . Let ; then takes the form , . In this case we have
Thus, condition (14) is not fulfilled.
Observe that, in this example, hypothesis (9) is satisfied but (14) is not. Thus, the method proposed in [13] cannot be used to solve this problem.
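Condition (14) asks whether a certain subspace is invariant under a given matrix. Since the concrete matrices of (15)-(16) are not reproduced here, the sketch below illustrates, with hypothetical data, how such an invariance check can be carried out numerically: the span of V is invariant under C exactly when appending CV to V does not increase the rank.

import numpy as np

def is_invariant_subspace(C, V, tol=1e-10):
    # span(V) is invariant under C iff rank([V, C V]) == rank(V).
    CV = C @ V
    return (np.linalg.matrix_rank(np.hstack([V, CV]), tol=tol)
            == np.linalg.matrix_rank(V, tol=tol))

C = np.array([[1.0, 1.0], [0.0, 2.0]])   # hypothetical matrix
v1 = np.array([[1.0], [0.0]])            # eigenvector of C: its span is invariant
v2 = np.array([[0.0], [1.0]])            # not an eigenvector: its span is not invariant

print(is_invariant_subspace(C, v1))      # True
print(is_invariant_subspace(C, v2))      # False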

This paper deals with the construction of eigenfunctions of problem (10)–(12) by assuming hypotheses (2), (3), and (4) but not hypothesis (9). This set of eigenfunctions allows us to construct a series solution of problem (10)–(13). We provide conditions on the function and the matrix coefficients that ensure the existence of a series solution of the problem.

The paper is organized as follows. In Section 2 a set of eigenfunctions is constructed under a new condition, different from condition (9); in Section 3 a series solution of the problem is presented; in Section 4 we introduce an algorithm and give an illustrative example.

Throughout this paper we will assume the results and nomenclature given in [11]. If is a matrix in , we denote by its Moore-Penrose pseudoinverse [12]. A collection of examples, properties, and applications of this concept may be found in [14], and the pseudoinverse can be computed efficiently with the MATLAB and Mathematica computer algebra systems.
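As a small illustration (in MATLAB the relevant function is pinv, in Mathematica it is PseudoInverse), the following NumPy sketch computes the pseudoinverse of a hypothetical rank-deficient matrix and verifies the four Penrose conditions that characterize it.

import numpy as np

M = np.array([[1.0, 2.0, 0.0],
              [0.0, 0.0, 0.0]])           # a hypothetical rank-deficient matrix
Mp = np.linalg.pinv(M)                    # Moore-Penrose pseudoinverse via SVD

# The four Penrose conditions characterizing M^+:
print(np.allclose(M @ Mp @ M, M))              # M M^+ M = M
print(np.allclose(Mp @ M @ Mp, Mp))            # M^+ M M^+ = M^+
print(np.allclose((M @ Mp).conj().T, M @ Mp))  # M M^+ is Hermitian
print(np.allclose((Mp @ M).conj().T, Mp @ M))  # M^+ M is Hermitian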

2. The New Conditions

In Section 2 of [11] the eigenfunctions of problem (10)–(12) were constructed by using a matrix variable separation technique. We can repeat the calculations in this section to reach condition (44): Instead of (9), we will assume the following new condition:

From relation (6), is obtained (because, obviously, ). Considering (8), is obtained. Thus ; that is, is the eigenvalue which will be equal to in (26):

Let us assume that given in (28) satisfies We will see that, under hypothesis (29), the existence of solutions of the equation is guaranteed: Equation (30) has a unique solution in each interval for , as seen in Figure 1. Also, the following lemma is easily demonstrated.

Figure 1: Graphical representation of and determination of the eigenvalues for .

Lemma 2. Under hypothesis (29), the roots of (30) fulfil . Also,

Proof. The function has vertical asymptotes at the points , and zeros at the points , . Therefore, as stated, the real coefficient function intersects the graph of the function in each interval , where is the point of intersection. Hence the sequence is monotone increasing with . We have to consider two possibilities.
(i) If , as can be seen in Figure 1, for large enough, .
(ii) If , as can be seen in Figure 1, for large enough, .
Therefore, for large enough. Substituting in (30) and dividing by , we get . Taking limits as , we obtain , and in this way the sequence is bounded, , and . Moreover, , so, taking limits as , one gets , and the sequence is also bounded. Moreover, , and with , one gets that .
Finally, if , (30) reduces to , whose roots are , , and trivially ; then .
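Since the explicit form of (30) is not reproduced above, the sketch below only illustrates how the roots of a transcendental eigenvalue equation of this kind, one per interval as in Lemma 2, can be located numerically by bracketing a sign change; the equation lam*tan(lam) = h used here is a classical Robin-type stand-in, assumed purely for illustration, not the paper's (30).

import numpy as np
from scipy.optimize import brentq

def eigenvalue_roots(h, n_roots=5, eps=1e-9):
    # f(lam) = lam*tan(lam) - h has exactly one root in each interval
    # (k*pi, k*pi + pi/2) for h > 0; bracket it and solve with brentq.
    f = lambda lam: lam * np.tan(lam) - h
    roots = []
    for k in range(n_roots):
        a, b = k * np.pi + eps, k * np.pi + np.pi / 2 - eps
        roots.append(brentq(f, a, b))     # unique sign change in (a, b)
    return np.array(roots)

lams = eigenvalue_roots(h=1.0)
print(lams)   # monotone increasing; for this assumed equation lam_k - k*pi -> 0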

Observe that under hypothesis there is a root , and we can define the set of eigenvalues of problem (10)–(12) as where Thus, by [11, page 433], a set of solutions of problem (10)–(12) is given by where satisfies and where is the degree of the minimal polynomial of . Formulas (37) are equivalent to the matrix equation where In order to ensure that fulfils (37), we have and, under condition (40), the solution of (37) is given by The eigenfunctions associated with problem (10)–(12) are then given by Working in a similar way to [11, page 433], we can show that is also an eigenvalue of problem (10) if Under hypothesis (43), if we denote one gets that the function is an eigenfunction of problem (10) associated with eigenvalue .
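The closed forms (39)-(41) are not reproduced above. As a generic illustration only, assuming (as the example in Section 4 suggests) that (37) reduces to a homogeneous linear system M d = 0, its solution set can be parameterized with the Moore-Penrose pseudoinverse as d = (I - M^+ M) s for arbitrary s:

import numpy as np

def null_space_solutions(M, S, tol=1e-10):
    # Return d = (I - M^+ M) s for each column s of S; every such d solves M d = 0,
    # since I - M^+ M is the orthogonal projector onto ker(M).
    n = M.shape[1]
    P = np.eye(n) - np.linalg.pinv(M) @ M
    return P @ S

# Hypothetical rank-deficient matrix (its third column is zero, mirroring the
# remark in step (11) of the example in Section 4), so nontrivial solutions exist.
M = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 0.0],
              [3.0, 0.0, 0.0]])
S = np.eye(3)                      # try the canonical vectors as s
D = null_space_solutions(M, S)
print(np.allclose(M @ D, 0))       # every column of D solves M d = 0
print(D)                           # nonzero columns give eigenvector candidates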

In conclusion, the following theorem has been established.

Theorem 3. Consider problem (10)–(12) fulfilling conditions (2) and (3). Let be the degree of the minimal polynomial of the matrix and let be a real number satisfying (4). Let , be the matrices defined by (5) and , by (7), respectively.
(1) Let us assume conditions (27) and (29). Then (28) admits a set of real positive solutions denoted by and defined by (34). Let be the matrix defined by (39), where , and suppose that condition (40) is fulfilled. Then problem (10)–(12) admits eigenfunctions associated with positive eigenvalues and defined by (42), where is given by (41) and is a vector in .
(2) is an eigenvalue of problem (10)–(12) if condition (43) is fulfilled. Under hypothesis (43), if , then expression (45) provides an eigenfunction of problem (10)–(12) associated with eigenvalue .

3. A Series Solution

By the superposition principle, a candidate series solution of problem (10)–(13) is given by where and are defined by (42) and (45), respectively, for suitable vectors and .

Assuming that the series (46) and the corresponding derivatives , , and are convergent (we will demonstrate this later), (46) will be a solution of (10)–(12). Now we need to determine the vectors and so that (46) satisfies (13).

Note that, by taking to fulfil (27), from (8) we have Under condition (47), we will consider the scalar Sturm-Liouville problem: which provides a family of eigenvalues given in (34). Then, the associated eigenfunctions are

According to the Sturm-Liouville convergence theorem for functional series [15, chapter 11], with the initial condition given in (13) fulfilling the following property: each component of , for , has a series expansion which converges absolutely and uniformly on the interval ; namely, where Thus, where and . On the other hand, from (46) and taking into account (42) and (45), we obtain

We can equate the two expressions (53) and (54) if and , apart from conditions (41) and (44), satisfy . Then we have Note that and , if

Then is defined by where and are defined by (52) and (55), respectively, and fulfils the initial condition (13). Note that conditions (37)–(40) hold if

Condition (58) is equivalent to

Using Lemma 2, the study of the convergence of the series solution (57), with defined by (52) and by (55), can be reduced to the one carried out in [13] for the case . Similarly, the independence of the series solution (57) with respect to the chosen can be shown with the same technique as in [16].
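As a hedged sketch of the final assembly step, once eigenvalues, eigenfunctions, and expansion coefficients are available, the truncated series can be evaluated termwise. The scalar eigendata used below (lam_k = k*pi and sin(k*pi*x)) are stand-ins chosen only for illustration; they are not the expressions (42)-(55) of this paper.

import numpy as np
from scipy.integrate import quad

def coefficient(f, phi, a=0.0, b=1.0):
    # L2 projection coefficient c = <f, phi> / <phi, phi> on [a, b].
    num, _ = quad(lambda x: f(x) * phi(x), a, b)
    den, _ = quad(lambda x: phi(x) ** 2, a, b)
    return num / den

def truncated_series(f, N=20):
    # Stand-in eigenpairs (lam_k, phi_k); replace with the problem's own eigendata.
    lams = np.array([k * np.pi for k in range(1, N + 1)])
    phis = [lambda x, k=k: np.sin(k * np.pi * x) for k in range(1, N + 1)]
    cs = np.array([coefficient(f, phi) for phi in phis])
    def u(x, t):
        # Termwise evaluation of the truncated series sum_k c_k e^{-lam_k^2 t} phi_k(x).
        return sum(c * np.exp(-(lam ** 2) * t) * phi(x)
                   for c, lam, phi in zip(cs, lams, phis))
    return u

u = truncated_series(lambda x: x * (1.0 - x))
print(u(0.5, 0.01))   # value of the truncated series at (x, t) = (0.5, 0.01)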

We can summarize the results in the following theorem.

Theorem 4. Consider the homogeneous problem with homogeneous conditions (10)–(13) under hypotheses given in Theorem 3. Assume that function of (13) fulfils conditions (50) and (59). Then, the series defined in (46) is a solution of problem (10)–(13).

4. Algorithm and Example

We can summarize the process to calculate the solution of the homogeneous problem with homogeneous conditions (10)–(13) in Algorithm 1.

Algorithm 1: Solution of the homogeneous problem with homogeneous conditions (10)–(13).

Example 5. We consider the homogeneous parabolic problem with homogeneous conditions (10)–(13) given in Example 1, that is, where the matrix is given in (15) and the matrices are given in (16). We consider the vector-valued function to be defined as
Observe that, as shown in Example 1, hypothesis (9) is fulfilled but (14) is not. Thus, the method proposed in [13] cannot be used to solve this problem.

Algorithm 1 (step by step). Consider the following.
(1) The matrix satisfies condition (2), because ; that is, is positive stable.
(2) Each of the matrices is singular, and the block matrix is regular.
(3) Note that although is singular, if we take , the matrix pencil is regular. Therefore, .
(4) In (17) we have
(5) In (18) we have
(6) Also and . Note that in this case condition (27) is fulfilled because with and there is a common eigenvector , , and thus . Therefore, we are in case 1 of Algorithm 1.
(7) We take the values , , and check the conditions given in step 7 of the algorithm:
(1.1) Let ; then , . In this case we have and then the subspace is invariant under the matrix .
It is trivial to verify the following:
(1.2)
(1.3) ,
(1.4) ,
(1.5) ,
(8) Equation (30) is as follows: We can solve (68) exactly, , with an additional solution , because and then . Thus, we have a countable family of solutions of (68), which we denote , given by
(9) The minimal polynomial of the matrix is given by , and then (a numerical check of this degree is sketched after this walk-through).
(10) If is a positive solution of (68), the matrix given by (39) takes the form
(11) Since the third column is zero, . Thus, each of the positive solutions given in (70) is an eigenvalue.
(12) It is trivial to verify that , because , and thus we will not include as an eigenvalue.
(13) Taking into account that , .
(14) The vectors defined in (55) take the following values:
(15) When the minimal theorem [17, page 571] is used,

Next, by considering (74) with and simplifying, we obtain the value of .
(16) The values of given in (73) are substituted in (57), taking into account the value of the matrix . After simplification, we finally obtain the solution of problem (10)–(13) given by
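The following sketch relates to step (9) of the walk-through above: a numerical way to determine the degree of the minimal polynomial of a matrix, by finding the smallest power whose addition to the previous ones does not increase the rank. The matrix used is a hypothetical stand-in, not the matrix given in (15).

import numpy as np

def minimal_polynomial_degree(A, tol=1e-10):
    # Smallest m such that I, A, ..., A^m are linearly dependent.
    n = A.shape[0]
    powers = [np.eye(n).flatten()]
    for m in range(1, n + 1):
        powers.append(np.linalg.matrix_power(A, m).flatten())
        # If adding A^m does not raise the rank, the minimal polynomial has degree m.
        if np.linalg.matrix_rank(np.vstack(powers), tol=tol) < len(powers):
            return m
    return n

A = np.array([[2.0, 0.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 3.0]])
print(minimal_polynomial_degree(A))   # 2: minimal polynomial (x-2)(x-3)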

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work has been supported by the Generalitat Valenciana GV/2013/035.

References

  1. R. D. Levine, M. Shapiro, and B. R. Johnson, “Transition probabilities in molecular collisions: computational studies of rotational excitation,” The Journal of Chemical Physics, vol. 52, no. 4, pp. 1755–1766, 1970.
  2. J. V. Lill, T. G. Schmalz, and J. C. Light, “Imbedded matrix Green's functions in atomic and molecular scattering theory,” The Journal of Chemical Physics, vol. 78, no. 7, pp. 4456–4463, 1983.
  3. F. Mrugała and D. Secrest, “The generalized log-derivative method for inelastic and reactive collisions,” The Journal of Chemical Physics, vol. 78, no. 10, pp. 5954–5961, 1983.
  4. M. H. Alexander and D. E. Manolopoulos, “A stable linear reference potential algorithm for solution of the quantum close-coupled equations in molecular scattering theory,” The Journal of Chemical Physics, vol. 86, no. 4, pp. 2044–2050, 1987.
  5. V. S. Melezhik, I. V. Puzynin, T. P. Puzynina, and L. N. Somov, “Numerical solution of a system of integro-differential equations arising from the quantum mechanical three-body problem with Coulomb interaction,” Journal of Computational Physics, vol. 54, no. 2, pp. 221–236, 1984.
  6. W. T. Reid, Ordinary Differential Equations, Wiley, New York, NY, USA, 1971.
  7. T. Hueckel, M. Borsetto, and A. Peano, Modelling of Coupled Thermo-Elastoplastic Hydraulic Response of Clays Subjected to Nuclear Waste Heat, Wiley, New York, NY, USA, 1987.
  8. J. Crank, The Mathematics of Diffusion, Oxford University Press, Oxford, UK, 1975.
  9. M. D. Mikhailov and M. N. Özişik, Unified Analysis and Solutions of Heat and Mass Diffusion, Wiley, New York, NY, USA, 1984.
  10. I. Stakgold, Green's Functions and Boundary Value Problems, Wiley, New York, NY, USA, 1979.
  11. E. Navarro, L. Jódar, and M. V. Ferrer, “Constructing eigenfunctions of strongly coupled parabolic boundary value systems,” Applied Mathematics Letters, vol. 15, no. 4, pp. 429–434, 2002.
  12. S. L. Campbell and C. D. Meyer Jr., Generalized Inverses of Linear Transformations, Pitman, London, UK, 1979.
  13. V. Soler, E. Defez, M. V. Ferrer, and J. Camacho, “On exact series solution of strongly coupled mixed parabolic problems,” Abstract and Applied Analysis, vol. 2013, Article ID 524514, 9 pages, 2013.
  14. C. R. Rao and S. K. Mitra, Generalized Inverse of Matrices and Its Applications, Wiley, New York, NY, USA, 1971.
  15. E. L. Ince, Ordinary Differential Equations, Dover, New York, NY, USA, 1962.
  16. V. Soler, E. Navarro, and M. V. Ferrer, “Invariant properties of eigenfunctions for multicondition boundary value problems,” Applied Mathematics Letters, vol. 19, no. 12, pp. 1308–1312, 2006.
  17. N. Dunford and J. Schwartz, Linear Operators, Part I, Interscience, New York, NY, USA, 1977.