Abstract

This paper studies the construction of the exact solution for parabolic coupled systems of the type $u_t = A u_{xx}$, $0 < x < 1$, $t > 0$, $A_1 u(0,t) + B_1 u_x(0,t) = 0$, $A_2 u(1,t) + B_2 u_x(1,t) = 0$, $t > 0$, and $u(x,0) = f(x)$, $0 \le x \le 1$, where $A_1$, $A_2$, $B_1$, and $B_2$ are arbitrary matrices for which the block matrix $\left(\begin{smallmatrix} A_1 & B_1 \\ A_2 & B_2 \end{smallmatrix}\right)$ is nonsingular, and $A$ is a positive stable matrix.

1. Introduction

Coupled partial differential systems with coupled boundary-value conditions are frequent in quantum mechanical scattering problems [1–3], chemical physics [4–6], thermoelastoplastic modelling [7], coupled diffusion problems [8–10], and other fields. In this paper, we consider systems of the type
$$u_t(x,t) = A\,u_{xx}(x,t), \quad 0 < x < 1, \; t > 0, \tag{1}$$
$$A_1 u(0,t) + B_1 u_x(0,t) = 0, \quad t > 0, \tag{2}$$
$$A_2 u(1,t) + B_2 u_x(1,t) = 0, \quad t > 0, \tag{3}$$
$$u(x,0) = f(x), \quad 0 \le x \le 1, \tag{4}$$
where the unknown $u(x,t)$ and the initial condition $f(x)$ are $m$-dimensional vectors; $A_1$, $B_1$, $A_2$, $B_2$ are complex matrices, elements of $\mathbb{C}^{m \times m}$; and $A \in \mathbb{C}^{m \times m}$ is a matrix which satisfies the condition
$$\operatorname{Re}(z) > 0 \quad \text{for all } z \in \sigma(A), \tag{5}$$
in which case we say that $A$ is a positive stable matrix (where $\operatorname{Re}(z)$ denotes the real part of $z$). We assume that the block matrix
$$\begin{pmatrix} A_1 & B_1 \\ A_2 & B_2 \end{pmatrix} \text{ is nonsingular,} \tag{6}$$
and also that
$$\text{the matrix pencil } A_1 + \lambda B_1 \text{ is regular.} \tag{7}$$
Condition (7) is well known from the literature on singular systems of differential equations, and it involves the existence of some $\rho \in \mathbb{C}$ such that the matrix $A_1 + \rho B_1$ is invertible [11].

Problem (1)–(4) was solved in [12] under the less restrictive condition (7), but not in the case where all of the blocks $A_1$, $B_1$, $A_2$, $B_2$ are singular, which is precisely the case considered here. Mixed problems of the previously mentioned type, but with the Dirichlet conditions $u(0,t) = 0$, $u(1,t) = 0$ instead of (2) and (3), have been treated in [13, 14].

Throughout this paper, and as usual, the matrix $I$ denotes the identity matrix in $\mathbb{C}^{m \times m}$. The set of all the eigenvalues of a matrix $C \in \mathbb{C}^{m \times m}$ is denoted by $\sigma(C)$, and its 2-norm is defined by [15, page 56]
$$\|C\| = \sup_{y \neq 0} \frac{\|Cy\|_2}{\|y\|_2},$$
where, for a vector $y \in \mathbb{C}^m$, the Euclidean norm of $y$ is $\|y\|_2 = (y^H y)^{1/2}$. By [15, page 556], it follows that
$$\|e^{Ct}\| \le e^{\alpha(C)t} \sum_{k=0}^{m-1} \frac{(\|N\| t)^k}{k!}, \quad t \ge 0, \tag{9}$$
where $\alpha(C) = \max\{\operatorname{Re}(z) : z \in \sigma(C)\}$ and $N$ is the nilpotent part of the Schur decomposition of $C$. We say that a subspace $E$ of $\mathbb{C}^m$ is invariant by the matrix $C \in \mathbb{C}^{m \times m}$ if $C(E) \subseteq E$. If $C$ is a matrix in $\mathbb{C}^{m \times n}$, we denote by $C^\dagger$ its Moore-Penrose pseudoinverse. A collection of examples, properties, and applications of this concept may be found in [11, 16], and the pseudoinverse can be computed efficiently with the MATLAB and Mathematica computer algebra systems.
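Positive stability (5) and the Moore-Penrose pseudoinverse are the two computational primitives used repeatedly below. As a hedged illustration (the helper name is_positive_stable is ours, and the text itself points to MATLAB and Mathematica), both checks are short in NumPy:

```python
import numpy as np

def is_positive_stable(A):
    """Check condition (5): every eigenvalue of A has positive real part."""
    return bool(np.all(np.linalg.eigvals(A).real > 0))

# Example: a positive stable matrix and a pseudoinverse of a singular matrix.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
print(is_positive_stable(A))             # True: sigma(A) = {2, 3}

B = np.array([[1.0, 0.0],
              [0.0, 0.0]])               # singular, so no ordinary inverse
B_dagger = np.linalg.pinv(B)             # Moore-Penrose pseudoinverse
print(np.allclose(B @ B_dagger @ B, B))  # first Penrose identity holds
```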

2. Preliminaries and Notation

In [17], eigenfunctions of problem (1)–(3) were constructed assuming additional conditions besides (6) and (7). We recall in this section the notation and results that are needed. Let $\tilde{A}_1$ and $\tilde{B}_1$ be the matrices defined by
$$\tilde{A}_1 = (A_1 + \rho B_1)^{-1} A_1, \qquad \tilde{B}_1 = (A_1 + \rho B_1)^{-1} B_1, \tag{10}$$
fulfilling the relation $\tilde{A}_1 + \rho \tilde{B}_1 = I$. Under hypothesis (6), the corresponding matrix of the second boundary condition is regular (see [17, page 431]), and we let $\tilde{A}_2$ and $\tilde{B}_2$ be the matrices defined analogously by (11), so that they satisfy the relationships (12). Assume the following condition (13): there exist a nonzero vector $b$ and scalars $\alpha$, $\beta$ such that $b$ is a common eigenvector of the matrices defined in (10) and (11), associated with the eigenvalues $\alpha$ and $\beta$, respectively. Assuming also that the values $\alpha$, $\beta$ of condition (13) satisfy (14), we can define the function given in (15). Note that under hypothesis (14) the existence of solutions of equation (16) is guaranteed: (16) has a unique solution $\lambda_n$ in each interval of length $\pi$ between consecutive vertical asymptotes, for $n \ge 1$, as seen in Figure 1. Also, it is straightforward to prove the following lemma.

Lemma 1. Under hypothesis (14), the roots $\lambda_n$ of (16) satisfy $\lambda_n < \lambda_{n+1}$ for all $n \ge 1$. Also, if the coefficient function on the right-hand side of (16) is eventually decreasing, then
$$n\pi < \lambda_n < n\pi + \frac{\pi}{2} \quad \text{for } n \text{ large enough}. \tag{17}$$
Otherwise, if it is eventually increasing, then
$$n\pi - \frac{\pi}{2} < \lambda_n < n\pi \quad \text{for } n \text{ large enough}. \tag{18}$$
However, in all cases it is
$$\lim_{n \to \infty} (\lambda_n - n\pi) = 0. \tag{19}$$

Proof. The function $\tan \lambda$ has vertical asymptotes at the points $(2n+1)\pi/2$, $n \ge 0$, and zeros at the points $n\pi$, $n \ge 0$. Thus, as we have stated, the graph of the real coefficient function on the right-hand side of (16) intersects the graph of $\tan \lambda$ once in each interval between consecutive asymptotes, at a point $\lambda_n$, so the sequence $\{\lambda_n\}$ is monotonically increasing and unbounded. We have to consider two possibilities:
(i) if the right-hand side of (16) is eventually decreasing, then, as seen in Figure 1, for $n$ large enough one gets $n\pi < \lambda_n < n\pi + \pi/2$, which is (17);
(ii) if the right-hand side of (16) is eventually increasing, then, as seen in Figure 1, for $n$ large enough one gets $n\pi - \pi/2 < \lambda_n < n\pi$, which is (18).
In both cases, write $\lambda_n = n\pi + \varepsilon_n$, with $|\varepsilon_n| < \pi/2$ for $n$ sufficiently large. Substituting this expression in (16), dividing by $\lambda_n$, and taking limits as $n \to \infty$, one gets that $\tan \varepsilon_n \to 0$; as the sequence $\{\varepsilon_n\}$ is bounded, this forces $\varepsilon_n \to 0$. This demonstrates that the sequences $\{\lambda_n\}$ and $\{n\pi\}$ are asymptotically equivalent and that $\lambda_n - n\pi \to 0$. The same dichotomy, with the same conclusions, applies in the remaining sign configurations of $\alpha$ and $\beta$.
Finally, if $\alpha = \beta = 0$, (16) reduces to $\tan \lambda = 0$, whose roots are $\lambda_n = n\pi$, $n \ge 1$, and trivially $\lambda_n - n\pi = 0$. Then (19) holds in all cases.
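In practice the roots of (16) are computed numerically. As a hedged sketch, assume for illustration that (16) has the model form $\tan \lambda = c/\lambda$ (the actual right-hand side is the one fixed by (15)); each root can then be bracketed between consecutive asymptotes and located with SciPy's brentq:

```python
import numpy as np
from scipy.optimize import brentq

def roots_model_equation(c, n_roots=10, eps=1e-6):
    """Roots of tan(lam) = c/lam, one per interval between asymptotes
    (a model stand-in for equation (16))."""
    roots = []
    for n in range(1, n_roots + 1):
        # Bracket inside (n*pi - pi/2, n*pi + pi/2), avoiding the asymptotes.
        a = (n - 0.5) * np.pi + eps
        b = (n + 0.5) * np.pi - eps
        roots.append(brentq(lambda lam: np.tan(lam) - c / lam, a, b))
    return np.array(roots)

lams = roots_model_equation(c=1.0)
print(lams - np.pi * np.arange(1, 11))   # differences decrease toward 0
```

The printed differences $\lambda_n - n\pi$ tend to zero, in agreement with (19).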

Under hypothesis (14) there is a root $\lambda_n$ of (16) in each of the intervals described above, and we can define the set of eigenvalues of problem (1)–(3) as the set given in (27). Thus, by [17, page 433], a set of solutions of problem (1) is given by the functions in (29), where the vector $d_n$ satisfies the algebraic system (30). Observe that, if $k$ is the degree of the minimal polynomial of $A$, the block matrix $G(\lambda)$ appearing in (30) is the one defined by (31). In order to ensure that a nonzero $d_n$ satisfies (30), we impose the rank condition (32), and under condition (32) the solution of (30) is given by (33) in terms of the Moore-Penrose pseudoinverse $G(\lambda_n)^\dagger$. The eigenfunctions $\varphi_n$ associated with problem (1) are then given by (34). Also, $\lambda = 0$ is an eigenvalue of problem (1) if condition (35) holds. Under hypothesis (35), if condition (36) holds and we denote by $\varphi_0$ the function defined by (37), one gets that $\varphi_0$ is an eigenfunction of problem (1) associated with the eigenvalue $\lambda = 0$.
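In computational terms, (30)–(34) reduce to linear algebra: a candidate $\lambda$ is accepted when the block matrix of (31) loses rank, and the admissible vectors $d$ are its null vectors, which is also what the pseudoinverse formula (33) encodes. A hedged sketch of this acceptance test (the matrix G below is a rank-deficient stand-in, not the matrix of (31)):

```python
import numpy as np

def accepts(G_lam, tol=1e-10):
    """Eigenvalue test: lambda is accepted when G(lambda) is rank deficient,
    i.e., the homogeneous system G(lambda) d = 0 has a nontrivial solution."""
    return np.linalg.matrix_rank(G_lam, tol=tol) < G_lam.shape[1]

def null_vector(G_lam):
    """A nontrivial solution d of G(lambda) d = 0, read off from the SVD.
    Equivalently, d lies in the range of the projector I - pinv(G) @ G."""
    _, _, vh = np.linalg.svd(G_lam)
    return vh[-1]      # right singular vector of the smallest singular value

# Illustration with a rank-deficient 3x3 matrix standing in for G(lambda):
G = np.array([[1.0, 0.0, 2.0],
              [0.0, 0.0, 0.0],
              [2.0, 0.0, 4.0]])
print(accepts(G))                 # True
d = null_vector(G)
print(np.allclose(G @ d, 0))      # True: d solves G d = 0
```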

All these results are summarized in Theorem 2.1 of [17, page 434]. Our goal is to find the exact solution of problem (1)–(4). We provide conditions on the function $f$ and on the matrix coefficients in order to ensure the existence of a series solution of the problem. The paper is organized as follows. In Section 3 a series solution for the problem is presented. In Section 4 we give an algorithm and an illustrative example.

3. A Series Solution

By the superposition principle, a possible candidate for the series solution of problem (1)–(4) is the series given in (38), where the eigenfunctions $\varphi_n$ and $\varphi_0$ are defined by (34) and (37), respectively, for suitable vectors $c_n$ and $c_0$ that remain to be determined.

Assuming that the series (38) and the corresponding derivative series for $u_t$, $u_x$, and $u_{xx}$ are convergent (we will demonstrate this later), (38) will be a solution of (1)–(3). Now, we need to determine the vectors $c_n$ and $c_0$ so that (38) also satisfies (4).

Note that, taking the vector $b$ to satisfy (13), from (12) one gets the scalar boundary relations (39). Under condition (39), we will consider the scalar Sturm-Liouville problem (40), which provides the family of eigenvalues given in (27). Then, the associated scalar eigenfunctions are given by (41).

By the convergence theorem for Sturm-Liouville functional series [18, chapter 11], if the initial condition $f$ given in (4) satisfies the properties collected in (42), then each component $f_i$ of $f$, for $1 \le i \le m$, has a series expansion in the scalar eigenfunctions which converges absolutely and uniformly on the interval $[0,1]$; namely, the expansion (43), with coefficients given by (44). Thus, $f$ itself admits the vector expansion (45), with coefficient vectors built componentwise from (44). On the other hand, from (38), and taking into account (34) and (37), one gets the expansion (46).
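As a hedged illustration of the coefficient formulas (43)–(44), assume the model case in which the scalar eigenfunctions of (40) are $\sin(n\pi x)$ on $[0,1]$; the coefficients are then inner products computed by quadrature (the helper sl_coefficients is ours):

```python
import numpy as np
from scipy.integrate import quad

def sl_coefficients(f, n_terms=20):
    """Coefficients a_n of f(x) ~ sum a_n sin(n pi x) on [0, 1].

    Model case: eigenfunctions sin(n pi x), with squared norm 1/2,
    standing in for the eigenfunctions of the scalar problem (40)."""
    coeffs = []
    for n in range(1, n_terms + 1):
        integral, _ = quad(lambda x: f(x) * np.sin(n * np.pi * x), 0.0, 1.0)
        coeffs.append(2.0 * integral)     # divide by the squared norm 1/2
    return np.array(coeffs)

# Usage: expand f(x) = x(1 - x) and check a partial sum pointwise.
f = lambda x: x * (1.0 - x)
a = sl_coefficients(f)
x0 = 0.3
partial = sum(a[n - 1] * np.sin(n * np.pi * x0) for n in range(1, 21))
print(abs(partial - f(x0)))               # small truncation error
```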

We can equate the expansions (45) and (46) if the vectors $c_n$ and $c_0$, apart from conditions (33) and (36), satisfy the matching relations (47). Note that these relations can indeed be fulfilled if condition (48) holds. Then the function $u(x,t)$ defined by (49), where the coefficients and the vectors are defined by (44) and (47), satisfies the initial condition (4). Note that conditions (30)–(32) hold if (50) is satisfied, and then (51) follows. It is easy to check that conditions (48) and (51) are equivalent to the single condition (52), and condition (52) holds if (53) is satisfied. Now we study the convergence of the solution given by (49), with the coefficients defined by (44) and the vectors defined by (47). Using Parseval's identity for scalar Sturm-Liouville problems [19], there exists a positive constant $M$ bounding the norms of all the coefficient vectors. Taking formal derivatives in (49), one gets the series (54) and (55). These series are all bounded, in their respective norms, by the series (56). To check that the series (54)–(55) are uniformly convergent in each domain $[0,1] \times [t_0, \infty)$, $t_0 > 0$, it is sufficient to verify that the series (56) is uniformly convergent in this domain. This is straightforward because, using (9), each summand of (56) is bounded for $t \ge t_0$ by a term that no longer depends on $t$; applying the d'Alembert ratio test to the resulting numerical series, and taking into account (5) and the relation (19) given in Lemma 1, one gets that the ratio of consecutive terms tends to zero as $n \to \infty$. Thus, the series (56) is convergent.
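The convergence mechanism can be checked numerically: by (5) and (9), each term of (56) is damped by $\|e^{-\lambda_n^2 A t}\|$, which decays super-geometrically in $n$ for fixed $t \ge t_0 > 0$ because $\lambda_n$ grows like $n\pi$ by Lemma 1. A hedged sketch, with a sample positive stable matrix of our choosing and scipy.linalg.expm:

```python
import numpy as np
from scipy.linalg import expm, norm

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])           # positive stable: sigma(A) = {2, 3}
t0 = 0.1                             # any fixed t0 > 0
lam = np.pi * np.arange(1, 8)        # model the growth lambda_n ~ n*pi

terms = [norm(expm(-l**2 * A * t0), 2) for l in lam]
ratios = [terms[n + 1] / terms[n] for n in range(len(terms) - 1)]
print(ratios)    # ratios -> 0: the d'Alembert test gives convergence of (56)
```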

Independence of the series solution (49) with respect to the chosen value of $\rho$ can be demonstrated using the same technique as in [20].

We can summarize the results in the following theorem.

Theorem 2. Consider the homogeneous problem with homogeneous conditions (1)–(4) under hypotheses (5), (6), and (7), verifying conditions (13) and (14). Let $f$ be a vectorial function satisfying (42). Let $\{\lambda_n\}$ be the set defined by (27), and let $G(\lambda)$ be the matrix defined by (31). Take as eigenvalues of the problem the values $\lambda_n$ satisfying (32), including the eigenvalue $\lambda = 0$ if (35) holds, and take as eigenfunctions the functions $\varphi_n$ defined by (34). Let the coefficients be given by (44) and the vectors $c_n$ be defined by (47). Then $u(x,t)$, as defined in (49), is a series solution of problem (1)–(4).

4. Algorithm and Example

We can summarize the process to calculate the solution of the homogeneous problem with homogeneous conditions (1)–(4) in Algorithm 1.

Input data: the matrices $A$, $A_1$, $B_1$, $A_2$, $B_2$ and the vectorial function $f(x)$.
Result: a series solution $u(x,t)$ of problem (1)–(4).
(1) Check that the matrix $A$ satisfies (5).
(2) Check that the matrices $A_1$, $B_1$, $A_2$, $B_2$ are singular, and check that the block matrix
     $\left(\begin{smallmatrix} A_1 & B_1 \\ A_2 & B_2 \end{smallmatrix}\right)$ is regular.
(3) Determine a number $\rho$ so that the matrix pencil $A_1 + \rho B_1$ is regular.
(4) Determine the matrices $\tilde{A}_1$ and $\tilde{B}_1$ defined by (10).
(5) Determine the matrices $\tilde{A}_2$ and $\tilde{B}_2$ defined by (11).
(6) Consider the following cases:
(i) Case  1. Condition (13) holds, that is, the matrices determined in steps (4) and (5) have a common eigenvector $b$ associated
   with the eigenvalues $\alpha$ and $\beta$. In this case continue with step (7).
(ii) Case  2. Condition (13) does not hold. In this case the algorithm stops because it is not possible to
   find the solution of (1)–(4) for the given data.
(7) Determine $\alpha$, $\beta$, and the vector $b$ verifying (13)
     such that:
(i) Conditions (53) hold, that is:
1.1: the subspace associated with $b$ in (53) is an invariant subspace with respect to the matrix $A$.
1.2: the remaining algebraic relations of (53) hold.
    (ii) Conditions (14) hold, that is:
1.3: the values $\alpha$ and $\beta$ satisfy the requirements of (14).
    (iii) The vectorial function $f$ satisfies (42), that is:
1.4–1.6: the three regularity and compatibility requirements collected in (42) hold for $f$.
If these conditions are not satisfied, return to step (6) of Algorithm 1, discarding the values
taken for $\alpha$, $\beta$, and $b$.
(8) Determine the positive solutions $\lambda_n$ of (16) and determine the set defined by (27).
(9) Determine the degree $k$ of the minimal polynomial of the matrix $A$.
(10) Build the block matrix $G(\lambda)$ defined by (31).
(11) Determine the values $\lambda_n$ so that the rank condition (32) on $G(\lambda_n)$ holds.
(12) Include the eigenvalue $\lambda = 0$ if (35) holds.
(13) Determine the coefficients given by (44).
(14) Determine the vectors $c_n$ defined by (47).
(15) Determine the eigenfunctions $\varphi_n(x)$ defined by (34).
(16) Determine the series solution of problem (1)–(4) defined by (49).
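Steps (1)–(3) of Algorithm 1 are mechanical matrix checks and are easy to automate. A hedged sketch in NumPy (the helper names are ours, and the search in step (3) simply tries a short list of candidate values of $\rho$):

```python
import numpy as np

def check_step_1(A):
    """Step (1): A must satisfy (5), i.e., be positive stable."""
    return bool(np.all(np.linalg.eigvals(A).real > 0))

def check_step_2(A1, B1, A2, B2, tol=1e-12):
    """Step (2): every block singular, yet the block matrix regular."""
    m = A1.shape[0]
    blocks_singular = all(np.linalg.matrix_rank(M) < m for M in (A1, B1, A2, B2))
    big = np.block([[A1, B1], [A2, B2]])
    return blocks_singular and abs(np.linalg.det(big)) > tol

def find_rho_step_3(A1, B1, candidates=(1.0, -1.0, 2.0, 0.5), tol=1e-12):
    """Step (3): a value rho making the pencil A1 + rho*B1 invertible."""
    for rho in candidates:
        if abs(np.linalg.det(A1 + rho * B1)) > tol:
            return rho
    return None   # pencil singular for all trials; enlarge the candidate list
```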

Example 1. We will consider the homogeneous parabolic problem with homogeneous conditions (1)–(4) for a particular choice of the positive stable matrix $A$, of the singular coefficient matrices $A_1$, $B_1$, $A_2$, $B_2$, and of the vectorial function $f$. Observe that the method proposed in [12] cannot be applied to solve this problem.

We will follow Algorithm 1 step by step.
(1) The matrix $A$ satisfies condition (5), because every eigenvalue of $A$ has positive real part; that is, $A$ is positive stable.
(2) Each of the matrices $A_1$, $B_1$, $A_2$, $B_2$ is singular, and the block matrix $\left(\begin{smallmatrix} A_1 & B_1 \\ A_2 & B_2 \end{smallmatrix}\right)$ is regular.
(3) Note that although $A_1$ is singular, there is a value of $\rho$ for which the matrix pencil $A_1 + \rho B_1$ is regular; we fix such a value of $\rho$.
(4) By (10) we obtain the matrices $\tilde{A}_1$ and $\tilde{B}_1$.
(5) By (11) we obtain the matrices $\tilde{A}_2$ and $\tilde{B}_2$.
(6) Condition (13) holds, because for suitable eigenvalues $\alpha$ and $\beta$ there exists a common eigenvector $b \neq 0$. We are therefore in Case 1 of Algorithm 1.
(7) We take these values of $\alpha$ and $\beta$ and check the conditions given in step (7) of the algorithm:
(1.1) the relevant subspace is invariant by the matrix $A$;
(1.2) the remaining relations of (53) are trivial to check;
(1.3) with these values of $\alpha$ and $\beta$, conditions (14) hold;
(1.4)–(1.6) it is trivial to check that $f$ satisfies the three requirements of (42).
(8) Equation (16) takes the particular form (72). We can solve (72) exactly, with an additional solution $\lambda = 0$, and we obtain a numerable family of positive solutions of (72), which we denote by $\lambda_n$, given by (74).
(9) The minimal polynomial of the matrix $A$ determines the degree $k$.
(10) If $\lambda$ is a positive solution of (72), the matrix $G(\lambda)$ given by (31) takes an explicit form whose second column is zero.
(11) Since the second column is zero, the matrix $G(\lambda)$ is rank deficient. Thus, each one of the positive solutions given by (74) is an eigenvalue.
(12) It is trivial to check that condition (35) fails for $\lambda = 0$; then we do not include $\lambda = 0$ as an eigenvalue.
(13) Taking (44) into account, one gets the coefficients of the expansion of $f$.
(14) The vectors $c_n$ defined by (47) take the explicit values given in (77).
(15) Using the minimal theorem [21, page 571], one gets the closed-form representation (78). Next, by considering (78) with the data of the example and simplifying, we obtain the required expression. Taking into account that all the eigenvalues are positive, the associated eigenfunctions are given by (79).
(16) We replace the values of $c_n$ given by (77) in (79) and take into account the value of the matrix $A$. After simplification, we finally obtain the solution of (1)–(4).
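To make the flow of Algorithm 1 tangible, here is a hedged end-to-end sketch under model assumptions that are ours, not the example's: eigenvalues $\lambda_n = n\pi$, scalar eigenfunctions $\sin(\lambda_n x)$, and a generic positive stable matrix $A$. It mirrors the structure of a truncated series of the shape (49); the helpers coefficient and u_truncated are hypothetical:

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad

# Model data (ours, not the example's): a positive stable A and a vector f.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
f = [lambda x: x * (1.0 - x), lambda x: np.sin(np.pi * x)]
m, N = 2, 25                          # system size and truncation order

def coefficient(n):
    """Vector coefficient of f against sin(n pi x), in the spirit of (44)."""
    return np.array([2.0 * quad(lambda x: fi(x) * np.sin(n * np.pi * x), 0, 1)[0]
                     for fi in f])

def u_truncated(x, t):
    """Truncation of a series of the shape (49): exp(-lam_n^2 A t) terms."""
    total = np.zeros(m)
    for n in range(1, N + 1):
        lam = n * np.pi
        total += expm(-lam**2 * A * t) @ coefficient(n) * np.sin(lam * x)
    return total

print(u_truncated(0.3, 0.0))   # at t = 0 the truncation approximates f(0.3)
print(u_truncated(0.3, 0.5))   # for t > 0 the series is strongly damped
```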

Acknowledgments

This research has been supported by the Universitat Politècnica de València Grant PAID-06-11-2020. The third listed author has been partially supported by the Universitat Jaume I, Grant P1.1B2012-05.