Abstract

Based on recent progress in understanding the abstract setting for Friedrichs symmetric positive systems by Ern et al. (2007), as well as Antonić and Burazin (2010), we continue our efforts to relate these results to the classical Friedrichs theory. Following the approach via the trace operator, we extend the results of Antonić and Burazin (2011) to situations where the relevant boundary field does not consist only of projections, which allows the treatment of hyperbolic equations in addition to elliptic ones.

1. Introduction

Over fifty years ago Friedrichs [1] showed that many partial differential equations of mathematical physics can be written as a first-order system of the form
$$\mathcal{L}\mathsf{u}:=\sum_{k=1}^{d}\partial_k(\mathbf{A}_k\mathsf{u})+\mathbf{C}\mathsf{u}=\mathsf{f},\tag{1.1}$$
which was afterwards called the Friedrichs system or the symmetric positive system.

More precisely, it is assumed (we keep these assumptions throughout the rest of the paper) that $d,r\in\mathbf{N}$ and that $\Omega\subseteq\mathbf{R}^d$ is an open and bounded set with Lipschitz boundary $\Gamma$ (we will denote its closure by $\mathsf{Cl}\,\Omega=\Omega\cup\Gamma$). Real matrix functions $\mathbf{A}_k\in W^{1,\infty}(\Omega;M_r(\mathbf{R}))$, $k\in\{1,\dots,d\}$, and $\mathbf{C}\in L^\infty(\Omega;M_r(\mathbf{R}))$ satisfy
$$\mathbf{A}_k\ \text{is symmetric: }\ \mathbf{A}_k=\mathbf{A}_k^\top,\tag{F1}$$
$$(\exists\,\mu_0>0)\qquad \mathbf{C}+\mathbf{C}^\top+\sum_{k=1}^{d}\partial_k\mathbf{A}_k\geq 2\mu_0\mathbf{I}\quad(\text{a.e. on }\Omega),\tag{F2}$$
while $\mathsf{f}\in L^2(\Omega;\mathbf{R}^r)$.
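As a minimal concrete illustration (ours, not part of the original exposition), the advection–reaction equation already fits the above framework:

```latex
% Scalar case r = 1: the A_k and C are 1x1 matrices, i.e. functions.
% For a Lipschitz vector field b = (b_1,...,b_d) and c in L^infty(Omega):
\mathcal{L}u \;=\; \operatorname{div}(\mathbf{b}\,u) + c\,u \;=\; f .
% Here A_k = b_k is trivially symmetric, so (F1) holds, while (F2) reads
(\exists\,\mu_0>0)\qquad 2c + \operatorname{div}\mathbf{b} \;\ge\; 2\mu_0
\quad\text{a.e. on }\Omega ,
```

that is, the system is positive whenever the reaction coefficient dominates the compression of the advection field.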

Quite often, even though a system does not satisfy the above conditions, it can be symmetrised after multiplication by a positive definite matrix function. However, the choice of such a multiplier is in general neither unique nor straightforward. An important advantage of this framework is that it can accommodate equations that change their type, such as the equations appearing in mathematical models of transonic gas flow.

For the boundary conditions, Friedrichs [1] first defined
$$\mathbf{A}_\nu:=\sum_{k=1}^{d}\nu_k\mathbf{A}_k,\tag{1.2}$$
where $\boldsymbol\nu=(\nu_1,\nu_2,\dots,\nu_d)$ is the outward unit normal on $\Gamma$, which is, as well as $\mathbf{A}_\nu$, of class $L^\infty$ on $\Gamma$ (of course, Friedrichs considered more regular boundaries at the time). For a given matrix field on the boundary $\mathbf{M}:\Gamma\to M_r(\mathbf{R})$, the boundary condition is prescribed by
$$(\mathbf{A}_\nu-\mathbf{M})\mathsf{u}|_\Gamma=0,\tag{1.3}$$
and by varying $\mathbf{M}$ one can enforce different boundary conditions. Friedrichs required the following two conditions (for a.e. $\mathbf{x}\in\Gamma$) to hold:
$$(\forall\boldsymbol\xi\in\mathbf{R}^r)\qquad \mathbf{M}(\mathbf{x})\boldsymbol\xi\cdot\boldsymbol\xi\geq0,\tag{FM1}$$
$$\mathbf{R}^r=\ker\big(\mathbf{A}_\nu(\mathbf{x})-\mathbf{M}(\mathbf{x})\big)+\ker\big(\mathbf{A}_\nu(\mathbf{x})+\mathbf{M}(\mathbf{x})\big),\tag{FM2}$$
and such $\mathbf{M}$ he called an admissible boundary condition. In the sequel we will refer to both properties (FM1) and (FM2) as (FM), and similarly in other such situations.
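To see how (FM) selects classical boundary conditions, consider the simplest possible case (our illustration, with $d=r=1$):

```latex
% Omega = (0,1), L u = u' + c u with 2c >= 2 mu_0 > 0; here A_1 = 1, so
% A_nu(0) = -1 and A_nu(1) = 1. Choose M(0) = M(1) = 1. Then (FM1) holds,
% and (FM2) is checked directly: at x = 0,
\ker(\mathbf{A}_\nu-\mathbf{M}) = \ker(-2) = \{0\},\qquad
\ker(\mathbf{A}_\nu+\mathbf{M}) = \ker(0) = \mathbf{R},
% so their sum is R; at x = 1 the roles of the two kernels are exchanged.
% The boundary condition (A_nu - M) u |_Gamma = 0 reduces to -2 u(0) = 0,
% i.e. the inflow condition u(0) = 0, with nothing imposed at x = 1.
```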

The boundary value problem thus reads: for given $\mathsf{f}\in L^2(\Omega;\mathbf{R}^r)$ find $\mathsf{u}$ such that
$$\begin{cases}\mathcal{L}\mathsf{u}=\mathsf{f},\\ (\mathbf{A}_\nu-\mathbf{M})\mathsf{u}|_\Gamma=0.\end{cases}\tag{BVP}$$

Of course, under such weak assumptions the existence of a classical solution (in $C^1$ or $W^{1,\infty}$) cannot be expected. It can be shown that, in general, the solution belongs only to the graph space of the operator $\mathcal{L}$:
$$W=\big\{\mathsf{u}\in L^2(\Omega;\mathbf{R}^r):\mathcal{L}\mathsf{u}\in L^2(\Omega;\mathbf{R}^r)\big\}.\tag{1.4}$$
$W$ is a separable Hilbert space (see, e.g., [2]) with the inner product (the corresponding norm will be denoted by $\|\cdot\|_{\mathcal{L}}$)
$$\langle\mathsf{u}\mid\mathsf{v}\rangle_{\mathcal{L}}=\langle\mathsf{u}\mid\mathsf{v}\rangle_{L^2(\Omega;\mathbf{R}^r)}+\langle\mathcal{L}\mathsf{u}\mid\mathcal{L}\mathsf{v}\rangle_{L^2(\Omega;\mathbf{R}^r)},\tag{1.5}$$
in which the restrictions of functions from $C_c^\infty(\mathbf{R}^d;\mathbf{R}^r)$ to $\Omega$ are dense.

However, with such a weak notion of solution in a quite large space, the question arises how to interpret the boundary condition. It is not a priori clear what would be the meaning of the restriction $\mathsf{u}|_\Gamma$ for functions $\mathsf{u}$ from the graph space. Recently (cf. [2, 3]; for standard results regarding the traces of functions defined in Lipschitz domains we refer to [4]), it has been shown that $\mathbf{A}_\nu\mathsf{u}|_\Gamma$ can be interpreted as an element of $H^{-1/2}(\Gamma;\mathbf{R}^r)$. Namely, on the graph space we can define an operator $\mathcal{T}:W\to H^{-1/2}(\Gamma;\mathbf{R}^r)$, which for $\mathsf{u},\mathsf{v}\in H^1(\Omega;\mathbf{R}^r)$ satisfies
$${}_{H^{-1/2}(\Gamma;\mathbf{R}^r)}\langle\mathcal{T}\mathsf{u},\mathcal{T}_{H^1}\mathsf{v}\rangle_{H^{1/2}(\Gamma;\mathbf{R}^r)}=\langle\mathcal{L}\mathsf{u}\mid\mathsf{v}\rangle_{L^2(\Omega;\mathbf{R}^r)}-\langle\mathsf{u}\mid\tilde{\mathcal{L}}\mathsf{v}\rangle_{L^2(\Omega;\mathbf{R}^r)}=\int_\Gamma\mathbf{A}_\nu(\mathbf{x})\mathcal{T}_{H^1}\mathsf{u}(\mathbf{x})\cdot\mathcal{T}_{H^1}\mathsf{v}(\mathbf{x})\,dS(\mathbf{x}),\tag{1.6}$$
where $\mathcal{T}_{H^1}$ stands for the trace operator $\mathcal{T}_{H^1}:H^1(\Omega;\mathbf{R}^r)\to H^{1/2}(\Gamma;\mathbf{R}^r)$, and $\tilde{\mathcal{L}}:L^2(\Omega;\mathbf{R}^r)\to\mathcal{D}'(\Omega;\mathbf{R}^r)$, the formally adjoint operator to $\mathcal{L}$, is defined by
$$\tilde{\mathcal{L}}\mathsf{v}:=-\sum_{k=1}^{d}\partial_k(\mathbf{A}_k^\top\mathsf{v})+\Big(\mathbf{C}^\top+\sum_{k=1}^{d}\partial_k\mathbf{A}_k^\top\Big)\mathsf{v}.\tag{1.7}$$
In general, $\mathcal{T}$ is not an operator onto $H^{-1/2}(\Gamma;\mathbf{R}^r)$, but it still has a right inverse (the lifting operator) $\mathcal{E}:\operatorname{im}\mathcal{T}\to W_0^\perp\leqslant W$, which satisfies
$$\mathcal{T}(\mathcal{E}g)=g,\qquad g\in\operatorname{im}\mathcal{T}.\tag{1.8}$$
Here, $W_0$ denotes the closure of $C_c^\infty(\Omega;\mathbf{R}^r)$ in $W$, while $W_0^\perp$ denotes its orthogonal complement in $W$. As $\operatorname{im}\mathcal{T}$ is not necessarily closed in $H^{-1/2}(\Gamma;\mathbf{R}^r)$, $\mathcal{E}$ is not necessarily continuous.
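For $H^1$ functions, the first equality in (1.6) is classical integration by parts; a short computation (using the cancellation of the zero-order terms) runs as follows:

```latex
\langle\mathcal{L}\mathsf{u}\mid\mathsf{v}\rangle_{L^2(\Omega;\mathbf{R}^r)}
 - \langle\mathsf{u}\mid\tilde{\mathcal{L}}\mathsf{v}\rangle_{L^2(\Omega;\mathbf{R}^r)}
 = \sum_{k=1}^{d}\int_\Omega\Big(\partial_k(\mathbf{A}_k\mathsf{u})\cdot\mathsf{v}
   + \mathsf{u}\cdot\mathbf{A}_k^\top\partial_k\mathsf{v}\Big)\,d\mathbf{x}
 = \sum_{k=1}^{d}\int_\Omega\partial_k\big(\mathbf{A}_k\mathsf{u}\cdot\mathsf{v}\big)\,d\mathbf{x}
 = \int_\Gamma\mathbf{A}_\nu\mathsf{u}\cdot\mathsf{v}\,dS ,
```

where the terms involving $\mathbf{C}$ cancel and the divergence theorem produces the boundary integral. For general $\mathsf{u}\in W$ the left-hand side still makes sense, which is precisely how $\mathcal{T}$ extends the weighted trace $\mathbf{A}_\nu\mathsf{u}|_\Gamma$.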

Using this trace operator, the appropriate well-posedness results for the weak formulation of (BVP), under additional assumptions, have been proven [3, 5].

More recently, the Friedrichs theory has been rewritten in an abstract setting by Ern and Guermond [6] and Ern et al. [7], in terms of operators acting on Hilbert spaces, such that the traces on the boundary have not been explicitly used. Instead, the trace operator has been replaced by the boundary operator $D\in\mathcal{L}(W;W')$ defined, for $\mathsf{u},\mathsf{v}\in W$, by
$${}_{W'}\langle D\mathsf{u},\mathsf{v}\rangle_W:=\langle\mathcal{L}\mathsf{u}\mid\mathsf{v}\rangle_{L^2(\Omega;\mathbf{R}^r)}-\langle\mathsf{u}\mid\tilde{\mathcal{L}}\mathsf{v}\rangle_{L^2(\Omega;\mathbf{R}^r)}.\tag{1.9}$$
The boundary operator $D$ can also be expressed [2, 7] via the matrix function $\mathbf{A}_\nu$: for $\mathsf{u},\mathsf{v}\in H^1(\Omega;\mathbf{R}^r)$,
$${}_{W'}\langle D\mathsf{u},\mathsf{v}\rangle_W=\int_\Gamma\mathbf{A}_\nu(\mathbf{x})\mathcal{T}_{H^1}\mathsf{u}(\mathbf{x})\cdot\mathcal{T}_{H^1}\mathsf{v}(\mathbf{x})\,dS(\mathbf{x}).\tag{1.10}$$

In light of the expressions (1.10) and (1.6), it is clear that $\mathcal{T}$ and $D$ are somehow connected. However, $\mathcal{T}$ maps into $H^{-1/2}(\Gamma;\mathbf{R}^r)$, while the codomain of $D$ is $W'$, and it appears that $D$ has better properties than the trace operator. Namely, using the operator $D$ instead of $\mathcal{T}$, the following weak well-posedness result has been shown in [7].

Theorem 1.1. Assume that there exists an operator $M\in\mathcal{L}(W;W')$ satisfying
$$(\forall\mathsf{u}\in W)\qquad {}_{W'}\langle M\mathsf{u},\mathsf{u}\rangle_W\geq0,\tag{M1}$$
$$W=\ker(D-M)+\ker(D+M).\tag{M2}$$
Then, the restricted operators
$$\mathcal{L}|_{\ker(D-M)}:\ker(D-M)\to L^2(\Omega;\mathbf{R}^r),\qquad \tilde{\mathcal{L}}|_{\ker(D+M)}:\ker(D+M)\to L^2(\Omega;\mathbf{R}^r)\tag{1.11}$$
are isomorphisms.
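Behind Theorem 1.1 lies the basic a priori estimate (standard in the abstract theory of [6, 7]; we record a sketch for orientation): for $\mathsf{u}\in\ker(D-M)$,

```latex
2\,\langle\mathcal{L}\mathsf{u}\mid\mathsf{u}\rangle_{L^2}
 = \Big\langle\Big(\mathbf{C}+\mathbf{C}^\top+\sum_{k=1}^{d}\partial_k\mathbf{A}_k\Big)\mathsf{u}
   \,\Big|\,\mathsf{u}\Big\rangle_{L^2}
 + {}_{W'}\langle D\mathsf{u},\mathsf{u}\rangle_{W}
 \;\ge\; 2\mu_0\|\mathsf{u}\|^2_{L^2} + {}_{W'}\langle M\mathsf{u},\mathsf{u}\rangle_{W}
 \;\ge\; 2\mu_0\|\mathsf{u}\|^2_{L^2} ,
```

where we used (F2), the fact that $D\mathsf{u}=M\mathsf{u}$ on $\ker(D-M)$, and (M1). Hence $\mu_0\|\mathsf{u}\|_{L^2}\le\|\mathcal{L}\mathsf{u}\|_{L^2}$, which already gives injectivity; the surjectivity is the substantial part of the theorem.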

The operator 𝑀 from the theorem is also called the boundary operator, as ker𝑀=ker𝐷=𝑊0.

After rewriting the abstract theory of Ern et al. [7] in terms of Kreĭn spaces [2, 8, 9] and closing the questions they left open, in papers [10, 11] we investigated the precise relationship between the classical Friedrichs theory and its abstract counterpart and applied the new results on some examples.

To be specific, as the analogy between the properties (M) for operator 𝑀 and the conditions (FM) for matrix boundary condition 𝐌 is apparent, a natural question to be investigated is the nature of the relationship between the matrix field 𝐌 and the boundary operator 𝑀. More precisely, our goal was to find additional conditions on the matrix field 𝐌 with properties (FM) which will guarantee the existence of a suitable operator 𝑀(𝑊;𝑊), with properties (M).

For a given matrix field $\mathbf{M}$, which operators $M$ are suitable? We consider an operator $M$ suitable if the result of Theorem 1.1 really presents a weak well-posedness result for (BVP) in the following sense: if, for given $\mathsf{f}\in L^2(\Omega;\mathbf{R}^r)$, $\mathsf{u}\in\ker(D-M)$ is such that $\mathcal{L}\mathsf{u}=\mathsf{f}$, and additionally $\mathsf{u}\in C^1(\Omega;\mathbf{R}^r)\cap C(\mathsf{Cl}\,\Omega;\mathbf{R}^r)$, then $\mathsf{u}$ satisfies (BVP) in the classical sense.

With such a connection between $\mathbf{M}$ and the boundary operator $M$, applications of the abstract theory to particular equations of interest become easier, as calculations with matrices are simpler than those with operators. We also take it as a first step towards a better understanding of the relation between the existence and uniqueness results for Friedrichs systems as in [7, 8] and the earlier classical results [1, 3, 5].

In [10] we have established this connection between $\mathbf{M}$ and $M$ using two different approaches: via the boundary operator $D$ and via the trace operator $\mathcal{T}$. Based on (1.10) and (1.6), in both approaches we look for $M$ of the form (see [6])
$${}_{W'}\langle M\mathsf{u},\mathsf{v}\rangle_W=\int_\Gamma\mathbf{M}(\mathbf{x})\mathcal{T}_{H^1}\mathsf{u}(\mathbf{x})\cdot\mathcal{T}_{H^1}\mathsf{v}(\mathbf{x})\,dS(\mathbf{x}),\qquad\mathsf{u},\mathsf{v}\in H^1(\Omega;\mathbf{R}^r),\tag{1.12}$$
where we naturally assume that $\mathbf{M}$ is bounded, that is, $\mathbf{M}\in L^\infty(\Gamma;M_r(\mathbf{R}))$; both approaches make use of the following lemma.

Lemma 1.2. Let the matrix function $\mathbf{M}$ satisfy (FM1). Then, the following statements are equivalent.
(a) $\mathbf{M}$ satisfies (FM2).
(b) For almost every $\mathbf{x}\in\Gamma$ there is a projector $\mathbf{S}(\mathbf{x})$ such that $\mathbf{M}(\mathbf{x})=(\mathbf{I}-2\mathbf{S}(\mathbf{x}))\mathbf{A}_\nu(\mathbf{x})$.
(c) For almost every $\mathbf{x}\in\Gamma$ there is a projector $\mathbf{P}(\mathbf{x})$ such that $\mathbf{M}(\mathbf{x})=\mathbf{A}_\nu(\mathbf{x})(\mathbf{I}-2\mathbf{P}(\mathbf{x}))$.
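When $\mathbf{A}_\nu(\mathbf{x})$ is invertible, the fields in (b) and (c) are uniquely determined by $\mathbf{M}$; solving the two factorisations directly gives

```latex
\mathbf{S}(\mathbf{x}) = \tfrac12\big(\mathbf{I}-\mathbf{M}(\mathbf{x})\mathbf{A}_\nu^{-1}(\mathbf{x})\big),
\qquad
\mathbf{P}(\mathbf{x}) = \tfrac12\big(\mathbf{I}-\mathbf{A}_\nu^{-1}(\mathbf{x})\mathbf{M}(\mathbf{x})\big),
```

and the content of the lemma is that (FM2) makes these matrices projectors. Where $\mathbf{A}_\nu$ is singular the factorisation is no longer unique, which is the point of departure for Section 2.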

As properties (FM) do not guarantee that the formula (1.12) defines a continuous operator $M:W\to W'$ satisfying (M), we have found [10] two different sets of additional conditions under which the desired properties are satisfied. The conditions obtained by using the trace operator are given in the next theorem.

Theorem 1.3. Assume that the matrix field $\mathbf{M}\in L^\infty(\Gamma;M_r(\mathbf{R}))$ satisfies (FM) and that by (1.12) an operator $M\in\mathcal{L}(W;W')$ is defined. Then, (M1) holds.
Let the matrix function $\mathbf{S}$ from Lemma 1.2 additionally satisfy $\mathbf{S}\in C^{0,1/2}(\Gamma;M_r(\mathbf{R}))$. If by $\mathcal{S}\in\mathcal{L}(H^{1/2}(\Gamma;\mathbf{R}^r))$ one denotes the multiplication operator
$$\mathcal{S}(\mathsf{z})=\mathbf{S}\mathsf{z},\qquad\mathsf{z}\in H^{1/2}(\Gamma;\mathbf{R}^r),\tag{1.13}$$
by $\mathcal{S}^*\in\mathcal{L}(H^{-1/2}(\Gamma;\mathbf{R}^r))$ its adjoint operator defined by
$${}_{H^{-1/2}(\Gamma;\mathbf{R}^r)}\langle\mathcal{S}^*T,\mathsf{z}\rangle_{H^{1/2}(\Gamma;\mathbf{R}^r)}={}_{H^{-1/2}(\Gamma;\mathbf{R}^r)}\langle T,\mathcal{S}\mathsf{z}\rangle_{H^{1/2}(\Gamma;\mathbf{R}^r)},\qquad T\in H^{-1/2}(\Gamma;\mathbf{R}^r),\ \mathsf{z}\in H^{1/2}(\Gamma;\mathbf{R}^r),\tag{1.14}$$
and by $\mathcal{T}:W\to H^{-1/2}(\Gamma;\mathbf{R}^r)$ the trace operator, then the condition $\mathcal{S}^*(\operatorname{im}\mathcal{T})\subseteq\operatorname{im}\mathcal{T}$ implies (M2).

The representation of $\mathbf{M}$ as the product of a matrix field $\mathbf{I}-2\mathbf{S}$ and $\mathbf{A}_\nu$ is the essential ingredient in the proof of Theorem 1.3. However, in [11] we noted that the requirement that $\mathbf{S}$ be a projector appears overly restrictive in applications (this seems to be particularly true for hyperbolic equations), which motivated further investigation of possible improvements of Lemma 1.2. As a result we realised that $\mathbf{S}$ needs to be a projector only at points where $\mathbf{A}_\nu$ is a regular matrix. Our goal here is to verify whether Theorem 1.3 (or some variant of it) holds in the case when $\mathbf{S}$ is not a projector.

The paper is organised as follows. In Section 2 we propose an extension of the method from [10], the main result being Theorem 2.5. On the part of the boundary where 𝐀𝝂 is singular, the matrix 𝐒 appearing in Lemma 1.2(b) need not necessarily be a projector. This allows the treatment of hyperbolic equations, which is illustrated by an example in Section 3, where we also provide two sufficient conditions ensuring the assumptions of Theorem 2.5. Finally, in Section 4 we investigate whether we can get better results by using 𝐏 instead of 𝐒.

2. Approach via Trace Operator When S Is Not a Projector

Lemma 2.1. Let the matrix function $\mathbf{M}\in L^\infty(\Gamma;M_r(\mathbf{R}))$ satisfy (FM1). Then the following statements are equivalent.
(a) $\mathbf{M}$ satisfies (FM2).
(b) For almost every $\mathbf{x}\in\Gamma$ there is a matrix $\mathbf{S}(\mathbf{x})$ such that $\mathbf{M}(\mathbf{x})=(\mathbf{I}-2\mathbf{S}(\mathbf{x}))\mathbf{A}_\nu(\mathbf{x})$ and
$$\ker\big(\mathbf{A}_\nu(\mathbf{x})\mathbf{S}(\mathbf{x})\big)+\ker\big(\mathbf{A}_\nu(\mathbf{x})(\mathbf{I}-\mathbf{S}(\mathbf{x}))\big)=\mathbf{R}^r.\tag{2.1}$$
(c) For almost every $\mathbf{x}\in\Gamma$ there is a matrix $\mathbf{P}(\mathbf{x})$ such that $\mathbf{M}(\mathbf{x})=\mathbf{A}_\nu(\mathbf{x})(\mathbf{I}-2\mathbf{P}(\mathbf{x}))$ and
$$\ker\big(\mathbf{A}_\nu(\mathbf{x})\mathbf{P}(\mathbf{x})\big)+\ker\big(\mathbf{A}_\nu(\mathbf{x})(\mathbf{I}-\mathbf{P}(\mathbf{x}))\big)=\mathbf{R}^r.\tag{2.2}$$

Proof. As for $\mathbf{M}$ from (c) we have $\mathbf{A}_\nu-\mathbf{M}=2\mathbf{A}_\nu\mathbf{P}$ and $\mathbf{A}_\nu+\mathbf{M}=2\mathbf{A}_\nu(\mathbf{I}-\mathbf{P})$, (FM2) holds. Note also that, by Lemma 1.2, part (a) implies (c).
In order to prove that (b) is equivalent to (a) and (c), we use the well-known fact [1, 12] that $\mathbf{M}$ satisfies (FM) if and only if $\mathbf{M}^\top$ satisfies (FM). By (c) this is equivalent to the existence of $\mathbf{S}$ such that $\mathbf{M}^\top=\mathbf{A}_\nu(\mathbf{I}-2\mathbf{S})$ and $\ker(\mathbf{A}_\nu\mathbf{S})+\ker(\mathbf{A}_\nu(\mathbf{I}-\mathbf{S}))=\mathbf{R}^r$ a.e. on $\Gamma$, which is actually equivalent to (b).

Remark 2.2. Note that $\mathbf{P}$ and $\mathbf{S}$ from the previous lemma also satisfy
$$\ker(\mathbf{S}\mathbf{A}_\nu)+\ker\big((\mathbf{I}-\mathbf{S})\mathbf{A}_\nu\big)=\mathbf{R}^r\quad\text{a.e. on }\Gamma,\qquad
\ker(\mathbf{P}\mathbf{A}_\nu)+\ker\big((\mathbf{I}-\mathbf{P})\mathbf{A}_\nu\big)=\mathbf{R}^r\quad\text{a.e. on }\Gamma.\tag{2.3}$$
This is a consequence of the already mentioned statement that $\mathbf{M}$ satisfies (FM) if and only if $\mathbf{M}^\top$ satisfies (FM). It is also obvious that $\mathbf{S}\mathbf{A}_\nu=\mathbf{A}_\nu\mathbf{P}$ a.e. on $\Gamma$.

Remark 2.3. The result of Lemma 2.1 improves that of Lemma 1.2. We distinguish two situations that can occur at a fixed point 𝐱Γ (which we suppress in writing below).
If $\mathbf{A}_\nu$ is a regular matrix, then (for $\mathbf{P}$ as in Lemma 2.1) $\ker(\mathbf{A}_\nu\mathbf{P})=\ker\mathbf{P}$ and $\ker(\mathbf{A}_\nu(\mathbf{I}-\mathbf{P}))=\ker(\mathbf{I}-\mathbf{P})$, and therefore $\ker(\mathbf{A}_\nu\mathbf{P})+\ker(\mathbf{A}_\nu(\mathbf{I}-\mathbf{P}))=\mathbf{R}^r$ is equivalent to $\ker\mathbf{P}+\ker(\mathbf{I}-\mathbf{P})=\mathbf{R}^r$, which is equivalent to the statement that $\mathbf{P}$ is a projector.
If $\mathbf{A}_\nu$ is not regular, then there can be several matrices $\mathbf{P}$ which are not projectors but nevertheless satisfy $\ker(\mathbf{A}_\nu\mathbf{P})+\ker(\mathbf{A}_\nu(\mathbf{I}-\mathbf{P}))=\mathbf{R}^r$. For example, any matrix $\mathbf{P}$ such that $\operatorname{im}\mathbf{P}\subseteq\ker\mathbf{A}_\nu$ or $\operatorname{im}(\mathbf{I}-\mathbf{P})\subseteq\ker\mathbf{A}_\nu$ satisfies this condition, as for such $\mathbf{P}$ either $\ker(\mathbf{A}_\nu\mathbf{P})=\mathbf{R}^r$ or $\ker(\mathbf{A}_\nu(\mathbf{I}-\mathbf{P}))=\mathbf{R}^r$.
Similar statements hold for $\mathbf{S}$.
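A concrete (hypothetical) instance of the last observation, with $r=2$:

```latex
\mathbf{A}_\nu = \begin{bmatrix} 0 & 0 \\ 0 & 1 \end{bmatrix},\qquad
\mathbf{P} = \begin{bmatrix} 2 & 0 \\ 0 & 0 \end{bmatrix},\qquad
\mathbf{P}^2 = \begin{bmatrix} 4 & 0 \\ 0 & 0 \end{bmatrix} \neq \mathbf{P} .
% Since im P = span{e_1} = ker A_nu, we get A_nu P = 0, hence
\ker(\mathbf{A}_\nu\mathbf{P}) = \mathbf{R}^2 ,
% so the kernel condition of Lemma 2.1 holds although P is not a projector.
```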

A variant of the following lemma has been proved in [11].

Lemma 2.4. If $\mathbf{M}$ satisfies (FM), then for $\mathbf{P}$ and $\mathbf{S}$ as in Lemma 2.1 one has
$$\mathbf{A}_\nu\mathbf{P}(\mathbf{I}-\mathbf{P})=\mathbf{A}_\nu(\mathbf{I}-\mathbf{P})\mathbf{P}=\mathbf{A}_\nu\mathbf{S}(\mathbf{I}-\mathbf{S})=\mathbf{A}_\nu(\mathbf{I}-\mathbf{S})\mathbf{S}=\mathbf{0}\quad\text{a.e. on }\Gamma.\tag{2.4}$$

Proof. Any $\mathsf{w}\in\mathbf{R}^r$ can be decomposed as $\mathsf{w}=\boldsymbol\xi+\boldsymbol\eta$ such that $\boldsymbol\xi\in\ker(\mathbf{A}_\nu\mathbf{P})$ and $\boldsymbol\eta\in\ker(\mathbf{A}_\nu(\mathbf{I}-\mathbf{P}))$. Now, using $\mathbf{A}_\nu\mathbf{P}=\mathbf{S}\mathbf{A}_\nu$, we easily get
$$\mathbf{A}_\nu\mathbf{P}(\mathbf{I}-\mathbf{P})\mathsf{w}=\mathbf{A}_\nu\mathbf{P}(\mathbf{I}-\mathbf{P})\boldsymbol\xi+\mathbf{A}_\nu\mathbf{P}(\mathbf{I}-\mathbf{P})\boldsymbol\eta=\mathbf{A}_\nu\mathbf{P}\boldsymbol\xi-\mathbf{S}\mathbf{A}_\nu\mathbf{P}\boldsymbol\xi+\mathbf{S}\mathbf{A}_\nu(\mathbf{I}-\mathbf{P})\boldsymbol\eta=\mathsf{0},\tag{2.5}$$
which concludes the proof for $\mathbf{P}$, while for $\mathbf{S}$ one can argue analogously.

Next we prove a new version of Theorem 1.3, with 𝐒 not necessarily being a projector.

Theorem 2.5. Assume that the matrix field $\mathbf{M}\in L^\infty(\Gamma;M_r(\mathbf{R}))$ satisfies (FM) and that by (1.12) an operator $M\in\mathcal{L}(W;W')$ is defined. Then (M1) holds.
Let the matrix function $\mathbf{S}$ from Lemma 2.1 additionally satisfy $\mathbf{S}\in C^{0,1/2}(\Gamma;M_r(\mathbf{R}))$, such that the multiplication operator $\mathcal{S}$ defined by (1.13) belongs to $\mathcal{L}(H^{1/2}(\Gamma;\mathbf{R}^r))$. If one denotes by $\mathcal{S}^*\in\mathcal{L}(H^{-1/2}(\Gamma;\mathbf{R}^r))$ the adjoint operator of $\mathcal{S}$ and by $\mathcal{T}:W\to H^{-1/2}(\Gamma;\mathbf{R}^r)$ the trace operator on the graph space, then the condition $\mathcal{S}^*(\operatorname{im}\mathcal{T})\subseteq\operatorname{im}\mathcal{T}$ implies (M2).

Proof. It only remains to show (M2). First we prove
$$\mathcal{S}^*\big(\mathsf{I}_{H^{-1/2}}-\mathcal{S}^*\big)\mathcal{T}=\big(\mathsf{I}_{H^{-1/2}}-\mathcal{S}^*\big)\mathcal{S}^*\mathcal{T}=0,\tag{2.6}$$
where $\mathsf{I}_{H^{-1/2}}$ is the identity on $H^{-1/2}(\Gamma;\mathbf{R}^r)$. By using Lemma 2.4, for $\mathsf{u}\in H^1(\Omega;\mathbf{R}^r)$ and $\mathsf{z}\in H^{1/2}(\Gamma;\mathbf{R}^r)$, we have
$$\begin{aligned}
{}_{H^{-1/2}(\Gamma;\mathbf{R}^r)}\big\langle\mathcal{S}^*(\mathsf{I}_{H^{-1/2}}-\mathcal{S}^*)\mathcal{T}\mathsf{u},\mathsf{z}\big\rangle_{H^{1/2}(\Gamma;\mathbf{R}^r)}
&={}_{H^{-1/2}(\Gamma;\mathbf{R}^r)}\big\langle\mathcal{T}\mathsf{u},(\mathsf{I}_{H^{1/2}}-\mathcal{S})\mathcal{S}\mathsf{z}\big\rangle_{H^{1/2}(\Gamma;\mathbf{R}^r)}\\
&=\int_\Gamma\mathbf{A}_\nu(\mathbf{x})\mathcal{T}_{H^1}\mathsf{u}(\mathbf{x})\cdot(\mathbf{I}-\mathbf{S}(\mathbf{x}))\mathbf{S}(\mathbf{x})\mathsf{z}(\mathbf{x})\,dS(\mathbf{x})\\
&=\int_\Gamma\mathcal{T}_{H^1}\mathsf{u}(\mathbf{x})\cdot\mathbf{A}_\nu(\mathbf{x})(\mathbf{I}-\mathbf{S}(\mathbf{x}))\mathbf{S}(\mathbf{x})\mathsf{z}(\mathbf{x})\,dS(\mathbf{x})=0,
\end{aligned}\tag{2.7}$$
where $\mathsf{I}_{H^{1/2}}:H^{1/2}(\Gamma;\mathbf{R}^r)\to H^{1/2}(\Gamma;\mathbf{R}^r)$ is the identity, and we have used the symmetry of $\mathbf{A}_\nu$ and Lemma 2.4. Therefore, $\mathcal{S}^*(\mathsf{I}_{H^{-1/2}}-\mathcal{S}^*)\mathcal{T}\mathsf{u}=0$ for every $\mathsf{u}\in H^1(\Omega;\mathbf{R}^r)$, and, since $H^1(\Omega;\mathbf{R}^r)$ is dense in $W$, we have (2.6).
Just as in the proof of [10, Theorem 2], we will use the representations of the operators $D$ and $M$ through the trace operator $\mathcal{T}$: for $\mathsf{u}\in W$ and $\mathsf{v}\in H^1(\Omega;\mathbf{R}^r)$, we have
$${}_{W'}\langle D\mathsf{u},\mathsf{v}\rangle_W={}_{H^{-1/2}(\Gamma;\mathbf{R}^r)}\langle\mathcal{T}\mathsf{u},\mathcal{T}_{H^1}\mathsf{v}\rangle_{H^{1/2}(\Gamma;\mathbf{R}^r)},\qquad
{}_{W'}\langle M\mathsf{u},\mathsf{v}\rangle_W={}_{H^{-1/2}(\Gamma;\mathbf{R}^r)}\big\langle(\mathsf{I}_{H^{-1/2}}-2\mathcal{S}^*)\mathcal{T}\mathsf{u},\mathcal{T}_{H^1}\mathsf{v}\big\rangle_{H^{1/2}(\Gamma;\mathbf{R}^r)}.\tag{2.8}$$
By the assumption $\mathcal{S}^*(\operatorname{im}\mathcal{T})\subseteq\operatorname{im}\mathcal{T}$, for given $\mathsf{w}\in W$ we can define
$$\mathsf{u}:=\mathsf{w}-\mathcal{E}\mathcal{S}^*\mathcal{T}\mathsf{w},\qquad\mathsf{v}:=\mathcal{E}\mathcal{S}^*\mathcal{T}\mathsf{w},\tag{2.9}$$
where $\mathcal{E}:\operatorname{im}\mathcal{T}\to W$ is the right inverse of the operator $\mathcal{T}$, as before. Obviously, the decomposition $\mathsf{w}=\mathsf{u}+\mathsf{v}$ is valid.
Let us show that $\mathsf{u}\in\ker(D-M)$: for $\mathsf{s}\in H^1(\Omega;\mathbf{R}^r)$, by (2.6) and (2.8) we get
$$\begin{aligned}
{}_{W'}\langle(D-M)\mathsf{u},\mathsf{s}\rangle_W
&={}_{H^{-1/2}(\Gamma;\mathbf{R}^r)}\big\langle2\mathcal{S}^*\mathcal{T}\mathsf{u},\mathcal{T}_{H^1}\mathsf{s}\big\rangle_{H^{1/2}(\Gamma;\mathbf{R}^r)}\\
&={}_{H^{-1/2}(\Gamma;\mathbf{R}^r)}\big\langle2\mathcal{S}^*\mathcal{T}(\mathsf{w}-\mathcal{E}\mathcal{S}^*\mathcal{T}\mathsf{w}),\mathcal{T}_{H^1}\mathsf{s}\big\rangle_{H^{1/2}(\Gamma;\mathbf{R}^r)}\\
&={}_{H^{-1/2}(\Gamma;\mathbf{R}^r)}\big\langle2\mathcal{S}^*(\mathsf{I}_{H^{-1/2}}-\mathcal{S}^*)\mathcal{T}\mathsf{w},\mathcal{T}_{H^1}\mathsf{s}\big\rangle_{H^{1/2}(\Gamma;\mathbf{R}^r)}=0,
\end{aligned}\tag{2.10}$$
thus $(D-M)\mathsf{u}=0$, as $H^1(\Omega;\mathbf{R}^r)$ is dense in $W$.
It remains to show that $\mathsf{v}\in\ker(D+M)$. For $\mathsf{s}\in H^1(\Omega;\mathbf{R}^r)$, similarly as above, it follows that
$$\begin{aligned}
{}_{W'}\langle(D+M)\mathsf{v},\mathsf{s}\rangle_W
&={}_{H^{-1/2}(\Gamma;\mathbf{R}^r)}\big\langle2(\mathsf{I}_{H^{-1/2}}-\mathcal{S}^*)\mathcal{T}\mathsf{v},\mathcal{T}_{H^1}\mathsf{s}\big\rangle_{H^{1/2}(\Gamma;\mathbf{R}^r)}\\
&={}_{H^{-1/2}(\Gamma;\mathbf{R}^r)}\big\langle2(\mathsf{I}_{H^{-1/2}}-\mathcal{S}^*)\mathcal{T}\mathcal{E}\mathcal{S}^*\mathcal{T}\mathsf{w},\mathcal{T}_{H^1}\mathsf{s}\big\rangle_{H^{1/2}(\Gamma;\mathbf{R}^r)}\\
&={}_{H^{-1/2}(\Gamma;\mathbf{R}^r)}\big\langle2(\mathsf{I}_{H^{-1/2}}-\mathcal{S}^*)\mathcal{S}^*\mathcal{T}\mathsf{w},\mathcal{T}_{H^1}\mathsf{s}\big\rangle_{H^{1/2}(\Gamma;\mathbf{R}^r)}=0,
\end{aligned}\tag{2.11}$$
thus $(D+M)\mathsf{v}=0$, and we have the claim.

Theorem 2.5 provides sufficient conditions for the continuous operator $M:W\to W'$, defined by (1.12), to satisfy (M). A natural question arises whether these conditions are feasible. The condition $\mathbf{S}\in C^{0,1/2}(\Gamma;M_r(\mathbf{R}))$ does not appear particularly restrictive, as it is expected that the continuity of $M$ requires even higher regularity of $\mathbf{S}$ (see [10]). However, the other condition, requiring that the image of the trace operator be invariant under $\mathcal{S}^*$, appears somewhat artificial and unnatural, even though in all examples to which we have applied the theory of Friedrichs systems [10] this condition is satisfied. At this point we still do not know whether it is always fulfilled.

3. On Feasibility of Assumptions

The following example illustrates the applicability of Theorem 2.5 for hyperbolic equations, in a simple situation.

Example 3.1. The wave equation $u_{tt}-\gamma^2u_{xx}=f$ can be written as the following symmetric system for $\mathsf{u}=(u,u_t+\gamma u_x)^\top$:
$$\partial_t\left(\begin{bmatrix}1&0\\0&1\end{bmatrix}\mathsf{u}\right)+\partial_x\left(\begin{bmatrix}\gamma&0\\0&-\gamma\end{bmatrix}\mathsf{u}\right)+\begin{bmatrix}0&-1\\0&0\end{bmatrix}\mathsf{u}=\begin{bmatrix}0\\f\end{bmatrix}.\tag{3.1}$$
After introducing a new unknown $\mathsf{v}=e^{-\lambda t}\mathsf{u}$, we obtain a positive symmetric system (for $\lambda>0$ large enough)
$$\partial_t\left(\begin{bmatrix}1&0\\0&1\end{bmatrix}\mathsf{v}\right)+\partial_x\left(\begin{bmatrix}\gamma&0\\0&-\gamma\end{bmatrix}\mathsf{v}\right)+\begin{bmatrix}\lambda&-1\\0&\lambda\end{bmatrix}\mathsf{v}=e^{-\lambda t}\begin{bmatrix}0\\f\end{bmatrix}.\tag{3.2}$$
As
$$\mathbf{A}_\nu=\begin{bmatrix}\nu_1+\gamma\nu_2&0\\0&\nu_1-\gamma\nu_2\end{bmatrix},\tag{3.3}$$
in order to make calculations simpler, we take the domain $\Omega\subseteq\mathbf{R}^2$ to be a parallelogram with sides lying on the characteristic lines $x\pm\gamma t=\pm1$ of the original wave equation, as presented in Figure 1. The straight parts of the boundary (open segments) are denoted by $\Gamma_1,\dots,\Gamma_4$.
Let us take a matrix function $\mathbf{S}\in C^{0,1/2}(\Gamma;M_2(\mathbf{R}))$ with the entries
$$\mathbf{S}=\begin{bmatrix}a&b\\c&d\end{bmatrix},\tag{3.4}$$
and consider $\mathbf{M}=(\mathbf{I}-2\mathbf{S})\mathbf{A}_\nu$. Depending on the particular part of the boundary, the matrix function $\mathbf{M}$ satisfies (FM) if and only if
$$\text{on }\Gamma_1:\ c=0,\ d=0;\qquad\text{on }\Gamma_2:\ c=0,\ d=1;\qquad\text{on }\Gamma_3:\ a=0,\ b=0;\qquad\text{on }\Gamma_4:\ a=1,\ b=0.\tag{3.5}$$
A straightforward calculation gives us the formula for $\mathcal{T}$ on $H^1(\Omega;\mathbf{R}^2)$:
$$\mathcal{T}(u,w)=\begin{cases}(0,\mathcal{T}_{H^1}w)&\text{on }\Gamma_1,\\(0,\mathcal{T}_{H^1}w)&\text{on }\Gamma_2,\\(\mathcal{T}_{H^1}u,0)&\text{on }\Gamma_3,\\(\mathcal{T}_{H^1}u,0)&\text{on }\Gamma_4.\end{cases}\tag{3.6}$$
Multiplying with the possible values of $\mathbf{S}$ given above, for any $(u,w)\in H^1(\Omega;\mathbf{R}^2)$ we have
$$\mathcal{S}^*\mathcal{T}(u,w)=\begin{cases}0&\text{on }\Gamma_1,\\\mathcal{T}(u,w)&\text{on }\Gamma_2,\\0&\text{on }\Gamma_3,\\\mathcal{T}(u,w)&\text{on }\Gamma_4,\end{cases}\tag{3.7}$$
and one can easily check that this equals $\mathcal{T}(\tilde u,\tilde w)$ with $\tilde u=\frac12(1-x-\gamma t)u$ and $\tilde w=\frac12(1+x-\gamma t)w$. By the density of $H^1(\Omega;\mathbf{R}^2)$ in $W$, the continuity of $\mathcal{S}^*\in\mathcal{L}(H^{-1/2}(\Gamma;\mathbf{R}^2))$ and the fact that $\mathcal{T}\in\mathcal{L}(W;H^{-1/2}(\Gamma;\mathbf{R}^2))$, as well as the continuity of the linear mapping $(u,w)\mapsto(\tilde u,\tilde w)$ from $W$ to $W$, we infer that the equality $\mathcal{S}^*\mathcal{T}(u,w)=\mathcal{T}(\tilde u,\tilde w)$ is valid for $(u,w)\in W$. Therefore, by this construction, we have obtained the inclusion $\mathcal{S}^*(\operatorname{im}\mathcal{T})\subseteq\operatorname{im}\mathcal{T}$.
The corresponding boundary operator $M:W\to W'$ is continuous, so we can apply Theorem 2.5. It is simple to interpret the boundary conditions: no condition is imposed on the part $\Gamma_1\cup\Gamma_3$ of the boundary (as $\mathbf{A}_\nu-\mathbf{M}=\mathbf{0}$ there), while the boundary condition on $\Gamma_2$ is $w=0$, and on $\Gamma_4$ we have $u=0$ (as $\mathbf{A}_\nu-\mathbf{M}=2\mathbf{A}_\nu$ on these parts of the boundary).
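The size of $\lambda$ needed in (3.2) can be made explicit; a one-line verification of (F2) for this system (our computation, with $\mathbf{A}_t$, $\mathbf{A}_x$ denoting the constant coefficient matrices of $\partial_t$ and $\partial_x$) gives:

```latex
\mathbf{C}+\mathbf{C}^\top+\partial_t\mathbf{A}_t+\partial_x\mathbf{A}_x
 = \begin{bmatrix} 2\lambda & -1 \\ -1 & 2\lambda \end{bmatrix}
 \;\ge\; (2\lambda-1)\,\mathbf{I} ,
% as the eigenvalues of this matrix are 2*lambda + 1 and 2*lambda - 1;
% thus (F2) holds with 2*mu_0 = 2*lambda - 1 for every lambda > 1/2.
```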

The arguments used in this example are particularly simple due to the specific form of the boundary $\Gamma$. Let us consider a more complicated pentagonal domain: cut the set $\Omega$ by a horizontal line and introduce a new horizontal part $\Gamma_5$ of the boundary (on the top of $\Omega$). A similar calculation leads us to the following relations that should be satisfied on $\Gamma_5$:
$$a\leq\frac12,\qquad d\geq\frac12,\qquad a+d=1,\qquad ad=bc,\qquad (b-c)^2\leq(1-2a)(2d-1),\tag{3.8}$$
and to the following equalities on $\Gamma_5$ (valid for $(u,w)\in H^1(\Omega;\mathbf{R}^2)$):
$$\mathcal{T}(u,w)=\big(\mathcal{T}_{H^1}u,\mathcal{T}_{H^1}w\big),\qquad
\mathcal{S}^*\mathcal{T}(u,w)=\big(a\,\mathcal{T}_{H^1}u+c\,\mathcal{T}_{H^1}w,\ b\,\mathcal{T}_{H^1}u+d\,\mathcal{T}_{H^1}w\big).\tag{3.9}$$
However, the inclusion $\mathcal{S}^*(\operatorname{im}\mathcal{T})\subseteq\operatorname{im}\mathcal{T}$ is no longer obvious, as the functions $a$, $b$, $c$, and $d$ can be chosen quite arbitrarily.
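For instance (one particular choice, given here only for illustration), taking $a=c=0$, $d=1$, and any sufficiently regular $b$ with $|b|\leq1$ satisfies these relations:

```latex
\mathbf{S} = \begin{bmatrix} 0 & b \\ 0 & 1 \end{bmatrix},\qquad
\mathbf{S}^2 = \mathbf{S},\qquad
a+d = 1,\quad ad = bc = 0,\quad (b-c)^2 = b^2 \le (1-2a)(2d-1) = 1 .
```

Note that the relations above force $\mathbf{S}$ to be a projector on $\Gamma_5$ (trace one, determinant zero), in accordance with Remark 2.3, since $\Gamma_5$ does not lie on a characteristic and $\mathbf{A}_\nu$ is regular there.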

Next we present some sufficient conditions which can be used to show that $\mathcal{S}^*(\operatorname{im}\mathcal{T})\subseteq\operatorname{im}\mathcal{T}$.

Theorem 3.2. Let $\mathbf{S},\mathbf{P}\in C^{0,1/2}(\Gamma;M_r(\mathbf{R}))$ be matrix functions, let $\mathcal{S},\mathcal{P}\in\mathcal{L}(H^{1/2}(\Gamma;\mathbf{R}^r))$ be the corresponding multiplication operators defined as in (1.13), and let $\mathbf{S}^\top\mathbf{A}_\nu=\mathbf{A}_\nu\mathbf{P}$ a.e. on $\Gamma$. Then $\mathcal{T}(H^1(\Omega;\mathbf{R}^r))$ is invariant under $\mathcal{S}^*$. If one additionally assumes that $\operatorname{im}\mathcal{T}$ is closed in $H^{-1/2}(\Gamma;\mathbf{R}^r)$, then one also has $\mathcal{S}^*(\operatorname{im}\mathcal{T})\subseteq\operatorname{im}\mathcal{T}$.

Proof. For $\mathsf{u}\in H^1(\Omega;\mathbf{R}^r)$ and $\mathsf{z}\in H^{1/2}(\Gamma;\mathbf{R}^r)$, we have
$$\begin{aligned}
{}_{H^{-1/2}(\Gamma;\mathbf{R}^r)}\langle\mathcal{S}^*\mathcal{T}\mathsf{u},\mathsf{z}\rangle_{H^{1/2}(\Gamma;\mathbf{R}^r)}
&={}_{H^{-1/2}(\Gamma;\mathbf{R}^r)}\langle\mathcal{T}\mathsf{u},\mathcal{S}\mathsf{z}\rangle_{H^{1/2}(\Gamma;\mathbf{R}^r)}
=\int_\Gamma\mathbf{A}_\nu(\mathbf{x})\mathcal{T}_{H^1}\mathsf{u}(\mathbf{x})\cdot\mathbf{S}(\mathbf{x})\mathsf{z}(\mathbf{x})\,dS(\mathbf{x})\\
&=\int_\Gamma\mathbf{S}^\top(\mathbf{x})\mathbf{A}_\nu(\mathbf{x})\mathcal{T}_{H^1}\mathsf{u}(\mathbf{x})\cdot\mathsf{z}(\mathbf{x})\,dS(\mathbf{x})
=\int_\Gamma\mathbf{A}_\nu(\mathbf{x})\mathbf{P}(\mathbf{x})\mathcal{T}_{H^1}\mathsf{u}(\mathbf{x})\cdot\mathsf{z}(\mathbf{x})\,dS(\mathbf{x})\\
&=\int_\Gamma\mathbf{A}_\nu(\mathbf{x})\mathcal{T}_{H^1}\big(\mathcal{E}_{H^1}\mathcal{P}\mathcal{T}_{H^1}\mathsf{u}\big)(\mathbf{x})\cdot\mathsf{z}(\mathbf{x})\,dS(\mathbf{x})
={}_{H^{-1/2}(\Gamma;\mathbf{R}^r)}\langle\mathcal{T}\mathcal{E}_{H^1}\mathcal{P}\mathcal{T}_{H^1}\mathsf{u},\mathsf{z}\rangle_{H^{1/2}(\Gamma;\mathbf{R}^r)},
\end{aligned}\tag{3.10}$$
where $\mathcal{E}_{H^1}:H^{1/2}(\Gamma;\mathbf{R}^r)\to H^1(\Omega;\mathbf{R}^r)$ is a right inverse of the trace operator $\mathcal{T}_{H^1}$. Therefore, on $H^1(\Omega;\mathbf{R}^r)$ we have $\mathcal{S}^*\mathcal{T}=\mathcal{T}\mathcal{E}_{H^1}\mathcal{P}\mathcal{T}_{H^1}$, and in particular $\mathcal{S}^*\mathcal{T}(H^1(\Omega;\mathbf{R}^r))\subseteq\mathcal{T}(H^1(\Omega;\mathbf{R}^r))$.
Let us now additionally assume that $\operatorname{im}\mathcal{T}$ is closed in $H^{-1/2}(\Gamma;\mathbf{R}^r)$. For $\mathsf{u}\in W$, let $(\mathsf{u}_n)$ be a sequence in $H^1(\Omega;\mathbf{R}^r)$ converging to $\mathsf{u}$ in $W$. Then by continuity we also have
$$\mathcal{S}^*\mathcal{T}\mathsf{u}_n\longrightarrow\mathcal{S}^*\mathcal{T}\mathsf{u}\quad\text{in }H^{-1/2}(\Gamma;\mathbf{R}^r),\tag{3.11}$$
while, from the equality $\mathcal{S}^*\mathcal{T}\mathsf{u}_n=\mathcal{T}\mathcal{E}_{H^1}\mathcal{P}\mathcal{T}_{H^1}\mathsf{u}_n\in\operatorname{im}\mathcal{T}$ and the closedness of $\operatorname{im}\mathcal{T}$ in $H^{-1/2}(\Gamma;\mathbf{R}^r)$, it follows that $\mathcal{S}^*\mathcal{T}\mathsf{u}\in\operatorname{im}\mathcal{T}$, which concludes the proof.

Note that in the above theorem 𝐒 and 𝐏 were arbitrary elements of C0,1/2(Γ;𝑀𝑟(𝐑)), not necessarily having properties from Lemma 2.1.

We close this section with a theorem showing that, if we impose conditions that ensure continuity of 𝑀 defined by (1.12), then we also have 𝒮(im𝒯)im𝒯. These conditions were used in applications of the theory of Friedrichs systems in [11], and it is important to note that we do not expect them to be necessary for continuity of 𝑀, but only sufficient.

Theorem 3.3. Let $\mathbf{S}\in C^{0,1/2}(\Gamma;M_r(\mathbf{R}))$ be a matrix function, $\mathcal{S}\in\mathcal{L}(H^{1/2}(\Gamma;\mathbf{R}^r))$ the corresponding multiplication operator defined by (1.13), and let $\mathbf{P}:\Gamma\to M_r(\mathbf{R})$ be such that $\mathbf{S}^\top\mathbf{A}_\nu=\mathbf{A}_\nu\mathbf{P}$ a.e. on $\Gamma$. Additionally assume that $\mathbf{P}$ can be extended to a measurable matrix function $\mathbf{P}_p:\mathsf{Cl}\,\Omega\to M_r(\mathbf{R})$ satisfying the following.
(S1) The multiplication operator $\mathcal{P}_p$, defined by $\mathcal{P}_p(\mathsf{v})=\mathbf{P}_p\mathsf{v}$ for $\mathsf{v}\in W$, is a bounded linear operator on $W$.
(S2) $(\forall\mathsf{v}\in H^1(\Omega;\mathbf{R}^r))\quad \mathbf{P}_p\mathsf{v}\in H^1(\Omega;\mathbf{R}^r)\ \ \&\ \ \mathcal{T}_{H^1}(\mathbf{P}_p\mathsf{v})=\mathbf{P}\mathcal{T}_{H^1}\mathsf{v}$.
Then $\mathcal{S}^*\mathcal{T}=\mathcal{T}\mathcal{P}_p$, and thus $\mathcal{S}^*(\operatorname{im}\mathcal{T})\subseteq\operatorname{im}\mathcal{T}$.

Proof. Similarly as in the proof of the previous theorem, for each $\mathsf{u}\in H^1(\Omega;\mathbf{R}^r)$ and $\mathsf{z}\in H^{1/2}(\Gamma;\mathbf{R}^r)$ we have
$$\begin{aligned}
{}_{H^{-1/2}(\Gamma;\mathbf{R}^r)}\langle\mathcal{S}^*\mathcal{T}\mathsf{u},\mathsf{z}\rangle_{H^{1/2}(\Gamma;\mathbf{R}^r)}
&={}_{H^{-1/2}(\Gamma;\mathbf{R}^r)}\langle\mathcal{T}\mathsf{u},\mathcal{S}\mathsf{z}\rangle_{H^{1/2}(\Gamma;\mathbf{R}^r)}
=\int_\Gamma\mathbf{A}_\nu(\mathbf{x})\mathcal{T}_{H^1}\mathsf{u}(\mathbf{x})\cdot\mathbf{S}(\mathbf{x})\mathsf{z}(\mathbf{x})\,dS(\mathbf{x})\\
&=\int_\Gamma\mathbf{S}^\top(\mathbf{x})\mathbf{A}_\nu(\mathbf{x})\mathcal{T}_{H^1}\mathsf{u}(\mathbf{x})\cdot\mathsf{z}(\mathbf{x})\,dS(\mathbf{x})
=\int_\Gamma\mathbf{A}_\nu(\mathbf{x})\mathbf{P}(\mathbf{x})\mathcal{T}_{H^1}\mathsf{u}(\mathbf{x})\cdot\mathsf{z}(\mathbf{x})\,dS(\mathbf{x})\\
&=\int_\Gamma\mathbf{A}_\nu(\mathbf{x})\mathcal{T}_{H^1}\big(\mathbf{P}_p\mathsf{u}\big)(\mathbf{x})\cdot\mathsf{z}(\mathbf{x})\,dS(\mathbf{x})
={}_{H^{-1/2}(\Gamma;\mathbf{R}^r)}\langle\mathcal{T}\mathcal{P}_p\mathsf{u},\mathsf{z}\rangle_{H^{1/2}(\Gamma;\mathbf{R}^r)},
\end{aligned}\tag{3.12}$$
and thus the equality $\mathcal{S}^*\mathcal{T}=\mathcal{T}\mathcal{P}_p$ is valid on $H^1(\Omega;\mathbf{R}^r)$. As all operators appearing in this equality are bounded, while $H^1(\Omega;\mathbf{R}^r)$ is dense in $W$, the same equality is valid on $W$, which proves the claim.

4. Using P instead of S

In Theorem 2.5 a matrix function $\mathbf{S}$ was used in order to impose sufficient conditions for (M) to hold. In Lemma 2.1 a matrix function $\mathbf{P}$ also appears, with a role similar to that of $\mathbf{S}$, the only difference being that in the expression for $\mathbf{M}$ the function $\mathbf{P}$ multiplies $\mathbf{A}_\nu$ from the other side than $\mathbf{S}$. Therefore, it is natural to check whether we could get a better result than Theorem 2.5 by using $\mathbf{P}$ instead of $\mathbf{S}$. In the next theorem we present what we obtained by this approach.

Theorem 4.1. Assume that the matrix field $\mathbf{M}\in L^\infty(\Gamma;M_r(\mathbf{R}))$ satisfies (FM) and that by (1.12) a bounded operator $M\in\mathcal{L}(W;W')$ is defined. Then (M1) holds.
Let the matrix function $\mathbf{P}$ from Lemma 2.1 additionally satisfy $\mathbf{P}\in C^{0,1/2}(\Gamma;M_r(\mathbf{R}))$, and let $\mathcal{P}\in\mathcal{L}(H^{1/2}(\Gamma;\mathbf{R}^r))$ be the corresponding multiplication operator defined by $\mathcal{P}(\mathsf{z})=\mathbf{P}\mathsf{z}$. Then one has
$$H^1(\Omega;\mathbf{R}^r)\subseteq\ker(D-M)+\ker(D+M).\tag{4.1}$$

Proof. As in the proof of Theorem 2.5, we shall use the representations of the operators $D$ and $M$ via the trace operator $\mathcal{T}$ and the multiplication operator $\mathcal{P}$: for $\mathsf{u},\mathsf{v}\in H^1(\Omega;\mathbf{R}^r)$ we have
$$\begin{aligned}
{}_{W'}\langle D\mathsf{u},\mathsf{v}\rangle_W
&=\int_\Gamma\mathbf{A}_\nu(\mathbf{x})\mathcal{T}_{H^1}\mathsf{u}(\mathbf{x})\cdot\mathcal{T}_{H^1}\mathsf{v}(\mathbf{x})\,dS(\mathbf{x})
=\int_\Gamma\mathbf{A}_\nu(\mathbf{x})\mathcal{T}_{H^1}\mathsf{v}(\mathbf{x})\cdot\mathcal{T}_{H^1}\mathsf{u}(\mathbf{x})\,dS(\mathbf{x})\\
&={}_{H^{-1/2}(\Gamma;\mathbf{R}^r)}\langle\mathcal{T}\mathsf{v},\mathcal{T}_{H^1}\mathsf{u}\rangle_{H^{1/2}(\Gamma;\mathbf{R}^r)},\\
{}_{W'}\langle M\mathsf{u},\mathsf{v}\rangle_W
&=\int_\Gamma\mathbf{A}_\nu(\mathbf{x})(\mathbf{I}-2\mathbf{P}(\mathbf{x}))\mathcal{T}_{H^1}\mathsf{u}(\mathbf{x})\cdot\mathcal{T}_{H^1}\mathsf{v}(\mathbf{x})\,dS(\mathbf{x})\\
&=\int_\Gamma\mathbf{A}_\nu(\mathbf{x})\mathcal{T}_{H^1}\mathsf{v}(\mathbf{x})\cdot(\mathbf{I}-2\mathbf{P}(\mathbf{x}))\mathcal{T}_{H^1}\mathsf{u}(\mathbf{x})\,dS(\mathbf{x})\\
&={}_{H^{-1/2}(\Gamma;\mathbf{R}^r)}\big\langle\mathcal{T}\mathsf{v},(\mathsf{I}_{H^{1/2}}-2\mathcal{P})\mathcal{T}_{H^1}\mathsf{u}\big\rangle_{H^{1/2}(\Gamma;\mathbf{R}^r)}.
\end{aligned}\tag{4.2}$$
For given $\mathsf{w}\in H^1(\Omega;\mathbf{R}^r)$ we define $\mathsf{u}:=\mathcal{E}_{H^1}\mathcal{P}\mathcal{T}_{H^1}\mathsf{w}$, where $\mathcal{E}_{H^1}$ is the right inverse of $\mathcal{T}_{H^1}$, as before.
Let us show that $\mathsf{u}\in\ker(D+M)$. For $\mathsf{v}\in H^1(\Omega;\mathbf{R}^r)$, by using (4.2) and Lemma 2.4 we get
$$\begin{aligned}
{}_{W'}\langle(D+M)\mathsf{u},\mathsf{v}\rangle_W
&={}_{H^{-1/2}(\Gamma;\mathbf{R}^r)}\big\langle2\mathcal{T}\mathsf{v},(\mathsf{I}_{H^{1/2}}-\mathcal{P})\mathcal{T}_{H^1}\mathsf{u}\big\rangle_{H^{1/2}(\Gamma;\mathbf{R}^r)}\\
&={}_{H^{-1/2}(\Gamma;\mathbf{R}^r)}\big\langle2\mathcal{T}\mathsf{v},(\mathsf{I}_{H^{1/2}}-\mathcal{P})\mathcal{T}_{H^1}\mathcal{E}_{H^1}\mathcal{P}\mathcal{T}_{H^1}\mathsf{w}\big\rangle_{H^{1/2}(\Gamma;\mathbf{R}^r)}\\
&=2\int_\Gamma\mathbf{A}_\nu(\mathbf{x})\mathcal{T}_{H^1}\mathsf{v}(\mathbf{x})\cdot(\mathbf{I}-\mathbf{P}(\mathbf{x}))\mathbf{P}(\mathbf{x})\mathcal{T}_{H^1}\mathsf{w}(\mathbf{x})\,dS(\mathbf{x})\\
&=2\int_\Gamma\mathcal{T}_{H^1}\mathsf{v}(\mathbf{x})\cdot\mathbf{A}_\nu(\mathbf{x})(\mathbf{I}-\mathbf{P}(\mathbf{x}))\mathbf{P}(\mathbf{x})\mathcal{T}_{H^1}\mathsf{w}(\mathbf{x})\,dS(\mathbf{x})=0,
\end{aligned}\tag{4.3}$$
thus $(D+M)\mathsf{u}=0$, as $H^1(\Omega;\mathbf{R}^r)$ is dense in $W$.
Similarly, we get
$${}_{W'}\langle(D-M)(\mathsf{w}-\mathsf{u}),\mathsf{v}\rangle_W=2\int_\Gamma\mathcal{T}_{H^1}\mathsf{v}(\mathbf{x})\cdot\mathbf{A}_\nu(\mathbf{x})\mathbf{P}(\mathbf{x})(\mathbf{I}-\mathbf{P}(\mathbf{x}))\mathcal{T}_{H^1}\mathsf{w}(\mathbf{x})\,dS(\mathbf{x})=0,\tag{4.4}$$
and thus $(D-M)(\mathsf{w}-\mathsf{u})=0$. As $\mathsf{w}=\mathsf{u}+(\mathsf{w}-\mathsf{u})$, we have the claim.

It appears that by using 𝐏 instead of 𝐒 we do not get better results.

Acknowledgment

This work is supported in part by the Croatian MZOS through projects 037-0372787-2795 and 037-1193086-3226.