Abstract

An optimal lower eigenvalue system is studied, and the main theorems, including a series of necessary and sufficient conditions concerning existence and a Lipschitz continuity result concerning stability, are obtained. As applications, solvability results for some von Neumann type input-output inequalities, growth and optimal growth factors, as well as Leontief type balanced and optimal balanced growth paths, are also obtained.

1. Introduction

1.1. The Optimal Lower Eigenvalue System

Motivated by some inequality problems in input-output analysis, such as von Neumann type input-output inequalities, growth and optimal growth factors, as well as Leontief type balanced and optimal balanced growth paths, we study an optimal lower eigenvalue system.

To this end, we denote by $R^k = (R^k, \langle\cdot,\cdot\rangle)$ the real $k$-dimensional Euclidean space with the dual $R^{k*} = R^k$, by $R^k_+$ the set of all nonnegative vectors of $R^k$, and by $\operatorname{int} R^k_+$ its interior. We also define $y^1 \ge (\text{or} >)\ y^2$ in $R^k$ by $y^1 - y^2 \in R^k_+$ (or $\operatorname{int} R^k_+$).

Let $\lambda \in R_+ = R^1_+$, $F \subset R^m_+$, $X \subset R^n_+$, and let $T = (T_1, \ldots, T_m),\ S = (S_1, \ldots, S_m)\colon X \to \operatorname{int} R^m_+$ be two single-valued maps, where $m$ may not equal $n$. Then the optimal lower eigenvalue system that we will study, and use to consider the preceding inequality problems, can be described by $\lambda, F, X, T$, and $S$ as follows:
$$\begin{aligned} &\text{(a)}\ \lambda > 0,\ \exists x \in X \ \text{s.t.}\ Tx - \lambda Sx \in F + R^m_+, \ \text{that is, } Tx - \lambda Sx \ge c \text{ for some } c \in F,\\ &\text{(b)}\ 0 < \lambda \to \max,\ \exists x \in X \ \text{s.t.}\ Tx - \lambda Sx \in F + R^m_+. \end{aligned} \tag{1.1}$$
We call $\lambda\ (>0)$ a lower eigenvalue of (1.1) if it solves (a), and its solution $x$ an eigenvector; we call $\bar\lambda = \bar\lambda(F)\ (>0)$ the maximal lower eigenvalue of (1.1) if it maximizes (b) (i.e., $\bar\lambda$ solves (a), but no $\mu$ with $\mu > \bar\lambda$ does), and its solution $\bar x$ an optimal eigenvector.
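For intuition, condition (a) of (1.1) can be checked numerically in a small linear instance. The sketch below is only an illustration under invented data: `T` and `S` are hypothetical nonnegative matrices, `X` is discretized as a grid on the unit simplex of $R^3_+$, and the demand set `F` is approximated by finitely many points; none of this is part of the paper's method.

```python
import numpy as np

# Hypothetical linear instance of (1.1): T, S map X into int R^2_+,
# X is the unit simplex in R^3_+ (discretized), and F is a finite
# sample of demand points in R^2_+.  All data are invented.
T = np.array([[4.0, 2.0, 3.0],
              [1.0, 5.0, 2.0]])
S = np.array([[1.0, 1.0, 1.0],
              [1.0, 1.0, 1.0]])
F = [np.array([0.5, 0.5]), np.array([1.0, 0.2])]

def is_lower_eigenvalue(lam, n_grid=60):
    """Check (1.1)(a): does some x in X satisfy Tx - lam*Sx >= c for some c in F?"""
    ts = np.linspace(0.0, 1.0, n_grid)
    for a in ts:
        for b in ts:
            if a + b > 1.0:
                continue
            x = np.array([a, b, 1.0 - a - b])     # a grid point of the simplex
            y = T @ x - lam * (S @ x)             # "pure output" at level lam
            if any(np.all(y >= c) for c in F):
                return True
    return False
```

For these data, $\lambda = 1$ and $\lambda = 2$ are lower eigenvalues while $\lambda = 3$ is not, so the lower eigenvalues form a bounded set, consistent with the maximal lower eigenvalue maximized in (b).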

In case $F = \{c\}$ with $c \in R^m_+$, (1.1) becomes
$$\text{(a)}\ \lambda > 0,\ \exists x \in X \ \text{s.t.}\ Tx \ge \lambda Sx + c, \qquad \text{(b)}\ 0 < \lambda \to \max,\ \exists x \in X \ \text{s.t.}\ Tx \ge \lambda Sx + c. \tag{1.2}$$
All the concepts concerning (1.1) carry over to (1.2), and for convenience, the maximal lower eigenvalue $\bar\lambda = \bar\lambda(\{c\})$ of (1.2), if it exists, is denoted by $\bar\lambda = \bar\lambda(c)$.

1.2. Some Economic Backgrounds

As indicated above, the aim of this article is to consider some inequality problems in input-output analysis by studying (1.1). It is therefore natural to ask how many (or what types of) problems in input-output analysis can be deduced from (1.1) or (1.2) by giving $F, X, T, S, c$, and $\lambda$ proper economic interpretations. Indeed, in the input-output analysis founded by Leontief [1], there are two classes of important economic systems.

One is the Leontief type input-output equality problem, composed of an equation and an inclusion as follows:
$$\text{(a)}\ \exists x \in X \ \text{s.t.}\ x - Ax = c, \qquad \text{(b)}\ \exists x \in X \ \text{s.t.}\ x - Sx \ni c, \tag{1.3}$$
where $c \in R^n_+$ is an expected demand of the market, $X \subset R^n_+$ is some enterprise's admissible output bundle set, and $A\colon X \to R^n_+$ or $S\colon X \to 2^{R^n_+}$ is the enterprise's single-valued or set-valued consuming map. The economic meaning of (a) or (b) is whether there exists $x \in X$, or there exist $x \in X$ and $y \in Sx$, such that the pure output $x - Ax$ or $x - y$ is precisely equal to the expected demand $c$. If $X = R^n_+$ and $A$ is described by an $n \times n$ matrix, then (a) is precisely the classical Leontief input-output equation, which has been studied by Leontief [1] and Miller and Blair [2] with the matrix analysis method. If $X$ is convex and compact and $A$ is continuous, then (a) is a Leontief type input-output equation, which has been considered by Fujimoto [3] and Liu and Chen [4, 5] with the functional analysis approach. As for (b), in case $X$ is convex and compact and $S$ is convex compact-valued, with or without the upper hemicontinuity condition, it has also been studied by Liu and Zhang [6, 7] with the nonlinear analysis methods of [8–10], in particular using the classical Rogalski-Cornet theorem (see [8, Theorem 15.1.4]) and some Rogalski-Cornet type theorems (see [6, Theorems 2.8, 2.9 and 2.12]). However, since the methods used to tackle (1.3) are quite different from those used to study (1.1), we do not consider it here.

The other is the class of von Neumann type and Leontief type inequality problems, which can be viewed as special examples of (1.1) or (1.2).

(i) Assume that $F \subset R^m_+$ or $c \in R^m_+$ is an expected demand set or an expected demand of the market, and $X \subset R^n_+$ is some enterprise's raw material bundle set. Then the von Neumann type inequality problems, including input-output inequalities along with growth and optimal growth factors, can be stated, respectively, as follows.

(1) If $T, S\colon X \to \operatorname{int} R^m_+$ are supposed to be the enterprise's output (or producing) and consuming maps, respectively, then by taking $\lambda = 1$ in (a) of (1.1) and (1.2), we obtain the von Neumann type input-output inequalities:
$$\text{(a)}\ \exists x \in X \ \text{s.t.}\ Tx - Sx \in F + R^m_+, \qquad \text{(b)}\ \exists x \in X \ \text{s.t.}\ Tx - Sx \ge c. \tag{1.4}$$
The economic meaning of (a) or (b) is whether there exist $x \in X$ and $c \in F$, or there exists $x \in X$, such that the pure output $Tx - Sx$ sufficiently satisfies the expected demand $c$. If $X = R^n_+$ and $T, S$ are described by two $m \times n$ matrices, then (b) reduces to the classical von Neumann input-output inequality, which has also been studied by Leontief [1] and Miller and Blair [2] with the matrix analysis method. If $X$ is convex and compact, and $T, S$ are two nonlinear maps such that $T_i, -S_i$ are upper semicontinuous and concave for each $i = 1, \ldots, m$, then (b) (as a nonlinear von Neumann input-output inequality) has been handled by Liu [11] and Liu and Zhang [12] with the nonlinear analysis methods of [8–10]. Along the same lines, in case $X$ is convex and compact and $T, S$ are replaced by two upper semicontinuous convex set-valued maps with convex compact values, (b) (as a set-valued von Neumann input-output inequality) has also been studied by Liu [13, 14]. However, (a) has not been considered up to now. Since (a) (or (b)) is solvable if and only if $\lambda = 1$ makes (1.1)(a) (or (1.2)(a)) solvable, and also if and only if the maximal lower eigenvalue $\bar\lambda(F)$ of (1.1) exists with $\bar\lambda(F) \ge 1$ (or the maximal lower eigenvalue $\bar\lambda(c)$ of (1.2) exists with $\bar\lambda(c) \ge 1$), we see that the lower eigenvalue approach arising from studying (1.1) or (1.2) may be applied to obtain some new solvability results for (1.4).

(2) Suppose $T, S\colon X \subset R^n_+ \to \operatorname{int} R^m_+$ are the enterprise's output and input (or investment) maps, respectively, and set $\Lambda = \{\lambda > 0 : \exists x \in X \ \text{s.t.}\ Tx \ge \lambda Sx\}$. Then $\Lambda$ is nonempty, and to some degree each $\lambda \in \Lambda$ can be used to describe the enterprise's growth behavior. Since the enterprise always hopes its growth is as big as possible, a fixed positive number $\lambda_0$ can be selected to represent the enterprise's desired minimum growth, no matter whether $\lambda_0 \in \Lambda$ or not. By taking $c = 0$ and restricting $\lambda \ge \lambda_0$, from (1.2) we obtain the von Neumann type growth and optimal growth factor problem:
$$\text{(a)}\ \lambda \in [\lambda_0, +\infty),\ \exists x \in X \ \text{s.t.}\ Tx \ge \lambda Sx, \qquad \text{(b)}\ \lambda_0 \le \lambda \to \max,\ \exists x \in X \ \text{s.t.}\ Tx \ge \lambda Sx. \tag{1.5}$$
We call $\lambda$ a growth factor of (1.5) if it solves (a) and its solution $x$ an intensity vector, and we say that (1.5) is efficient if it has at least one growth factor. We also call $\bar\lambda$ the optimal growth factor of (1.5) if it maximizes (b), and its solution $\bar x$ an optimal intensity vector. If $X = R^n_+$ and $S, T$ are described by two $m \times n$ matrices, then (a) reduces to the classical von Neumann growth model, which has been studied by Leontief [1], Miller and Blair [2], Medvegyev [15], and Bidard and Hosoda [16] with the matrix analysis method. Unfortunately, if $T, S$ are nonlinear maps, then to my knowledge no references regarding (1.5) can be found. Clearly, the matrix analysis method is useless for the nonlinear version. On the other hand, it seems that the methods of [11, 12] suited to (1.4)(b) could probably be applied to tackle (a), because $Tx \ge \lambda Sx$ can be rewritten as $Tx - (\lambda S)x \ge 0$. However, since the most important issue regarding (1.5) is to find the optimal growth factor (or, equivalently, to search out all the growth factors), which is much more difficult than determining a single growth factor, we suspect that it is impossible to solve both (a) and (b) completely using only the methods of [11, 12]. So a possible way to deal with (1.5) in the nonlinear version is to study (1.2) and obtain some meaningful results.
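In the linear case of (1.5) with $c = 0$, a growth factor satisfies $Tx \ge \lambda Sx$, so the optimal growth factor is $\max_{x} \min_i (Tx)_i/(Sx)_i$. The sketch below estimates this maximin by brute force over a grid; the $2 \times 2$ matrices are hypothetical and serve only to illustrate the definition, not the nonlinear theory developed here.

```python
import numpy as np

# Hypothetical linear von Neumann instance of (1.5) with c = 0:
# the optimal growth factor is max over x of min_i (Tx)_i / (Sx)_i.
T = np.array([[2.0, 1.0],
              [1.0, 3.0]])
S = np.array([[1.0, 1.0],
              [1.0, 1.0]])

def optimal_growth_factor(n_grid=10001):
    best = 0.0
    for a in np.linspace(0.0, 1.0, n_grid):
        x = np.array([a, 1.0 - a])               # intensity vector on the simplex
        best = max(best, float(np.min((T @ x) / (S @ x))))
    return best
```

For these data the exact optimum is $5/3$, attained at $x = (2/3, 1/3)$, and the grid search recovers it up to the grid resolution.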

(ii) Suppose $m = n$, $X \subset R^n_+$ is the enterprise's admissible output vector set, $I$ is the identity map from $R^n$ to itself, and $A = (a_{ij})_{n\times n}, B = (b_{ij})_{n\times n} \in R^{n^2}_+$ are two $n \times n$ matrices used to describe the enterprise's consuming and reinvesting, respectively. Setting $\lambda = \mu - 1$, $S = B$, $T = I - A$, and $c = 0$, under the zero profit principle, from (1.2) we obtain the Leontief type balanced and optimal balanced growth path problem:
$$\text{(a)}\ \mu > 1,\ \exists x \in X \ \text{s.t.}\ (I - A)x \ge (\mu - 1)Bx, \qquad \text{(b)}\ 1 < \mu \to \max,\ \exists x \in X \ \text{s.t.}\ (I - A)x \ge (\mu - 1)Bx. \tag{1.6}$$
Both (a) and (b) are just the static descriptions of the dynamic Leontief model
$$\text{(a)}\ \mu > 1 \ \text{or}\ \text{(b)}\ 1 < \mu \to \max,\ \exists x \in X \ \text{s.t.}\ x(t) = \mu^t x \ \text{with}\ (I - A + B)x(t) \ge Bx(t+1),\ t = 1, 2, \ldots. \tag{1.7}$$
This model also shows why the Leontief model (1.6) should be restricted to the linear version. We call $\mu\ (>1)$ a balanced growth factor of (1.6) if it solves (a), say that (1.6) is efficient if it has at least one balanced growth factor, and call $\bar\mu\ (>1)$ the optimal balanced growth factor of (1.6) if it maximizes (b). It should also be stressed that, at least to my knowledge, only (1.6)(a) has been considered; that is to say, up to now we do not know under what conditions on $A$ and $B$ the optimal balanced growth factor of (1.6) must exist, nor how many possible balanced growth factors of (1.6) could be found. So we hope to consider (1.6) by studying (1.2) and obtain solvability results for it.
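The static/dynamic link between (1.6) and (1.7) reduces to the identity $(I - A + B)\mu^t x \ge B\mu^{t+1}x \iff (I - A)x \ge (\mu - 1)Bx$ (divide by $\mu^t > 0$). A quick numerical confirmation under hypothetical consumption and reinvestment matrices $A$, $B$:

```python
import numpy as np

# If the static condition (1.6)(a) holds, the path x(t) = mu^t * x
# satisfies the dynamic condition (1.7).  A and B are invented data.
A = np.array([[0.2, 0.1],
              [0.1, 0.3]])
B = np.array([[0.5, 0.0],
              [0.0, 0.5]])
I = np.eye(2)
x = np.array([1.0, 1.0])
mu = 1.5

# Static condition (1.6)(a):
static_ok = bool(np.all((I - A) @ x >= (mu - 1) * (B @ x)))

# Dynamic condition (1.7) along x(t) = mu^t x for t = 1, ..., 5:
dynamic_ok = all(
    np.all((I - A + B) @ (mu**t * x) >= B @ (mu**(t + 1) * x))
    for t in range(1, 6)
)
```

Because both sides of the dynamic inequality scale by the same positive factor $\mu^t$, checking one period is enough in exact arithmetic; the loop over several $t$ is only a sanity check.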

1.3. Questions and Assumptions

In the sequel, taking (1.2) and (1.4)–(1.6) as special examples of (1.1), we devote ourselves to studying (1.1) by considering the following three solvability questions.

Question 1 (Existence). If $\lambda > 0$, does it solve (1.1)(a)? Can we present any sufficient conditions, or if possible, any necessary and sufficient conditions?

Question 2 (Existence). Does the maximal lower eigenvalue $\bar\lambda = \bar\lambda(F)$ of (1.1) exist? How can it be described?

Question 3 (Stability). If the answer to Question 2 is positive, is the corresponding map $F \mapsto \bar\lambda = \bar\lambda(F)$ stable in some proper sense?

In order to analyse the preceding questions and obtain some meaningful results, we need three assumptions as follows.

Assumption 1. $X \subset R^n_+$ is nonempty, convex, and compact.

Assumption 2. For each $i = 1, 2, \ldots, m$, $T_i\colon X \to \operatorname{int} R_+$ is upper semicontinuous and concave, and $S_i\colon X \to \operatorname{int} R_+$ is lower semicontinuous and convex.

Assumption 3. $\mathbb{B}^m_+ = \{F \subset R^m_+ : F \text{ is nonempty, convex, and compact}\}$ and $F \in \mathbb{B}^m_+$.

By virtue of the nonlinear analysis methods of [8–10], in particular the minimax, saddle point, and subdifferential techniques, we have made some progress on the solvability questions for (1.1), including a series of necessary and sufficient conditions concerning existence and a Lipschitz continuity result concerning stability. The plan of this paper is as follows: we introduce some concepts and known lemmas in Section 2, prove the main (solvability) theorems concerning (1.1) in Section 3, list the solvability results concerning (1.2) in Section 4, present some applications to (1.4)–(1.6) in Section 5, and conclude in Section 6.

2. Terminology

Let $f, g_\alpha\ (\alpha \in \Lambda)\colon X \subset R^k \to R$ and $\varphi\colon P \times X \subset R^m \times R^n \to R$ be functions. In the sections below, we need some well-known concepts for $f, g_\alpha\ (\alpha \in \Lambda)$, and $\varphi$, such as convexity or concavity, upper or lower semicontinuity (in short, u.s.c. or l.s.c.), and continuity (i.e., both u.s.c. and l.s.c.), whose definitions can be found in [8–10], so the details are omitted here. In order to deal with the solvability questions for (1.1) stated in Section 1, we also need some further concepts as follows.

Definition 2.1. (1) If $\inf_{p \in P} \sup_{x \in X} \varphi(p,x) = \sup_{x \in X} \inf_{p \in P} \varphi(p,x) = v(\varphi)$, then we say that the minimax value $v(\varphi)$ (of $\varphi$) exists.
(2) If $(\bar p, \bar x) \in P \times X$ is such that $\sup_{x \in X} \varphi(\bar p, x) = \inf_{p \in P} \varphi(p, \bar x)$, then we call $(\bar p, \bar x)$ a saddle point of $\varphi$, and denote by $S(\varphi)$ the set of all saddle points.

Remark 2.2. From the definition, we can see that:
(1) $v(\varphi)$ exists if and only if $\inf_{p \in P} \sup_{x \in X} \varphi(p,x) \le \sup_{x \in X} \inf_{p \in P} \varphi(p,x)$;
(2) $(\bar p, \bar x) \in S(\varphi)$ if and only if $(\bar p, \bar x) \in P \times X$ with $\sup_{x \in X} \varphi(\bar p, x) \le \inf_{p \in P} \varphi(p, \bar x)$, if and only if $(\bar p, \bar x) \in P \times X$ is such that $\varphi(\bar p, x) \le \varphi(p, \bar x)$ for all $(p, x) \in P \times X$;
(3) if $S(\varphi) \ne \emptyset$, then $v(\varphi)$ exists and $v(\varphi) = \varphi(\bar p, \bar x) = \sup_{x \in X} \varphi(\bar p, x) = \inf_{p \in P} \varphi(p, \bar x)$ for all $(\bar p, \bar x) \in S(\varphi)$.
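A toy instance of Definition 2.1 and Remark 2.2: for $\varphi(p,x) = p^2 - x^2$ on $P = X = [-1,1]$, the pair $(0,0)$ is a saddle point and $v(\varphi) = 0$. The grids below are just a finite sample of $P$ and $X$ used to check the two iterated values.

```python
import numpy as np

# phi(p, x) = p^2 - x^2 has the saddle point (0, 0) on [-1,1] x [-1,1],
# so inf_p sup_x phi = sup_x inf_p phi = v(phi) = 0 (checked on grids).
P = np.linspace(-1.0, 1.0, 21)
X = np.linspace(-1.0, 1.0, 21)
phi = lambda p, x: p**2 - x**2

inf_sup = min(max(phi(p, x) for x in X) for p in P)
sup_inf = max(min(phi(p, x) for p in P) for x in X)
```

The saddle inequality of Remark 2.2(2) is visible directly: $\varphi(0, x) = -x^2 \le 0 \le p^2 = \varphi(p, 0)$ for all $p, x$.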

Definition 2.3. Let $f$ be a function from $R^k$ to $R \cup \{+\infty\}$ with domain $\operatorname{dom}(f) = \{x \in R^k : f(x) < +\infty\}$, and let $g$ be a function from $R^{k*}\ (= R^k)$ to $R \cup \{+\infty\}$. Then one has the following.
(1) $f$ is said to be proper if $\operatorname{dom}(f) \ne \emptyset$. The epigraph $\operatorname{epi}(f)$ of $f$ is the subset of $R^k \times R$ defined by $\operatorname{epi}(f) = \{(x,a) \in R^k \times R : f(x) \le a\}$.
(2) The conjugate functions of $f$ and $g$ are the functions $f^*\colon R^{k*} \to R \cup \{+\infty\}$ and $g^*\colon R^k \to R \cup \{+\infty\}$ defined by $f^*(p) = \sup_{x \in R^k} [\langle p, x\rangle - f(x)]$ for $p \in R^{k*}$ and $g^*(x) = \sup_{p \in R^{k*}} [\langle p, x\rangle - g(p)]$ for $x \in R^k$, respectively. The biconjugate $f^{**}$ of $f$ is then defined on $R^k\ (= R^{k**})$ by $f^{**} = (f^*)^*$.
(3) If $f$ is a proper function from $R^k$ to $R \cup \{+\infty\}$ and $x_0 \in \operatorname{dom}(f)$, then the subdifferential of $f$ at $x_0$ is the (possibly empty) subset $\partial f(x_0)$ of $R^{k*}$ defined by $\partial f(x_0) = \{p \in R^{k*} : f(x_0) - f(x) \le \langle p, x_0 - x\rangle \text{ for all } x \in R^k\}$.
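Definition 2.3(2) can be checked numerically: for $f(x) = x^2/2$ one has $f^*(p) = \sup_x [px - x^2/2] = p^2/2$. The sketch approximates the supremum on a grid (the grid bounds are chosen so the true maximizer $x = p$ lies inside).

```python
import numpy as np

# Conjugate of f(x) = x^2 / 2 per Definition 2.3(2): f*(p) = p^2 / 2,
# approximated by a supremum over a grid of x values.
xs = np.linspace(-5.0, 5.0, 2001)
fvals = 0.5 * xs**2

def conj(p):
    return float(np.max(p * xs - fvals))   # sup_x [<p, x> - f(x)]
```

Since this $f$ is convex and l.s.c., Lemma 2.10 below gives $f^{**} = f$, which can be verified by conjugating twice with the same routine.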

Remark 2.4. If $f$ is a proper function from $R^k$ to $R \cup \{-\infty\}$, then the domain of $f$ should be defined by $\operatorname{dom}(f) = \{x \in R^k : f(x) > -\infty\}$, and $f$ is said to be proper if $\operatorname{dom}(f) \ne \emptyset$.

Definition 2.5. Let $\mathbb{B}(R^k)$ be the collection of all nonempty closed bounded subsets of $R^k$. Let $x \in R^k$ and $A, B \in \mathbb{B}(R^k)$. Then one has the following.
(1) The distance $d(x, A)$ from $x$ to $A$ is defined by $d(x, A) = \inf_{y \in A} d(x, y)$.
(2) Let $\rho(A, B) = \sup_{x \in A} d(x, B)$. Then the Hausdorff distance $d_H(A, B)$ between $A$ and $B$ is defined by $d_H(A, B) = \max\{\rho(A, B), \rho(B, A)\}$.
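Definition 2.5 translates directly into code for finite point sets (finite sets suffice to illustrate; the two sets below are hypothetical):

```python
import numpy as np

# Hausdorff distance per Definition 2.5, for finite point sets in R^k.
def hausdorff(A, B):
    d = lambda x, C: min(np.linalg.norm(x - c) for c in C)   # d(x, C)
    rho = lambda C, D: max(d(x, D) for x in C)               # one-sided excess
    return max(rho(A, B), rho(B, A))

A = [np.array([0.0, 0.0]), np.array([1.0, 0.0])]
B = [np.array([0.0, 0.0]), np.array([3.0, 0.0])]
```

Note that $\rho$ is not symmetric (here $\rho(A,B) = 1$ while $\rho(B,A) = 2$), which is why the outer maximum in $d_H$ is needed.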

The following lemmas are useful to prove the main theorems in the next section.

Lemma 2.6 (see [9]). (1) A proper function $f\colon R^k \to R \cup \{+\infty\}$ is convex (or l.s.c.) if and only if its epigraph $\operatorname{epi}(f)$ is convex (or closed) in $R^k \times R$.
(2) The upper envelope $\sup_{\alpha \in \Lambda} f_\alpha(x)$ of proper convex (or l.s.c.) functions $f_\alpha\colon R^k \to R \cup \{+\infty\}\ (\alpha \in \Lambda)$ is also proper convex (or l.s.c.) when $\operatorname{dom}(\sup_{\alpha \in \Lambda} f_\alpha) = \{x \in R^k : \sup_{\alpha \in \Lambda} f_\alpha(x) < +\infty\}$ is nonempty.
(3) The lower envelope $\inf_{\alpha \in \Lambda} g_\alpha(x)$ of proper concave (or u.s.c.) functions $g_\alpha\colon R^k \to R \cup \{-\infty\}\ (\alpha \in \Lambda)$ is also proper concave (or u.s.c.) when $\operatorname{dom}(\inf_{\alpha \in \Lambda} g_\alpha) = \{x \in R^k : \inf_{\alpha \in \Lambda} g_\alpha(x) > -\infty\}$ is nonempty.

Remark 2.7. Since $\operatorname{epi}(\sup_{\alpha \in \Lambda} f_\alpha) = \bigcap_{\alpha \in \Lambda} \operatorname{epi}(f_\alpha)$ thanks to Proposition 1.1.1 of [9], and a function $f$ defined on $R^k$ is concave (or u.s.c.) if and only if $-f$ is convex (or l.s.c.), it is easy to see that in Lemma 2.6 the steps from (1) to (2) and from (2) to (3) are simple.

Lemma 2.8 (see [9]). Let $X \subset R^n$, let $Y$ be a compact subset of $R^m$, and let $f\colon X \times Y \to R$ be l.s.c. (or u.s.c.). Then $h\colon X \to R$ defined by $h(x) = \inf_{y \in Y} f(x, y)$ (or $k\colon X \to R$ defined by $k(x) = \sup_{y \in Y} f(x, y)$) is also l.s.c. (or u.s.c.).

Lemma 2.9 (see [8]). Let $P \subset R^m$ and $X \subset R^n$ be two convex compact subsets, and let $\varphi\colon P \times X \to R$ be a function such that for all $x \in X$, $p \mapsto \varphi(p, x)$ is l.s.c. and convex on $P$, and for all $p \in P$, $x \mapsto \varphi(p, x)$ is u.s.c. and concave on $X$. Then $\inf_{p \in P} \sup_{x \in X} \varphi(p,x) = \sup_{x \in X} \inf_{p \in P} \varphi(p,x)$, and there exists $(\bar p, \bar x) \in P \times X$ such that $\varphi(\bar p, \bar x) = \sup_{x \in X} \varphi(\bar p, x) = \inf_{p \in P} \varphi(p, \bar x)$.

Lemma 2.10 (see [8]). A proper function $f$ defined on $R^k$ is convex and l.s.c. if and only if $f^{**} = f$.

Lemma 2.11 (see [8]). Let $f$ be a proper function defined on $R^k$, and let $p_0 \in R^{k*}$. Then $x_0$ minimizes $x \mapsto f(x) - \langle p_0, x\rangle$ on $R^k$ if and only if $x_0 \in \partial f^*(p_0)$ and $f(x_0) = f^{**}(x_0)$.

Remark 2.12. If $f$ is a finite function from $X \subset R^k$ to $R$, define $f_X$ by $f_X(x) = f(x)$ if $x \in X$ and $f_X(x) = +\infty$ if $x \in R^k \setminus X$; then we can use the preceding concepts and lemmas for $f$ by identifying $f$ with $f_X$.

3. Solvability Results to (1.1)

3.1. Auxiliary Functions

In the sequel, we assume that
$$\begin{aligned} &\text{(1) Assumptions 1–3 in Section 1 are satisfied, and } \lambda \in R_+,\ F \in \mathbb{B}^m_+,\\ &\text{(2) } P \subset R^m_+ \setminus \{0\} \text{ is a convex compact subset with } R_+ P = R^m_+. \end{aligned} \tag{3.1}$$
Denote by $\langle\cdot,\cdot\rangle$ the duality pairing on $R^{m*}, R^m$, and for each $\lambda \in R_+$ and $F \in \mathbb{B}^m_+$, define two auxiliary functions $f_{\lambda,F}(p,x)$ and $g_F(p,x)$ on $P \times X$ by
$$f_{\lambda,F}(p,x) = \sup_{c \in F} \langle p, Tx - \lambda Sx - c\rangle = \sup_{(c_1, c_2, \ldots, c_m) \in F}\ \sum_{i=1}^m p_i \left(T_i x - \lambda S_i x - c_i\right), \tag{3.2}$$
$$g_F(p,x) = \sup_{c \in F} \frac{\langle p, Tx - c\rangle}{\langle p, Sx\rangle} = \sup_{(c_1, c_2, \ldots, c_m) \in F}\ \frac{\sum_{i=1}^m p_i (T_i x - c_i)}{\sum_{i=1}^m p_i S_i x}. \tag{3.3}$$
Just as indicated by Definition 2.1, the minimax values and saddle point sets of $\varphi(p,x) = f_{\lambda,F}(p,x)$ and $\varphi(p,x) = g_F(p,x)$, if they exist or are nonempty, are denoted by $v(f_{\lambda,F})$, $v(g_F)$, $S(f_{\lambda,F})$, and $S(g_F)$, respectively.

By (3.1)–(3.3), $(p,x) \mapsto \langle p, Sx\rangle$ and $(p,x) \mapsto \langle p, Tx\rangle$ are strictly positive on $P \times X$, and the former is l.s.c. while the latter is u.s.c. So we can see that
$$0 < \varepsilon_0 = \inf_{p \in P,\, x \in X} \langle p, Sx\rangle < +\infty, \qquad 0 < \varepsilon_1 = \sup_{p \in P,\, x \in X} \langle p, Tx\rangle < +\infty, \tag{3.4}$$
and both $f_{\lambda,F}(p,x)$ and $g_F(p,x)$ are finite for all $\lambda \in R_+$, $(p,x) \in P \times X$, and $F \in \mathbb{B}^m_+$.
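The two auxiliary functions are linked pointwise: since $\langle p, Sx\rangle > 0$, $g_F(p,x) \ge \lambda$ holds exactly when $f_{\lambda,F}(p,x) \ge 0$ (this is used in Lemma 3.10 below). A small numerical check under hypothetical linear data, with $F$, $P$, and $X$ replaced by finite samples:

```python
import numpy as np

# Hypothetical finite-sample instance of (3.2)-(3.4):
# f_{lam,F}(p,x) = sup_{c in F} <p, Tx - lam*Sx - c>,
# g_F(p,x)      = sup_{c in F} <p, Tx - c> / <p, Sx>.
T = np.array([[2.0, 1.0], [1.0, 3.0]])
S = np.array([[1.0, 0.5], [0.5, 1.0]])
F = [np.array([0.1, 0.1]), np.array([0.3, 0.0])]
P = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.5, 0.5])]
X = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.5, 0.5])]

def f(lam, p, x):
    return max(p @ (T @ x - lam * (S @ x) - c) for c in F)

def g(p, x):
    return max(p @ (T @ x - c) for c in F) / (p @ (S @ x))

eps0 = min(p @ (S @ x) for p in P for x in X)   # strictly positive, cf. (3.4)
eps1 = max(p @ (T @ x) for p in P for x in X)
```

For these samples $\varepsilon_0 = 0.5$ and $\varepsilon_1 = 3$, and the equivalence $g \ge \lambda \iff f_{\lambda} \ge 0$ holds at every sampled pair.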

We also define the extensions $x \mapsto \hat f_{\lambda,F}(p,x)$ of $x \mapsto f_{\lambda,F}(p,x)$ (for each fixed $p \in P$) and $p \mapsto \tilde f_{\lambda,F}(p,x)$ of $p \mapsto f_{\lambda,F}(p,x)$ (for each fixed $x \in X$) by
$$\hat f_{\lambda,F}(p,x) = \begin{cases} f_{\lambda,F}(p,x), & x \in X,\\ +\infty, & x \in R^n \setminus X, \end{cases} \qquad \tilde f_{\lambda,F}(p,x) = \begin{cases} f_{\lambda,F}(p,x), & p \in P,\\ +\infty, & p \in R^m \setminus P. \end{cases} \tag{3.5}$$
According to Definition 2.3, the conjugate and biconjugate functions of $x \mapsto \hat f_{\lambda,F}(p,x)$ and $p \mapsto \tilde f_{\lambda,F}(p,x)$ are then denoted by
$$q \mapsto \hat f^*_{\lambda,F}(p,q),\ q \in R^{n*}, \qquad x \mapsto \hat f^{**}_{\lambda,F}(p,x),\ x \in R^n \quad (\text{for each fixed } p \in P),$$
$$r \mapsto \tilde f^*_{\lambda,F}(r,x),\ r \in R^m, \qquad p \mapsto \tilde f^{**}_{\lambda,F}(p,x),\ p \in R^{m*} \quad (\text{for each fixed } x \in X). \tag{3.6}$$
By Definition 2.5, the Hausdorff distance on $\mathbb{B}^m_+$ (see Assumption 3) is provided by
$$d_H(F_1, F_2) = \max\Big\{\sup_{c_1 \in F_1} d(c_1, F_2),\ \sup_{c_2 \in F_2} d(c_2, F_1)\Big\} \quad \text{for } F_1, F_2 \in \mathbb{B}^m_+. \tag{3.7}$$

3.2. Main Theorems to (1.1)

With (3.1)–(3.7), we state the main solvability theorems to (1.1) as follows.

Theorem 3.1. (1) $v(f_{\lambda,F})$ exists and $S(f_{\lambda,F})$ is a nonempty convex compact subset of $P \times X$. Furthermore, $\lambda \mapsto v(f_{\lambda,F})$ is continuous and strictly decreasing on $R_+$ with $v(f_{+\infty,F}) = \lim_{\lambda \to +\infty} v(f_{\lambda,F}) = -\infty$.
(2) $v(g_F)$ exists if and only if $S(g_F) \ne \emptyset$. Moreover, if $v(f_{0,F}) > 0$, then $v(g_F)$ exists and $S(g_F)$ is a nonempty compact subset of $P \times X$.

Theorem 3.2. (1) $\lambda$ is a lower eigenvalue of (1.1) and $x$ its eigenvector if and only if $\inf_{p \in P} f_{\lambda,F}(p,x) \ge 0$, if and only if $\inf_{p \in P} g_F(p,x) \ge \lambda$.
(2) $\lambda$ is a lower eigenvalue of (1.1) if and only if one of the following statements is true:
(a) $v(f_{\lambda,F}) \ge 0$; (b) $f_{\lambda,F}(\hat p, \hat x) \ge 0$ for $(\hat p, \hat x) \in S(f_{\lambda,F})$; (c) $v(g_F)$ exists with $v(g_F) \ge \lambda$; (d) $S(g_F) \ne \emptyset$ and $g_F(\hat p, \hat x) \ge \lambda$ for $(\hat p, \hat x) \in S(g_F)$.
(3) The following statements are equivalent:
(a) system (1.1) has at least one lower eigenvalue; (b) $v(f_{0,F}) > 0$; (c) $v(g_F)$ exists with $v(g_F) > 0$; (d) $S(g_F) \ne \emptyset$ and $g_F(\hat p, \hat x) > 0$ for $(\hat p, \hat x) \in S(g_F)$.

Theorem 3.3. (1) $\bar\lambda$ exists if and only if one of the following statements is true:
(a) $v(f_{0,F}) > 0$; (b) $f_{0,F}(\hat p, \hat x) > 0$ for $(\hat p, \hat x) \in S(f_{0,F})$; (c) $v(f_{\bar\lambda,F}) = 0$; (d) $v(g_F)$ exists with $v(g_F) = \bar\lambda$; (e) $S(g_F) \ne \emptyset$ and $g_F(\bar p, \bar x) = \bar\lambda$ for $(\bar p, \bar x) \in S(g_F)$; where $\bar\lambda = \bar\lambda(F)\ (>0)$ is the maximal lower eigenvalue of (1.1).
(2) If $v(f_{0,F}) > 0$, or equivalently, if $v(g_F)$ exists with $v(g_F) > 0$, then one has the following.
(a) $\bar x$ is an optimal eigenvector if and only if there exists $\bar p \in P$ with $(\bar p, \bar x) \in S(g_F)$, if and only if $\inf_{p \in P} g_F(p, \bar x) = \bar\lambda$.
(b) There exist $\hat x \in X$, $\hat c \in F$, and $i_0 \in \{1, 2, \ldots, m\}$ such that $T\hat x \ge \bar\lambda S\hat x + \hat c$ and $T_{i_0}\hat x = \bar\lambda S_{i_0}\hat x + \hat c_{i_0}$.
(c) $\bar\lambda = \bar\lambda(F)$ is the maximal lower eigenvalue of (1.1) and $(\bar p, \bar x) \in S(g_F)$ if and only if $\bar\lambda > 0$ and $(\bar p, \bar x) \in P \times X$ satisfy $\bar x \in \partial \hat f^*_{\bar\lambda,F}(\bar p, 0)$ and $\bar p \in \partial \tilde f^*_{\bar\lambda,F}(0, \bar x)$, where $\partial \hat f^*_{\bar\lambda,F}(\bar p, 0)$ and $\partial \tilde f^*_{\bar\lambda,F}(0, \bar x)$ are the subdifferentials of $q \mapsto \hat f^*_{\bar\lambda,F}(\bar p, q)$ at $q = 0$ and of $r \mapsto \tilde f^*_{\bar\lambda,F}(r, \bar x)$ at $r = 0$, respectively.
(d) The set of all lower eigenvalues of (1.1) coincides with the interval $(0, v(g_F)]$.
(3) Let $\mathbb{H}^m_+ = \{F \in \mathbb{B}^m_+ : v(f_{0,F}) > 0\}$, where $\mathbb{B}^m_+$ is defined as in Assumption 3. Then
(a) $\mathbb{H}^m_+ \ne \emptyset$, and for each $F \in \mathbb{H}^m_+$, $\bar\lambda = \bar\lambda(F)$ exists with $\bar\lambda(F) = v(g_F)$;
(b) for all $F_1, F_2 \in \mathbb{H}^m_+$, $|\bar\lambda(F_1) - \bar\lambda(F_2)| \le (\sup_{p \in P} \|p\| / \varepsilon_0)\, d_H(F_1, F_2)$, where $\varepsilon_0$ is defined by (3.4). Thus, $F \mapsto \bar\lambda(F)$ is Lipschitz on $\mathbb{H}^m_+$ with respect to the Hausdorff distance $d_H(\cdot,\cdot)$.

Remark 3.4. If we take $P = \Sigma^{m-1} = \{p \in R^m_+ : \sum_{i=1}^m p_i = 1\}$, then $\Sigma^{m-1}$ satisfies (3.1)(2); hence Theorems 3.1–3.3 remain true.

3.3. Proofs of the Main Theorems

In order to prove Theorems 3.13.3, we need the following eight lemmas.

Lemma 3.5. If $\lambda \in R_+$ is fixed, then one has the following.
(1) $p \mapsto f_{\lambda,F}(p,x)\ (x \in X)$ and $p \mapsto \sup_{x \in X} f_{\lambda,F}(p,x)$ are l.s.c. and convex on $P$.
(2) $x \mapsto f_{\lambda,F}(p,x)\ (p \in P)$ and $x \mapsto \inf_{p \in P} f_{\lambda,F}(p,x)$ are u.s.c. and concave on $X$.
(3) $v(f_{\lambda,F})$ exists and $S(f_{\lambda,F})$ is a nonempty convex compact subset of $P \times X$.

Proof. By (3.1)–(3.3), it is easy to see that
$$\begin{aligned} &\text{(a) for all } x \in X,\ c \in F,\quad p \mapsto \langle p, Tx - \lambda Sx - c\rangle \text{ is convex and l.s.c. on } P,\\ &\text{(b) for all } p \in P,\quad (x, c) \mapsto \langle p, Tx - \lambda Sx - c\rangle \text{ is u.s.c. on } X \times F. \end{aligned} \tag{3.8}$$
Applying Lemma 2.6(2) (resp., Lemma 2.8) to the function of (3.8)(a) (resp., of (3.8)(b)), and using the facts that $F$ is compact and that any l.s.c. (or u.s.c.) function defined on a compact set attains its minimum (or maximum), we obtain that
$$\begin{aligned} &\text{for all } x \in X,\ p \mapsto f_{\lambda,F}(p,x) \text{ is convex and l.s.c. on } P \text{ and } \inf_{p \in P} f_{\lambda,F}(p,x) \text{ is finite},\\ &\text{for all } p \in P,\ x \mapsto f_{\lambda,F}(p,x) \text{ is u.s.c. on } X \text{ and } \sup_{x \in X} f_{\lambda,F}(p,x) \text{ is finite}. \end{aligned} \tag{3.9}$$
If $x_i \in X\ (i = 1, 2)$, then by (3.2) there exist $c_i = (c_{i1}, c_{i2}, \ldots, c_{im}) \in F\ (i = 1, 2)$ such that $f_{\lambda,F}(p, x_i) = \langle p, Tx_i - \lambda Sx_i - c_i\rangle$. Since $T_i$ and $-\lambda S_i\ (i = 1, 2, \ldots, m)$ are concave, $X, F$ are convex, and $p \in P$ is nonnegative, we have for each $\alpha \in [0, 1]$,
$$\begin{aligned} f_{\lambda,F}\big(p, \alpha x_1 + (1-\alpha)x_2\big) &\ge \big\langle p,\ T(\alpha x_1 + (1-\alpha)x_2) - \lambda S(\alpha x_1 + (1-\alpha)x_2) - \alpha c_1 - (1-\alpha)c_2\big\rangle\\ &\ge \alpha \langle p, Tx_1 - \lambda Sx_1 - c_1\rangle + (1-\alpha)\langle p, Tx_2 - \lambda Sx_2 - c_2\rangle\\ &= \alpha f_{\lambda,F}(p, x_1) + (1-\alpha) f_{\lambda,F}(p, x_2), \end{aligned} \tag{3.10}$$
that is, $x \mapsto f_{\lambda,F}(p,x)$ is concave on $X$. Combining (3.9) with (3.10), and using Lemmas 2.6(2)-(3) and 2.9, it follows that both statements (1) and (2) hold, $v(f_{\lambda,F})$ exists, and $S(f_{\lambda,F})$ is nonempty. It remains to verify that $S(f_{\lambda,F})$ is convex and closed, because $P \times X$ is convex and compact.
If $\alpha \in [0,1]$ and $(p_i, x_i) \in S(f_{\lambda,F})\ (i = 1, 2)$, then $\sup_{x \in X} f_{\lambda,F}(p_i, x) = \inf_{p \in P} f_{\lambda,F}(p, x_i)$ for $i = 1, 2$. By (1) and (2) (i.e., $p \mapsto \sup_{x \in X} f_{\lambda,F}(p,x)$ is convex on $P$ and $x \mapsto \inf_{p \in P} f_{\lambda,F}(p,x)$ is concave on $X$), we have
$$\begin{aligned} \sup_{x \in X} f_{\lambda,F}\big(\alpha p_1 + (1-\alpha)p_2,\ x\big) &\le \alpha \sup_{x \in X} f_{\lambda,F}(p_1, x) + (1-\alpha)\sup_{x \in X} f_{\lambda,F}(p_2, x)\\ &= \alpha \inf_{p \in P} f_{\lambda,F}(p, x_1) + (1-\alpha)\inf_{p \in P} f_{\lambda,F}(p, x_2)\\ &\le \inf_{p \in P} f_{\lambda,F}\big(p,\ \alpha x_1 + (1-\alpha)x_2\big). \end{aligned} \tag{3.11}$$
This implies by Remark 2.2(2) that $\alpha(p_1, x_1) + (1-\alpha)(p_2, x_2) \in S(f_{\lambda,F})$, and thus $S(f_{\lambda,F})$ is convex.
If $(p_k, x_k) \in S(f_{\lambda,F})$ with $(p_k, x_k) \to (p_0, x_0) \in P \times X$ as $k \to \infty$, then $\sup_{x \in X} f_{\lambda,F}(p_k, x) = \inf_{p \in P} f_{\lambda,F}(p, x_k)$ for all $k = 1, 2, \ldots$. Letting $k \to \infty$, from (1) and (2) (that is, $p \mapsto \sup_{x \in X} f_{\lambda,F}(p,x)$ is l.s.c. on $P$ and $x \mapsto \inf_{p \in P} f_{\lambda,F}(p,x)$ is u.s.c. on $X$), we obtain that
$$\sup_{x \in X} f_{\lambda,F}(p_0, x) \le \liminf_{k \to \infty} \sup_{x \in X} f_{\lambda,F}(p_k, x) \le \limsup_{k \to \infty} \inf_{p \in P} f_{\lambda,F}(p, x_k) \le \inf_{p \in P} f_{\lambda,F}(p, x_0). \tag{3.12}$$
Hence by Remark 2.2(2), $(p_0, x_0) \in S(f_{\lambda,F})$ and $S(f_{\lambda,F})$ is closed. Hence the first lemma follows.

Lemma 3.6. $\lambda \mapsto v(f_{\lambda,F})$ is continuous and strictly decreasing on $R_+$ with $v(f_{+\infty,F}) = \lim_{\lambda \to +\infty} v(f_{\lambda,F}) = -\infty$.

Proof. Since $(\lambda, p) \mapsto \langle p, Tx - \lambda Sx - c\rangle$ is continuous on $R_+ \times P$ for each $c \in F$ and $x \in X$, $(\lambda, x, c) \mapsto \langle p, Tx - \lambda Sx - c\rangle$ is u.s.c. on $R_+ \times X \times F$ for each $p \in P$, and $F$ is compact, by Lemmas 2.6(2) and 2.8 we see that
$$\begin{aligned} &R_+ \times P \to R,\ (\lambda, p) \mapsto f_{\lambda,F}(p,x) = \sup_{c \in F} \langle p, Tx - \lambda Sx - c\rangle \text{ is l.s.c.},\\ &R_+ \times X \to R,\ (\lambda, x) \mapsto f_{\lambda,F}(p,x) = \sup_{c \in F} \langle p, Tx - \lambda Sx - c\rangle \text{ is u.s.c.}. \end{aligned} \tag{3.13}$$
From Lemma 2.6(2)-(3), it follows that
$$\begin{aligned} &R_+ \times P \to R,\ (\lambda, p) \mapsto \sup_{x \in X} f_{\lambda,F}(p,x) \text{ is l.s.c.},\\ &R_+ \times X \to R,\ (\lambda, x) \mapsto \inf_{p \in P} f_{\lambda,F}(p,x) \text{ is u.s.c.}. \end{aligned} \tag{3.14}$$
First applying Lemma 2.8 to both functions of (3.14), and then using Lemma 3.5(3), we further obtain that
$$\begin{aligned} &R_+ \to R,\ \lambda \mapsto \inf_{p \in P} \sup_{x \in X} f_{\lambda,F}(p,x) \text{ is l.s.c.},\\ &R_+ \to R,\ \lambda \mapsto \sup_{x \in X} \inf_{p \in P} f_{\lambda,F}(p,x) \text{ is u.s.c.}, \end{aligned} \tag{3.15}$$
and thus $\lambda \mapsto v(f_{\lambda,F})$ is continuous on $R_+$.
Suppose that $\lambda_2 > \lambda_1 \ge 0$. Then by (3.2), $f_{\lambda_1,F}(p,x) = f_{\lambda_2,F}(p,x) + (\lambda_2 - \lambda_1)\langle p, Sx\rangle$ for all $(p,x) \in P \times X$. This implies by (3.4) that $v(f_{\lambda_1,F}) \ge v(f_{\lambda_2,F}) + (\lambda_2 - \lambda_1)\varepsilon_0$, where $\varepsilon_0 = \inf_{p \in P,\, x \in X} \langle p, Sx\rangle \in (0, +\infty)$. Hence $\lambda \mapsto v(f_{\lambda,F})$ is strictly decreasing.
By Lemma 3.5(3), Remark 2.2(3), and (3.2), it is easy to see that for each $\lambda \in R_+$ and $(\bar p, \bar x) \in S(f_{\lambda,F})$,
$$v(f_{\lambda,F}) = \sup_{c \in F} \langle \bar p, T\bar x - \lambda S\bar x - c\rangle \le \sup_{p \in P,\, x \in X} \langle p, Tx\rangle - \lambda \inf_{p \in P,\, x \in X} \langle p, Sx\rangle = \varepsilon_1 - \lambda \varepsilon_0. \tag{3.16}$$
Hence by (3.4), $v(f_{+\infty,F}) = -\infty$ and the second lemma is proved.
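Lemma 3.6 suggests a simple numerical procedure: since $\lambda \mapsto v(f_{\lambda,F})$ is continuous and strictly decreasing, its unique zero (the maximal lower eigenvalue, cf. Lemma 3.8) can be located by bisection. The sketch approximates $v$ by a minimax over finite samples of $P$ and $X$ for a hypothetical linear instance with $F = \{0\}$; it is an illustration of the monotonicity argument, not the paper's method.

```python
import numpy as np

# Hypothetical linear instance with F = {0}; P and X are finite samples of
# the price and intensity simplices.  v(lam) below is the sampled minimax
# value inf_p sup_x f_{lam,F}(p, x), decreasing in lam (Lemma 3.6), so its
# zero -- the maximal lower eigenvalue, cf. Lemma 3.8 -- is found by bisection.
T = np.array([[2.0, 1.0],
              [1.0, 3.0]])
S = np.array([[1.0, 1.0],
              [1.0, 1.0]])
F = [np.zeros(2)]
grid = np.linspace(0.0, 1.0, 101)
Pm = np.array([[a, 1.0 - a] for a in grid])                 # sample of P
TX = np.array([T @ np.array([a, 1.0 - a]) for a in grid])   # T x over sample of X
SX = np.array([S @ np.array([a, 1.0 - a]) for a in grid])   # S x over sample of X

def v(lam):
    M = np.max([Pm @ (TX - lam * SX - c).T for c in F], axis=0)
    return float(M.max(axis=1).min())            # inf_p sup_x on the samples

lo, hi = 0.0, 10.0                               # v(lo) > 0 > v(hi)
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if v(mid) > 0 else (lo, mid)
lam_bar = 0.5 * (lo + hi)
```

For these matrices the exact value is the matrix-game value $5/3$, and the bisection recovers it up to the grid resolution.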

Lemma 3.7. (1) $\lambda$ is a lower eigenvalue of (1.1) and $x$ its eigenvector if and only if $\inf_{p \in P} f_{\lambda,F}(p,x) \ge 0$.
(2) $\lambda$ is a lower eigenvalue of (1.1) if and only if $v(f_{\lambda,F}) \ge 0$, if and only if $f_{\lambda,F}(\hat p, \hat x) \ge 0$ for $(\hat p, \hat x) \in S(f_{\lambda,F})$.

Proof. (1) If $\lambda > 0$ and $(x, c) \in X \times F$ satisfy $Tx \ge \lambda Sx + c$, then for each $p \in P\ (\subset R^m_+)$, $f_{\lambda,F}(p,x) \ge \langle p, Tx - \lambda Sx - c\rangle \ge 0$. Hence $\inf_{p \in P} f_{\lambda,F}(p,x) \ge 0$.
Conversely, suppose $\lambda > 0$ and $x \in X$ satisfy $\inf_{p \in P} f_{\lambda,F}(p,x) \ge 0$, but no $c \in F$ can be found such that $Tx \ge \lambda Sx + c$; then $(Tx - \lambda Sx - F) \cap R^m_+ = \emptyset$. Since $Tx - \lambda Sx - F$ is convex and compact and $R^m_+$ is closed and convex, the Hahn-Banach separation theorem implies that there exists $p \in R^{m*} \setminus \{0\}$ such that $-\infty < \sup_{c \in F} \langle p, Tx - \lambda Sx - c\rangle < \inf_{y \in R^m_+} \langle p, y\rangle$. Clearly, we have $p \in R^m_+ \setminus \{0\}$ (or else $\inf_{y \in R^m_+} \langle p, y\rangle = -\infty$, which is impossible), so $\inf_{y \in R^m_+} \langle p, y\rangle = 0$ and thus $\sup_{c \in F} \langle p, Tx - \lambda Sx - c\rangle < 0$. Since $R_+ P = R^m_+$, there exist $t > 0$ and $\hat p \in P$ with $\hat p = tp$. It follows that $\inf_{p' \in P} f_{\lambda,F}(p',x) \le f_{\lambda,F}(\hat p, x) = t \sup_{c \in F} \langle p, Tx - \lambda Sx - c\rangle < 0$. This is a contradiction. So we can select $c \in F$ such that $Tx \ge \lambda Sx + c$.
(2) If $\lambda > 0$ is a lower eigenvalue of (1.1), then there exists an eigenvector $x_\lambda \in X$, which gives, by statement (1) and Lemma 3.5(3), $v(f_{\lambda,F}) \ge \inf_{p \in P} f_{\lambda,F}(p, x_\lambda) \ge 0$. If $v(f_{\lambda,F}) \ge 0$, then Remark 2.2(3) and Lemma 3.5(3) imply that $f_{\lambda,F}(\hat p, \hat x) = v(f_{\lambda,F}) \ge 0$ for all $(\hat p, \hat x) \in S(f_{\lambda,F})$. If $(\hat p, \hat x) \in S(f_{\lambda,F})$ with $f_{\lambda,F}(\hat p, \hat x) \ge 0$, then $\inf_{p \in P} f_{\lambda,F}(p, \hat x) = f_{\lambda,F}(\hat p, \hat x) \ge 0$, which gives, by statement (1), that $\lambda$ is a lower eigenvalue of (1.1) and $\hat x$ its eigenvector. This completes the proof.

Lemma 3.8. (1) The following statements are equivalent:
(a) system (1.1) has at least one lower eigenvalue; (b) $v(f_{0,F}) > 0$; (c) $f_{0,F}(\hat p, \hat x) > 0$ for $(\hat p, \hat x) \in S(f_{0,F})$; (d) there is a unique $\hat\lambda > 0$ with $v(f_{\hat\lambda,F}) = 0$; (e) the maximal lower eigenvalue $\bar\lambda = \bar\lambda(F)$ of (1.1) exists. In particular, $\hat\lambda = \bar\lambda$ if either $v(f_{0,F}) > 0$ or one of $\hat\lambda$ and $\bar\lambda$ exists.
(2) If $v(f_{0,F}) > 0$, then the set of all lower eigenvalues of (1.1) equals $(0, \bar\lambda]$.

Proof. (1) If $\lambda_0\ (>0)$ is a lower eigenvalue of (1.1), then by Lemmas 3.6 and 3.7(2), $v(f_{0,F}) > v(f_{\lambda_0,F}) \ge 0$. In view of Lemma 3.5(3) and Remark 2.2, we also see that $v(f_{0,F}) > 0$ if and only if $f_{0,F}(\hat p, \hat x) > 0$ for any $(\hat p, \hat x) \in S(f_{0,F})$. If $v(f_{0,F}) > 0$, then also by Lemmas 3.6 and 3.7(2), there exists a unique $\hat\lambda > 0$ such that $v(f_{\hat\lambda,F}) = 0$, and $\hat\lambda$ is precisely the maximal lower eigenvalue $\bar\lambda$. If the maximal lower eigenvalue $\bar\lambda$ of (1.1) exists, then $\bar\lambda$ is itself a lower eigenvalue of (1.1). Hence statement (1) follows.
(2) Statement (2) is immediate from (1) and the strict monotonicity of $\lambda \mapsto v(f_{\lambda,F})$ in Lemma 3.6. Thus the lemma follows.

Lemma 3.9. If $F \in \mathbb{B}^m_+$, then one has the following.
(1) $p \mapsto g_F(p,x)\ (x \in X)$ and $p \mapsto \sup_{x \in X} g_F(p,x)$ are continuous on $P$.
(2) $x \mapsto g_F(p,x)\ (p \in P)$ and $x \mapsto \inf_{p \in P} g_F(p,x)$ are u.s.c. on $X$.
(3) $v(g_F)$ exists if and only if $S(g_F) \ne \emptyset$.

Proof. (1) Since for each $x \in X$ and $c \in F$, $p \mapsto \langle p, Tx - c\rangle / \langle p, Sx\rangle$ is continuous on $P$, by (3.3) and Lemma 2.6(2) we see that $p \mapsto g_F(p,x) = \sup_{c \in F} (\langle p, Tx - c\rangle / \langle p, Sx\rangle)\ (x \in X)$ and $p \mapsto \sup_{x \in X} g_F(p,x)$ are l.s.c. on $P$. On the other hand, by Assumptions 1–3, we can verify that $(p, x, c) \mapsto \langle p, Tx - c\rangle / \langle p, Sx\rangle$ is u.s.c. on $P \times X \times F$. It follows from Lemma 2.8 that both functions $(p,x) \mapsto g_F(p,x) = \sup_{c \in F} (\langle p, Tx - c\rangle / \langle p, Sx\rangle)$ on $P \times X$ and $p \mapsto \sup_{x \in X} g_F(p,x)$ on $P$ are u.s.c., and so is $p \mapsto g_F(p,x)$. Hence (1) is true.
(2) As proved above, for each $p \in P$, $x \mapsto g_F(p,x)$ is u.s.c. on $X$, and so is $x \mapsto \inf_{p \in P} g_F(p,x)$ because of Lemma 2.6(3).
(3) By Remark 2.2(3), we only need to prove the necessity part. Assume $v(g_F)$ exists, that is, $\inf_{p \in P} \sup_{x \in X} g_F(p,x) = \sup_{x \in X} \inf_{p \in P} g_F(p,x)$. Then (1) and (2) imply that there exist $\bar p \in P$ and $\bar x \in X$ with $\sup_{x \in X} g_F(\bar p, x) = \inf_{p \in P} g_F(p, \bar x)$, which means that $(\bar p, \bar x) \in S(g_F)$ and $S(g_F)$ is nonempty. Hence the lemma is true.

Lemma 3.10. (1) $\lambda$ is a lower eigenvalue of (1.1) and $x$ its eigenvector if and only if $\inf_{p \in P} g_F(p,x) \ge \lambda$.
(2) $\lambda$ is a lower eigenvalue of (1.1) if and only if $\sup_{x \in X} \inf_{p \in P} g_F(p,x) \ge \lambda$.

Proof. (1) Suppose $\lambda > 0$ and $x \in X$. Since for each $p \in P$, $g_F(p,x) = \sup_{c \in F} (\langle p, Tx - c\rangle / \langle p, Sx\rangle) \ge \lambda$ holds if and only if $f_{\lambda,F}(p,x) = \sup_{c \in F} \langle p, Tx - \lambda Sx - c\rangle \ge 0$, it follows that $\inf_{p \in P} g_F(p,x) \ge \lambda$ if and only if $\inf_{p \in P} f_{\lambda,F}(p,x) \ge 0$. Combining this with Lemma 3.7(1), we see that (1) is true.
(2) By (1), it is enough to prove sufficiency. If $\sup_{x \in X} \inf_{p \in P} g_F(p,x) \ge \lambda\ (>0)$, then Lemma 3.9(2) shows that there exists $x_\lambda \in X$ with $\inf_{p \in P} g_F(p, x_\lambda) = \sup_{x \in X} \inf_{p \in P} g_F(p,x) \ge \lambda$. Hence $\lambda$ is a lower eigenvalue of (1.1) and $x_\lambda$ its eigenvector. This completes the proof.

Lemma 3.11. (1) $v(f_{0,F}) > 0$ if and only if $v(g_F)$ exists with $v(g_F) = \bar\lambda$, if and only if $S(g_F) \ne \emptyset$ and $g_F(\bar p, \bar x) = \bar\lambda$ for $(\bar p, \bar x) \in S(g_F)$, where $\bar\lambda = \bar\lambda(F) > 0$ is the maximal lower eigenvalue of (1.1).
(2) $\lambda$ is a lower eigenvalue of (1.1) if and only if $v(g_F)$ exists with $v(g_F) \ge \lambda$, if and only if $S(g_F) \ne \emptyset$ and $g_F(\hat p, \hat x) \ge \lambda$ for $(\hat p, \hat x) \in S(g_F)$.
(3) System (1.1) has at least one lower eigenvalue if and only if $v(g_F)$ exists with $v(g_F) > 0$, if and only if $S(g_F) \ne \emptyset$ and $g_F(\hat p, \hat x) > 0$ for $(\hat p, \hat x) \in S(g_F)$.

Proof. (1) We divide the proof of (1) into three steps.Step 1. If 𝑣 ( 𝑓 0 , 𝐹 ) > 0 , then by Lemma 3.8(1), the maximal eigenvalue 𝜆 ( > 0 ) to (1.1) exists with 𝑣 ( 𝑓 𝜆 , 𝐹 ) = 0 . We will prove that 𝑣 ( 𝑔 𝐹 ) exists with 𝑣 ( 𝑔 𝐹 ) = 𝜆 . Let 𝜆 = s u p 𝑥 𝑋 i n f 𝑝 𝑃 𝑔 𝐹 ( 𝑝 , 𝑥 ) , 𝜆 = i n f 𝑝 𝑃 s u p 𝑥 𝑋 𝑔 𝐹 ( 𝑝 , 𝑥 ) , then 𝜆 𝜆 , and the left is to show 𝜆 𝜆 𝜆 .
By Lemma 3.5(2), there exists $x^*\in X$ such that $\inf_{p\in P}f_{\lambda^*,F}(p,x^*)=v(f_{\lambda^*,F})=0$. This shows that $f_{\lambda^*,F}(p,x^*)=\sup_{c\in F}\langle p,Tx^*-\lambda^*Sx^*-c\rangle\ge 0$ for any $p\in P$, that is, $\lambda^*\le\sup_{c\in F}\frac{\langle p,Tx^*-c\rangle}{\langle p,Sx^*\rangle}=g_F(p,x^*)$ $(p\in P)$. Hence $\lambda^*\le\inf_{p\in P}g_F(p,x^*)\le\underline{\lambda}$. On the other hand, since $\overline{\lambda}\le\sup_{x\in X}g_F(p,x)$ for each $p\in P$, by Lemma 3.9(2) there exists $x_p\in X$ such that $\overline{\lambda}\le\sup_{x\in X}g_F(p,x)=g_F(p,x_p)=\sup_{c\in F}\frac{\langle p,Tx_p-c\rangle}{\langle p,Sx_p\rangle}$. It follows that $\sup_{x\in X}f_{\overline{\lambda},F}(p,x)\ge f_{\overline{\lambda},F}(p,x_p)=\sup_{c\in F}\langle p,Tx_p-\overline{\lambda}Sx_p-c\rangle\ge 0$ for any $p\in P$. Hence by Lemma 3.5(3), $v(f_{\overline{\lambda},F})=\inf_{p\in P}\sup_{x\in X}f_{\overline{\lambda},F}(p,x)\ge 0$. From Lemma 3.7(2), this implies that $\overline{\lambda}$ is a lower eigenvalue to (1.1), and thus $\overline{\lambda}\le\lambda^*$. Therefore, $v(g_F)$ exists with $v(g_F)=\lambda^*$.
Step 2. If $v(g_F)$ exists with $v(g_F)=\lambda^*\,(>0)$, then Lemma 3.9(3) and Remark 2.2(3) imply that $S(g_F)\ne\emptyset$ and $g_F(p^*,x^*)=v(g_F)=\lambda^*>0$ for $(p^*,x^*)\in S(g_F)$.

Step 3. If $S(g_F)\ne\emptyset$ and $(p^*,x^*)\in S(g_F)$ with $g_F(p^*,x^*)=\lambda^*\,(>0)$, then $\inf_{p\in P}g_F(p,x^*)=\lambda^*\,(>0)$. By Lemmas 3.10(1) and 3.8(1), this implies that $\lambda^*$ is a lower eigenvalue to (1.1), and thus $v(f_{0,F})>0$.
(2) If $\lambda>0$ is a lower eigenvalue to (1.1), then Lemmas 3.8(1) and 3.10(2) together with statement (1) imply that $v(f_{0,F})>0$, $v(g_F)$ exists, and $v(g_F)=\sup_{x\in X}\inf_{p\in P}g_F(p,x)\ge\lambda$. If $v(g_F)$ exists with $v(g_F)\ge\lambda$, then from Lemma 3.9(3) and Remark 2.2(3) it follows that $S(g_F)\ne\emptyset$ and $g_F(\hat p,\hat x)=v(g_F)\ge\lambda$ for $(\hat p,\hat x)\in S(g_F)$. If $S(g_F)\ne\emptyset$ and $g_F(\hat p,\hat x)\ge\lambda$ for $(\hat p,\hat x)\in S(g_F)$, then by Remark 2.2(3) and Lemma 3.10(1) we see that $\inf_{p\in P}g_F(p,\hat x)=g_F(\hat p,\hat x)\ge\lambda$, and thus $\lambda$ is a lower eigenvalue to (1.1) and $\hat x$ its eigenvector.
(3) Statement (3) follows immediately from (1) and (2). This completes the proof.

Lemma 3.12. (1) If $v(f_{0,F})>0$, or equivalently, if $v(g_F)$ exists with $v(g_F)>0$, then $S(g_F)$ is a nonempty compact subset of $P\times X$.
(2) The first three statements of Theorem 3.3(2) are true.
(3) Theorem 3.3(3) is true.

Proof. (1) By Lemma 3.11(1), $S(g_F)$ is nonempty. Furthermore, with the same procedure as in proving the last part of Lemma 3.5 and using Lemma 3.9(1)-(2), we can show that if $(p_k,x_k)\in S(g_F)$ and $(p_k,x_k)\to(p_0,x_0)\in P\times X$ as $k\to\infty$, then
$$\sup_{x\in X}g_F(p_0,x)\le\liminf_{k\to\infty}\sup_{x\in X}g_F(p_k,x)\le\limsup_{k\to\infty}\inf_{p\in P}g_F(p,x_k)\le\inf_{p\in P}g_F(p,x_0).\tag{3.17}$$
Hence $S(g_F)$ is closed, and therefore compact.
(2) Now we prove the first three statements of Theorem 3.3(2).
By the hypothesis of Theorem 3.3(2) together with Lemmas 3.8(1) and 3.11(1), the maximal lower eigenvalue $\lambda^*$ to (1.1) and $v(g_F)$ both exist, with $v(g_F)=\lambda^*$.
First we prove statement (a). If $x^*\in X$ is an optimal eigenvector, then by Lemma 3.10(1) we have $\inf_{p\in P}g_F(p,x^*)\ge\lambda^*$. On the other hand, by Lemma 3.9(1) there exists $p^*\in P$ such that $v(g_F)=\sup_{x\in X}g_F(p^*,x)$. So we obtain that $\sup_{x\in X}g_F(p^*,x)=\lambda^*\le\inf_{p\in P}g_F(p,x^*)$, and thus $(p^*,x^*)\in S(g_F)$. If $p^*\in P$ is such that $(p^*,x^*)\in S(g_F)$, then Remark 2.2(3) implies that $\inf_{p\in P}g_F(p,x^*)=v(g_F)=\lambda^*$. If $\inf_{p\in P}g_F(p,x^*)=\lambda^*$, then Lemma 3.10(1) shows that $x^*$ is an optimal eigenvector. Hence Theorem 3.3(2)(a) follows.
Next we prove statement (b). By Lemmas 3.5(2) and 3.8(1), there exists $\hat x\in X$ with
$$0=v(f_{\lambda^*,F})=\sup_{x\in X}\inf_{p\in P}f_{\lambda^*,F}(p,x)=\inf_{p\in P}f_{\lambda^*,F}(p,\hat x)=\inf_{p\in P}\sup_{c\in F}\langle p,T\hat x-\lambda^*S\hat x-c\rangle.\tag{3.18}$$
Applying Lemma 2.9 to $\varphi(p,c)=\langle p,T\hat x-\lambda^*S\hat x-c\rangle$ on $P\times F$, this leads to
$$\sup_{c\in F}\inf_{p\in P}\langle p,T\hat x-\lambda^*S\hat x-c\rangle=\inf_{p\in P}\sup_{c\in F}\langle p,T\hat x-\lambda^*S\hat x-c\rangle=0.\tag{3.19}$$
Since $c\mapsto\inf_{p\in P}\langle p,T\hat x-\lambda^*S\hat x-c\rangle$ is u.s.c. on $F$ and $p\mapsto\langle p,T\hat x-\lambda^*S\hat x-c\rangle$ is continuous on $P$, from (3.19) there exist first $\hat c=(\hat c_1,\hat c_2,\ldots,\hat c_m)\in F$ and then $\hat p=(\hat p_1,\hat p_2,\ldots,\hat p_m)\in P$ such that
$$0=\sup_{c\in F}\inf_{p\in P}\langle p,T\hat x-\lambda^*S\hat x-c\rangle=\inf_{p\in P}\langle p,T\hat x-\lambda^*S\hat x-\hat c\rangle=\langle\hat p,T\hat x-\lambda^*S\hat x-\hat c\rangle.\tag{3.20}$$
Since $R_+P=R^m_+$, for each $i=1,2,\ldots,m$ there exists $t_i>0$ with $p_i=(0,\ldots,0,t_i,0,\ldots,0)\in P$ ($t_i$ in the $i$th position), which implies by (3.20) that for each $i=1,2,\ldots,m$,
$$t_i\bigl(T_i\hat x-\lambda^*S_i\hat x-\hat c_i\bigr)=\langle p_i,T\hat x-\lambda^*S\hat x-\hat c\rangle\ge 0,\quad\text{that is,}\quad T\hat x\ge\lambda^*S\hat x+\hat c.\tag{3.21}$$
On the other hand, $I_{\hat p}=\{i:\hat p_i>0\}$ is nonempty because $\hat p\in P\subset R^m_+\setminus\{0\}$. This gives, by (3.20) and (3.21), that for each $i_0\in I_{\hat p}$,
$$0\le\hat p_{i_0}\bigl(T_{i_0}\hat x-\lambda^*S_{i_0}\hat x-\hat c_{i_0}\bigr)\le\langle\hat p,T\hat x-\lambda^*S\hat x-\hat c\rangle=0,\quad\text{that is,}\quad T_{i_0}\hat x=\lambda^*S_{i_0}\hat x+\hat c_{i_0}.\tag{3.22}$$
Both (3.21) and (3.22) show that Theorem 3.3(2)(b) is true.
Then we prove statement (c). From (3.2), (3.3), Lemmas 3.8(1) and 3.11(1), and Remark 2.2(2), we know that $\lambda^*$ is the maximal lower eigenvalue to (1.1) and $(p^*,x^*)\in S(g_F)$ if and only if $\lambda^*>0$ and $(p^*,x^*)\in P\times X$ satisfy $g_F(p^*,x)\le g_F(p^*,x^*)=\lambda^*\le g_F(p,x^*)$ for $(p,x)\in P\times X$, which amounts to saying that
$$f_{\lambda^*,F}(p^*,x)\le f_{\lambda^*,F}(p^*,x^*)=0\le f_{\lambda^*,F}(p,x^*),\quad(p,x)\in P\times X,\tag{3.23}$$
because for each $(p,x)\in P\times X$,
$$g_F(p^*,x)=\sup_{c\in F}\frac{\langle p^*,Tx-c\rangle}{\langle p^*,Sx\rangle}\le\lambda^*\iff f_{\lambda^*,F}(p^*,x)=\sup_{c\in F}\langle p^*,Tx-\lambda^*Sx-c\rangle\le 0,$$
$$g_F(p^*,x^*)=\sup_{c\in F}\frac{\langle p^*,Tx^*-c\rangle}{\langle p^*,Sx^*\rangle}=\lambda^*\iff f_{\lambda^*,F}(p^*,x^*)=\sup_{c\in F}\langle p^*,Tx^*-\lambda^*Sx^*-c\rangle=0,$$
$$g_F(p,x^*)=\sup_{c\in F}\frac{\langle p,Tx^*-c\rangle}{\langle p,Sx^*\rangle}\ge\lambda^*\iff f_{\lambda^*,F}(p,x^*)=\sup_{c\in F}\langle p,Tx^*-\lambda^*Sx^*-c\rangle\ge 0.\tag{3.24}$$
In view of (3.5), (3.23) is also equivalent to
$$\min_{x\in R^n}\hat f_{\lambda^*,F}(p^*,x)=\min_{x\in X}\bigl(-f_{\lambda^*,F}(p^*,x)\bigr)=0=-f_{\lambda^*,F}(p^*,x^*)=\hat f_{\lambda^*,F}(p^*,x^*),$$
$$\min_{p\in R^m}\tilde f_{\lambda^*,F}(p,x^*)=\min_{p\in P}f_{\lambda^*,F}(p,x^*)=0=f_{\lambda^*,F}(p^*,x^*)=\tilde f_{\lambda^*,F}(p^*,x^*).\tag{3.25}$$
Also by (3.5), we have $\mathrm{epi}\,\hat f_{\lambda^*,F}(p^*,\cdot)=\{(x,a)\in X\times R:-f_{\lambda^*,F}(p^*,x)\le a\}$ and $\mathrm{epi}\,\tilde f_{\lambda^*,F}(\cdot,x^*)=\{(p,a)\in P\times R:f_{\lambda^*,F}(p,x^*)\le a\}$. Combining this with Lemma 3.5(1)-(2) and using the fact that $X$ and $P$ are convex compact, we see that $\mathrm{epi}\,\hat f_{\lambda^*,F}(p^*,\cdot)$ (resp. $\mathrm{epi}\,\tilde f_{\lambda^*,F}(\cdot,x^*)$) is closed and convex in $R^n\times R$ (resp. in $R^m\times R$). Hence Lemmas 2.6(1) and 2.10 imply that both $x\mapsto\hat f_{\lambda^*,F}(p^*,x)$ $(x\in R^n)$ and $p\mapsto\tilde f_{\lambda^*,F}(p,x^*)$ $(p\in R^m)$ are proper convex and l.s.c. with
$$\hat f^{**}_{\lambda^*,F}(p^*,x)=\hat f_{\lambda^*,F}(p^*,x)\ (x\in R^n),\qquad\tilde f^{**}_{\lambda^*,F}(p,x^*)=\tilde f_{\lambda^*,F}(p,x^*)\ (p\in R^m).\tag{3.26}$$
Applying Lemma 2.11 to the functions $x\mapsto\hat f_{\lambda^*,F}(p^*,x)-\langle q_0,x\rangle$ on $R^n$ with $q_0=0$ and $p\mapsto\tilde f_{\lambda^*,F}(p,x^*)-\langle r_0,p\rangle$ on $R^m$ with $r_0=0$, and using (3.25) and (3.26), we conclude that (3.23) holds if and only if $x^*\in\partial\hat f^*_{\lambda^*,F}(p^*,0)$ and $p^*\in\partial\tilde f^*_{\lambda^*,F}(0,x^*)$. Hence Theorem 3.3(2)(c) is also true.
(3) Finally we prove Theorem 3.3(3).
(a) By (3.1), for each $x_0\in X$ we have $Tx_0\in\mathrm{int}\,R^m_+$. So there exists $c_0\in R^m_+$ such that $c_0<Tx_0$ (that is, $c_{0i}<T_ix_0$ for $i=1,2,\ldots,m$). Take $F_{x_0}=\{c_0\}$; then $\sup_{x\in X}f_{0,F_{x_0}}(p,x)=\sup_{x\in X}\langle p,Tx-c_0\rangle\ge\langle p,Tx_0-c_0\rangle>0$ for any $p\in P$. Hence $v(f_{0,F_{x_0}})=\inf_{p\in P}\sup_{x\in X}f_{0,F_{x_0}}(p,x)>0$ because $p\mapsto\sup_{x\in X}f_{0,F_{x_0}}(p,x)$ is l.s.c. on the compact set $P$. This shows that $\mathcal F^m_+=\{F\in\mathbb B^m_+:v(f_{0,F})>0\}$ is nonempty. Moreover, Lemma 3.11(1) implies that $\lambda^*(F)$ exists with $\lambda^*(F)=v(g_F)$ for any $F\in\mathcal F^m_+$. Hence statement (a) follows.
(b) Let $F_i\in\mathcal F^m_+$ $(i=1,2)$; then $\lambda^*(F_i)=v(g_{F_i})$ $(i=1,2)$. Suppose $(p,x)\in P\times X$. Since $F_1$ and $F_2$ are compact, we can select $c_i\in F_i$ $(i=1,2)$ such that $g_{F_1}(p,x)=\sup_{c\in F_1}\frac{\langle p,Tx-c\rangle}{\langle p,Sx\rangle}=\frac{\langle p,Tx-c_1\rangle}{\langle p,Sx\rangle}$ and $\|c_1-c_2\|=d(c_1,F_2)$. This yields
$$g_{F_1}(p,x)=\frac{\langle p,Tx-c_2\rangle+\langle p,c_2-c_1\rangle}{\langle p,Sx\rangle}\le g_{F_2}(p,x)+\frac{\sup_{p'\in P}\|p'\|}{\varepsilon_0}\,d_H\bigl(F_1,F_2\bigr),\tag{3.27}$$
because $\frac{\langle p,Tx-c_2\rangle}{\langle p,Sx\rangle}\le g_{F_2}(p,x)$, $\|c_1-c_2\|=d(c_1,F_2)\le d_H(F_1,F_2)$, and $\varepsilon_0=\inf_{p\in P,\,x\in X}\langle p,Sx\rangle$ is positive. Taking minimax values on both sides of (3.27), we obtain $v(g_{F_1})\le v(g_{F_2})+(\sup_{p\in P}\|p\|/\varepsilon_0)\,d_H(F_1,F_2)$. Therefore, $|\lambda^*(F_1)-\lambda^*(F_2)|=|v(g_{F_1})-v(g_{F_2})|\le(\sup_{p\in P}\|p\|/\varepsilon_0)\,d_H(F_1,F_2)$ because $d_H(F_1,F_2)=d_H(F_2,F_1)$, and the lemma follows.

Proofs of Theorems 3.1–3.3
Proof. (i) Theorem 3.1: statement (1) follows from Lemmas 3.5(3) and 3.6, and statement (2) from Lemmas 3.9(3), 3.11(1), and 3.12(1).
(ii) Theorem 3.2: (1) can be deduced from Lemmas 3.7(1) and 3.10(1), (2) from Lemmas 3.7(2) and 3.11(2), and (3) from Lemmas 3.8(1) and 3.11(3).
(iii) Theorem 3.3: by Lemmas 3.5(3), 3.8(1), and 3.11(1), (1) is true; from Lemmas 3.8(2) and 3.12(2), (2) is valid; applying Lemma 3.12(3), we obtain the last statement.

4. Solvability Results to (1.2)

Let $F=\{c\}$ $(c\in R^m_+)$, $\lambda\in R_+$, $f_{\lambda,F}=f_{\lambda,c}$, $g_F=g_c$, and $\lambda^*(F)=\lambda^*(c)$ (if it exists); then for $(p,x)\in P\times X$,
$$f_{\lambda,c}(p,x)=\langle p,Tx-\lambda Sx-c\rangle,\qquad g_c(p,x)=\frac{\langle p,Tx-c\rangle}{\langle p,Sx\rangle},\tag{4.1}$$
and the functions $\hat f_{\lambda,c}(p,x)$, $\tilde f_{\lambda,c}(p,x)$, $\hat f^*_{\lambda,c}(p,q)$, $\hat f^{**}_{\lambda,c}(p,x)$, $\tilde f^*_{\lambda,c}(r,x)$, and $\tilde f^{**}_{\lambda,c}(p,x)$ can be obtained from (3.5) and (3.6) by substituting $\{c\}$ for $F$, respectively. From Theorems 3.1–3.3, we immediately obtain the following solvability results for (1.2).

Theorem 4.1. (1) $v(f_{\lambda,c})$ exists and $S(f_{\lambda,c})$ is a nonempty convex compact subset of $P\times X$. Furthermore, $\lambda\mapsto v(f_{\lambda,c})$ is continuous and strictly decreasing on $R_+$ with $v(f_{+\infty,c})=\lim_{\lambda\to+\infty}v(f_{\lambda,c})=-\infty$.
(2) $v(g_c)$ exists if and only if $S(g_c)\ne\emptyset$. Moreover, if $v(f_{0,c})>0$, then $v(g_c)$ exists and $S(g_c)$ is a nonempty compact subset of $P\times X$.

Theorem 4.2. (1) $\lambda$ is a lower eigenvalue to (1.2) and $x$ its eigenvector if and only if $\inf_{p\in P}f_{\lambda,c}(p,x)\ge 0$, if and only if $\inf_{p\in P}g_c(p,x)\ge\lambda$.
(2) $\lambda$ is a lower eigenvalue to (1.2) if and only if one of the following statements is true.
(a) $v(f_{\lambda,c})\ge 0$; (b) $f_{\lambda,c}(\hat p,\hat x)\ge 0$ for $(\hat p,\hat x)\in S(f_{\lambda,c})$; (c) $v(g_c)$ exists with $v(g_c)\ge\lambda$; (d) $S(g_c)\ne\emptyset$ and $g_c(\hat p,\hat x)\ge\lambda$ for $(\hat p,\hat x)\in S(g_c)$.
(3) The following statements are equivalent.
(a) System (1.2) has at least one lower eigenvalue; (b) $v(f_{0,c})>0$; (c) $v(g_c)$ exists with $v(g_c)>0$; (d) $S(g_c)\ne\emptyset$ and $g_c(\hat p,\hat x)>0$ for $(\hat p,\hat x)\in S(g_c)$.

Theorem 4.3. (1) $\lambda^*$ exists if and only if one of the following statements is true.
(a) $v(f_{0,c})>0$. (b) $f_{0,c}(\hat p,\hat x)>0$ for $(\hat p,\hat x)\in S(f_{0,c})$. (c) $v(f_{\lambda^*,c})=0$. (d) $v(g_c)$ exists with $v(g_c)=\lambda^*$. (e) $S(g_c)\ne\emptyset$ and $g_c(p^*,x^*)=\lambda^*$ for $(p^*,x^*)\in S(g_c)$. Here $\lambda^*=\lambda^*(c)\,(>0)$ is the maximal lower eigenvalue to (1.2).
(2) If $v(f_{0,c})>0$, or equivalently, if $v(g_c)$ exists with $v(g_c)>0$, then:
(a) $x^*$ is an optimal eigenvector if and only if there exists $p^*\in P$ with $(p^*,x^*)\in S(g_c)$, if and only if $\inf_{p\in P}g_c(p,x^*)=\lambda^*$. (b) There exist $\hat x\in X$ and $i_0\in\{1,2,\ldots,m\}$ such that $T\hat x\ge\lambda^*S\hat x+c$ and $T_{i_0}\hat x=\lambda^*S_{i_0}\hat x+c_{i_0}$. (c) $\lambda^*=\lambda^*(c)$ is the maximal lower eigenvalue to (1.2) and $(p^*,x^*)\in S(g_c)$ if and only if $\lambda^*>0$ and $(p^*,x^*)\in P\times X$ satisfy $x^*\in\partial\hat f^*_{\lambda^*,c}(p^*,0)$ and $p^*\in\partial\tilde f^*_{\lambda^*,c}(0,x^*)$, where $\partial\hat f^*_{\lambda^*,c}(p^*,0)$ and $\partial\tilde f^*_{\lambda^*,c}(0,x^*)$ are the subdifferentials of $\hat f^*_{\lambda^*,c}(p^*,q)$ at $q=0$ and of $\tilde f^*_{\lambda^*,c}(r,x^*)$ at $r=0$, respectively. (d) The set of all lower eigenvalues to (1.2) coincides with the interval $(0,v(g_c)]$.
(3) Let $\mathcal C^m_+=\{c\in R^m_+:v(f_{0,c})>0\}$. Then one has the following.
(a) $\mathcal C^m_+\ne\emptyset$, and for each $c\in\mathcal C^m_+$, $\lambda^*=\lambda^*(c)$ exists with $\lambda^*(c)=v(g_c)$. (b) $|\lambda^*(c_1)-\lambda^*(c_2)|\le(\sup_{p\in P}\|p\|/\varepsilon_0)\|c_1-c_2\|$ $(c_1,c_2\in\mathcal C^m_+)$, where $\varepsilon_0$ is again defined by (3.4). Hence $c\mapsto\lambda^*(c)$ is Lipschitz on $\mathcal C^m_+$.
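The quantity $\lambda^*(c)=v(g_c)$ of Theorem 4.3 can be approximated numerically when $T$ and $S$ are linear and $P$, $X$ are simplices: since $p\mapsto\langle p,Tx-c\rangle/\langle p,Sx\rangle$ is linear-fractional with $\langle p,Sx\rangle>0$, its infimum over the price simplex is attained at a vertex, so $\inf_{p\in P}g_c(p,x)=\min_i\bigl((Tx)_i-c_i\bigr)/(Sx)_i$. A minimal sketch (the matrices `T`, `S`, the vector `c`, and the grid over $X$ are illustrative choices, not data from the paper):

```python
import numpy as np

# Illustrative data: T, S linear maps, X = the 1-simplex {(t, 1-t)}.
T = np.array([[2.0, 1.0], [1.0, 2.0]])
S = np.eye(2)
c = np.array([0.5, 0.5])

def inf_p_g(x):
    # inf over the price simplex of <p, Tx-c>/<p, Sx> = min_i ((Tx)_i - c_i)/(Sx)_i
    return np.min((T @ x - c) / (S @ x))

# Grid search for lambda*(c) = sup_x inf_p g_c(p, x) over X.
ts = np.linspace(0.01, 0.99, 99)
values = [inf_p_g(np.array([t, 1.0 - t])) for t in ts]
k = int(np.argmax(values))
lam_star = values[k]
x_star = np.array([ts[k], 1.0 - ts[k]])

print(lam_star)  # ~ 2.0, attained at x* = (0.5, 0.5)
# Theorem 4.3(2)(b): T x* - lambda* S x* - c vanishes here in every component.
print(T @ x_star - lam_star * (S @ x_star) - c)
```

For this example the saddle value can be checked by hand: at $x^*=(0.5,0.5)$, $Tx^*=(1.5,1.5)$ and both component ratios equal $2$, so $\lambda^*(c)=2$.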

5. Solvability Results to (1.4)–(1.6)

We now use Theorems 3.1–3.3 and 4.1–4.3 to study the solvability of (1.4)–(1.6). For the sake of convenience, we only present the essential results.

5.1. Solvability to (1.4)

Since $F\in\mathbb B^m_+$ (or $c\in R^m_+$) makes (a) (or (b)) of (1.4) solvable if and only if $\lambda=1$ is a lower eigenvalue to (1.1) (or (1.2)), if and only if the maximal lower eigenvalue $\lambda^*=\lambda^*(F)$ to (1.1) (or $\lambda^*=\lambda^*(c)$ to (1.2)) exists with $\lambda^*(F)\ge 1$ (or $\lambda^*(c)\ge 1$), applying Theorems 3.3 and 4.3 yields the following solvability results for (1.4).
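The reduction above can be checked directly on a small linear example: exhibiting some $x\in X$ with $Tx\ge Sx+c$ shows that $\lambda=1$ is a lower eigenvalue, hence that (1.4)(b) is solvable for that $c$. A toy verification (`T`, `S`, `c`, and `x` are illustrative, not from the paper):

```python
import numpy as np

T = np.array([[2.0, 1.0], [1.0, 2.0]])  # illustrative linear maps
S = np.eye(2)
c = np.array([0.5, 0.5])

x = np.array([0.5, 0.5])  # candidate intensity vector in X
# (1.4)(b) is solvable at this x iff Tx >= Sx + c componentwise.
solvable = bool(np.all(T @ x >= S @ x + c))
print(solvable)  # True: Tx - Sx = (1, 1) >= c, so lambda = 1 is a lower eigenvalue
```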

Theorem 5.1. (1) Inequality (1.4)(a) is solvable for $F\in\mathbb B^m_+$ if and only if one of the following statements is true.
(a) There exists $\lambda\ge 1$ with $v(f_{\lambda,F})=0$. (b) $v(g_F)$ exists with $v(g_F)\ge 1$. (c) $S(g_F)\ne\emptyset$ and $g_F(p^*,x^*)\ge 1$ for $(p^*,x^*)\in S(g_F)$.
(2) Inequality (1.4)(b) is solvable for $c\in R^m_+$ if and only if one of the following statements is true.
(a) There exists $\lambda\ge 1$ with $v(f_{\lambda,c})=0$. (b) $v(g_c)$ exists with $v(g_c)\ge 1$. (c) $S(g_c)\ne\emptyset$ and $g_c(p^*,x^*)\ge 1$ for $(p^*,x^*)\in S(g_c)$.

5.2. Solvability to (1.5)

By Theorem 4.1, for each $\lambda\ge 0$, $v(f_{\lambda,0})$ exists and $S(f_{\lambda,0})$ is nonempty, and if $(p,x)\in S(f_{0,0})$, then $v(f_{0,0})=\langle p,Tx\rangle>0$. Hence $v(g_0)$ exists, $S(g_0)$ is nonempty, and the maximal lower eigenvalue $\lambda^*=\lambda^*(0)$ to (1.2) exists with $v(g_0)=\lambda^*=g_0(p^*,x^*)$ for $(p^*,x^*)\in S(g_0)$. By Theorems 4.2 and 4.3 with $c=0$, we obtain the following solvability results for (1.5).

Theorem 5.2. (1) $\lambda$ is a growth factor to (1.5) and $x$ its intensity vector if and only if $\lambda\ge\lambda_0$ with $\inf_{p\in P}f_{\lambda,0}(p,x)\ge 0$, if and only if $\inf_{p\in P}g_0(p,x)\ge\lambda\ge\lambda_0$.
(2) $\lambda$ is a growth factor to (1.5) if and only if $\lambda\ge\lambda_0$ with $v(f_{\lambda,0})\ge 0$, if and only if $v(g_0)\ge\lambda\ge\lambda_0$.
(3) Growth factor problem (1.5) is efficient if and only if there exists $\lambda\ge\lambda_0$ with $v(f_{\lambda,0})\ge 0$, if and only if $v(g_0)\ge\lambda_0$.
(4) $\lambda^*$ is the optimal growth factor to (1.5) if and only if $\lambda^*\ge\lambda_0$ with $v(f_{\lambda^*,0})=0$, if and only if $v(g_0)=\lambda^*\ge\lambda_0$.
(5) If $v(g_0)=\lambda^*\ge\lambda_0$, then there exist $x^*\in X$ and $i_0\in\{1,2,\ldots,m\}$ such that $Tx^*\ge\lambda^*Sx^*$ and $T_{i_0}x^*=\lambda^*S_{i_0}x^*$.
(6) $\lambda^*$ is the optimal growth factor to (1.5) and $(p^*,x^*)\in S(g_0)$ if and only if $\lambda^*\ge\lambda_0$ and $(p^*,x^*)\in P\times X$ satisfy $x^*\in\partial\hat f^*_{\lambda^*,0}(p^*,0)$ and $p^*\in\partial\tilde f^*_{\lambda^*,0}(0,x^*)$.
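For $c=0$ and linear $T$, $S$, the optimal growth factor $v(g_0)=\sup_{x\in X}\min_i(Tx)_i/(Sx)_i$ of Theorem 5.2(4) can again be found by a one-dimensional search when $X$ is the 1-simplex. A sketch with illustrative matrices (the data and the threshold $\lambda_0=1$ are assumptions for the example, not from the paper):

```python
import numpy as np

T = np.array([[2.0, 1.0], [1.0, 2.0]])  # illustrative
S = np.eye(2)

def inf_p_g0(x):
    # inf over the price simplex of <p, Tx>/<p, Sx> = min_i (Tx)_i/(Sx)_i
    return np.min((T @ x) / (S @ x))

ts = np.linspace(0.01, 0.99, 99)
values = [inf_p_g0(np.array([t, 1.0 - t])) for t in ts]
k = int(np.argmax(values))
lam_star = values[k]
x_star = np.array([ts[k], 1.0 - ts[k]])

print(lam_star)  # ~ 3.0: optimal growth factor, with T x* = 3 S x* at x* = (0.5, 0.5)
# By Theorem 5.2(2), every lambda with lambda_0 <= lambda <= 3 is then a growth factor.
```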

5.3. Solvability to (1.6)

To present the solvability results for (1.6), we assume that
$$\text{(a) } X\subset\mathrm{int}\,R^n_+\text{ is convex compact},\qquad\text{(b) } A,B\in R^{n\times n}_+\text{ with }(I-A)X\subset\mathrm{int}\,R^n_+,\ BX\subset\mathrm{int}\,R^n_+,\tag{5.1}$$
and define $f_\lambda(p,x)=f_{\lambda,0}(p,x)$ and $g(p,x)=g_0(p,x)$ on $\Sigma^{n-1}\times X$ by
$$f_\lambda(p,x)=\langle p,(I-A)x-\lambda Bx\rangle,\qquad g(p,x)=\frac{\langle p,(I-A)x\rangle}{\langle p,Bx\rangle}\quad\text{for }(p,x)\in\Sigma^{n-1}\times X,\tag{5.2}$$
where $\Sigma^{n-1}$ is the $(n-1)$-simplex. Applying Theorems 4.1–4.3 with $S=B$, $T=I-A$, $c=0$, and $\lambda=\mu-1$, we obtain the following existence results for (1.6).

Theorem 5.3. Suppose that (5.1) holds and that $f_\lambda$, $g$ are defined by (5.2). Then one has the following. (1) There exists $\lambda^*>0$ such that $v(f_{\lambda^*})=0$ and $\lambda^*=v(g)$. (2) $\mu^*=\lambda^*+1$ is the optimal balanced growth factor to (1.6). (3) Growth path problem (1.6) is efficient, and $\mu$ is a balanced growth factor to (1.6) if and only if $\mu\in(1,1+\lambda^*]$.
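In the Leontief setting, $\lambda^*=v(g)$ with $T=I-A$ and $S=B$. In the special case $B=I$ with $A$ nonnegative, irreducible, and $\rho(A)<1$, the Collatz-Wielandt formula gives $\lambda^*=1-\rho(A)$, so the optimal balanced growth factor is $\mu^*=2-\rho(A)$. A numerical cross-check of this special case (the matrix `A` is an illustrative consumption matrix, and the reduction to $1-\rho(A)$ is a classical fact assumed here, not a claim of the paper):

```python
import numpy as np

A = np.array([[0.2, 0.1], [0.1, 0.3]])  # illustrative Leontief matrix, rho(A) < 1
I = np.eye(2)

def inf_p_g(x):
    # min_i ((I-A)x)_i / x_i  (B = I): the inner infimum over price vectors
    return np.min(((I - A) @ x) / x)

# Grid search for lambda* = max over the 1-simplex of min_i ((I-A)x)_i / x_i.
ts = np.linspace(0.001, 0.999, 9999)
lam_star = max(inf_p_g(np.array([t, 1.0 - t])) for t in ts)

rho = max(abs(np.linalg.eigvals(A)))  # Perron root of A
print(lam_star, 1.0 - rho)            # both ~ 0.6382 (Collatz-Wielandt agreement)
print(1.0 + lam_star)                 # optimal balanced growth factor mu* ~ 1.6382
```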

Remark 5.4. Assumption (5.1) is only one sufficient condition ensuring the conclusions of Theorem 5.3. By applying Theorems 4.1–4.3 and using further analytic methods or matrix techniques, one may obtain more solvability results for the Leontief-type balanced and optimal balanced growth path problem.

6. Conclusion

In this article, we have studied an optimal lower eigenvalue system (namely, (1.1)) and proved three solvability theorems (Theorems 3.1–3.3), including a series of necessary and sufficient conditions concerning existence and a Lipschitz continuity result concerning stability. With these theorems, we have also obtained existence criteria (Theorems 5.1–5.3) for the von Neumann-type input-output inequalities, growth and optimal growth factors, as well as for the Leontief-type balanced and optimal balanced growth path problems.