Abstract and Applied Analysis
Volume 2011, Article ID 208624, 20 pages
http://dx.doi.org/10.1155/2011/208624
Research Article

An Optimal Lower Eigenvalue System

Yingfan Liu

Department of Mathematics, College of Science, Nanjing University of Posts and Telecommunications, Nanjing 210046, China

Received 8 November 2010; Accepted 6 April 2011

Academic Editor: Nicholas D. Alikakos

Copyright © 2011 Yingfan Liu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

An optimal lower eigenvalue system is studied, and main theorems, including a series of necessary and sufficient conditions concerning existence and a Lipschitz continuity result concerning stability, are obtained. As applications, solvability results for some von Neumann-type input-output inequalities, growth and optimal growth factors, as well as Leontief-type balanced and optimal balanced growth paths, are also obtained.

1. Introduction

1.1. The Optimal Lower Eigenvalue System

Motivated by some inequality problems in input-output analysis, such as von Neumann-type input-output inequalities, growth and optimal growth factors, as well as Leontief-type balanced and optimal balanced growth paths, we will study an optimal lower eigenvalue system.

To this end, we denote by $R^k=(R^k,\|\cdot\|)$ the real $k$-dimensional Euclidean space with dual $R^{k*}=R^k$, by $R^k_+$ the set of all nonnegative vectors of $R^k$, and by $\operatorname{int}R^k_+$ its interior. We also define $y_1\ge(\text{or}>)\,y_2$ in $R^k$ by $y_1-y_2\in R^k_+$ (or $\in\operatorname{int}R^k_+$).

Let πœ† ∈ 𝑅 +  = 𝑅 1 + , 𝐹 βŠ† 𝑅 π‘š + , 𝑋 βŠ† 𝑅 𝑛 + , and 𝑇  = ( 𝑇 1 , … , 𝑇 π‘š ) , 𝑆  = ( 𝑆 1 , … , 𝑆 π‘š ) ∢ 𝑋 β†’ i n t 𝑅 π‘š + be two single-valued maps, where π‘š may not be equal to 𝑛 . Then the optimal lower eigenvalue system that we will study and use to consider the preceding inequality problems can be described by πœ† , 𝐹 , 𝑋 , 𝑇 , and 𝑆 as follows: ( a ) ⎧ βŽͺ βŽͺ ⎨ βŽͺ βŽͺ ⎩ πœ† > 0 ∢ βˆƒ π‘₯ ∈ 𝑋 s . t . 𝑇 π‘₯ βˆ’ πœ† 𝑆 π‘₯ ∈ 𝐹 + 𝑅 π‘š + , t h a t i s , 𝑇 π‘₯ βˆ’ 𝑆 π‘₯ β‰₯ 𝑐 f o r s o m e 𝑐 ∈ 𝐹 , ( b ) ⎧ βŽͺ ⎨ βŽͺ ⎩ 0 < πœ† ⟢ m a x ∢ βˆƒ π‘₯ ∈ 𝑋 s . t . 𝑇 π‘₯ βˆ’ πœ† 𝑆 π‘₯ ∈ 𝐹 + 𝑅 π‘š + . ( 1 . 1 ) We call πœ† ( > 0 ) a lower eigenvalue to (1.1) if it solves (a), and its solution π‘₯ the eigenvector, claim πœ† = πœ† ( 𝐹 ) ( > 0 ) the maximal lower eigenvalue to (1.1) if it maximizes (b) (i.e., πœ† solves (a), but πœ‡ not if πœ‡ > πœ† ), and its solution π‘₯ the optimal eigenvector.

In case $F=\{c\}$ with $c\in R^m_+$, (1.1) becomes
$$\text{(a)}\ \ \lambda>0:\ \exists x\in X\ \text{s.t.}\ Tx\ge\lambda Sx+c,\qquad\text{(b)}\ \ 0<\lambda\to\max:\ \exists x\in X\ \text{s.t.}\ Tx\ge\lambda Sx+c.\tag{1.2}$$
All the concepts concerning (1.1) carry over to (1.2), and for convenience the maximal lower eigenvalue $\overline\lambda=\overline\lambda(\{c\})$ to (1.2), if it exists, is denoted by $\overline\lambda=\overline\lambda(c)$.
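For a concrete linear instance of (1.2), the maximal lower eigenvalue $\overline\lambda(c)$ can be approximated by bisection on $\lambda$, testing at each step whether some $x\in X$ satisfies $Tx\ge\lambda Sx+c$. A minimal sketch (the $2\times2$ matrices, the demand $c$, and the simplex $X$ are hypothetical choices, and feasibility is checked on a grid over $X$ rather than by an exact method):

```python
import numpy as np

# Hypothetical data: T, S map the simplex X = {x >= 0, x1 + x2 = 1} into int R^2_+
T = np.array([[2.0, 0.0], [0.0, 3.0]])  # output map
S = np.array([[1.0, 0.0], [0.0, 1.0]])  # consuming map
c = np.array([0.1, 0.1])                # expected demand

xs = np.linspace(0.0, 1.0, 20001)
X = np.column_stack([xs, 1.0 - xs])     # grid over the simplex

def feasible(lam):
    """Is there x in X (on the grid) with T x - lam * S x >= c componentwise?"""
    vals = X @ T.T - lam * (X @ S.T) - c
    return bool((vals >= 0).all(axis=1).any())

lo, hi = 0.0, 10.0                      # hi is infeasible for this data
for _ in range(60):                     # bisection on lambda
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if feasible(mid) else (lo, mid)

lam_bar = lo                            # approximates the maximal lower eigenvalue
```

For this data $\overline\lambda(c)\approx1.89$; as $\lambda$ increases toward $\overline\lambda(c)$, the feasible set of intensity vectors shrinks, matching the description of $\overline\lambda(c)$ as the largest $\lambda$ for which (1.2)(a) remains solvable.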

1.2. Some Economic Backgrounds

As indicated above, the aim of this article is to consider some inequality problems in input-output analysis by studying (1.1). So it is natural to ask how many (or what types of) problems in input-output analysis can be deduced from (1.1) or (1.2) by supplying $F, X, T, S, c$, and $\lambda$ with proper economic interpretations. Indeed, in the input-output analysis founded by Leontief [1], there are two classes of important economic systems.

One is the Leontief-type input-output equality problem, composed of an equation and an inclusion as follows:
$$\text{(a)}\ \ \exists x\in X\ \text{s.t.}\ x-Ax=c,\qquad\text{(b)}\ \ \exists x\in X\ \text{s.t.}\ x-Sx\ni c,\tag{1.3}$$
where $c\in R^n_+$ is an expected demand of the market, $X\subset R^n_+$ is some enterprise's admissible output bundle set, and $A\colon X\to R^n_+$ or $S\colon X\to 2^{R^n_+}$ is the enterprise's single-valued or set-valued consuming map. The economic meaning of (a) or (b) is whether there exists $x\in X$, or there exist $x\in X$ and $y\in Sx$, such that the pure output $x-Ax$ or $x-y$ is precisely equal to the expected demand $c$. If $X=R^n_+$ and $A$ is described by an $n\times n$ matrix, then (a) is precisely the classical Leontief input-output equation, which has been studied by Leontief [1] and Miller and Blair [2] with the matrix analysis method. If $X$ is convex compact and $A$ is continuous, then (a) is a Leontief-type input-output equation, which has been considered by Fujimoto [3] and Liu and Chen [4, 5] with the functional analysis approach. As for (b), in case $X$ is convex compact and $S$ is convex compact-valued, with or without the upper hemicontinuity condition, it has also been studied by Liu and Zhang [6, 7] with the nonlinear analysis methods of [8–10], in particular using the classical Rogalski-Cornet theorem (see [8, Theorem 15.1.4]) and some Rogalski-Cornet-type theorems (see [6, Theorems 2.8, 2.9, and 2.12]). However, since the methods for tackling (1.3) are quite different from those for studying (1.1), we do not consider it here.
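When $X=R^n_+$ and $A$ is a nonnegative $n\times n$ matrix with spectral radius below one, (1.3)(a) reduces to solving the linear system $(I-A)x=c$. A minimal numerical sketch with a hypothetical $2\times2$ consumption matrix:

```python
import numpy as np

A = np.array([[0.2, 0.3],
              [0.1, 0.4]])             # hypothetical consumption matrix, spectral radius < 1
c = np.array([10.0, 5.0])              # expected market demand

x = np.linalg.solve(np.eye(2) - A, c)  # output plan whose pure output x - Ax equals c

assert max(abs(np.linalg.eigvals(A))) < 1           # solvability condition
assert np.allclose(x - A @ x, c) and (x >= 0).all()
```

Nonnegativity of $x=(I-A)^{-1}c$ is guaranteed here because $A$ is nonnegative with spectral radius $<1$, so the Neumann series $(I-A)^{-1}=\sum_k A^k$ is nonnegative.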

The other is the von Neumann-type and Leontief-type inequality problems, which can be viewed as special examples of (1.1) or (1.2).

(i) Assume that $F\subseteq R^m_+$ or $c\in R^m_+$ is an expected demand set or an expected demand of the market, and that $X\subseteq R^n_+$ is some enterprise's raw material bundle set. Then the von Neumann-type inequality problems, including input-output inequalities along with growth and optimal growth factors, can be stated, respectively, as follows.

(1) If $T,S\colon X\to\operatorname{int}R^m_+$ are supposed to be the enterprise's output (or producing) and consuming maps, respectively, then by taking $\lambda=1$ in (a) of both (1.1) and (1.2), we obtain the von Neumann-type input-output inequalities:
$$\text{(a)}\ \ x\in X\ \text{s.t.}\ Tx-Sx\in F+R^m_+,\qquad\text{(b)}\ \ x\in X\ \text{s.t.}\ Tx-Sx\ge c.\tag{1.4}$$
The economic meaning of (a) or (b) is whether there exist $x\in X$ and $c\in F$, or there exists $x\in X$, such that the pure output $Tx-Sx$ fully meets the expected demand $c$. If $X=R^n_+$ and $T,S$ are described by two $m\times n$ matrices, then (b) reduces to the classical von Neumann input-output inequality, which has also been studied by Leontief [1] and Miller and Blair [2] with the matrix analysis method. If $X$ is convex compact and $T,S$ are two nonlinear maps such that $T_i,-S_i$ are upper semicontinuous and concave for $i=1,\dots,m$, then (b) (as a nonlinear von Neumann input-output inequality) has been handled by Liu [11] and Liu and Zhang [12] with the nonlinear analysis methods of [8–10]. Along the same lines, in case $X$ is convex compact and $T,S$ are replaced by two upper semicontinuous convex set-valued maps with convex compact values, (b) (as a set-valued von Neumann input-output inequality) has also been studied by Liu [13, 14]. However, (a) has not been considered up to now. Since (a) (or (b)) is solvable if and only if $\lambda=1$ makes (1.1)(a) (or (1.2)(a)) solvable, and also if and only if the maximal lower eigenvalue $\overline\lambda(F)$ to (1.1) exists with $\overline\lambda(F)\ge1$ (or the maximal lower eigenvalue $\overline\lambda(c)$ to (1.2) exists with $\overline\lambda(c)\ge1$), we see that the lower eigenvalue approach developed for (1.1) or (1.2) may be applied to obtain some new solvability results for (1.4).

(2) If $T,S\colon X\subseteq R^n_+\to\operatorname{int}R^m_+$ are supposed to be the enterprise's output and input (or investing) maps, respectively, set $\Lambda:=\{\lambda>0:\exists x\in X\ \text{s.t.}\ Tx\ge\lambda Sx\}$; then $\Lambda$ is nonempty and, to some degree, each $\lambda\in\Lambda$ can be used to describe the enterprise's growth behavior. Since the enterprise always hopes its growth to be as large as possible, a fixed positive number $\lambda_0$ can be selected to represent the enterprise's desired minimum growth, no matter whether $\lambda_0\in\Lambda$ or not. By taking $c=0$ and restricting $\lambda\ge\lambda_0$, from (1.2) we obtain the von Neumann-type growth and optimal growth factor problem:
$$\text{(a)}\ \ \lambda\in[\lambda_0,+\infty):\ \exists x\in X\ \text{s.t.}\ Tx\ge\lambda Sx,\qquad\text{(b)}\ \ \lambda_0\le\lambda\to\max:\ \exists x\in X\ \text{s.t.}\ Tx\ge\lambda Sx.\tag{1.5}$$
We call $\lambda$ a growth factor to (1.5) if it solves (a) and its solution $x$ an intensity vector, and we say that (1.5) is efficient if it has at least one growth factor. We also call $\overline\lambda$ the optimal growth factor to (1.5) if it maximizes (b), and its solution $\overline x$ an optimal intensity vector. If $X=R^n_+$ and $S,T$ are described by two $m\times n$ matrices, then (a) reduces to the classical von Neumann growth model, which has been studied by Leontief [1], Miller and Blair [2], Medvegyev [15], and Bidard and Hosoda [16] with the matrix analysis method. Unfortunately, if $T,S$ are nonlinear maps then, to the best of our knowledge, no references regarding (1.5) exist. Clearly, the matrix analysis method is useless for the nonlinear version. On the other hand, it seems that the methods of [11, 12], suited to (1.4)(b), might be applied to tackle (a), because $Tx\ge\lambda Sx$ can be rewritten as $Tx-(\lambda S)x\ge0$. However, since the most important issue regarding (1.5) is to find the optimal growth factor (or equivalently, to search out all the growth factors), which is much harder than determining a single growth factor, we suspect that it is impossible to solve both (a) and (b) completely using only the methods of [11, 12]. So a possible way to deal with the nonlinear version of (1.5) is to study (1.2) and obtain some meaningful results.

(ii) If π‘š = 𝑛 , 𝑋 βŠ† 𝑅 𝑛 + is the enterprise's admission output vector set, 𝐼 the identity map from 𝑅 𝑛 to itself, and 𝐴 = ( π‘Ž 𝑖 𝑗 ) 𝑛 Γ— 𝑛 , 𝐡 = ( 𝑏 𝑖 𝑗 ) 𝑛 Γ— 𝑛 ∈ 𝑅 𝑛 2 + are two 𝑛 th square matrixes used to describe the enterprise's consuming and reinvesting, respectively. Set πœ† = πœ‡ βˆ’ 1 , 𝑆 = 𝐡 , 𝑇 = 𝐼 βˆ’ 𝐴 , and 𝑐 = 0 , then under the zero profit principle, from (1.2) we obtain the Leontief type balanced and optimal balanced growth path problem: ( a ) ⎧ βŽͺ ⎨ βŽͺ ⎩ πœ‡ > 1 ∢ βˆƒ π‘₯ ∈ 𝑋 s . t . ( 𝐼 βˆ’ 𝐴 ) π‘₯ β‰₯ ( πœ‡ βˆ’ 1 ) 𝐡 π‘₯ , ( b ) ⎧ βŽͺ ⎨ βŽͺ ⎩ 1 < πœ‡ ⟢ m a x ∢ βˆƒ π‘₯ ∈ 𝑋 s . t . ( 𝐼 βˆ’ 𝐴 ) π‘₯ β‰₯ ( πœ‡ βˆ’ 1 ) 𝐡 π‘₯ . ( 1 . 6 ) Both (a) and (b) are just the static descriptions of the dynamic Leontief model ( a ) πœ‡ > 1 o r ( b ) 1 < πœ‡ ⟢ m a x ∢ βˆƒ π‘₯ ∈ 𝑋 s . t . π‘₯ ( 𝑑 ) = πœ‡ 𝑑 π‘₯ w i t h ( 𝐼 βˆ’ 𝐴 + 𝐡 ) π‘₯ ( 𝑑 ) β‰₯ 𝐡 π‘₯ ( 𝑑 + 1 ) , 𝑑 = 1 , 2 , … . ( 1 . 7 ) This model also shows that why the Leontief model (1.6) should be restricted to the linear version. We call πœ‡ ( > 1 ) a balanced growth factor to (1.6) if it solves (a), (1.6) is efficient if it has at least one balanced growth factor, and claim πœ‡ ( > 1 ) the optimal balanced growth factor to (1.6) if it maximizes (b). It is also needed to stress that at least to my knowledge, only (1.6)(a) has been considered, that is to say, up to now we do not know under what conditions of 𝐴 and 𝐡 , the optimal balanced growth fact to (1.6) must exist, and how many possible balanced growth factors to (1.6) could be found. So we hope to consider (1.6) by studying (1.2), and obtain its solvability results.

1.3. Questions and Assumptions

In the sequel, taking (1.2) and (1.4)–(1.6) as special examples of (1.1), we devote ourselves to studying (1.1) by considering the following three solvability questions.

Question 1 (Existence). If $\lambda>0$, does it solve (1.1)(a)? Can we present any sufficient conditions or, if possible, any necessary and sufficient conditions?

Question 2 (Existence). Does the maximal lower eigenvalue $\overline\lambda=\overline\lambda(F)$ to (1.1) exist? How can it be described?

Question 3 (Stability). If the answer to Question 2 is positive, is the corresponding map $F\mapsto\overline\lambda=\overline\lambda(F)$ stable in some proper sense?

In order to analyse the preceding questions and obtain some meaningful results, we need three assumptions as follows.

Assumption 1. $X\subset R^n_+$ is nonempty, convex, and compact.

Assumption 2. For all $i=1,2,\dots,m$, $T_i\colon X\to\operatorname{int}R_+$ is upper semicontinuous and concave, and $S_i\colon X\to\operatorname{int}R_+$ is lower semicontinuous and convex.

Assumption 3. $\mathbb{B}^m_+=\{F\subset R^m_+ : F\ \text{is nonempty, convex, and compact}\}$ and $F\in\mathbb{B}^m_+$.

By virtue of the nonlinear analysis methods of [8–10], in particular the minimax, saddle point, and subdifferential techniques, we have made some progress on the solvability questions for (1.1), including a series of necessary and sufficient conditions concerning existence and a Lipschitz continuity result concerning stability. The plan of this paper is as follows: we introduce some concepts and known lemmas in Section 2, prove the main (solvability) theorems concerning (1.1) in Section 3, list the solvability results concerning (1.2) in Section 4, present some applications to (1.4)–(1.6) in Section 5, and then give the conclusion in Section 6.

2. Terminology

Let $f, g_\alpha\ (\alpha\in\Lambda)\colon X\subset R^k\to R$ and $\varphi\colon P\times X\subset R^m\times R^n\to R$ be functions. In the sections below, we need some well-known properties of $f$, $g_\alpha\ (\alpha\in\Lambda)$, and $\varphi$, such as convexity or concavity, upper or lower semicontinuity (in short, u.s.c. or l.s.c.), and continuity (i.e., both u.s.c. and l.s.c.); their definitions can be found in [8–10], so the details are omitted here. In order to deal with the solvability questions for (1.1) stated in Section 1, we also need some further concepts as follows.

Definition 2.1. (1) If $\inf_{p\in P}\sup_{x\in X}\varphi(p,x)=\sup_{x\in X}\inf_{p\in P}\varphi(p,x)=:v(\varphi)$, then we say that the minimax value $v(\varphi)$ (of $\varphi$) exists.
(2) If $(\overline p,\overline x)\in P\times X$ is such that $\sup_{x\in X}\varphi(\overline p,x)=\inf_{p\in P}\varphi(p,\overline x)$, then we call $(\overline p,\overline x)$ a saddle point of $\varphi$, and denote by $S(\varphi)$ the set of all saddle points of $\varphi$.

Remark 2.2. From the definition, we can see the following:
(1) $v(\varphi)$ exists if and only if $\inf_{p\in P}\sup_{x\in X}\varphi(p,x)\le\sup_{x\in X}\inf_{p\in P}\varphi(p,x)$;
(2) $(\overline p,\overline x)\in S(\varphi)$ if and only if $(\overline p,\overline x)\in P\times X$ with $\sup_{x\in X}\varphi(\overline p,x)\le\inf_{p\in P}\varphi(p,\overline x)$, if and only if $(\overline p,\overline x)\in P\times X$ is such that $\varphi(\overline p,x)\le\varphi(p,\overline x)$ for all $(p,x)\in P\times X$;
(3) if $S(\varphi)\neq\emptyset$, then $v(\varphi)$ exists and $v(\varphi)=\varphi(\overline p,\overline x)=\sup_{x\in X}\varphi(\overline p,x)=\inf_{p\in P}\varphi(p,\overline x)$ for all $(\overline p,\overline x)\in S(\varphi)$.
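For a bilinear $\varphi$ on a product of simplices, these notions coincide with the value and optimal strategies of a matrix game. A minimal numeric sketch (hypothetical $2\times2$ payoff matrix, grids over the simplices) checking that $\inf\sup$ and $\sup\inf$ agree, so $v(\varphi)$ exists:

```python
import numpy as np

M = np.eye(2)                            # hypothetical payoff matrix: phi(p, x) = <p, M x>
ts = np.linspace(0.0, 1.0, 201)
P = np.column_stack([ts, 1.0 - ts])      # grid over the simplex of p's
X = np.column_stack([ts, 1.0 - ts])      # grid over the simplex of x's

phi = P @ M @ X.T                        # phi[i, j] = <p_i, M x_j>
inf_sup = phi.max(axis=1).min()          # inf_p sup_x phi(p, x)
sup_inf = phi.min(axis=0).max()          # sup_x inf_p phi(p, x)

assert abs(inf_sup - sup_inf) < 1e-9     # the minimax value v(phi) exists
```

Here $v(\varphi)=1/2$, attained at the saddle point $\overline p=\overline x=(1/2,1/2)$, at which $\varphi(\overline p,\overline x)=\sup_x\varphi(\overline p,x)=\inf_p\varphi(p,\overline x)$ as in Remark 2.2(3).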

Definition 2.3. Let $f$ be a function from $R^k$ to $R\cup\{+\infty\}$ with domain $\operatorname{dom}(f)=\{x\in R^k : f(x)<+\infty\}$, and let $g$ be a function from $R^{k*}\ (=R^k)$ to $R\cup\{+\infty\}$. Then one has the following.
(1) $f$ is said to be proper if $\operatorname{dom}(f)\neq\emptyset$. The epigraph $\operatorname{epi}(f)$ of $f$ is the subset of $R^k\times R$ defined by $\operatorname{epi}(f)=\{(x,a)\in R^k\times R : f(x)\le a\}$.
(2) The conjugate functions of $f$ and $g$ are the functions $f^*\colon R^{k*}\to R\cup\{+\infty\}$ and $g^*\colon R^k\to R\cup\{+\infty\}$ defined by $f^*(p)=\sup_{x\in R^k}[\langle p,x\rangle-f(x)]$ for $p\in R^{k*}$ and $g^*(x)=\sup_{p\in R^{k*}}[\langle p,x\rangle-g(p)]$ for $x\in R^k$, respectively. The biconjugate $f^{**}$ of $f$ is then defined on $R^{k**}\ (=R^k)$ by $f^{**}=(f^*)^*$.
(3) If $f$ is a proper function from $R^k$ to $R\cup\{+\infty\}$ and $x_0\in\operatorname{dom}(f)$, then the subdifferential of $f$ at $x_0$ is the (possibly empty) subset $\partial f(x_0)$ of $R^{k*}$ defined by $\partial f(x_0)=\{p\in R^{k*} : f(x_0)-f(x)\le\langle p,x_0-x\rangle\ \text{for all}\ x\in R^k\}$.
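As a concrete check of Definition 2.3(2), the conjugate of $f(x)=x^2/2$ on $R$ is $f^*(p)=p^2/2$, and $f^{**}=f$ since $f$ is proper convex l.s.c. A minimal sketch approximating the supremum on a grid standing in for $R$:

```python
import numpy as np

xs = np.linspace(-10.0, 10.0, 200001)   # fine grid standing in for R

def f(x):
    return 0.5 * x**2

def f_star(p):
    """Conjugate f*(p) = sup_x [<p, x> - f(x)], approximated over the grid."""
    return (p * xs - f(xs)).max()

for p in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    assert abs(f_star(p) - 0.5 * p**2) < 1e-6   # matches the closed form p^2 / 2
```

For this $f$ the subdifferential is a singleton, $\partial f(x_0)=\{x_0\}$, since $f$ is differentiable.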

Remark 2.4. If $f$ is a proper function from $R^k$ to $R\cup\{-\infty\}$, then the domain of $f$ should be defined by $\operatorname{dom}(f)=\{x\in R^k : f(x)>-\infty\}$, and $f$ is said to be proper if $\operatorname{dom}(f)\neq\emptyset$.

Definition 2.5. Let $\mathbb{B}(R^k)$ be the collection of all nonempty closed bounded subsets of $R^k$. Let $x\in R^k$ and $A,B\in\mathbb{B}(R^k)$. Then one has the following.
(1) The distance $d(x,A)$ from $x$ to $A$ is defined by $d(x,A)=\inf_{y\in A}d(x,y)$.
(2) Let $\rho(A,B)=\sup_{x\in A}d(x,B)$. Then the Hausdorff distance $d_H(A,B)$ between $A$ and $B$ is defined by $d_H(A,B)=\max\{\rho(A,B),\rho(B,A)\}$.
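Definition 2.5 can be sketched directly for finite point sets (hypothetical one-dimensional data, sufficient for intuition):

```python
import numpy as np

def dist(x, A):
    """d(x, A) = inf_{y in A} ||x - y||."""
    return min(np.linalg.norm(x - y) for y in A)

def d_H(A, B):
    """Hausdorff distance: max of the two one-sided excesses rho(A, B), rho(B, A)."""
    rho_ab = max(dist(x, B) for x in A)
    rho_ba = max(dist(y, A) for y in B)
    return max(rho_ab, rho_ba)

A = [np.array([0.0]), np.array([1.0])]
B = [np.array([0.0]), np.array([2.0])]
assert d_H(A, B) == 1.0   # rho(A,B) = 1 (from 1 to {0,2}), rho(B,A) = 1 (from 2 to {0,1})
```

Note that both one-sided excesses are needed: $\rho(\cdot,\cdot)$ alone is not symmetric.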

The following lemmas are useful to prove the main theorems in the next section.

Lemma 2.6 (see [9]). (1) A proper function $f\colon R^k\to R\cup\{+\infty\}$ is convex or l.s.c. if and only if its epigraph $\operatorname{epi}(f)$ is convex or closed in $R^k\times R$.
(2) The upper envelope $\sup_{\alpha\in\Lambda}f_\alpha(x)$ of proper convex (or l.s.c.) functions $f_\alpha\colon R^k\to R\cup\{+\infty\}\ (\alpha\in\Lambda)$ is also proper convex (or l.s.c.) when $\operatorname{dom}(\sup_{\alpha\in\Lambda}f_\alpha)=\{x\in R^k : \sup_{\alpha\in\Lambda}f_\alpha(x)<+\infty\}$ is nonempty.
(3) The lower envelope $\inf_{\alpha\in\Lambda}g_\alpha(x)$ of proper concave (or u.s.c.) functions $g_\alpha\colon R^k\to R\cup\{-\infty\}\ (\alpha\in\Lambda)$ is also proper concave (or u.s.c.) when $\operatorname{dom}(\inf_{\alpha\in\Lambda}g_\alpha)=\{x\in R^k : \inf_{\alpha\in\Lambda}g_\alpha(x)>-\infty\}$ is nonempty.

Remark 2.7. Since $\operatorname{epi}(\sup_{\alpha\in\Lambda}f_\alpha)=\bigcap_{\alpha\in\Lambda}\operatorname{epi}(f_\alpha)$ thanks to Proposition 1.1.1 of [9], and a function $f$ defined on $R^k$ is concave (or u.s.c.) if and only if $-f$ is convex (or l.s.c.), it is easy to see that in Lemma 2.6 the proofs from (1) to (2) and from (2) to (3) are simple.

Lemma 2.8 (see [9]). Let $X\subset R^n$, let $Y$ be a compact subset of $R^m$, and let $f\colon X\times Y\to R$ be l.s.c. (resp., u.s.c.). Then $h\colon X\to R$ defined by $h(x)=\inf_{y\in Y}f(x,y)$ (resp., $k\colon X\to R$ defined by $k(x)=\sup_{y\in Y}f(x,y)$) is also l.s.c. (resp., u.s.c.).

Lemma 2.9 (see [8]). Let $P\subseteq R^m$ and $X\subseteq R^n$ be two convex compact subsets, and let $\varphi\colon P\times X\to R$ be a function such that for all $x\in X$, $p\mapsto\varphi(p,x)$ is l.s.c. and convex on $P$, and for all $p\in P$, $x\mapsto\varphi(p,x)$ is u.s.c. and concave on $X$. Then $\inf_{p\in P}\sup_{x\in X}\varphi(p,x)=\sup_{x\in X}\inf_{p\in P}\varphi(p,x)$, and there exists $(\overline p,\overline x)\in P\times X$ such that $\varphi(\overline p,\overline x)=\sup_{x\in X}\varphi(\overline p,x)=\inf_{p\in P}\varphi(p,\overline x)$.

Lemma 2.10 (see [8]). A proper function $f$ defined on $R^k$ is convex and l.s.c. if and only if $f=f^{**}$.

Lemma 2.11 (see [8]). Let $f$ be a proper function defined on $R^k$, and let $p_0\in R^{k*}$. Then $x_0$ minimizes $x\mapsto f(x)-\langle p_0,x\rangle$ on $R^k$ if and only if $x_0\in\partial f^*(p_0)$ and $f(x_0)=f^{**}(x_0)$.

Remark 2.12. If $f$ is a finite function from $X\subseteq R^k$ to $R$, define $f_X$ by $f_X(x)=f(x)$ if $x\in X$ and $f_X(x)=+\infty$ if $x\in R^k\setminus X$; then we can use the preceding concepts and lemmas for $f$ by identifying $f$ with $f_X$.

3. Solvability Results to (1.1)

3.1. Auxiliary Functions

In the sequel, we assume that
(1) Assumptions 1–3 in Section 1 are satisfied, and $\lambda\in R_+$, $F\in\mathbb{B}^m_+$;
(2) $P\subseteq R^m_+\setminus\{0\}$ is a convex compact subset with $R_+P=R^m_+$. (3.1)
Denote by $\langle\cdot,\cdot\rangle$ the duality pairing on $\langle R^{m*},R^m\rangle$, and for each $\lambda\in R_+$ and $F\in\mathbb{B}^m_+$, define two auxiliary functions $f_{\lambda,F}(p,x)$ and $g_F(p,x)$ on $P\times X$ by
$$f_{\lambda,F}(p,x)=\sup_{c\in F}\langle p,Tx-\lambda Sx-c\rangle=\sup_{(c_1,c_2,\dots,c_m)\in F}\sum_{i=1}^m p_i\bigl(T_ix-\lambda S_ix-c_i\bigr),\tag{3.2}$$
$$g_F(p,x)=\sup_{c\in F}\frac{\langle p,Tx-c\rangle}{\langle p,Sx\rangle}=\sup_{(c_1,c_2,\dots,c_m)\in F}\frac{\sum_{i=1}^m p_i\bigl(T_ix-c_i\bigr)}{\sum_{i=1}^m p_iS_ix}.\tag{3.3}$$
As indicated by Definition 2.1, the minimax values and saddle point sets of $\varphi(p,x)=f_{\lambda,F}(p,x)$ and $\varphi(p,x)=g_F(p,x)$, if they exist or are nonempty, are denoted by $v(f_{\lambda,F})$, $v(g_F)$, $S(f_{\lambda,F})$, and $S(g_F)$, respectively.

By (3.1)–(3.3), $(p,x)\mapsto\langle p,Sx\rangle$ and $(p,x)\mapsto\langle p,Tx\rangle$ are strictly positive on $P\times X$; the former is l.s.c. while the latter is u.s.c. So we can see that
$$0<\varepsilon_0=\inf_{p\in P,\,x\in X}\langle p,Sx\rangle<+\infty,\qquad 0<\varepsilon_1=\sup_{p\in P,\,x\in X}\langle p,Tx\rangle<+\infty,\tag{3.4}$$
and both $f_{\lambda,F}(p,x)$ and $g_F(p,x)$ are finite for all $\lambda\in R_+$, $(p,x)\in P\times X$, and $F\in\mathbb{B}^m_+$.

We also define the extension $x\mapsto\hat f_{\lambda,F}(p,x)$ of $x\mapsto -f_{\lambda,F}(p,x)$ (for each fixed $p\in P$) and the extension $p\mapsto\tilde f_{\lambda,F}(p,x)$ of $p\mapsto f_{\lambda,F}(p,x)$ (for each fixed $x\in X$) by
$$\hat f_{\lambda,F}(p,x)=\begin{cases}-f_{\lambda,F}(p,x),&x\in X,\\ +\infty,&x\in R^n\setminus X,\end{cases}\qquad \tilde f_{\lambda,F}(p,x)=\begin{cases}f_{\lambda,F}(p,x),&p\in P,\\ +\infty,&p\in R^m\setminus P.\end{cases}\tag{3.5}$$
According to Definition 2.3, the conjugate and biconjugate functions of $x\mapsto\hat f_{\lambda,F}(p,x)$ and $p\mapsto\tilde f_{\lambda,F}(p,x)$ are then denoted by
$$q\mapsto\hat f^*_{\lambda,F}(p,q),\ q\in R^n,\qquad x\mapsto\hat f^{**}_{\lambda,F}(p,x),\ x\in R^n\quad(\text{for each fixed }p\in P),$$
$$r\mapsto\tilde f^*_{\lambda,F}(r,x),\ r\in R^m,\qquad p\mapsto\tilde f^{**}_{\lambda,F}(p,x),\ p\in R^m\quad(\text{for each fixed }x\in X).\tag{3.6}$$
By Definition 2.5, the Hausdorff distance on $\mathbb{B}^m_+$ (see Assumption 3) is given by
$$d_H(F_1,F_2)=\max\Bigl\{\sup_{c_1\in F_1}d(c_1,F_2),\ \sup_{c_2\in F_2}d(c_2,F_1)\Bigr\}\quad\text{for }F_1,F_2\in\mathbb{B}^m_+.\tag{3.7}$$

3.2. Main Theorems to (1.1)

With (3.1)–(3.7), we state the main solvability theorems for (1.1) as follows.

Theorem 3.1. (1) $v(f_{\lambda,F})$ exists and $S(f_{\lambda,F})$ is a nonempty convex compact subset of $P\times X$. Furthermore, $\lambda\mapsto v(f_{\lambda,F})$ is continuous and strictly decreasing on $R_+$ with $v(f_{+\infty,F}):=\lim_{\lambda\to+\infty}v(f_{\lambda,F})=-\infty$.
(2) $v(g_F)$ exists if and only if $S(g_F)\neq\emptyset$. Moreover, if $v(f_{0,F})>0$, then $v(g_F)$ exists and $S(g_F)$ is a nonempty compact subset of $P\times X$.

Theorem 3.2. (1) $\lambda$ is a lower eigenvalue to (1.1) and $x$ its eigenvector if and only if $\inf_{p\in P}f_{\lambda,F}(p,x)\ge0$, if and only if $\inf_{p\in P}g_F(p,x)\ge\lambda$.
(2) $\lambda$ is a lower eigenvalue to (1.1) if and only if one of the following statements is true:
(a) $v(f_{\lambda,F})\ge0$; (b) $f_{\lambda,F}(\hat p,\hat x)\ge0$ for $(\hat p,\hat x)\in S(f_{\lambda,F})$; (c) $v(g_F)$ exists with $v(g_F)\ge\lambda$; (d) $S(g_F)\neq\emptyset$ and $g_F(\hat p,\hat x)\ge\lambda$ for $(\hat p,\hat x)\in S(g_F)$.
(3) The following statements are equivalent:
(a) System (1.1) has at least one lower eigenvalue; (b) $v(f_{0,F})>0$; (c) $v(g_F)$ exists with $v(g_F)>0$; (d) $S(g_F)\neq\emptyset$ and $g_F(\hat p,\hat x)>0$ for $(\hat p,\hat x)\in S(g_F)$.

Theorem 3.3. (1) $\overline\lambda$ exists if and only if one of the following statements is true:
(a) $v(f_{0,F})>0$; (b) $f_{0,F}(\hat p,\hat x)>0$ for $(\hat p,\hat x)\in S(f_{0,F})$; (c) $v(f_{\overline\lambda,F})=0$; (d) $v(g_F)$ exists with $v(g_F)=\overline\lambda$; (e) $S(g_F)\neq\emptyset$ and $g_F(\overline p,\overline x)=\overline\lambda$ for $(\overline p,\overline x)\in S(g_F)$. Here $\overline\lambda=\overline\lambda(F)\ (>0)$ is the maximal lower eigenvalue to (1.1).
(2) If $v(f_{0,F})>0$, or equivalently, if $v(g_F)$ exists with $v(g_F)>0$, then one has the following.
(a) $\overline x$ is an optimal eigenvector if and only if there exists $\overline p\in P$ with $(\overline p,\overline x)\in S(g_F)$, if and only if $\inf_{p\in P}g_F(p,\overline x)=\overline\lambda$. (b) There exist $\hat x\in X$, $\hat c\in F$, and $i_0\in\{1,2,\dots,m\}$ such that $T\hat x\ge\overline\lambda S\hat x+\hat c$ and $T_{i_0}\hat x=\overline\lambda S_{i_0}\hat x+\hat c_{i_0}$. (c) $\overline\lambda=\overline\lambda(F)$ is the maximal lower eigenvalue to (1.1) and $(\overline p,\overline x)\in S(g_F)$ if and only if $\overline\lambda>0$ and $(\overline p,\overline x)\in P\times X$ satisfy $\overline x\in\partial\hat f^*_{\overline\lambda,F}(\overline p,0)$ and $\overline p\in\partial\tilde f^*_{\overline\lambda,F}(0,\overline x)$, where $\partial\hat f^*_{\overline\lambda,F}(\overline p,0)$ and $\partial\tilde f^*_{\overline\lambda,F}(0,\overline x)$ are the subdifferentials of $q\mapsto\hat f^*_{\overline\lambda,F}(\overline p,q)$ at $q=0$ and of $r\mapsto\tilde f^*_{\overline\lambda,F}(r,\overline x)$ at $r=0$, respectively. (d) The set of all lower eigenvalues to (1.1) coincides with the interval $(0,v(g_F)]$.
(3) Let $\mathbb{C}^m_+=\{F\in\mathbb{B}^m_+ : v(f_{0,F})>0\}$, where $\mathbb{B}^m_+$ is defined as in Assumption 3. Then
(a) $\mathbb{C}^m_+\neq\emptyset$, and for each $F\in\mathbb{C}^m_+$, $\overline\lambda=\overline\lambda(F)$ exists with $\overline\lambda(F)=v(g_F)$; (b) for all $F_1,F_2\in\mathbb{C}^m_+$, $|\overline\lambda(F_1)-\overline\lambda(F_2)|\le(\sup_{p\in P}\|p\|/\varepsilon_0)\,d_H(F_1,F_2)$, where $\varepsilon_0$ is defined by (3.4). Thus $F\mapsto\overline\lambda(F)$ is Lipschitz continuous on $\mathbb{C}^m_+$ with respect to the Hausdorff distance $d_H(\cdot,\cdot)$.

Remark 3.4. If we take $P=\Sigma^{m-1}=\{p\in R^m_+ : \sum_{i=1}^m p_i=1\}$, then $\Sigma^{m-1}$ satisfies (3.1)(2); hence Theorems 3.1–3.3 remain true.

3.3. Proofs of the Main Theorems

In order to prove Theorems 3.1–3.3, we need the following eight lemmas.

Lemma 3.5. If $\lambda\in R_+$ is fixed, then one has the following.
(1) $p\mapsto f_{\lambda,F}(p,x)\ (x\in X)$ and $p\mapsto\sup_{x\in X}f_{\lambda,F}(p,x)$ are l.s.c. and convex on $P$.
(2) $x\mapsto f_{\lambda,F}(p,x)\ (p\in P)$ and $x\mapsto\inf_{p\in P}f_{\lambda,F}(p,x)$ are u.s.c. and concave on $X$.
(3) $v(f_{\lambda,F})$ exists and $S(f_{\lambda,F})$ is a nonempty convex compact subset of $P\times X$.

Proof. By (3.1)–(3.3), it is easy to see that
(a) for all $x\in X$ and $c\in F$, $p\mapsto\langle p,Tx-\lambda Sx-c\rangle$ is convex and l.s.c. on $P$;
(b) for all $p\in P$, $(x,c)\mapsto\langle p,Tx-\lambda Sx-c\rangle$ is u.s.c. on $X\times F$. (3.8)
Applying Lemma 2.6(2) (resp., Lemma 2.8) to the function of (3.8)(a) (resp., of (3.8)(b)), and using the facts that $F$ is compact and that any l.s.c. (or u.s.c.) function defined on a compact set attains its minimum (or maximum), we obtain that
for all $x\in X$, $p\mapsto f_{\lambda,F}(p,x)$ is convex and l.s.c. on $P$, and $\inf_{p\in P}f_{\lambda,F}(p,x)$ is finite;
for all $p\in P$, $x\mapsto f_{\lambda,F}(p,x)$ is u.s.c. on $X$, and $\sup_{x\in X}f_{\lambda,F}(p,x)$ is finite. (3.9)
If π‘₯ 𝑖 ∈ 𝑋 ( 𝑖 = 1 , 2 ) , then by (3.2), there exist 𝑐 𝑖 = ( 𝑐 𝑖 1 , 𝑐 𝑖 2 , … , 𝑐 𝑖 π‘š ) ∈ 𝐹 ( 𝑖 = 1 , 2 ) such that 𝑓 πœ† , 𝐹 ( 𝑝 , π‘₯ 𝑖 ) = ⟨ 𝑝 , 𝑇 π‘₯ 𝑖 βˆ’ πœ† 𝑆 π‘₯ 𝑖 βˆ’ 𝑐 𝑖 ⟩ . Since 𝑇 𝑖 , βˆ’ πœ† 𝑆 𝑖 ( 𝑖 = 1 , 2 , … , π‘š ) are concave, 𝑋 , 𝐹 are convex and 𝑝 ∈ 𝑃 is nonnegative, we have for each 𝛼 ∈ [ 0 , 1 ] , 𝑓 πœ† , 𝐹 ξ€· 𝑝 , 𝛼 π‘₯ 1 + ( 1 βˆ’ 𝛼 ) π‘₯ 2 ξ€Έ β‰₯  𝑝 , 𝑇 ξ€Ί 𝛼 π‘₯ 1 + ( 1 βˆ’ 𝛼 ) π‘₯ 2 ξ€» βˆ’ πœ† 𝑆 ξ€Ί 𝛼 π‘₯ 1 + ( 1 βˆ’ 𝛼 ) π‘₯ 2 ξ€» βˆ’ 𝛼 𝑐 1 βˆ’ ( 1 βˆ’ 𝛼 ) 𝑐 2  β‰₯ 𝛼  𝑝 , 𝑇 π‘₯ 1 βˆ’ πœ† 𝑆 x 1 βˆ’ 𝑐 1  + ( 1 βˆ’ 𝛼 )  𝑝 , 𝑇 π‘₯ 2 βˆ’ πœ† 𝑆 π‘₯ 2 βˆ’ 𝑐 2  = 𝛼 𝑓 πœ† , 𝐹 ξ€· 𝑝 , π‘₯ 1 ξ€Έ + ( 1 βˆ’ 𝛼 ) 𝑓 πœ† , 𝐹 ξ€· 𝑝 , π‘₯ 2 ξ€Έ , t h a t i s , π‘₯ ⟼ 𝑓 πœ† , 𝐹 ( 𝑝 , π‘₯ ) i s c o n c a v e o n 𝑋 . ( 3 . 1 0 ) Combining (3.9) with (3.10), and using Lemmas 2.6(2)(3) and 2.9, it follows that both statements (1) and (2) hold, 𝑣 ( 𝑓 πœ† , 𝐹 ) exists and 𝑆 ( 𝑓 πœ† , 𝐹 ) is nonempty. It remains to verify that 𝑆 ( 𝑓 πœ† , 𝐹 ) is convex and closed because 𝑃 Γ— 𝑋 is convex and compact.
If $\alpha\in[0,1]$ and $(p_i,x_i)\in S(f_{\lambda,F})\ (i=1,2)$, then $\sup_{x\in X}f_{\lambda,F}(p_i,x)=\inf_{p\in P}f_{\lambda,F}(p,x_i)$ for $i=1,2$. By (1) and (2) (i.e., $p\mapsto\sup_{x\in X}f_{\lambda,F}(p,x)$ is convex on $P$ and $x\mapsto\inf_{p\in P}f_{\lambda,F}(p,x)$ is concave on $X$), we have
$$\sup_{x\in X}f_{\lambda,F}\bigl(\alpha p_1+(1-\alpha)p_2,x\bigr)\le\alpha\sup_{x\in X}f_{\lambda,F}(p_1,x)+(1-\alpha)\sup_{x\in X}f_{\lambda,F}(p_2,x)=\alpha\inf_{p\in P}f_{\lambda,F}(p,x_1)+(1-\alpha)\inf_{p\in P}f_{\lambda,F}(p,x_2)\le\inf_{p\in P}f_{\lambda,F}\bigl(p,\alpha x_1+(1-\alpha)x_2\bigr).\tag{3.11}$$
This implies by Remark 2.2(2) that $\alpha(p_1,x_1)+(1-\alpha)(p_2,x_2)\in S(f_{\lambda,F})$, and thus $S(f_{\lambda,F})$ is convex.
If ( 𝑝 π‘˜ , π‘₯ π‘˜ ) ∈ 𝑆 ( 𝑓 πœ† , 𝐹 ) with ( 𝑝 π‘˜ , π‘₯ π‘˜ ) β†’ ( 𝑝 0 , π‘₯ 0 ) ∈ 𝑃 Γ— 𝑋 ( π‘˜ β†’ ∞ ) , then s u p π‘₯ ∈ 𝑋 𝑓 πœ† , 𝐹 ( 𝑝 π‘˜ , π‘₯ ) = i n f 𝑝 ∈ 𝑃 𝑓 πœ† , 𝐹 ( 𝑝 , π‘₯ π‘˜ ) for all π‘˜ = 1 , 2 , … . By taking π‘˜ β†’ ∞ , from (1) and (2) (that is, 𝑝 ↦ s u p π‘₯ ∈ 𝑋 𝑓 πœ† , 𝐹 ( 𝑝 , π‘₯ ) is l.s.c. on 𝑃 and π‘₯ ↦ i n f 𝑝 ∈ 𝑃 𝑓 πœ† , 𝐹 ( 𝑝 , π‘₯ ) is u.s.c. on 𝑋 ), we obtain that s u p π‘₯ ∈ X 𝑓 πœ† , 𝐹 ξ€· 𝑝 0 , π‘₯ ξ€Έ ≀ l i m i n f π‘˜ β†’ ∞ s u p π‘₯ ∈ 𝑋 𝑓 πœ† , 𝐹 ξ€· 𝑝 π‘˜ , π‘₯ ξ€Έ ≀ l i m s u p π‘˜ β†’ ∞ i n f 𝑝 ∈ 𝑃 𝑓 πœ† , 𝐹 ξ€· 𝑝 , π‘₯ π‘˜ ξ€Έ ≀ i n f 𝑝 ∈ 𝑃 𝑓 πœ† , 𝐹 ξ€· 𝑝 , π‘₯ 0 ξ€Έ . ( 3 . 1 2 ) Hence by Remark 2.2(2), ( 𝑝 0 , π‘₯ 0 ) ∈ 𝑆 ( 𝑓 πœ† , 𝐹 ) and 𝑆 ( 𝑓 πœ† , 𝐹 ) is closed. Hence the first lemma follows.

Lemma 3.6. πœ† ↦ 𝑣 ( 𝑓 πœ† , 𝐹 ) is continuous and strictly decreasing on 𝑅 + with 𝑣 ( 𝑓 + ∞ , 𝐹 )  = l i m πœ† β†’ + ∞ 𝑣 ( 𝑓 πœ† , 𝐹 ) = βˆ’ ∞ .

Proof. Since ( πœ† , 𝑝 ) ↦ ⟨ 𝑝 , 𝑇 π‘₯ βˆ’ πœ† 𝑆 π‘₯ βˆ’ 𝑐 ⟩ is continuous on 𝑅 + Γ— 𝑃 for each 𝑐 ∈ 𝐹 and π‘₯ ∈ 𝑋 , ( πœ† , π‘₯ , 𝑐 ) ↦ ⟨ 𝑝 , 𝑇 π‘₯ βˆ’ πœ† 𝑆 π‘₯ βˆ’ 𝑐 ⟩ is u.s.c. on 𝑅 + Γ— 𝑋 Γ— 𝐹 for each 𝑝 ∈ 𝑃 , and 𝐹 is compact, by Lemmas 2.6(2) and 2.8, we see that 𝑅 + Γ— 𝑃 β†’ 𝑅 ∢ ( πœ† , 𝑝 ) ⟼ 𝑓 πœ† , 𝐹 ( 𝑝 , π‘₯ ) = s u p 𝑐 ∈ 𝐹 ⟨ 𝑝 , 𝑇 π‘₯ βˆ’ πœ† 𝑆 π‘₯ βˆ’ 𝑐 ⟩ i s l . s . c . , 𝑅 + Γ— 𝑋 β†’ 𝑅 ∢ ( πœ† , π‘₯ ) ⟼ 𝑓 πœ† , 𝐹 ( 𝑝 , π‘₯ ) = s u p 𝑐 ∈ 𝐹 ⟨ 𝑝 , 𝑇 π‘₯ βˆ’ πœ† 𝑆 π‘₯ βˆ’ 𝑐 ⟩ i s u . s . c . . ( 3 . 1 3 ) From Lemma 2.6(2)-(3), it follows that 𝑅 + Γ— 𝑃 ⟼ 𝑅 ∢ ( πœ† , 𝑝 ) ⟼ s u p π‘₯ ∈ 𝑋 𝑓 πœ† , 𝐹 ( 𝑝 , π‘₯ ) i s l . s . c . , 𝑅 + Γ— 𝑋 ⟼ 𝑅 ∢ ( πœ† , π‘₯ ) ⟼ i n f 𝑝 ∈ 𝑃 𝑓 πœ† , 𝐹 ( 𝑝 , π‘₯ ) i s u . s . c . . ( 3 . 1 4 ) First applying Lemma 2.8 to both functions of (3.14), and then using Lemma 3.5(3), we further obtain that 𝑅 + ⟼ 𝑅 ∢ πœ† ⟼ i n f 𝑝 ∈ 𝑃 s u p π‘₯ ∈ 𝑋 𝑓 πœ† , 𝐹 ( 𝑝 , π‘₯ ) i s l . s . c . , 𝑅 + ⟼ 𝑅 ∢ πœ† ⟼ s u p π‘₯ ∈ 𝑋 i n f 𝑝 ∈ 𝑃 𝑓 πœ† , 𝐹 ( 𝑝 , π‘₯ ) i s u . s . c . , ( 3 . 1 5 ) and thus πœ† ↦ 𝑣 ( 𝑓 πœ† , 𝐹 ) is continuous on 𝑅 + .
Suppose that πœ† 2 > πœ† 1 β‰₯ 0 , then by (3.2), 𝑓 πœ† 1 , 𝐹 ( 𝑝 , π‘₯ ) = 𝑓 πœ† 2 , 𝐹 ( 𝑝 , π‘₯ ) + ( πœ† 2 βˆ’ πœ† 1 ) ⟨ 𝑝 , 𝑆 π‘₯ ⟩ for all ( 𝑝 , π‘₯ ) ∈ 𝑃 Γ— 𝑋 . This implies by (3.4) that 𝑣 ( 𝑓 πœ† 1 , 𝐹 ) β‰₯ 𝑣 ( 𝑓 πœ† 2 , 𝐹 ) + ( πœ† 2 βˆ’ πœ† 1 ) πœ€ 0 , where πœ€ 0 = i n f 𝑝 ∈ 𝑃 , π‘₯ ∈ 𝑋 ⟨ 𝑝 , 𝑆 π‘₯ ⟩ ∈ ( 0 , + ∞ ) . Hence πœ† ↦ 𝑣 ( 𝑓 πœ† , 𝐹 ) is strictly decreasing.
By Lemma 3.5(3), Remark 2.2(3), and (3.2), it is easy to see that for each $\lambda\in R_+$ and $(\overline p,\overline x)\in S(f_{\lambda,F})$,
$$v(f_{\lambda,F})=\sup_{c\in F}\langle\overline p,T\overline x-\lambda S\overline x-c\rangle\le\sup_{p\in P,\,x\in X}\langle p,Tx\rangle-\lambda\inf_{p\in P,\,x\in X}\langle p,Sx\rangle=\varepsilon_1-\lambda\varepsilon_0.\tag{3.16}$$
Hence by (3.4), $v(f_{+\infty,F})=-\infty$ and the second lemma is proved.

Lemma 3.7. (1) $\lambda$ is a lower eigenvalue to (1.1) and $x$ its eigenvector if and only if $\inf_{p\in P}f_{\lambda,F}(p,x)\ge0$.
(2) $\lambda$ is a lower eigenvalue to (1.1) if and only if $v(f_{\lambda,F})\ge0$, if and only if $f_{\lambda,F}(\hat p,\hat x)\ge0$ for $(\hat p,\hat x)\in S(f_{\lambda,F})$.

Proof. (1) If πœ† > 0 and ( π‘₯ , 𝑐 ) ∈ 𝑋 Γ— 𝐹 satisfy 𝑇 π‘₯ β‰₯ πœ† 𝑆 π‘₯ + 𝑐 , then for each 𝑝 ∈ 𝑃 ( βŠ† 𝑅 π‘š + ) , 𝑓 πœ† , 𝐹 ( 𝑝 , π‘₯ ) β‰₯ ⟨ 𝑝 , 𝑇 π‘₯ βˆ’ πœ† 𝑆 π‘₯ βˆ’ 𝑐 ⟩ β‰₯ 0 . Hence, i n f 𝑝 ∈ 𝑃 𝑓 πœ† , 𝐹 ( 𝑝 , π‘₯ ) β‰₯ 0 .
If πœ† > 0 and π‘₯ ∈ 𝑋 satisfy i n f 𝑝 ∈ 𝑃 𝑓 πœ† , 𝐹 ( 𝑝 , π‘₯ ) β‰₯ 0 , but no 𝑐 ∈ 𝐹 can be found such that 𝑇 π‘₯ β‰₯ πœ† 𝑆 π‘₯ + 𝑐 , then ( 𝑇 π‘₯ βˆ’ πœ† 𝑆 π‘₯ βˆ’ 𝐹 ) ∩ 𝑅 π‘š + = βˆ… . Since 𝑇 π‘₯ βˆ’ πœ† 𝑆 π‘₯ βˆ’ 𝐹 is convex compact and 𝑅 π‘š + is closed convex, the Hahn-Banach separation theorem implies that there exists 𝑝 βˆ— ∈ 𝑅 π‘š ⧡ { 0 } such that βˆ’ ∞ < s u p 𝑐 ∈ 𝐹 ⟨ 𝑝 βˆ— , 𝑇 π‘₯ βˆ’ πœ† 𝑆 π‘₯ βˆ’ 𝑐 ⟩ < i n f 𝑦 ∈ 𝑅 π‘š + ⟨ 𝑝 βˆ— , 𝑦 ⟩ . Clearly, we have 𝑝 βˆ— ∈ 𝑅 π‘š + ⧡ { 0 } (or else, we obtain i n f 𝑦 ∈ 𝑅 π‘š + ⟨ 𝑝 βˆ— , 𝑦 ⟩ = βˆ’ ∞ , which is impossible), i n f 𝑦 ∈ 𝑅 π‘š + ⟨ 𝑝 βˆ— , 𝑦 ⟩ = 0 and thus s u p 𝑐 ∈ 𝐹 ⟨ 𝑝 βˆ— , 𝑇 π‘₯ βˆ’ πœ† 𝑆 π‘₯ βˆ’ 𝑐 ⟩ < 0 . Since 𝑅 + 𝑃 = 𝑅 π‘š + , there exist 𝑑 > 0 and Μ‚ 𝑝 ∈ 𝑃 with Μ‚ 𝑝 = 𝑑 𝑝 βˆ— . It follows that i n f 𝑝 ∈ 𝑃 𝑓 πœ† , 𝐹 ( 𝑝 , π‘₯ ) ≀ 𝑓 πœ† , 𝐹 ( Μ‚ 𝑝 , π‘₯ ) = 𝑑 s u p 𝑐 ∈ 𝐹 ⟨ 𝑝 βˆ— , 𝑇 π‘₯ βˆ’ πœ† 𝑆 π‘₯ βˆ’ 𝑐 ⟩ < 0 . This is a contradiction. So we can select 𝑐 ∈ 𝐹 such that 𝑇 π‘₯ β‰₯ πœ† 𝑆 π‘₯ + 𝑐 .
(2) If πœ† > 0 is a lower eigenvalue to (1.1), then there exists an eigenvector π‘₯ πœ† ∈ 𝑋 , which gives, by statement (1) and Lemma 3.5(3), 𝑣 ( 𝑓 πœ† , 𝐹 ) β‰₯ i n f 𝑝 ∈ 𝑃 𝑓 πœ† , 𝐹 ( 𝑝 , π‘₯ πœ† ) β‰₯ 0 . If 𝑣 ( 𝑓 πœ† , 𝐹 ) β‰₯ 0 , then Remark 2.2(3) and Lemma 3.5(3) imply that 𝑓 πœ† , 𝐹 ( Μ‚ 𝑝 , Μ‚ π‘₯ ) = 𝑣 ( 𝑓 πœ† , 𝐹 ) β‰₯ 0 for all ( Μ‚ 𝑝 , Μ‚ π‘₯ ) ∈ 𝑆 ( 𝑓 πœ† , 𝐹 ) . If ( Μ‚ 𝑝 , Μ‚ π‘₯ ) ∈ 𝑆 ( 𝑓 πœ† , 𝐹 ) with 𝑓 πœ† , 𝐹 ( Μ‚ 𝑝 , Μ‚ π‘₯ ) β‰₯ 0 , then i n f 𝑝 ∈ 𝑃 𝑓 πœ† , 𝐹 ( 𝑝 , Μ‚ π‘₯ ) = 𝑓 πœ† , 𝐹 ( Μ‚ 𝑝 , Μ‚ π‘₯ ) β‰₯ 0 , which gives, by statement (1), that πœ† is a lower eigenvalue to (1.1) and Μ‚ π‘₯ its eigenvector. This completes the proof.

Lemma 3.8. (1) The following statements are equivalent.
(a) System (1.1) has at least one lower eigenvalue.
(b) $v(f_{0,F})>0$.
(c) $f_{0,F}(\hat p,\hat x)>0$ for $(\hat p,\hat x)\in S(f_{0,F})$.
(d) There is a unique $\hat\lambda>0$ with $v(f_{\hat\lambda,F})=0$.
(e) The maximal lower eigenvalue $\overline\lambda=\overline\lambda(F)$ to (1.1) exists.
In particular, $\hat\lambda=\overline\lambda$ if either $v(f_{0,F})>0$ holds or one of $\hat\lambda$ and $\overline\lambda$ exists.
(2) If $v(f_{0,F})>0$, then the set of all lower eigenvalues to (1.1) equals $(0,\overline\lambda]$.

Proof. (1) If $\lambda_0\ (>0)$ is a lower eigenvalue to (1.1), then by Lemmas 3.6 and 3.7(2), $v(f_{0,F})>v(f_{\lambda_0,F})\ge0$. In view of Lemma 3.5(3) and Remark 2.2, we also see that $v(f_{0,F})>0$ if and only if $f_{0,F}(\hat p,\hat x)>0$ for any $(\hat p,\hat x)\in S(f_{0,F})$. If $v(f_{0,F})>0$, then also by Lemmas 3.6 and 3.7(2), there exists a unique $\hat\lambda>0$ such that $v(f_{\hat\lambda,F})=0$, and $\hat\lambda$ is precisely the maximal lower eigenvalue $\overline\lambda$. If the maximal lower eigenvalue $\overline\lambda$ to (1.1) exists, then $\overline\lambda$ is in particular a lower eigenvalue to (1.1). Hence statement (1) follows.
(2) Statement (2) follows from (1): if $\overline x$ is an eigenvector for $\overline\lambda$, so $T\overline x\ge\overline\lambda S\overline x+c$ for some $c\in F$, then for each $\mu\in(0,\overline\lambda]$ we have $T\overline x-\mu S\overline x=T\overline x-\overline\lambda S\overline x+(\overline\lambda-\mu)S\overline x\ge c$ because $S\overline x\ge0$; hence every such $\mu$ is a lower eigenvalue, while no $\mu>\overline\lambda$ is, by maximality. Thus the lemma follows.
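Lemma 3.8(1)(d) also suggests a computation: $\lambda\mapsto v(f_{\lambda,F})$ is strictly decreasing and continuous, positive at $0$ and tending to $-\infty$, so its unique zero $\hat\lambda=\overline\lambda$ can be bracketed and bisected. A sketch on the toy data used above (illustrative assumptions; the computable lower value $\sup_x\inf_p f_{\lambda,F}$ stands in for $v(f_{\lambda,F})$):

```python
import numpy as np

# Illustrative data (assumptions, not from the paper); P = simplex, F = {c}.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
B = np.array([[1.0, 0.5], [0.5, 1.0]])
c = np.array([0.2, 0.1])
X = [np.array([s, 1.0 - s]) for s in np.linspace(0.0, 1.0, 101)]

def v(lam):
    # sup_x inf_p f_{lam,F}(p,x): strictly decreasing and continuous in lam.
    return max(np.min(A @ x - lam * B @ x - c) for x in X)

# Bracket the unique zero: v(0) > 0 (Lemma 3.8(1)(b)) and v -> -inf (Lemma 3.6).
lo, hi = 0.0, 1.0
while v(hi) > 0:
    hi *= 2.0
for _ in range(60):  # bisection invariant: v(lo) >= 0 > v(hi)
    mid = 0.5 * (lo + hi)
    if v(mid) >= 0:
        lo = mid
    else:
        hi = mid
lam_hat = 0.5 * (lo + hi)
print(round(lam_hat, 4))  # approximate maximal lower eigenvalue on this grid
```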

Lemma 3.9. If $F\in\mathbb B^m_+$, then one has the following.
(1) $p\mapsto g_F(p,x)$ ($x\in X$) and $p\mapsto\sup_{x\in X}g_F(p,x)$ are continuous on $P$.
(2) $x\mapsto g_F(p,x)$ ($p\in P$) and $x\mapsto\inf_{p\in P}g_F(p,x)$ are u.s.c. on $X$.
(3) $v(g_F)$ exists if and only if $S(g_F)\ne\emptyset$.

Proof. (1) Since for each $x\in X$ and $c\in F$ the map $p\mapsto\langle p,Tx-c\rangle/\langle p,Sx\rangle$ is continuous on $P$, by (3.3) and Lemma 2.6(2) we see that $p\mapsto g_F(p,x)=\sup_{c\in F}(\langle p,Tx-c\rangle/\langle p,Sx\rangle)$ ($x\in X$) and $p\mapsto\sup_{x\in X}g_F(p,x)$ are l.s.c. on $P$. On the other hand, by Assumptions 1-3, we can verify that $(p,x,c)\mapsto\langle p,Tx-c\rangle/\langle p,Sx\rangle$ is u.s.c. on $P\times X\times F$. It follows from Lemma 2.8 that both $(p,x)\mapsto g_F(p,x)=\sup_{c\in F}(\langle p,Tx-c\rangle/\langle p,Sx\rangle)$ on $P\times X$ and $p\mapsto\sup_{x\in X}g_F(p,x)$ on $P$ are u.s.c., and so is $p\mapsto g_F(p,x)$. Hence (1) is true.
(2) As proved above, for each $p\in P$ the map $x\mapsto g_F(p,x)$ is u.s.c. on $X$, and so is $x\mapsto\inf_{p\in P}g_F(p,x)$ by Lemma 2.6(3).
(3) By Remark 2.2(3), we only need to prove the necessity part. Assume $v(g_F)$ exists, that is, $\inf_{p\in P}\sup_{x\in X}g_F(p,x)=\sup_{x\in X}\inf_{p\in P}g_F(p,x)$. Then (1) and (2) imply that there exist $\overline p\in P$ and $\overline x\in X$ with $\sup_{x\in X}g_F(\overline p,x)=\inf_{p\in P}g_F(p,\overline x)$, which means that $(\overline p,\overline x)\in S(g_F)$, so $S(g_F)$ is nonempty. Hence the lemma is true.

Lemma 3.10. (1) $\lambda$ is a lower eigenvalue to (1.1) with eigenvector $x$ if and only if $\inf_{p\in P}g_F(p,x)\ge\lambda$.
(2) $\lambda$ is a lower eigenvalue to (1.1) if and only if $\sup_{x\in X}\inf_{p\in P}g_F(p,x)\ge\lambda$.

Proof. (1) Suppose $\lambda>0$ and $x\in X$. For each $p\in P$, the inequality $g_F(p,x)=\sup_{c\in F}(\langle p,Tx-c\rangle/\langle p,Sx\rangle)\ge\lambda$ is equivalent to $f_{\lambda,F}(p,x)=\sup_{c\in F}\langle p,Tx-\lambda Sx-c\rangle\ge0$, which implies that $\inf_{p\in P}g_F(p,x)\ge\lambda$ if and only if $\inf_{p\in P}f_{\lambda,F}(p,x)\ge0$. Combining this with Lemma 3.7(1), we see that (1) is true.
(2) By (1), it is enough to prove sufficiency. If $\sup_{x\in X}\inf_{p\in P}g_F(p,x)\ge\lambda\ (>0)$, then Lemma 3.9(2) shows that there exists $x_\lambda\in X$ with $\inf_{p\in P}g_F(p,x_\lambda)=\sup_{x\in X}\inf_{p\in P}g_F(p,x)\ge\lambda$. Hence $\lambda$ is a lower eigenvalue to (1.1) with eigenvector $x_\lambda$. This completes the proof.
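In the toy setting, $\inf_{p\in P}g_F(p,x)$ also has a closed form: a linear-fractional map $p\mapsto\langle p,u\rangle/\langle p,w\rangle$ with $w>0$ is quasilinear, so it attains its extrema over the simplex at vertices, giving $\inf_{p\in P}g_F(p,x)=\min_i(Tx-c)_i/(Sx)_i$. Lemma 3.10 then says: $\lambda$ is a lower eigenvalue precisely when some $x$ makes every componentwise ratio at least $\lambda$. (All data below are illustrative assumptions.)

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])
B = np.array([[1.0, 0.5], [0.5, 1.0]])
c = np.array([0.2, 0.1])
X = [np.array([s, 1.0 - s]) for s in np.linspace(0.0, 1.0, 101)]

def inf_p_g(x):
    # inf over the simplex of <p, Ax - c>/<p, Bx>: attained at a vertex,
    # i.e. the smallest componentwise ratio (every component of Bx is > 0).
    return np.min((A @ x - c) / (B @ x))

sup_inf = max(inf_p_g(x) for x in X)   # sup_x inf_p g_F(p, x)
x_star = max(X, key=inf_p_g)           # a maximizing x on the grid

# Lemma 3.10: every lam <= sup_inf is a lower eigenvalue, with x_star as
# eigenvector, i.e. A x_star - lam * B x_star >= c componentwise.
for lam in (0.5, 1.0, sup_inf):
    assert np.all(A @ x_star - lam * B @ x_star >= c - 1e-12)
print(round(sup_inf, 4))
```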

Lemma 3.11. (1) $v(f_{0,F})>0$ if and only if $v(g_F)$ exists with $v(g_F)=\overline\lambda$, if and only if $S(g_F)\ne\emptyset$ and $g_F(\overline p,\overline x)=\overline\lambda$ for $(\overline p,\overline x)\in S(g_F)$, where $\overline\lambda=\overline\lambda(F)>0$ is the maximal lower eigenvalue to (1.1).
(2) $\lambda$ is a lower eigenvalue to (1.1) if and only if $v(g_F)$ exists with $v(g_F)\ge\lambda$, if and only if $S(g_F)\ne\emptyset$ and $g_F(\hat p,\hat x)\ge\lambda$ for $(\hat p,\hat x)\in S(g_F)$.
(3) System (1.1) has at least one lower eigenvalue if and only if $v(g_F)$ exists with $v(g_F)>0$, if and only if $S(g_F)\ne\emptyset$ and $g_F(\hat p,\hat x)>0$ for $(\hat p,\hat x)\in S(g_F)$.

Proof. (1) We divide the proof of (1) into three steps.
Step 1. If $v(f_{0,F})>0$, then by Lemma 3.8(1) the maximal lower eigenvalue $\overline\lambda\ (>0)$ to (1.1) exists with $v(f_{\overline\lambda,F})=0$. We will prove that $v(g_F)$ exists with $v(g_F)=\overline\lambda$. Let $\lambda_*=\sup_{x\in X}\inf_{p\in P}g_F(p,x)$ and $\lambda^*=\inf_{p\in P}\sup_{x\in X}g_F(p,x)$; then $\lambda_*\le\lambda^*$, and it remains to show that $\lambda^*\le\overline\lambda\le\lambda_*$.
By Lemma 3.5(2), there exists $\overline x\in X$ such that $\inf_{p\in P}f_{\overline\lambda,F}(p,\overline x)=v(f_{\overline\lambda,F})=0$. This shows that $\sup_{c\in F}\langle p,T\overline x-\overline\lambda S\overline x-c\rangle=f_{\overline\lambda,F}(p,\overline x)\ge0$ for every $p\in P$, that is, $\overline\lambda\le\sup_{c\in F}(\langle p,T\overline x-c\rangle/\langle p,S\overline x\rangle)=g_F(p,\overline x)$ ($p\in P$). Hence $\overline\lambda\le\inf_{p\in P}g_F(p,\overline x)\le\lambda_*$. On the other hand, since $\lambda^*\le\sup_{x\in X}g_F(p,x)$ for each $p\in P$, by Lemma 3.9(2) there exists $x_p\in X$ such that $\lambda^*\le\sup_{x\in X}g_F(p,x)=g_F(p,x_p)=\sup_{c\in F}(\langle p,Tx_p-c\rangle/\langle p,Sx_p\rangle)$. It follows that $\sup_{x\in X}f_{\lambda^*,F}(p,x)\ge f_{\lambda^*,F}(p,x_p)=\sup_{c\in F}\langle p,Tx_p-\lambda^*Sx_p-c\rangle\ge0$ for every $p\in P$. Hence, by Lemma 3.5(3), $v(f_{\lambda^*,F})=\inf_{p\in P}\sup_{x\in X}f_{\lambda^*,F}(p,x)\ge0$. From Lemma 3.7(2), this implies that $\lambda^*$ is a lower eigenvalue to (1.1), and thus $\lambda^*\le\overline\lambda$. Therefore $v(g_F)$ exists with $v(g_F)=\overline\lambda$.
Step 2. If $v(g_F)$ exists with $v(g_F)=\overline\lambda\ (>0)$, then Lemma 3.9(3) and Remark 2.2(3) show that $S(g_F)\ne\emptyset$ and $g_F(\overline p,\overline x)=v(g_F)=\overline\lambda>0$ for $(\overline p,\overline x)\in S(g_F)$.
Step 3. If $S(g_F)\ne\emptyset$ and $(\overline p,\overline x)\in S(g_F)$ with $g_F(\overline p,\overline x)=\overline\lambda\ (>0)$, then $\inf_{p\in P}g_F(p,\overline x)=\overline\lambda\ (>0)$. By Lemmas 3.10(1) and 3.8(1), this implies that $\overline\lambda$ is a lower eigenvalue to (1.1), and thus $v(f_{0,F})>0$.
(2) If πœ† > 0 is a lower eigenvalue to (1.1), then Lemmas 3.8(1), 3.10(2) and statement (1) imply that 𝑣 ( 𝑓 0 , 𝐹 ) > 0 , 𝑣 ( 𝑔 𝐹 ) exists and 𝑣 ( 𝑔 𝐹 ) = s u p π‘₯ ∈ 𝑋 i n f 𝑝 ∈ 𝑃 𝑔 𝐹 ( 𝑝 , π‘₯ ) β‰₯ πœ† . If 𝑣 ( 𝑔 𝐹 ) exists with 𝑣 ( 𝑔 𝐹 ) β‰₯ πœ† , then from Lemma 3.9(3) and Remark 2.2(3), it follows that 𝑆 ( 𝑔 𝐹 ) β‰  βˆ… and 𝑔 𝐹 ( Μ‚ 𝑝 , Μ‚ π‘₯ ) = 𝑣 ( 𝑔 𝐹 ) β‰₯ πœ† for ( Μ‚ 𝑝 , Μ‚ π‘₯ ) ∈ 𝑆 ( 𝑔 𝐹 ) . If 𝑆 ( 𝑔 𝐹 ) β‰  βˆ… and 𝑔 𝐹 ( Μ‚ 𝑝 , Μ‚ π‘₯ ) β‰₯ πœ† for ( Μ‚ 𝑝 , Μ‚ π‘₯ ) ∈ 𝑆 ( 𝑔 𝐹 ) , then by Remark 2.2(3) and Lemma 3.10(1), we see that i n f 𝑝 ∈ 𝑃 𝑔 𝐹 ( 𝑝 , Μ‚ π‘₯ ) = 𝑔 𝐹 ( Μ‚ 𝑝 , Μ‚ π‘₯ ) β‰₯ πœ† , and thus πœ† is a lower eigenvalue to (1.1) and Μ‚ π‘₯ its eigenvector.
(3) Statement (3) follows immediately from (1) and (2). This completes the proof.
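Lemma 3.11(1) identifies the minimax value of the ratio function $g_F$ with the maximal lower eigenvalue. In the toy instance this can be cross-checked numerically: the zero of $\lambda\mapsto\sup_x\inf_p f_{\lambda,F}$ (the bisection route of Lemma 3.8) and $\sup_x\inf_p g_F$ are computed by two different routes and agree. (The data are illustrative assumptions; both infima over the simplex reduce to componentwise minima, which makes the computation exact on the grid.)

```python
import numpy as np

A = np.array([[3.0, 1.0], [1.0, 2.0]])
B = np.array([[1.0, 0.5], [0.5, 1.0]])
c = np.array([0.2, 0.1])
X = [np.array([s, 1.0 - s]) for s in np.linspace(0.0, 1.0, 101)]

# Route 1 (Lemma 3.8): unique zero of lam -> sup_x min_i (Ax - lam*Bx - c)_i.
def v(lam):
    return max(np.min(A @ x - lam * B @ x - c) for x in X)

lo, hi = 0.0, 4.0  # on this data v(0) > 0 and v(4) < 0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if v(mid) >= 0 else (lo, mid)
lam_hat = 0.5 * (lo + hi)

# Route 2 (Lemma 3.11): sup_x inf_p g_F = sup_x min_i (Ax - c)_i / (Bx)_i.
lam_bar = max(np.min((A @ x - c) / (B @ x)) for x in X)

print(abs(lam_hat - lam_bar) < 1e-8)  # True: the two routes agree
```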

Lemma 3.12. (1) If $v(f_{0,F})>0$, or equivalently, if $v(g_F)$ exists with $v(g_F)>0$, then $S(g_F)$ is a nonempty compact subset of $P\times X$.
(2) The first three statements of Theorem 3.3(2) are true.
(3) Theorem 3.3(3) is true.

Proof. (1) By Lemma 3.11(1), $S(g_F)$ is nonempty. Furthermore, by the same procedure as in the last part of the proof of Lemma 3.5, using Lemma 3.9(1)-(2), we can show that if $(p_k,x_k)\in S(g_F)$ and $(p_k,x_k)\to(p_0,x_0)\in P\times X$ as $k\to\infty$, then
$$\sup_{x\in X}g_F(p_0,x)=\liminf_{k\to\infty}\sup_{x\in X}g_F(p_k,x)\le\limsup_{k\to\infty}\inf_{p\in P}g_F(p,x_k)\le\inf_{p\in P}g_F(p,x_0). \quad (3.17)$$
Hence $S(g_F)$ is closed and, as a closed subset of the compact set $P\times X$, also compact.
(2) Now we prove the first three statements of Theorem 3.3(2).
By the condition of Theorem 3.3(2) and Lemmas 3.8(1) and 3.11(1), we know that the maximal lower eigenvalue $\overline\lambda$ to (1.1) and $v(g_F)$ both exist, with $v(g_F)=\overline\lambda$.
First we prove statement (a). If $\overline x\in X$ is an optimal eigenvector, then by Lemma 3.10(1) we have $\inf_{p\in P}g_F(p,\overline x)\ge\overline\lambda$. On the other hand, by Lemma 3.9(1), there exists $\overline p\in P$ such that $v(g_F)=\sup_{x\in X}g_F(\overline p,x)$. So we obtain $\sup_{x\in X}g_F(\overline p,x)=\overline\lambda\le\inf_{p\in P}g_F(p,\overline x)$, and thus $(\overline p,\overline x)\in S(g_F)$. If $\overline p\in P$ is such that $(\overline p,\overline x)\in S(g_F)$, then Remark 2.2(3) implies that $\inf_{p\in P}g_F(p,\overline x)=v(g_F)=\overline\lambda$. If $\inf_{p\in P}g_F(p,\overline x)=\overline\lambda$, then Lemma 3.10(1) shows that $\overline x$ is an optimal eigenvector. Hence Theorem 3.3(2)(a) follows.
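The saddle structure in statement (a), where an optimal eigenvector pairs with a minimizing price vector to form a saddle point of $g_F$, can also be checked on the toy data (illustrative assumptions; both one-dimensional optimizations below exploit that linear-fractional maps are monotone along segments and attain extrema at simplex vertices):

```python
import numpy as np

# Same illustrative toy data (assumptions, not from the paper).
A = np.array([[3.0, 1.0], [1.0, 2.0]])
B = np.array([[1.0, 0.5], [0.5, 1.0]])
c = np.array([0.2, 0.1])
S_grid = np.linspace(0.0, 1.0, 101)    # x = (s, 1-s) in X
T_grid = np.linspace(0.0, 1.0, 2001)   # p = (t, 1-t) in P

def g(t, s):
    p = np.array([t, 1.0 - t])
    x = np.array([s, 1.0 - s])
    return (p @ (A @ x - c)) / (p @ (B @ x))

# Lower value sup_x inf_p g: the inf over p of a linear-fractional map is
# attained at a simplex vertex (t = 0 or t = 1).
lower = max(min(g(0.0, s), g(1.0, s)) for s in S_grid)

# Upper value inf_p sup_x g: for fixed p, g is a Moebius map of s, hence
# monotone, so the sup over the segment sits at an endpoint (s = 0 or 1).
upper = min(max(g(t, 0.0), g(t, 1.0)) for t in T_grid)

print(lower <= upper, upper - lower)  # values match to grid resolution
```

The near-equality of the two values is the grid counterpart of $v(g_F)=\overline\lambda$; the maximizing $s$ and minimizing $t$ give an approximate saddle point $(\overline p,\overline x)$.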
Next we prove statement (b). By Lemmas 3.5(2) and 3.8(1), there exists $\hat x\in X$ with
$$0=v(f_{\overline\lambda,F})=\sup_{x\in X}\inf_{p\in P}f_{\overline\lambda,F}(p,x)=\inf_{p\in P}f_{\overline\lambda,F}(p,\hat x)=\inf_{p\in P}\sup_{c\in F}\langle p,T\hat x-\overline\lambda S\hat x-c\rangle. \quad (3.18)$$
By applying Lemma 2.9 to $\varphi(p,c)=\langle p,T\hat x-\overline\lambda S\hat x-c\rangle$ on $P\times F$, this leads to
$$\sup_{c\in F}\inf_{p\in P}\langle p,T\hat x-\overline\lambda S\hat x-c\rangle=\inf_{p\in P}\sup_{c\in F}\langle p,T\hat x-\overline\lambda S\hat x-c\rangle=0. \quad (3.19)$$
Since $c\mapsto\inf_{p\in P}\langle p,T\hat x-\overline\lambda S\hat x-c\rangle$ is u.s.c. on $F$ and $p\mapsto\langle p,T\hat x-\overline\lambda S\hat x-c\rangle$ is continuous on $P$, from (3.19), first there exists $\hat c=(\hat c_1,\hat c_2,\ldots,\hat c_m)\in F$ and then there exists $\hat p=(\hat p_1,\hat p_2,\ldots,\hat p_m)\in P$ such that $0=\sup_{c\in F}\inf_{p\in P}\langle p,T\hat x-\overline\lambda S\hat x-c\rangle=\inf_{p\in P}$ …