Abstract

We study the existence of oscillatory periodic solutions for two nonautonomous differential-difference equations which arise in a variety of applications and have the following forms: $\dot{x}(t) = -f(t, x(t-r))$ and $\dot{x}(t) = -f(t, x(t-s)) - f(t, x(t-2s))$, where $f \in C(\mathbb{R} \times \mathbb{R}, \mathbb{R})$ is odd with respect to $x$, and $r, s > 0$ are two given constants. By using a symplectic transformation constructed by Cheng (2010) and a result in Hamiltonian systems, the existence of oscillatory periodic solutions of the above-mentioned equations is established.

1. Introduction and Statement of Main Results

Furumochi [1] studied the following equation: $\dot{x}(t) = -a \sin x(t-r)$, (1.1) with $t \ge 0$, $a \neq 0$, $r > 0$, which models the phase-locked loop control of high-frequency generators and is widely applied in communication systems. Obviously, (1.1) is a special case of the following differential-difference equation: $\dot{x}(t) = -\alpha f(x(t-r))$, (1.2) where $\alpha$ is a real parameter. In fact, a lot of differential-difference equations occurring widely in applications and describing many interesting types of phenomena can also be written in the form of (1.2) by making an appropriate change of variables. For example, the differential-difference equation $\dot{x}(t) = -\alpha x(t-1)\bigl(1 + x(t)\bigr)$ (1.3) arises in several applications and has been studied by many researchers. Equation (1.3) was first considered by Cunningham [2] as a nonlinear growth model giving a mathematical description of a fluctuating population. Subsequently, (1.3) was proposed by Wright [3] as occurring in the application of probability methods to the theory of asymptotic prime number density. Jones [4] states that (1.3) may also describe the operation of a control system working with potentially explosive chemical reactions, and quite similar equations arise in economic studies of business cycles. Moreover, (1.3) and related equations were studied in [5] in the context of ecology.

For (1.3), we make the change of variables $y = \ln(1 + x)$. (1.4) Then, (1.3) is transformed into the form of (1.2): $\dot{y}(t) = -f(y(t-1))$, (1.5) where $f(y) = \alpha(e^{y} - 1)$.
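
For completeness, the substitution (1.4) can be verified directly (a routine check that uses nothing beyond (1.3) and (1.4)):
\[
\dot{y}(t) = \frac{\dot{x}(t)}{1 + x(t)} = \frac{-\alpha x(t-1)\,\bigl(1 + x(t)\bigr)}{1 + x(t)} = -\alpha\bigl(e^{y(t-1)} - 1\bigr) = -f\bigl(y(t-1)\bigr),
\]
so every solution of (1.3) with $x > -1$ corresponds to a solution of (1.5) and vice versa.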

Although (1.2) looks very simple on the surface, Saupe's careful numerical study [6] shows that (1.2) displays very complex dynamical behaviour. Moreover, to the best of the author's knowledge, little of this behaviour has been proved rigorously.

Due to its variety of applications, (1.2) has attracted many authors. In the 1970s and 1980s, there was a great deal of research on the existence of periodic solutions [1, 4, 7–10], slowly oscillating solutions [11], stability of solutions [12–14], homoclinic solutions [15], and bifurcations of solutions [6, 16, 17] of (1.2).

Since the main tool generally used to establish the existence of periodic solutions is one of the various fixed-point theorems, here we want to mention Kaplan and Yorke's work [7] on the existence of oscillatory periodic solutions of (1.5). In [7], they considered the equations $\dot{x}(t) = -f(x(t-1))$ and $\dot{x}(t) = -f(x(t-1)) - f(x(t-2))$, (1.6) where $f$ is continuous, $xf(x) > 0$ for $x \neq 0$, and $f$ satisfies some asymptotically linear conditions at $0$ and $\infty$. The authors introduced a new technique for establishing the existence of oscillatory periodic solutions of (1.6): they reduced the search for periodic solutions of (1.6) to the problem of finding periodic solutions of a related system of ordinary differential equations. We will give more details about this reduction method in Section 2.

In the 1990s and at the beginning of this century, some authors [18–21] applied Kaplan and Yorke's original ideas in [7] to study the existence and multiplicity of periodic solutions of (1.2) with more than two delays. See also [22, 23] for some other methods.

The previous work mainly focuses on the autonomous differential-difference equation (1.2). However, some papers [13, 24] contain interesting nonautonomous differential-difference equations arising in economics and population biology, where the delay $r$ of (1.2) depends on the time $t$ instead of being a positive constant. Motivated by the lack of results on periodic solutions of nonautonomous differential-difference equations, in the present paper we study the following equations: $\dot{x}(t) = -f(t, x(t-r))$, (1.7) $\dot{x}(t) = -f(t, x(t-s)) - f(t, x(t-2s))$, (1.8) where $f(t, x) \in C(\mathbb{R} \times \mathbb{R}, \mathbb{R})$ is odd with respect to $x$ and $r = \pi/2$, $s = \pi/3$. Here, we borrow the terminology “oscillatory periodic solution” for (1.7) and (1.8) since $f(t, x)$ is odd with respect to $x$.

Now, we state our main results as follows.

Theorem 1.1. Suppose that $f(t, x) \in C(\mathbb{R} \times \mathbb{R}, \mathbb{R})$ is odd with respect to $x$ and $r$-periodic with respect to $t$. Suppose that the limits $\lim_{x \to 0} \frac{f(t, x)}{x} = \omega_0(t)$, $\lim_{|x| \to \infty} \frac{f(t, x)}{x} = \omega_\infty(t)$ (1.9) exist. Write $\alpha_0 = (1/r)\int_0^r \omega_0(t)\,dt$ and $\alpha_\infty = (1/r)\int_0^r \omega_\infty(t)\,dt$. Assume that
(H1) $\alpha_0 \neq \pm k$ and $\alpha_\infty \neq \pm k$, for all $k \in \mathbb{N}^+$;
(H2) there exists at least one integer $k_0 \in \mathbb{N}^+$ such that $\min\{\alpha_0, \alpha_\infty\} < \pm k_0 < \max\{\alpha_0, \alpha_\infty\}$. (1.10)
Then (1.7) has at least one nontrivial oscillatory periodic solution $x$ satisfying $x(t) = -x(t - \pi)$.
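
To illustrate that the hypotheses are not vacuous, consider the following hypothetical nonlinearity (constructed here purely for illustration and not taken from the references): with $\alpha_0 = 1/2$, $\alpha_\infty = 5/2$, and any constant $\varepsilon \in (-1, 1)$, let
\[
f(t, x) = (1 + \varepsilon \cos 4t)\left(\alpha_\infty + \frac{\alpha_0 - \alpha_\infty}{1 + x^2}\right)x .
\]
This $f$ is continuous, odd in $x$, and $\pi/2$-periodic in $t$; moreover, $f(t,x)/x \to (1+\varepsilon\cos 4t)\,\alpha_0$ as $x \to 0$ and $f(t,x)/x \to (1+\varepsilon\cos 4t)\,\alpha_\infty$ as $|x| \to \infty$, and since $\int_0^{\pi/2} \cos 4t\, dt = 0$, the averages in Theorem 1.1 are exactly $\alpha_0 = 1/2$ and $\alpha_\infty = 5/2$. Hence (H1) holds, and the integers $k_0 = 1, 2$ lie strictly between $\min\{\alpha_0, \alpha_\infty\}$ and $\max\{\alpha_0, \alpha_\infty\}$, as required in (H2).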

Theorem 1.2. Suppose that $f(t, x) \in C(\mathbb{R} \times \mathbb{R}, \mathbb{R})$ is odd with respect to $x$ and $s$-periodic with respect to $t$. Let $\omega_0(t)$ and $\omega_\infty(t)$ be the two functions defined in Theorem 1.1. Write $\beta_0 = (1/s)\int_0^s \omega_0(t)\,dt$ and $\beta_\infty = (1/s)\int_0^s \omega_\infty(t)\,dt$. Assume that
(H3) $\beta_0, 3\beta_0 \neq \pm k$ and $\beta_\infty, 3\beta_\infty \neq \pm k$, for all $k \in \mathbb{N}^+$;
(H4) there exists at least one integer $k_0 \in \mathbb{N}^+$ such that $\min\{\beta_0, \beta_\infty\} < \pm k_0 < \max\{\beta_0, \beta_\infty\}$ (1.11) or $\min\{\beta_0, \beta_\infty\} < \pm \tfrac{k_0}{3} < \max\{\beta_0, \beta_\infty\}$. (1.12)
Then (1.8) has at least one nontrivial oscillatory periodic solution $x$ satisfying $x(t) = -x(t - \pi)$.

Remark 1.3. Theorems 1.1 and 1.2 concern the existence of periodic solutions for the nonautonomous differential-difference equations (1.7) and (1.8); therefore, our results generalize some of the results obtained in the references cited above. We will use a symplectic transformation constructed in [25] and a theorem from [26] to prove our main results.

2. Proof of the Main Results

Consider the following nonautonomous Hamiltonian system: $\dot{z}(t) = J \nabla_z H(t, z)$, (2.1) where $J = \begin{pmatrix} 0 & -I_N \\ I_N & 0 \end{pmatrix}$ is the standard symplectic matrix, $I_N$ is the identity matrix on $\mathbb{R}^N$, $\nabla_z H(t, z)$ denotes the gradient of $H(t, z)$ with respect to $z$, and $H \in C^1(\mathbb{R} \times \mathbb{R}^{2N}, \mathbb{R})$ is the Hamiltonian function. Suppose that there exist two constant symmetric matrices $B_0$ and $B_\infty$ such that $\nabla_z H(t, z) - B_0 z = o(|z|)$ as $|z| \to 0$ and $\nabla_z H(t, z) - B_\infty z = o(|z|)$ as $|z| \to \infty$. (2.2) Because of (2.2), we call the Hamiltonian system (2.1) asymptotically linear both at $0$ and at $\infty$ with constant coefficients $B_0$ and $B_\infty$.

Now, we show that the reduction method in [7] can be used to study oscillatory periodic solutions of (1.7) and (1.8). More precisely, let $x(t)$ be any solution of (1.7) satisfying $x(t) = -x(t - 2r)$. Let $x_1(t) = x(t)$, $x_2(t) = x(t - r)$; then $X(t) = (x_1(t), x_2(t))$ satisfies $\frac{d}{dt} X(t) = A_2 \Phi_1(t, X(t))$, where $A_2 = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$, (2.3) and $\Phi_1(t, X) = (f(t, x_1), f(t, x_2))$. What is more, if $X(t)$ is a solution of (2.3) with the symmetric structure $x_1(t) = -x_2(t - r)$, $x_2(t) = x_1(t - r)$, (2.4) then $x(t) = x_1(t)$ gives a solution of (1.7) with the property $x(t) = -x(t - 2r)$. Thus, solving (1.7) within the class of solutions with the symmetry $x(t) = -x(t - 2r)$ is equivalent to finding solutions of (2.3) with the symmetric structure (2.4).
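
As an added reasoning step, both directions of this equivalence are elementary. If $X(t)$ solves (2.3) and has the symmetric structure (2.4), then, writing $x = x_1$,
\[
\dot{x}(t) = \dot{x}_1(t) = -f(t, x_2(t)) = -f\bigl(t, x_1(t-r)\bigr) = -f\bigl(t, x(t-r)\bigr),
\qquad
x(t) = x_1(t) = -x_2(t-r) = -x_1(t-2r) = -x(t-2r).
\]
Conversely, for a solution $x$ of (1.7) with $x(t) = -x(t-2r)$, the $r$-periodicity and oddness of $f$ assumed in Theorem 1.1 give $\dot{x}_2(t) = \dot{x}(t-r) = -f\bigl(t-r, x(t-2r)\bigr) = -f\bigl(t, -x(t)\bigr) = f\bigl(t, x_1(t)\bigr)$, which is exactly the second row of (2.3).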

Since $A_2$ is indeed the standard symplectic matrix of the plane $\mathbb{R}^2$, the system (2.3) can be written as the following Hamiltonian system: $\dot{y}(t) = A_2 \nabla_y H(t, y)$, (2.5) where $H(t, y) = \int_0^{y_1} f(t, x)\,dx + \int_0^{y_2} f(t, x)\,dx$ for each $y = (y_1, y_2) \in \mathbb{R}^2$.
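
Indeed, a one-line check (using only the definition of $H$) gives
\[
\nabla_y H(t, y) = \bigl(f(t, y_1),\, f(t, y_2)\bigr) = \Phi_1(t, y),
\]
so (2.5) is exactly (2.3).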

From the assumptions of Theorem 1.1, we have $f(t, x) = \omega_0(t)\,x + o(|x|)$ as $|x| \to 0$ and $f(t, x) = \omega_\infty(t)\,x + o(|x|)$ as $|x| \to \infty$. (2.6)

Hence, the gradient of the Hamiltonian function $H(t, y)$ satisfies $\nabla_y H(t, y) = \omega_0(t)\,y + o(|y|)$ as $|y| \to 0$ and $\nabla_y H(t, y) = \omega_\infty(t)\,y + o(|y|)$ as $|y| \to \infty$. (2.7)

By (2.7) and according to [25], there is a symplectic transformation $y = \Psi_1(t, z)$ under which the Hamiltonian system (2.5) can be transformed into the following Hamiltonian system: $\dot{z}(t) = A_2 \nabla_z H(t, z)$, (2.8) satisfying $\nabla_z H(t, z) = \alpha_0 I_2 z + o(|z|)$ as $|z| \to 0$ and $\nabla_z H(t, z) = \alpha_\infty I_2 z + o(|z|)$ as $|z| \to \infty$, (2.9) where $\alpha_0$ and $\alpha_\infty$ are the two constants defined in Theorem 1.1.

By (2.9), we have the following.

Lemma 2.1. The Hamiltonian system (2.8) is asymptotically linear both at $0$ and at $\infty$ with constant coefficients $\alpha_0 I_2$ and $\alpha_\infty I_2$.

Let $x(t)$ be any solution of (1.8) satisfying $x(t) = -x(t - 3s)$. Let $x_1(t) = x(t)$, $x_2(t) = x(t - s)$, and $x_3(t) = x(t - 2s)$; then $Y(t) = (x_1(t), x_2(t), x_3(t))$ satisfies $\frac{d}{dt} Y(t) = A_3 \Phi_2(t, Y(t))$, where $A_3 = \begin{pmatrix} 0 & -1 & -1 \\ 1 & 0 & -1 \\ 1 & 1 & 0 \end{pmatrix}$, (2.10) and $\Phi_2(t, Y) = (f(t, x_1), f(t, x_2), f(t, x_3))$.

Following the ideas in [18], (2.10) can be reduced to a two-dimensional Hamiltonian system $\dot{y}(t) = A_2 \nabla_y H(t, y)$, (2.11) where $H(t, y) = \int_0^{y_1} f(t, x)\,dx + \int_0^{y_2} f(t, x)\,dx + \int_0^{y_2 - y_1} f(t, x)\,dx$ for each $y = (y_1, y_2) \in \mathbb{R}^2$.
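
One way to see such a reduction (a sketch given here for the reader's convenience; it need not be identical to the argument of [18]) is to note that $x_1 - x_2 + x_3$ is a first integral of (2.10), since
\[
\frac{d}{dt}\bigl(x_1 - x_2 + x_3\bigr)
= \bigl(-f(t,x_2) - f(t,x_3)\bigr) - \bigl(f(t,x_1) - f(t,x_3)\bigr) + \bigl(f(t,x_1) + f(t,x_2)\bigr) = 0 .
\]
On the invariant set $\{x_3 = x_2 - x_1\}$ one may take $y = (y_1, y_2) = (x_1, x_2)$; then $\nabla_y H(t,y) = \bigl(f(t,y_1) - f(t,y_2 - y_1),\; f(t,y_2) + f(t,y_2 - y_1)\bigr)$, and $A_2 \nabla_y H(t,y) = \bigl(-f(t,y_2) - f(t,y_2-y_1),\; f(t,y_1) - f(t,y_2-y_1)\bigr)$ reproduces the first two equations of (2.10).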

From the assumptions of Theorem 1.1 and (2.6), the gradient of the Hamiltonian function $H(t, y)$ satisfies $\nabla_y H(t, y) = \omega_0(t)\,M y + o(|y|)$ as $|y| \to 0$ and $\nabla_y H(t, y) = \omega_\infty(t)\,M y + o(|y|)$ as $|y| \to \infty$, (2.12) where $M = \begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix}$ is a symmetric positive definite matrix.
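
For instance, near $y = 0$ the asymptotic form in (2.12) follows directly from (2.6):
\[
\nabla_y H(t, y)
= \begin{pmatrix} f(t,y_1) - f(t,y_2-y_1) \\ f(t,y_2) + f(t,y_2-y_1) \end{pmatrix}
= \omega_0(t)\begin{pmatrix} y_1 - (y_2 - y_1) \\ y_2 + (y_2 - y_1) \end{pmatrix} + o(|y|)
= \omega_0(t)\begin{pmatrix} 2 & -1 \\ -1 & 2 \end{pmatrix} y + o(|y|),
\]
and the computation at infinity is identical with $\omega_0$ replaced by $\omega_\infty$.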

It follows from (2.12) and [25] that there exists a symplectic transformation $y = \Psi_2(t, z)$ under which the Hamiltonian system (2.11) can be changed into the following Hamiltonian system: $\dot{z}(t) = A_2 \nabla_z H(t, z)$, (2.13) satisfying $\nabla_z H(t, z) = \beta_0 M z + o(|z|)$ as $|z| \to 0$ and $\nabla_z H(t, z) = \beta_\infty M z + o(|z|)$ as $|z| \to \infty$, (2.14) where $\beta_0$ and $\beta_\infty$ are the two constants defined in Theorem 1.2.

Then, (2.14) yields the following.

Lemma 2.2. The Hamiltonian system (2.13) is asymptotically linear both at $0$ and at $\infty$ with constant coefficients $\beta_0 M$ and $\beta_\infty M$.

Remark 2.3. In order to find periodic solutions of (1.7) and (1.8), we only need to seek periodic solutions of the Hamiltonian systems (2.8) and (2.13) with the symmetric structure (2.4), respectively.

In the rest of this paper, we will work in the Hilbert space $E = W^{1/2, 2}(S^1, \mathbb{R}^2)$, which consists of all $z(t)$ in $L^2(S^1, \mathbb{R}^2)$ whose Fourier series $z(t) = a_0 + \sum_{k=1}^{+\infty} \bigl(a_k \cos kt + b_k \sin kt\bigr)$ (2.15) satisfies $|a_0|^2 + \frac{1}{2} \sum_{k=1}^{+\infty} k \bigl(|a_k|^2 + |b_k|^2\bigr) < +\infty$. (2.16)

The inner product on $E$ is defined by $\langle z_1, z_2 \rangle = \bigl(a_0^{(1)}, a_0^{(2)}\bigr) + \frac{1}{2} \sum_{k=1}^{+\infty} k \Bigl[\bigl(a_k^{(1)}, a_k^{(2)}\bigr) + \bigl(b_k^{(1)}, b_k^{(2)}\bigr)\Bigr]$, (2.17) where $z_i = a_0^{(i)} + \sum_{k=1}^{+\infty} \bigl(a_k^{(i)} \cos kt + b_k^{(i)} \sin kt\bigr)$ ($i = 1, 2$), the norm is given by $\|z\|^2 = \langle z, z \rangle$, and $(\cdot\,, \cdot)$ denotes the usual inner product in $\mathbb{R}^2$.
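
For example, a single Fourier mode $z(t) = a \cos kt + b \sin kt$ with $a, b \in \mathbb{R}^2$ has, by (2.17),
\[
\|z\|^2 = \frac{k}{2}\bigl(|a|^2 + |b|^2\bigr),
\]
so the $E$-norm weights the $k$-th mode by a factor $k$, which is the usual $W^{1/2,2}$ (i.e., $H^{1/2}$) scaling.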

In order to obtain solutions of (2.8) with the symmetric structure (2.4), we define a matrix $T_2$ of the following form: $T_2 = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}$. (2.18)

Then, by means of $T_2$, for any $z(t) \in E$, define an action $\delta_1$ on $z$ by $\delta_1 z(t) = T_2 z(t - r)$. (2.19) By a direct computation, we have $\delta_1^2 z(t) = -z(t - 2r) = -z(t - \pi)$, $\delta_1^4 z(t) = z(t)$, and $G = \{\delta_1, \delta_1^2, \delta_1^3, \delta_1^4\}$ is a compact group action over $E$. If $\delta_1 z(t) = z(t)$ holds, then a straightforward check shows that $z(t)$ has the symmetric structure (2.4).
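
The first two identities follow from a property of the matrix $T_2$ alone: one has
\[
T_2^2 = \begin{pmatrix} 0 & -1 \\ 1 & 0 \end{pmatrix}^2 = -I_2,
\qquad
\delta_1^2 z(t) = T_2^2\, z(t - 2r) = -z(t - 2r) = -z(t - \pi),
\qquad
\delta_1^4 z(t) = T_2^4\, z(t - 4r) = z(t - 2\pi) = z(t),
\]
using $r = \pi/2$ and the $2\pi$-periodicity of $z \in E$.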

Lemma 2.4. Write $SE = \{z \in E : \delta_1 z(t) = z(t)\}$; then $SE$ is a subspace of $E$ of the following form: $SE = \bigl\{ z(t) = \sum_{k=1}^{\infty} \bigl(a_{2k-1} \cos(2k-1)t + b_{2k-1} \sin(2k-1)t\bigr) : a_{2k-1,1} = (-1)^{k+1} b_{2k-1,2},\ b_{2k-1,1} = (-1)^{k} a_{2k-1,2} \bigr\}$, (2.20) where $a_{2k-1} = (a_{2k-1,1}, a_{2k-1,2})$ and $b_{2k-1} = (b_{2k-1,1}, b_{2k-1,2})$.

Proof. Write $z(t) = (z_1(t), z_2(t))$, where $z_1(t) = a_{0,1} + \sum_{k=1}^{+\infty} (a_{k,1} \cos kt + b_{k,1} \sin kt)$ and $z_2(t) = a_{0,2} + \sum_{k=1}^{+\infty} (a_{k,2} \cos kt + b_{k,2} \sin kt)$. By $\delta_1 z = z$ and the definition of the action $\delta_1$, we have $\bigl(z_1(t), z_2(t)\bigr) = \bigl(-z_2(t - \tfrac{\pi}{2}),\ z_1(t - \tfrac{\pi}{2})\bigr)$, (2.21) which yields $a_{0,1} + \sum_{k=1}^{+\infty} (a_{k,1} \cos kt + b_{k,1} \sin kt) = -a_{0,2} + \sum_{n=1}^{\infty} (-1)^{n+1} \bigl(a_{2n,2} \cos 2nt + b_{2n,2} \sin 2nt\bigr) + \sum_{n=1}^{\infty} \bigl((-1)^{n+1} b_{2n-1,2} \cos(2n-1)t + (-1)^{n} a_{2n-1,2} \sin(2n-1)t\bigr)$, (2.22) where the first sum collects the even modes $k = 2n$ and the second the odd modes $k = 2n-1$. Then, we have $a_{0,1} = -a_{0,2}$, $a_{2n,1} = (-1)^{n+1} a_{2n,2}$, $b_{2n,1} = (-1)^{n+1} b_{2n,2}$, $a_{2n-1,1} = (-1)^{n+1} b_{2n-1,2}$, $b_{2n-1,1} = (-1)^{n} a_{2n-1,2}$. (2.23) Similarly, by $z_2(t) = z_1(t - \tfrac{\pi}{2})$, one has $a_{0,2} = a_{0,1}$, $a_{2n,2} = (-1)^{n} a_{2n,1}$, $b_{2n,2} = (-1)^{n} b_{2n,1}$, $a_{2n-1,2} = (-1)^{n} b_{2n-1,1}$, $b_{2n-1,2} = (-1)^{n-1} a_{2n-1,1}$. (2.24) Therefore, $a_{0,2} = a_{0,1} = 0$ and $a_{2n,1} = (-1)^{n+1} a_{2n,2} = (-1)^{n+1} (-1)^{n} a_{2n,1} = -a_{2n,1}$, that is, $a_{2n,1} = 0$. Similarly, $a_{2n,2} = b_{2n,1} = b_{2n,2} = 0$. Thus, for $z(t) \in SE$, $z(t) = \sum_{k=1}^{\infty} \bigl(a_{2k-1} \cos(2k-1)t + b_{2k-1} \sin(2k-1)t\bigr)$, (2.25) where $a_{2k-1,1} = (-1)^{k+1} b_{2k-1,2}$ and $b_{2k-1,1} = (-1)^{k} a_{2k-1,2}$.
Moreover, for any $z^{(1)}(t), z^{(2)}(t) \in SE$, $\delta_1 \bigl(z^{(1)} + z^{(2)}\bigr) = T_2 \bigl(z^{(1)}(t-r) + z^{(2)}(t-r)\bigr) = T_2 z^{(1)}(t-r) + T_2 z^{(2)}(t-r) = \delta_1 z^{(1)} + \delta_1 z^{(2)}$, (2.26) and, for any $c \in \mathbb{R}$, $\delta_1 (c z(t)) = T_2\, c z(t-r) = c\, T_2 z(t-r) = c\, \delta_1 z(t)$. Thus, $SE$ is a subspace of $E$. This completes the proof of Lemma 2.4.

For the Hamiltonian system (2.13), we define another action matrix $\widetilde{T}_2$ of the following form: $\widetilde{T}_2 = \begin{pmatrix} 1 & -1 \\ 1 & 0 \end{pmatrix}$. (2.27)

Then, by means of $\widetilde{T}_2$, for any $z(t) \in E$, define an action $\delta_2$ on $z$ by $\delta_2 z(t) = \widetilde{T}_2 z(t - s)$. (2.28) By a direct computation, we have $\delta_2^3 z(t) = -z(t - 3s) = -z(t - \pi)$, $\delta_2^6 z(t) = z(t)$, and $G = \{\delta_2, \delta_2^2, \delta_2^3, \delta_2^4, \delta_2^5, \delta_2^6\}$ is a compact group action over $E$. If $\delta_2 z(t) = z(t)$ holds, then a direct check shows that $z(t)$ has the symmetric structure (2.4).
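
As with $\delta_1$, the key point is a matrix identity; with $\widetilde{T}_2$ as in (2.27) one computes
\[
\widetilde{T}_2^{\,2} = \begin{pmatrix} 0 & -1 \\ 1 & -1 \end{pmatrix},
\qquad
\widetilde{T}_2^{\,3} = -I_2,
\qquad\text{so}\qquad
\delta_2^3 z(t) = \widetilde{T}_2^{\,3}\, z(t - 3s) = -z(t - 3s) = -z(t - \pi),
\quad
\delta_2^6 z(t) = z(t - 2\pi) = z(t),
\]
using $s = \pi/3$ and the $2\pi$-periodicity of $z \in E$.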

Remark 2.5. By $\delta_2^3 z(t) = -z(t - 3s) = -z(t - \pi)$ and the definition of $\delta_2$, the set $\{z \in E : \delta_2 z(t) = z(t)\}$ has the same structure as (2.20), except that the relation between the Fourier coefficients of the first component $z_1$ and the second component $z_2$ differs slightly from that of the elements of $\{z \in E : \delta_1 z(t) = z(t)\}$. We also denote it by $SE$; it is a subspace of $E$.

Denote by $M^-(\cdot)$, $M^+(\cdot)$, and $M^0(\cdot)$ the number of negative, positive, and zero eigenvalues of a symmetric matrix, respectively. For a constant symmetric matrix $B$, we define our indices as $i^-(B) = \sum_{k=1}^{\infty} \bigl(M^-(T_k(B)) - 2\bigr)$, $i^0(B) = \sum_{k=1}^{\infty} M^0(T_k(B))$, (2.29) where $T_k(B) = \begin{pmatrix} -B & -kJ \\ kJ & -B \end{pmatrix}$. (2.30) Observe that, for $k$ large enough, $M^-(T_k(B)) = 2$ and $M^0(T_k(B)) = 0$. In fact, write $T_k(B) = \begin{pmatrix} -B & -kJ \\ kJ & -B \end{pmatrix} = k \begin{pmatrix} 0 & -J \\ J & 0 \end{pmatrix} - \begin{pmatrix} B & 0 \\ 0 & B \end{pmatrix}$. (2.31)

Notice that $J^{T} = -J$. If $k > 0$ is sufficiently large, then $M^- = M^+ = 2$, these being the corresponding counts for the first matrix in (2.31). Furthermore, if $k$ decreases, these counts can change only at those values of $k$ for which the matrix $T_k(B)$ is singular, that is, $M^0(T_k(B)) \neq 0$. This happens exactly for those values of $k$ for which $ik$ is a purely imaginary eigenvalue of $JB$. Indeed, assume that $(\xi_1, \xi_2) \in \mathbb{R}^2 \times \mathbb{R}^2$ is an eigenvector of $T_k(B)$ with eigenvalue $0$; then, since $J^{T} = -J$, one has $B\xi_1 + kJ\xi_2 = 0$ and $B\xi_2 - kJ\xi_1 = 0$. Thus, $B(\xi_1 + i\xi_2) = -kJ\xi_2 + ikJ\xi_1 = ikJ(\xi_1 + i\xi_2)$; therefore, $JB(\xi_1 + i\xi_2) = -ik(\xi_1 + i\xi_2)$, and hence $\pm ik \in \sigma(JB)$, as claimed. Hence, $i^-(B)$ and $i^0(B)$ are well defined.
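
The statement about large $k$ can be made precise by a short spectral computation (added here as a reasoning step): the first matrix in (2.31) squares to the identity,
\[
\begin{pmatrix} 0 & -J \\ J & 0 \end{pmatrix}^2
= \begin{pmatrix} -J^2 & 0 \\ 0 & -J^2 \end{pmatrix} = I_4,
\]
and it has zero trace, so its eigenvalues are $+1$ and $-1$, each with multiplicity $2$; hence $k\bigl(\begin{smallmatrix} 0 & -J \\ J & 0 \end{smallmatrix}\bigr)$ has eigenvalues $\pm k$, each of multiplicity $2$. Once $k$ exceeds the norm of $\mathrm{diag}(B, B)$, the perturbation in (2.31) cannot move any eigenvalue across $0$, which gives $M^-(T_k(B)) = M^+(T_k(B)) = 2$ and $M^0(T_k(B)) = 0$ for all large $k$.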

The following theorem of [26] on the existence of periodic solutions for the Hamiltonian system (2.1) will be used in our discussion.

Theorem A. Let $H \in C^1(\mathbb{R} \times \mathbb{R}^{2N}, \mathbb{R})$ be $2\pi$-periodic in $t$ and satisfy (2.2). If $i^0(B_0) = i^0(B_\infty) = 0$ and $i^-(B_0) \neq i^-(B_\infty)$, then the Hamiltonian system (2.1) has at least one nontrivial periodic solution.

Now, we claim the following.

Lemma 2.6. If $z$ is a solution of the Hamiltonian system (2.8) (resp. (2.13)) in $SE$, then $y = \Psi_1(t, z)$ (resp. $y = \Psi_2(t, z)$) is a solution of the Hamiltonian system (2.5) (resp. (2.11)) with the symmetric structure (2.4).

Proof. By Lemma 2.4, any $z \in SE$ has the symmetric structure (2.4). We only need to show that $\delta_1 y = y$ (or $\delta_2 y = y$), that is, $T_2 \Psi_1(t, z) = \Psi_1(t, T_2 z)$ (or $\widetilde{T}_2 \Psi_2(t, z) = \Psi_2(t, \widetilde{T}_2 z)$), which can be verified directly from the constructions of the symplectic transformations $\Psi_1(t, z)$ and $\Psi_2(t, z)$, respectively. See [25] for the details.

For convenience, we denote the matrix $\alpha I_2$ simply by $\alpha$. We now prove the following lemma.

Lemma 2.7. (1) Suppose that (H1) and (H3) hold; then $i^0(\alpha_0) = i^0(\alpha_\infty) = i^0(\beta_0 M) = i^0(\beta_\infty M) = 0$.
(2) Suppose that (H1) and (H2) hold; then $i^-(\alpha_0) \neq i^-(\alpha_\infty)$.
(3) Suppose that (H3) and (H4) hold; then $i^-(\beta_0 M) \neq i^-(\beta_\infty M)$.

Proof. For any $\alpha, \beta \in \mathbb{R}$, let $\sigma(T_k(\alpha))$ and $\sigma(T_k(\beta M))$ denote the spectra of $T_k(\alpha)$ and $T_k(\beta M)$, respectively, and denote by $\lambda$ and $\gamma$ the elements of $\sigma(T_k(\alpha))$ and $\sigma(T_k(\beta M))$, respectively. Then $\det\bigl(\lambda I_4 - T_k(\alpha)\bigr) = \det\bigl((\lambda + \alpha)^2 I_2 - k^2 I_2\bigr) = \det\bigl((\lambda + \alpha) I_2 - k I_2\bigr)\,\det\bigl((\lambda + \alpha) I_2 + k I_2\bigr)$ and $\det\bigl(\gamma I_4 - T_k(\beta M)\bigr) = \det\bigl((\gamma I_2 + \beta M)^2 - k^2 I_2\bigr) = \det\bigl(\gamma I_2 + \beta M - k I_2\bigr)\,\det\bigl(\gamma I_2 + \beta M + k I_2\bigr) = \bigl((\gamma + 2\beta - k)^2 - \beta^2\bigr)\bigl((\gamma + 2\beta + k)^2 - \beta^2\bigr)$. (2.32) The above computation of determinants shows that $\sigma(T_k(\alpha)) = \{\lambda = \pm k - \alpha\}$, $k \in \mathbb{N}^+$, (2.33) and $\sigma(T_k(\beta M)) = \{\gamma = \pm k - \beta,\ \pm k - 3\beta\}$, $k \in \mathbb{N}^+$. (2.34)
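For the scalar case (2.33), one can also exhibit the eigenvectors explicitly (a direct check that uses only (2.30) and $J^2 = -I_2$):
\[
T_k(\alpha I_2)\begin{pmatrix} \xi \\ J\xi \end{pmatrix}
= \begin{pmatrix} -\alpha \xi - kJ(J\xi) \\ kJ\xi - \alpha J\xi \end{pmatrix}
= (k - \alpha)\begin{pmatrix} \xi \\ J\xi \end{pmatrix},
\qquad
T_k(\alpha I_2)\begin{pmatrix} \xi \\ -J\xi \end{pmatrix}
= (-k - \alpha)\begin{pmatrix} \xi \\ -J\xi \end{pmatrix},
\qquad \xi \in \mathbb{R}^2,
\]
so each of $\pm k - \alpha$ is an eigenvalue of multiplicity $2$.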
Case 1. From (2.33), if $\alpha_0 \neq \pm k$ for all $k \in \mathbb{N}^+$, then $\lambda \neq 0$ for every eigenvalue $\lambda$ of $T_k(\alpha_0)$. That means $M^0(T_k(\alpha_0)) = 0$ for all $k \geq 1$. Thus, $i^0(\alpha_0) = \sum_{k=1}^{\infty} M^0(T_k(\alpha_0)) = 0$. Similarly, we have $i^0(\alpha_\infty) = i^0(\beta_0 M) = i^0(\beta_\infty M) = 0$.
Case 2. Without loss of generality, we suppose that $\alpha_0 < \alpha_\infty$. By the conditions (H1) and (H2), $\alpha_0 < \pm k_0 < \alpha_\infty$. (2.35) Since $\alpha_0 < \pm k_0$, by (2.33), $M^-(T_{k_0}(\alpha_0)) \leq 2$. By $-k_0 < k_0 < \alpha_\infty$ and (2.33), $M^-(T_{k_0}(\alpha_\infty)) = 4$; that is, $M^-(T_{k_0}(\alpha_0)) + 2 \leq M^-(T_{k_0}(\alpha_\infty))$. (2.36) For each $k \neq k_0$, one can check easily from (2.33) that $M^-(T_k(\alpha_0)) \leq M^-(T_k(\alpha_\infty))$. Hence, one has $\sum_{k=1}^{\infty} \bigl(M^-(T_k(\alpha_0)) - 2\bigr) < \sum_{k=1}^{\infty} \bigl(M^-(T_k(\alpha_\infty)) - 2\bigr)$, since $M^-(T_k(\alpha_\infty)) = 2$ for $k$ large enough. This yields $i^-(\alpha_0) < i^-(\alpha_\infty)$. Then, property (2) holds.
Case 3. By the conditions (H3) and (H4), without loss of generality, we suppose that $\beta_0 < \beta_\infty$ and $\beta_0 < \pm k_0 < \beta_\infty$. (2.37)
Since $\beta_0 < \pm k_0$, by (2.34), $M^-(T_{k_0}(\beta_0 M)) \leq 3$. By $-k_0 < k_0 < \beta_\infty < 3\beta_\infty$ and (2.34), one has $M^-(T_{k_0}(\beta_\infty M)) = 4$; that is, $M^-(T_{k_0}(\beta_0 M)) + 1 \leq M^-(T_{k_0}(\beta_\infty M))$. (2.38) For each $k \neq k_0$, it is easy to see from (2.34) that $k - \beta_\infty < k - \beta_0$ and $k - 3\beta_\infty < k - 3\beta_0$. Then, by the definition of $M^-(T_k(\beta M))$, we have $M^-(T_k(\beta_0 M)) \leq M^-(T_k(\beta_\infty M))$. Therefore, we have $\sum_{k=1}^{\infty} \bigl(M^-(T_k(\beta_0 M)) - 2\bigr) < \sum_{k=1}^{\infty} \bigl(M^-(T_k(\beta_\infty M)) - 2\bigr)$, (2.39) since $M^-(T_k(\beta_\infty M)) = 2$ for $k$ large enough. This implies $i^-(\beta_0 M) < i^-(\beta_\infty M)$. Then, property (3) holds.
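
As a purely numerical illustration of Case 2 (a hypothetical example, not taken from the references), suppose $\alpha_0 = 1/2$ and $\alpha_\infty = 5/2$. By (2.33), every $T_k(\alpha_0)$ has eigenvalues $k - 1/2 > 0$ and $-k - 1/2 < 0$, each of multiplicity $2$, while $T_k(\alpha_\infty)$ has the negative eigenvalues $\pm k - 5/2$ for $k = 1, 2$ and only $-k - 5/2$ for $k \geq 3$. Hence
\[
i^-(\alpha_0) = \sum_{k=1}^{\infty}(2 - 2) = 0,
\qquad
i^-(\alpha_\infty) = (4-2) + (4-2) + \sum_{k=3}^{\infty}(2 - 2) = 4,
\]
so $i^-(\alpha_0) \neq i^-(\alpha_\infty)$, in accordance with property (2).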

Now, we are ready to prove the main results. We first give the proof of Theorem 1.1.

Proof of Theorem 1.1. Solutions of (2.8) in $SE$ are indeed nonconstant classical $2\pi$-periodic solutions with the symmetric structure (2.4), and hence they give solutions of (1.7) with the property $x(t - \pi) = -x(t)$. Therefore, we will seek solutions of (2.8) in $SE$.
Now, Theorem 1.1 follows from Lemmas 2.1, 2.6, and 2.7 and Theorem A.

Proof of Theorem 1.2. Obviously, Theorem 1.2 follows from Lemmas 2.2, 2.6, and 2.7 and Theorem A.

Acknowledgments

The author thanks the referee for carefully reading the paper and giving valuable suggestions. This work is supported by the National Natural Science Foundation of China (11026212). This paper was typeset using AMS-LaTeX.