Lie Algebra Classification, Conservation Laws, and Invariant Solutions for a Generalization of the Levinson–Smith Equation

G. Loaiza, Y. Acevedo, O. M. L. Duque, and Danilo A. García Hernández

Research Article | Open Access
International Journal of Differential Equations, vol. 2021, Article ID 6628243, 11 pages, 2021. https://doi.org/10.1155/2021/6628243

Academic Editor: Peiguang Wang
Received: 03 Jan 2021; Revised: 02 Apr 2021; Accepted: 22 Apr 2021; Published: 07 May 2021

Abstract

We obtain the generating operators of the optimal system associated with a generalized Levinson–Smith equation. This equation is related to the Liénard equation, which is important from the physical, mathematical, and engineering points of view; the underlying equation also has applications in mechanics and nonlinear dynamics and has been widely studied from the qualitative point of view. Here, we treat the equation using the Lie group method and obtain certain operators; using those operators, we characterize all invariant solutions associated with the generalized Levinson–Smith equation considered in this paper. Finally, we classify the Lie algebra associated with the given equation.

1. Introduction

The Lie group symmetry method is a powerful tool for studying ODEs, PDEs, FPDEs, FODEs, and related equations. The theory was introduced in the 19th century by Sophus Lie [1], following the idea of Galois theory in algebra. Applied to differential equations, the Lie group method has received great interest among researchers in different fields of science, such as mathematics and theoretical and applied physics, due to the physical interpretations of the equations being studied. In particular, the method makes it possible to construct conservation laws, using the well-known Noether's theorem [2] or, more generally, Ibragimov's approach [3]. Likewise, it allows one to build similarity solutions that are not accessible by traditional methods.
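As a reminder of the classical statement (written here in generic notation for a second-order ODE, not as a reconstruction of the article's later formulas), if a Lagrangian $L(t, x, \dot{x})$ admits a variational symmetry with infinitesimals $\xi(t,x)$ and $\eta(t,x)$, in the sense that $X^{(1)}L + L\,D_t\xi = D_t f$ for some function $f$, then Noether's theorem provides the first integral
\[
I \;=\; \xi\,L \;+\; \bigl(\eta - \dot{x}\,\xi\bigr)\,\frac{\partial L}{\partial \dot{x}} \;-\; f,
\]
which is constant along the solutions of the corresponding Euler–Lagrange equation.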

Furthermore, this method helps to establish schemes and to assess the usefulness of some numerical methods; to this end, many packages have been built in different computational environments, e.g., [4, 5]. In general, given the importance of studying such equations (ODEs, PDEs, and others), this method can be of interest to researchers in many areas. A vast literature on the Lie group method is available, e.g., [6–9]. Recently, the Lie group approach has been applied to solve and analyze different problems in many scientific fields; e.g., in [10], the authors applied the Lie symmetry method to investigate a fourth-order (1 + 2)-dimensional evolutionary partial differential equation proposed for noise reduction in image processing. References on the latest progress in symmetry analysis can be found in [11–18] and the references therein.

In [19], Kamke proposes the following differential equation, whose coefficient functions are arbitrary, and presents for it a transformation which reduces the equation to a system of two first-order equations. Equation (1) can be written in the form (2), where the coefficient of friction is, in general, a nonlinear function of the variables involved, and the function known as the disturbance function is also nonlinear. It is worth remembering that equations of type (2) are known as generalizations of the Levinson–Smith equation. Also, equation (2), which is a particular case of the generalized Levinson–Smith equations, is related to the Liénard-type second-order nonlinear differential equation [20]. The underlying equations describe several phenomena in different areas such as electronics, biology, mechanics, seismology, chemistry, physics, and cosmology. For example, an important model in the physical and biological sciences is the van der Pol equation, which describes a nonconservative oscillator with nonlinear damping. Levinson and Smith [21] studied a general equation for relaxation oscillations.
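For orientation, the classical members of this family are usually written, in a generic notation $x(t)$ that need not coincide with the symbols of equations (1)–(3), as
\[
\ddot{x} + f(x)\,\dot{x} + g(x) = 0 \quad \text{(Liénard equation)},
\qquad
\ddot{x} + f(x, \dot{x})\,\dot{x} + g(x) = 0 \quad \text{(Levinson–Smith equation)},
\]
with the van der Pol oscillator $\ddot{x} - \mu\,(1 - x^{2})\,\dot{x} + x = 0$ as the best-known particular case.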

In [8], the equation (3) considered in this paper is presented; note that equation (3) is a particular case of equation (1). Cantwell states in [8] that equation (3) admits a Lie group of symmetries of a certain dimension, but the development of this statement is not exhibited; the symmetry group of (3) is there affirmed using the ODEtools Maple package. In fact, the goal of this work is (i) to calculate the Lie symmetry group of (3) in detail, (ii) to present the optimal system (optimal algebra) for (3), (iii) to make use of all elements of the optimal algebra to propose invariant solutions for (3), (iv) to construct the Lagrangian with which we can determine the variational symmetries using Noether's theorem and thus present the associated conservation laws, (v) to build some nontrivial conservation laws using Ibragimov's method, and finally (vi) to classify the Lie algebra associated with (3), corresponding to the symmetry group.

2. Continuous Group of Lie Symmetries

In this section, we study the Lie symmetry group for (3). The main result of this section can be presented as follows.

Proposition 1. The Lie symmetry group for equation (3) is generated by the following vector fields:

Proof. A general form of the one-parameter Lie group admitted by (3) is given by a transformation depending on the group parameter. The vector field associated with this group of transformations can be written in terms of infinitesimals that are differentiable functions. Applying its second prolongation to equation (3), we must find the infinitesimals satisfying the symmetry condition associated with (3); here, the coefficients of the prolongation are expressed through the total derivative operator. Replacing (8) into (7) and using (3), we obtain (9). From (9), canceling the coefficients of the monomials in the derivatives, we obtain the determining equations (9a)–(9d) for the symmetry group of (3). Solving the system of equations (9a)–(9d), we get the infinitesimals, and thus the infinitesimal generators of the group of symmetries of (3) are the operators described in the statement of Proposition 1, which gives the proposed result.
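For reference, in the standard notation of Lie symmetry analysis (generic variables $t$ and $x$, which need not coincide with the symbols used in the proof above), the generator and its second prolongation have the form
\[
X = \xi(t,x)\,\partial_t + \eta(t,x)\,\partial_x,
\qquad
X^{(2)} = X + \eta^{[1]}\,\partial_{\dot{x}} + \eta^{[2]}\,\partial_{\ddot{x}},
\]
with
\[
\eta^{[1]} = D_t(\eta) - \dot{x}\,D_t(\xi),
\qquad
\eta^{[2]} = D_t\bigl(\eta^{[1]}\bigr) - \ddot{x}\,D_t(\xi),
\qquad
D_t = \partial_t + \dot{x}\,\partial_x + \ddot{x}\,\partial_{\dot{x}} + \cdots,
\]
and the symmetry condition requires the prolonged generator, applied to the equation, to vanish on its solutions.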

3. Optimal System

Taking into account [22–25], we present in this section the optimal system associated with the symmetry group of (3), which provides a systematic way to classify the invariant solutions. To obtain the optimal system, we should first calculate the corresponding commutator table, whose entries are obtained from the commutator operator acting on the coefficients of the infinitesimal operators. After applying this operator (11) to the symmetry group of (3), we obtain the operators shown in Table 1.
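In generic notation (a standard formula, recalled here only as a reminder and not necessarily with the article's indices), the commutator of two generators reads
\[
[X_\alpha, X_\beta] \;=\; X_\alpha X_\beta - X_\beta X_\alpha
\;=\; \sum_{j}\Bigl(X_\alpha\bigl(\xi_\beta^{\,j}\bigr) - X_\beta\bigl(\xi_\alpha^{\,j}\bigr)\Bigr)\,\partial_{x^{j}},
\]
where $\xi_\alpha^{\,j}$ denotes the coefficient of $\partial_{x^{j}}$ in $X_\alpha$.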

Now, the next step is to calculate the adjoint action representation of the symmetries of (3); to do that, we use Table 1 and the adjoint operator, recalled below.
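In the standard formulation (generic notation; this is the usual Lie-series expression, not a reconstruction of the article's own display), the adjoint action is computed as
\[
\mathrm{Ad}\bigl(\exp(\varepsilon X_i)\bigr)X_j
\;=\; X_j \;-\; \varepsilon\,[X_i, X_j] \;+\; \frac{\varepsilon^{2}}{2!}\,\bigl[X_i, [X_i, X_j]\bigr] \;-\; \cdots,
\]
where $\varepsilon$ is the group parameter and the brackets are taken from the commutator table.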



Table 1: Commutator table of the Lie algebra of symmetry generators of equation (3).

Making use of this operator, we can construct Table 2, which shows the adjoint representation for each generator.


Table 2: Adjoint representation of the symmetry generators of equation (3).


Proposition 2. The optimal system associated with equation (3) is given by the following vector fields:

Proof. To calculate the optimal system, we start with the generators of symmetries (4) and a generic nonzero vector. Let (14) be such a vector. The objective is to simplify as many of its coefficients as possible through adjoint maps, using Table 2.

(1) Assuming in (14), we have that . Applying the adjoint operator to and , we do not obtain any reduction; on the other hand, applying the adjoint operator to , we get (15).

(1.1) Case. Using , in (15), is eliminated; therefore , where . Now, applying the adjoint operator to , we get .

Case. Using , with , is eliminated, and then . Applying the adjoint operator to , we get (16).

Case. Using , with , in (16), is eliminated; therefore . Then, we have the first element of the optimal system, with , , and . This is how the first reduction of the generic element (14) ends.

Case. We get . Then, we have another element of the optimal system, with . This is how another reduction of the generic element (14) ends.

Case. We get . Applying the adjoint operator to , we have (19).

Case. Using , with , in (19), is eliminated; therefore . Then, we have another element of the optimal system, with , , and . This is how another reduction of the generic element (14) ends.

Case. We get . Then, we have another element of the optimal system. This is how another reduction of the generic element (14) ends.

(1.2) Case. We get ; using , is eliminated, and then . Now, applying the adjoint operator to , we have . It is clear that we do not have any reduction.

(1.2.1) Case. Then, using , with , we get . Applying the adjoint operator to , we have (22). Using , with , in (22), is eliminated; therefore . Then, we have another element of the optimal system.

(1.2.2) Case. We get . Applying the adjoint operator to , we have (24). It is clear that we do not have any reduction.

Case. Then, using , with , in (24), we get . Then, we have another element of the optimal system.

Case. Then, we get ; hence, we have another element of the optimal system.

(2) Assuming and in (14), we have that . Applying the adjoint operator to and , we do not obtain any reduction; on the other hand, applying the adjoint operator to , we get (27).

(2.1) Case. Using , with , in (27), is eliminated; therefore . Now, applying the adjoint operator to , we get .

Case. Using , with , is eliminated; then . Applying the adjoint operator to , we get . It is clear that we do not have any reduction.

Case. Then, substituting with , we have another element of the optimal system, with , , and . This is how another reduction of the generic element (14) ends.

Case. We get , and then we have another element of the optimal system, with and . This is how another reduction of the generic element (14) ends.

Case. We get . Applying the adjoint operator to , we have . It is clear that we do not have any reduction; it is also clear that , and then ; then, substituting, we have another element of the optimal system, with and . This is how another reduction of the generic element (14) ends.

(2.2) Case. We get . Now, applying the adjoint operator to , we have .

Case. Using , with , is eliminated; then . Applying the adjoint operator to , we get . It is clear that we do not have any reduction.

Case. Then, substituting with , we have another element of the optimal system, with . This is how another reduction of the generic element (14) ends.

Case. We get ; we do not have any reduction; then, using , we have another element of the optimal system. This is how another reduction of the generic element (14) ends.

Case. We get . Applying the adjoint operator to , we get .

Case. Using , with , is eliminated; then we have another element of the optimal system. This is how another reduction of the generic element (14) ends.

Case. We get , and then we have another element of the optimal system. This is how another reduction of the generic element (14) ends.

(3) Following a procedure analogous to the previous one and analyzing the respective cases for the remaining assumptions on the coefficients in (14), we can reduce and obtain all the elements presented for the optimal system.
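As a minimal sketch of the reduction mechanics used throughout this proof (with hypothetical generators $X_1, X_2, X_3$ and assumed bracket relations $[X_2, X_3] = X_1$ and $[X_2, X_1] = 0$, which need not coincide with the entries of Table 1), the Lie series of the adjoint action truncates and gives
\[
\mathrm{Ad}\bigl(\exp(\varepsilon X_2)\bigr)\,(a_1 X_1 + a_3 X_3)
\;=\; (a_1 - \varepsilon\,a_3)\,X_1 \;+\; a_3\,X_3,
\]
so that, for $a_3 \neq 0$, choosing $\varepsilon = a_1/a_3$ eliminates the coefficient of $X_1$; this is exactly the kind of step performed in each case above.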

4. Invariant Solutions by Some Generators of the Optimal System

In this section, we characterize invariant solutions taking into account all the operators that generate the optimal system presented in Proposition 2. For this purpose, we use the method of the invariant curve condition [23] (presented in Section 4.3), which is given by the following equation:
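In generic notation (the symbols here are illustrative and need not match the article's), for a generator $X = \xi(t,x)\,\partial_t + \eta(t,x)\,\partial_x$ the invariant curve condition is
\[
Q(t, x, \dot{x}) \;\equiv\; \eta(t, x) \;-\; \dot{x}\,\xi(t, x) \;=\; 0,
\]
i.e., the characteristic of the symmetry vanishes along the sought invariant solution.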

Using one element from Proposition 2 as an example, under condition (39), we obtain a relation between the variables and the first derivative; solving the resulting ODE yields an invariant solution of (3) that involves an arbitrary constant. Using an analogous procedure with all the elements of the optimal system (Proposition 2), we obtain the implicit and explicit invariant solutions shown in Table 3, where the constant appearing in each solution is arbitrary.
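As a purely hypothetical illustration of this procedure (the generator below is chosen for simplicity and is not claimed to belong to the optimal system of Proposition 2), for $X = \partial_t + x\,\partial_x$, the condition $\eta - \dot{x}\,\xi = 0$ gives $\dot{x} = x$, whose solution is $x(t) = c\,e^{t}$ with $c$ an arbitrary constant; substituting this candidate back into the underlying equation then decides whether it is indeed an invariant solution.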


Table 3: Invariant solutions (implicit, explicit, or trivial) obtained from each element of the optimal system of Proposition 2.

Element    Type of solution
1          Explicit
2          Trivial
3          Trivial
4          Explicit
5          Explicit
6          Implicit
7          Explicit
8          Trivial
9          Explicit
10         Explicit
11         Explicit
12         Explicit
13         Implicit
14         Explicit
15         Explicit
16         Explicit
17         Explicit
18