Abstract

We present a new approach, based on generalized derivatives, for solving nonsmooth optimization problems and systems of nonsmooth equations. For this purpose, we introduce the first-order generalized Taylor expansion of nonsmooth functions; in other words, a nonsmooth function is approximated by a piecewise linear function built from its generalized derivative. We then solve a smooth linear optimization problem whose optimal solution approximates the solution of the original problem, and we apply the results to solving systems of nonsmooth equations. Finally, some numerical examples are presented to illustrate the efficiency of our approach.

1. Introduction

As is well known, many problems of considerable practical importance can be reduced to the solution of nonsmooth optimization problems (NSOPs) and systems of nonsmooth equations. In general, optimizing a function is one of the most important problems of real life, and it plays a fundamental role in mathematics and its applications in other disciplines such as control theory, optimal control, engineering, and economics.

Nonsmooth optimization is an active research area in computational mathematics, applied mathematics, and engineering design optimization, and it is widely used in many practical problems. It is worth noting that several important methods for solving difficult smooth problems lead directly to the need to solve nonsmooth problems, which are either smaller in dimension or simpler in structure. For instance, decomposition methods for solving very large scale smooth problems produce lower-dimensional nonsmooth problems; penalty methods for solving constrained smooth problems result in unconstrained nonsmooth problems; and nonsmooth equation methods for solving smooth variational inequalities and smooth nonlinear complementarity problems give rise to systems of nonsmooth equations (see [1]).

The well-known methods for nonsmooth optimization include subgradient methods, cutting-plane methods, the analytic center cutting-plane method, bundle methods, trust-region methods, and the bundle trust-region method (see [2]).

Note that nonsmooth problems are among the most difficult optimization problems to solve. Nonsmooth optimization refers to the general problem of minimizing functions that are typically not differentiable at their minimizers. The focus of this paper is the numerical solution of NSOPs and systems of nonsmooth equations; the techniques for solving the minimization problems and the nonsmooth equations are closely related.

The outline of the paper is as follows. In Section 2, we introduce the reader to a new generalized derivative (GD) for one-variable and multivariable functions (see Kamyad et al. [3]). A new approach for NSOPs based on the GD is studied in Section 3. Building on that section, an approach for solving systems of nonsmooth equations is considered in Section 4. Numerical examples illustrating the efficiency of our approach are presented in Section 5, and some concluding remarks are given in Section 6.

2. Preliminaries

In this section, we present definitions and results concerning the GD which are needed in the remainder of the paper. Since the early 1960s, several generalized theories of differentiation have been proposed by different authors. A first major step in this direction came with the dissertation of Rockafellar [4], who introduced subgradients for convex functions. Another breakthrough occurred when Clarke [5] found a way of extending Rockafellar's ideas to the broader class of lower semicontinuous, proper functions. This line of ideas has given rise to an extensive amount of research, continuing to the present (see [6]). It is commonly recognized, however, that these GDs are not practical and applicable for solving concrete problems. We mainly utilize the new GD of Kamyad et al. [3] for nonsmooth functions. This kind of GD is particularly helpful and practical when dealing with nonsmooth continuous and discontinuous functions, and it can be easily computed. In what follows, we devote two short subsections to the GD of Kamyad et al., for one-variable and multivariable functions, respectively.

2.1. GD of One-Variable Nonsmooth Functions

Let $I = [a, b]$. For a function $f$ on $I$, we define a functional optimization problem, denoted (1), whose data include positive, sufficiently small discretization parameters and arbitrarily chosen evaluation points; for instance, the evaluation points can be taken as the midpoints of the subintervals of $I$.

Theorem 1 (see [3]). Let $f$ be continuously differentiable on $I$ and let $g^*$ be the optimal solution of the functional optimization problem (1). Then the optimal value of (1) is zero and $g^* = f'$ on $I$.

Definition 2. Let $f$ be a continuous nonsmooth function on the interval $I$ and let $g^*$ be the optimal solution of the functional optimization problem (1). We denote the generalized first derivative (GFD) of $f$ by $Gf$ and define $Gf = g^*$.

Remark 3 (see [3]). Note that if $f$ is a smooth function on $I$, then the GFD $Gf$ in Definition 2 is the classical derivative $f'$. Further, if $f$ is a nonsmooth integrable function on $I$, then the GFD of $f$ is an approximation of the first derivative of $f$.
In what follows, problem (1) is approximated by a finite-dimensional problem, denoted (2) (see [3]), in which the number of grid points $n$ is a given large number, the grid is uniform on $I$, and an arbitrary evaluation point is chosen in each subinterval. By the trapezoidal and midpoint integration rules, problem (2) can in turn be approximated by a problem, denoted (3), whose unknown variables are the values $u_j$ of the generalized derivative at the grid points.
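For orientation, the following sketch (our illustration; the exact integrand of problem (2) is given in [3]) shows the trapezoidal-rule step that turns an integral of an absolute value into the finite sum appearing in problem (3). The function name trapezoid_abs and the generic integrand z are our assumptions.

```python
import numpy as np

def trapezoid_abs(z, a, b, n):
    """Trapezoidal-rule approximation of the integral of |z| over [a, b];
    this is the step that turns problem (2) into the finite sum of (3).
    z is a generic vectorized integrand standing in for the actual one."""
    xs = np.linspace(a, b, n + 1)
    w = np.full(n + 1, (b - a) / n)   # trapezoidal weights
    w[0] *= 0.5
    w[-1] *= 0.5
    return float(np.sum(w * np.abs(z(xs))))
```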

Lemma 4. Let the pairs $(u_j^*, t_j^*)$, $j = 1, \dots, n$, be the optimal solutions of the following LP problem:
$$\min \sum_{j=1}^{n} t_j \quad \text{subject to} \quad t_j \ge h_j(u), \; t_j \ge -h_j(u), \; j = 1, \dots, n, \; u \in U. \tag{4}$$
Then $u_j^*$, $j = 1, \dots, n$, are the optimal solutions of the following nonlinear programming (NLP) problem:
$$\min_{u \in U} \sum_{j=1}^{n} |h_j(u)|, \tag{5}$$
where each $h_j$ is an affine function of $u = (u_1, \dots, u_n)$ and $U$ is a compact set.

Proof. Since $(u_j^*, t_j^*)$, $j = 1, \dots, n$, are optimal solutions of the LP problem, they satisfy its constraints; thus $t_j^* \ge h_j(u^*)$ and $t_j^* \ge -h_j(u^*)$ for $j = 1, \dots, n$. Hence $t_j^* \ge |h_j(u^*)|$, and so $\sum_{j} t_j^* \ge \sum_{j} |h_j(u^*)|$. Now, suppose there exists $\bar u \in U$ such that $\sum_{j} |h_j(\bar u)| < \sum_{j} |h_j(u^*)|$. Define $\bar t_j = |h_j(\bar u)|$ for $j = 1, \dots, n$. Then $\bar t_j \ge h_j(\bar u)$ and $\bar t_j \ge -h_j(\bar u)$, so $(\bar u, \bar t)$ is feasible for the LP problem. Moreover, $\sum_{j} \bar t_j = \sum_{j} |h_j(\bar u)| < \sum_{j} |h_j(u^*)| \le \sum_{j} t_j^*$, which contradicts the optimality of $(u_j^*, t_j^*)$.

Now, by Lemma 4 and standard techniques of mathematical programming, problem (3) may be converted to an equivalent LP problem, denoted (6), whose decision variables are the approximate derivative values $u_j$ and the auxiliary variables $t_j$ of Lemma 4 for $j = 1, \dots, n$.
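To make the reduction of Lemma 4 concrete, the following sketch solves a generic least-absolute-deviations problem $\min_u \sum_j |(Au - b)_j|$ by the LP reformulation with auxiliary variables $t_j$; after discretization, problem (3) has this same structure (and so does problem (13) below). The function name lad_lp, the affine residuals $Au - b$, and the use of scipy.optimize.linprog are our assumptions, not part of [3].

```python
import numpy as np
from scipy.optimize import linprog

def lad_lp(A, b):
    """Minimize sum_j |(A u - b)_j| via the LP of Lemma 4:
    min sum_j t_j  s.t.  t_j >= (A u - b)_j  and  t_j >= -(A u - b)_j."""
    m, n = A.shape
    c = np.concatenate([np.zeros(n), np.ones(m)])     # objective: sum of t_j
    # Stack the two one-sided constraints as A_ub x <= b_ub.
    A_ub = np.block([[A, -np.eye(m)], [-A, -np.eye(m)]])
    b_ub = np.concatenate([b, -b])
    bounds = [(None, None)] * n + [(0, None)] * m     # u free, t >= 0
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return res.x[:n]
```

At the optimum each $t_j$ equals $|(Au - b)_j|$, which is exactly the contradiction argument in the proof of Lemma 4.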

Remark 5. Note that the discretization parameters of problem (6) must be selected as sufficiently small numbers, and the evaluation points can be chosen as arbitrary points of the subintervals.

Remark 6. Note that if $u_j$, $j = 1, \dots, n$, are optimal solutions of problem (6), then $u_j \approx Gf(x_j)$ for $j = 1, \dots, n$.

2.2. GD of Multivariable Functions

In this section, we introduce functional optimization problems whose optimal solutions are the partial derivatives of a smooth function on a compact subset of $\mathbb{R}^n$. First, we select an arbitrary (but fixed) index $i$ and compute the partial derivative of $f$ with respect to $x_i$; without loss of generality, assume $i = 1$.

Now, select a sufficiently large number $N$ and partition the domain into $N$ congruent grid cells that cover it. In the next step, we fix an arbitrary point in each grid cell and associate with each cell the corresponding vector of evaluation data.

Suppose that two sufficiently small numbers and a sufficiently large number are given. For a given continuous function $f$, we define for each fixed index a functional optimization problem, denoted (9), analogous to problem (1); its evaluation points are fixed but arbitrary and may, for example, be chosen as the midpoints of the grid cells. The remaining data of problem (9) are defined analogously to the one-variable case.

Theorem 7. Let $f$ be continuously differentiable and let $g^*$ be the optimal solution of the functional optimization problem (9) for a fixed index $i$. Then $g^*$ coincides with the classical partial derivative $\partial f / \partial x_i$ on the domain.

Proof. See [3].

Now, the generalized partial derivative (GPD) of a nonsmooth function may be defined as follows.

Definition 8. Let $i$ be a fixed and arbitrary index, let $f$ be a continuous nonsmooth function, and let $g^*$ be the optimal solution of the functional optimization problem (9). We denote the GPD of $f$ with respect to the variable $x_i$ by $G_i f$ and define $G_i f = g^*$.

Remark 9. Note that if $f$ is a smooth function, then for each fixed index $i$ we have $G_i f = \partial f / \partial x_i$. Further, if $f$ is a continuous nonsmooth function, then the GPD of $f$ with respect to $x_i$ is an approximation of the first partial derivative of $f$ with respect to $x_i$.
However, the optimization problem (9) is an infinite-dimensional problem, and hence it is approximated by finite-dimensional problems, denoted (12), in which the number of grid points is a given large number; the grid parameters and the arbitrary evaluation points are chosen as in the one-variable case.
Similarly to Section 2.1, problem (12) may be converted to an equivalent finite linear programming problem, denoted (13), whose decision variables are the approximate partial derivative values $u_j$ and the auxiliary variables $t_j$ of Lemma 4; its data satisfy the relations denoted (14).

Remark 10. Note that if $u_j$ are optimal solutions of problem (13), then $u_j \approx G_i f$ at the corresponding grid points.

3. Nonsmooth Optimization Problems

The focus of this paper is the following NSOP:
$$\min_{x \in S} f(x), \tag{15}$$
where the objective function $f$ is assumed to be nonsmooth and the feasible set $S$ is a compact set. Throughout the whole paper we assume that problem (15) has a solution. NSOPs arise in many branches of science such as engineering, economics, and mathematics. An increasing number of practical problems require minimizing a nonsmooth, nonconvex function on a convex set, including image restoration, signal reconstruction, variable selection, optimal control, stochastic equilibrium problems, and spherical approximations. Also, a number of constrained optimization problems can be reformulated as problem (15) by using exact penalty methods (see [7]). However, many well-known optimization algorithms lack effectiveness and efficiency in dealing with nonsmooth, nonconvex objective functions. Furthermore, for non-Lipschitz continuous functions, the Clarke generalized gradients [8] cannot be used directly in the analysis. Smooth approximations for optimization problems have been studied for decades, including complementarity problems, variational inequalities, second-order cone complementarity problems, semidefinite programming, semi-infinite programming, optimal control, and eigenvalue optimization (see [9]).

A well-known way to seek a numerical solution of problem (15) is to replace its objective by a suitable piecewise linear function. With this linearization we can construct a piecewise linear approximation to the NSOP.

In this paper, we describe a class of approximations constructed as piecewise linear functions based on the GD of nonsmooth functions.

Without loss of generality, we first let $S = [a, b]$ be a closed interval and take a uniform partition $a = x_0 < x_1 < \cdots < x_n = b$, where the mesh size $h = (b - a)/n$ is sufficiently small. In addition, we select the midpoints $m_j = (x_{j-1} + x_j)/2$, $j = 1, \dots, n$. In order to derive piecewise linear approximations, we introduce the generalized first-order Taylor expansion of a continuous nonsmooth function $f$ on $[x_{j-1}, x_j]$, based on the GD, as follows:
$$f(x) \approx f(m_j) + Gf(m_j)\,(x - m_j), \qquad x \in [x_{j-1}, x_j]. \tag{16}$$

Moreover, we can approximate the function $f$ on the interval $[a, b]$ as follows:
$$\tilde f(x) = f(m_j) + Gf(m_j)\,(x - m_j) \quad \text{for } x \in [x_{j-1}, x_j], \; j = 1, \dots, n. \tag{17}$$
We restrict our treatment to the first-order generalized derivative in order to obtain a practical and useful general approach. In this connection, we note that $\tilde f$ is a piecewise linear approximation of the nonsmooth function $f$ on $[a, b]$.
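The following sketch realizes the expansion (17) under our reconstruction above; the midpoints $m_j$, a callable g supplying the generalized derivative (a central finite difference may stand in for the GD of Section 2), and a vectorized f are our assumptions.

```python
import numpy as np

def pl_surrogate(f, g, a, b, n):
    """Piecewise linear surrogate of (17): on each subinterval
    [x_{j-1}, x_j], f is replaced by f(m_j) + g(m_j) * (x - m_j),
    where m_j is the midpoint and g a (generalized) derivative."""
    xs = np.linspace(a, b, n + 1)
    mids = 0.5 * (xs[:-1] + xs[1:])
    fm, gm = f(mids), g(mids)          # f and g must accept arrays
    def f_tilde(x):
        j = np.clip(np.searchsorted(xs, x, side="right") - 1, 0, n - 1)
        return fm[j] + gm[j] * (x - mids[j])
    return f_tilde
```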

Theorem 11. Let $f$ be a continuous nonsmooth function on the interval $[a, b]$ and let the mesh sizes be sufficiently small. The best linear approximation of $f$ on each subinterval $[x_{j-1}, x_j]$, among linear functions passing through the point $(m_j, f(m_j))$, is the function $\tilde f$ defined by (17).

Proof. Fix $j$ and consider an arbitrary linear function through the point $(m_j, f(m_j))$ with slope $c$. Since the mesh sizes are sufficiently small, the deviation of this linear function from $f$ on $[x_{j-1}, x_j]$ is governed by the choice of $c$. On the other hand, by the constraints of problem (1), the GFD value $Gf(m_j)$ minimizes this deviation among all admissible slopes. Thus the best linear approximation of $f$ on $[x_{j-1}, x_j]$ passing through $(m_j, f(m_j))$ is $\tilde f$.

The approximation (17) will be used in our nonsmooth optimization approach in the next part. The next theorem indicates how to find the minimum of a nonsmooth function in our approach.

Theorem 12. Let $f$ be a nonsmooth function on the interval $[a, b]$ and let the parameters of problem (1) be sufficiently small. Then
$$\lim_{n \to \infty} \min_{x \in [a, b]} \tilde f(x) = \min_{x \in [a, b]} f(x),$$
where $\tilde f$ is defined by (17).

Proof. If $n$ tends to infinity, then by the definition of the points $m_j$ the subinterval lengths tend to zero, and by the constraints of problem (2) the surrogate $\tilde f$ converges to $f$ on each subinterval. Hence, for every $x \in [a, b]$ there exists an index $j$ such that $x \in [x_{j-1}, x_j]$ and $\tilde f(x) \to f(x)$; in particular, $\min_x \tilde f(x)$ is bounded above by $\min_x f(x)$ up to an error that vanishes as $n \to \infty$. On the other hand, there is an index $k$ such that the minimizer of $\tilde f$ lies in $[x_{k-1}, x_k]$, where $\tilde f$ likewise converges to $f$, so $\min_x \tilde f(x)$ is bounded below by $\min_x f(x)$ up to a vanishing error. Combining these two relations, we conclude the assertion.

By the main result of Theorem 12, we can approximate problem (15) by the following problem:
$$\min_{x \in [a, b]} \tilde f(x), \tag{25}$$
where $n$ is a sufficiently large number.

A well-known way to seek the optimal solution of problem (25) is to convert it to a min-max problem, based on the following remark.

Remark 13. Finding the minimum of a function $f$ on a certain domain is the same as finding the maximum of $-f$ on that domain.
So, to attain a solution of problem (25), we consider the equivalent problem of maximizing $-\tilde f$ over $[a, b]$, denoted (26). Introducing auxiliary variables that bound the linear pieces of $-\tilde f$, problem (26) becomes equivalent to a linear programming problem, denoted (28), whose decision variables are the auxiliary variables and the piece data of $\tilde f$. After solving the LP problem (28), we obtain its optimal solutions; we then choose the point $x^*$ at which the optimal value is attained. Now, $x^*$ is an approximate optimal solution of the main problem (15).
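Since each piece of $\tilde f$ is linear, its minimum over a subinterval is attained at an endpoint. The sketch below exploits this to minimize the surrogate directly; it is an illustrative stand-in for solving the LP (28), whose exact formulation we do not reproduce.

```python
import numpy as np

def minimize_pl(f, g, a, b, n):
    """Minimize the piecewise linear surrogate (17) of f on [a, b].
    Each linear piece attains its minimum at a subinterval endpoint,
    so scanning both endpoints of every piece suffices."""
    xs = np.linspace(a, b, n + 1)
    mids = 0.5 * (xs[:-1] + xs[1:])
    fm, gm = f(mids), g(mids)
    left = fm + gm * (xs[:-1] - mids)    # piece values at left endpoints
    right = fm + gm * (xs[1:] - mids)    # piece values at right endpoints
    vals = np.minimum(left, right)
    j = int(np.argmin(vals))
    x_star = xs[j] if left[j] <= right[j] else xs[j + 1]
    return x_star, float(vals[j])
```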
Note that the above-mentioned theorems also hold for problem (15) with a multivariable function $f$. Without loss of generality, let the feasible set be a box in $\mathbb{R}^2$, partitioned into cells with midpoints $m_k$ such that the cells cover the box. Here, we can approximate the function $f$ by a linear approximation on each cell as follows:
$$\tilde f(x) = f(m_k) + \sum_{i=1}^{2} G_i f(m_k)\,\big(x_i - (m_k)_i\big) \quad \text{for } x \text{ in cell } k, \tag{29}$$
where $G_i f(m_k)$, $i = 1, 2$, are the GPDs of $f$ with respect to $x_i$ at the point $m_k$. Moreover, similarly to Theorem 11, we can prove that $\tilde f$ is the best linear approximation of $f$ on each cell, and the analogue of Theorem 12 holds.
Similarly to the one-variable case, we obtain an LP problem, denoted (31), whose solution approximates the optimal solution of the nonsmooth problem; its decision variables are the auxiliary variables and the piece data of $\tilde f$ over the cells. After solving the LP problem (31), we obtain its optimal solutions; we then choose the point $x^*$ at which the optimal value is attained. Now, $x^*$ is an approximate optimal solution of the main problem (15).
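A two-variable analogue, again an illustrative stand-in for the LP (31) rather than a reconstruction of it: on each grid cell the surrogate is linear, and a linear function on a box attains its minimum at a corner, so scanning the cell corners suffices. The callables g1 and g2, supplying the GPDs with respect to $x_1$ and $x_2$, are our assumptions.

```python
import numpy as np
from itertools import product

def minimize_pl_2d(f, g1, g2, box, n):
    """Minimize the cellwise-linear surrogate of f over a 2D box by
    evaluating the first-order expansion at every cell corner."""
    (ax, bx), (ay, by) = box
    xs, ys = np.linspace(ax, bx, n + 1), np.linspace(ay, by, n + 1)
    best_x, best_v = None, np.inf
    for i, j in product(range(n), range(n)):
        mx, my = 0.5 * (xs[i] + xs[i + 1]), 0.5 * (ys[j] + ys[j + 1])
        fm, gx, gy = f(mx, my), g1(mx, my), g2(mx, my)
        for cx, cy in product((xs[i], xs[i + 1]), (ys[j], ys[j + 1])):
            v = fm + gx * (cx - mx) + gy * (cy - my)
            if v < best_v:
                best_x, best_v = (cx, cy), v
    return best_x, best_v
```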

4. System of Nonsmooth Equations

We consider the system of nonsmooth equations in $n$ variables
$$F(x) = 0, \tag{33}$$
where $F = (f_1, \dots, f_n)$ is assumed to be a nonsmooth function. We assume that there exists a unique solution $x^*$ such that $F(x^*) = 0$. Much effort has been devoted to solving nonsmooth systems of equations (for more details, refer to [10, 11]). These methods are very useful, but they are not simple and practical in the nonsmooth case. Here, following Section 3, we introduce an approach based on the GD which is useful for nonsmooth equations.

In what follows, we first convert problem (33) to a corresponding NLP problem, and in the next step, by the linearization method of the previous section, we obtain an LP problem for approximating the solution of problem (33). Define the following NLP problem:
$$\min_{x \in S} \; h(x) := \sum_{i=1}^{n} |f_i(x)|, \tag{34}$$
where $S$ is a compact set containing the solution $x^*$.

Lemma 14. Let $\hat x$ be the optimal solution of problem (34). Then $\hat x$ is a solution of problem (33); that is, $F(\hat x) = 0$.

Proof. It is trivial that $h(x) \ge 0$ for all $x \in S$ and that $h(\hat x) \le h(x)$ for all $x \in S$. On the other hand, there is an $x^* \in S$ such that $F(x^*) = 0$, and hence $h(x^*) = 0$. So we have $0 \le h(\hat x) \le h(x^*) = 0$; hence $h(\hat x) = 0$ and $F(\hat x) = 0$.

Now, we can replace problem (33) with the NLP problem (34). Here, corresponding to Section 3, we use the linearization method based on the GD and solve the LP problem (28) for one-variable functions or problem (31) for multivariable functions (for more details on the GD, see Sections 2.1 and 2.2).
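Under our reading of problem (34), a root of $F$ is recovered as a minimizer of $h(x) = \sum_i |f_i(x)|$. The sketch below applies the one-variable surrogate minimizer minimize_pl from Section 3 to $h$ (for multivariable systems one would use minimize_pl_2d instead); the finite-difference stand-in for the GD of $h$ is our assumption.

```python
import numpy as np

def solve_eq(F, a, b, n):
    """Solve F(x) = 0 on [a, b] by minimizing h(x) = sum_i |F_i(x)|,
    as in problem (34); by Lemma 14, a minimizer with h = 0 is a root."""
    h = np.vectorize(lambda x: float(np.sum(np.abs(F(x)))))
    eps = 1e-6
    gh = lambda x: (h(x + eps) - h(x - eps)) / (2 * eps)  # FD stand-in for the GD
    return minimize_pl(h, gh, a, b, n)
```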

5. Numerical Examples

In this section, we present some numerical results in order to illustrate the performance of the established approach. Three NSOPs and three nonsmooth equations were solved; some of them are multidimensional problems. One of our aims is to show the efficiency of our approach in connection with nonsmooth functions.
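Before turning to the paper's own examples, a hypothetical end-to-end run of the sketches from Sections 3 and 4 on the stand-in problems $\min_{x \in [-1, 1]} |x|$ and $|x| = 0$ (not among the examples below) illustrates the pipeline:

```python
import numpy as np

# Stand-in problem: min |x| on [-1, 1], with exact minimizer x* = 0.
f = np.abs
eps = 1e-6
g = lambda x: (f(x + eps) - f(x - eps)) / (2 * eps)   # FD stand-in for the GD

print(minimize_pl(f, g, -1.0, 1.0, 200))              # expected: approx (0.0, 0.0)
print(solve_eq(lambda x: abs(x), -1.0, 1.0, 200))     # root of |x| = 0, approx (0.0, 0.0)
```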

Example 1. Consider a one-variable nonsmooth optimization problem whose exact solution is known. The objective function is nondifferentiable at one point of the interval, and according to problem (28), the achieved approximate solution is close to the exact one. The function is illustrated in Figure 1, and the GD of the function is shown in Figure 2.

Example 2. Consider the NSOP introduced by Zhang et al. in [2].
According to Figure 3, the exact global minimizer is known. The comparison of our results with those of [2] shows that our approach has acceptable precision and accuracy (see Table 1). The GD of the objective function is shown in Figure 4.

Example 3. Consider an NSOP in two variables whose exact minimizer and minimum value are known (see Figure 5). According to problem (31), our approach produces approximate solutions for different values of $n$ (see Table 2). Figures 6 and 7 show the graphs of the GD of the objective function with respect to $x_1$ and $x_2$, respectively.

Example 4. Consider a nonsmooth equation with a known exact root. According to problem (34), the achieved approximate solution is close to the exact root, with a correspondingly small function value. Figures 8 and 9 show the graph of the function and the graph of its GD, respectively.

Example 5. Consider a second nonsmooth equation. The function is nondifferentiable at one point of the interval. The location of the root can be seen in Figure 10; the achieved approximate solution is close to it, and the value of the function at this solution is correspondingly small. The graph of the GD of the function is illustrated in Figure 11.

Example 6. Consider a system of nonsmooth equations in the variables $x_1$ and $x_2$. The exact solution of the system is shown in Figure 12.
According to problem (34), we minimize $h(x) = \sum_i |f_i(x)|$; Table 3 presents the approximate solutions for different values of $n$.

6. Conclusions

We have shown that an NSOP can be approximated by a linear optimization problem whose solution yields an approximate solution of the original problem. In this approach, we utilize a new GD which is practical and useful for nonsmooth functions. By the same approach, it is also possible to solve systems of nonsmooth equations. The results of the numerical examples indicate that our approach is accurate and well suited to computational tasks.