Abstract

A novel filled function method is suggested for solving box-constrained systems of nonlinear equations. First, the original problem is converted into an equivalent global optimization problem. A novel filled function with one parameter is then proposed for solving the converted global optimization problem, and some of its properties are studied and discussed. Finally, an algorithm based on the proposed filled function for solving systems of nonlinear equations is presented. The objective function value can be reduced to at most a quarter of its current value in each iteration of the algorithm. The implementation of the algorithm on several test problems is reported with satisfactory numerical results.

1. Introduction

Systems of nonlinear equations arise in myriad applications, for example, in engineering, physics, mechanics, applied mathematics and sciences; see [1] for a more detailed description.

In this paper, we consider the following box-constrained system of nonlinear equations (for short, ): where the mapping is continuous and is a box.

Generally, systems of nonlinear equations are very difficult to solve directly. The typical methods for solving are optimization-based methods, in which is reformulated as an optimization problem. The most popular optimization-based methods involve solving the following optimization problem (for short ): to find solutions of . Note that the problem above is a box-constrained nonlinear least-squares problem. It is easy to see that the objective function satisfies , and that global optimal solutions of problem with zero objective function value correspond to solutions of .
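As an illustration of this least-squares reformulation, the following sketch builds a nonnegative merit objective from a hypothetical two-equation system. The system, the variable names, and the function names are our own illustrative assumptions, not taken from the paper:

```python
# Least-squares reformulation of a system of nonlinear equations.
# The system below is a made-up example, not one of the paper's test problems.

def f(x):
    """Residuals of the hypothetical system f1 = x0^2 + x1 - 3, f2 = x0 + x1^2 - 5."""
    return [x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0]

def merit(x):
    """Nonnegative merit function; merit(x) = 0 exactly when x solves the system."""
    return sum(r * r for r in f(x))
```

For this example, `merit([1.0, 2.0])` vanishes because (1, 2) satisfies both equations, so a global minimizer of the merit function with zero value recovers a root of the system.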

Generally speaking, the traditional optimization-based methods for solving often get stuck at a stationary point or a local minimizer of the corresponding optimization problem, which is not necessarily a solution of the original system. Lately, great efforts have been made to overcome the difficulty caused by nonglobal minimizers. In particular, some switching techniques [2–6] have been developed to escape from a stationary point or a local minimizer which is not a solution of .

Kanzow [3] incorporated two well-known global optimization algorithms, namely, a tunneling method [7] and a filled function method [8], into a standard nonsmooth Newton-type method for solving a nonsmooth system of equations that is a reformulation of the mixed complementarity problem. Wu et al. [9] and Lin et al. [10, 11] also gave filled function methods for solving a nonlinear system with box constraints. Wang et al. [12] gave a filled function method for solving an unconstrained nonlinear system. In this paper, we propose another kind of filled function method for solving box-constrained systems of nonlinear equations. Unlike [3], we do not use any Newton-type methods to solve . Also, unlike [9, 10], where a better initial point of the primal optimization problem sometimes cannot be obtained by minimizing the constructed filled function locally, and unlike [11, 12], where an exponential term is used in the construction of the filled function, which may increase the computational difficulty in numerical optimization, in this paper we use an efficient filled function method to solve the corresponding optimization problem, and the local minimizer of the filled function is always obtained in the interior and is always a better point. The objective function value can be reduced to at most a quarter of its current value in each iteration of our algorithm.

The existence of local minimizers other than global ones makes global optimization a great challenge. As one of the main methods for solving general unconstrained or box-constrained global optimization problems without special structural properties, the filled function method has attracted extensive attention; see [8–15]. The main idea of the filled function method is to construct an auxiliary function, called a filled function, via the current local minimizer of the original optimization problem, with the property that the current local minimizer is a local maximizer of the constructed filled function and a better initial point of the primal optimization problem can be obtained by minimizing the constructed filled function locally. However, generally speaking, a local minimizer of the filled function is not guaranteed to be a better point of the primal optimization problem. In this paper, we propose a new filled function method which ensures that the proposed function is an efficient filled function, that the local minimizer of the new filled function on a given box set is a better point, and that the primal problem's objective value at this better point is reduced to at most a quarter of the current value in each iteration of our filled function algorithm.

The numerical results obtained show that our method is applicable and efficient. The paper is organized as follows. Following this introduction, a novel filled function is proposed for the optimization problem in Section 2. The corresponding algorithm is presented in Section 3. In Section 4, several numerical examples are reported. Finally, some conclusions are drawn in Section 5.

2. Filled Function for the Optimization Problem

Throughout this paper we make the following assumption.

Assumption 1. has at least one solution in and the number of solutions of is finite.

Suppose that is a local minimizer of problem ; the definition of the filled function is as follows.

Definition 2. A continuously differentiable function is called a filled function of problem at if it satisfies the following conditions:
(1) is a strict local maximizer of on ;
(2) has no stationary point in the region ;
(3) if is not a global minimizer of problem , then does have a minimizer in the region .

These conditions of the new filled function ensure that when a descent method, for example, the steepest descent method, is employed to minimize the constructed filled function, the sequence of iteration points will not terminate at any point at which the objective function value is larger than ; if is not a global minimizer of problem , then there must exist a minimizer of the filled function at which the objective function value is less than or equal to , namely, any local minimizer of must belong to the set . Therefore, the present local minimizer of the objective function is escaped from, and a better minimizer can be found by a local search algorithm starting from the minimizer of the filled function.

Let denote the set of local minimizers of problem and let denote the set of global minimizers of problem .

In the following, a novel filled function with one parameter satisfying Definition 2 is introduced. To begin with, we design a continuously differentiable function with the following properties: it is equal to 0 when .

More specifically, we construct as follows:

It is not difficult to check that is continuously differentiable and decreasing on . Obviously, we have
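The precise auxiliary function used by the paper is the one constructed above. Purely as a hedged illustration of the stated properties (continuously differentiable, non-increasing, and equal to 0 on the non-negative half-line), one simple candidate of this type is sketched below; this particular formula is our own assumption, not the paper's:

```python
def g(y):
    """Hypothetical C^1, non-increasing function with g(y) = 0 for y >= 0.

    For y < 0 it equals y**2, so g is strictly decreasing there, and the
    derivative 2*y tends to 0 as y -> 0-, matching the zero derivative
    from the right; hence g is continuously differentiable on all of R.
    """
    return 0.0 if y >= 0.0 else y * y
```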

Given , the following filled function with one parameter is constructed: where the only parameter . Clearly, is continuously differentiable on .

The following theorems show that satisfies Definition 2 when the positive parameter is sufficiently large.

Theorem 3. Let , . Then, is a strict local maximizer of on .

Proof. Since , there exists a neighborhood of with such that for all , where . Then, for any , , , and , we have . Thus, is a strict local maximizer of on .

Theorem 3 reveals that the proposed new filled function satisfies condition (1) of Definition 2.

Theorem 4. Let , . Then, has no stationary point in the region .

Proof. Assume that , namely, and .
Then, we have . This implies that the function has no stationary point in the region .

Theorem 4 reveals that the proposed new filled function satisfies condition (2) of Definition 2.

Theorem 5. Let , but , and suppose that satisfies Assumption 1. Then, does have a minimizer in the region when is sufficiently large.

Proof. Since , but , and the global minimum of is zero, there exists an such that . By the continuity of and Assumption 1, there exists that is small enough and , and it holds , for all , where .
Here, we just give the proof to the case when . For the other case when , the proof is similar.
Therefore, for each , there are two cases:
(1) ;
(2) .
For case (1), by and , . For case (2), by and , we have if and only if , which is equivalent to . Let . Thus, there exists a sufficiently large as the function approaches . Consequently, it must hold that for all when . Thus, is a minimizer of when is sufficiently large.

Theorem 5 shows that, for all , satisfies condition (3) of Definition 2. The following theorems show that the function has some further interesting properties.

Theorem 6. Let and suppose that the following conditions hold:
(i) ;
(ii) .
Then, the inequality holds for all .

Proof. Since , we have . Therefore, holds for all .

Theorem 7. for all .

Proof. By the form of the filled function (3) and since , we have for all .

Remark 8. In the phase of minimizing the filled function, Theorems 3–5 guarantee that the present local minimizer of the objective function is escaped from and that the minimum of the filled function is always achieved at a point where the objective function value is not greater than a quarter of the current minimum of the objective function. Moreover, the proposed filled function does not include exponential terms. A continuously differentiable function is used in the constructed filled function, which possesses many good properties and is efficient in numerical implementation.

3. Filled Function Algorithm

The theoretical properties of the proposed filled function were discussed in the last section. In this section, a global optimization method for solving problem is presented based on the constructed filled function (3), which leads to a solution or an approximate solution to .

Suppose that has at least one solution and the number of solutions is finite. The general idea of the global optimization method is as follows.

Let be a given initial point. Starting from this initial point, a local minimizer of problem is obtained with a local minimization method (Newton method, Quasi-Newton Method, or Conjugate Gradient method). If is not a global minimizer, the main task is to find a better local minimizer of problem .
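The local minimization phase can be carried out by any box-constrained descent method. As a minimal stand-in for the Newton-type, quasi-Newton, or conjugate gradient solvers mentioned above, the sketch below uses projected gradient descent; the function names, step size, and iteration count are our own illustrative choices, not the paper's:

```python
def project(x, lower, upper):
    """Clip each component of x onto the box [lower, upper]."""
    return [min(max(xi, lo), hi) for xi, lo, hi in zip(x, lower, upper)]

def local_minimize(grad, x0, lower, upper, step=1e-2, iters=2000):
    """Projected gradient descent on a box: a simple placeholder for the
    Newton / quasi-Newton / conjugate gradient local solvers in the text."""
    x = project(x0, lower, upper)
    for _ in range(iters):
        g = grad(x)
        x = project([xi - step * gi for xi, gi in zip(x, g)], lower, upper)
    return x
```

Each iteration takes a gradient step and projects back onto the box, so all iterates remain feasible; for a smooth objective with a small enough step size this converges to a stationary point of the box-constrained problem.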

Consider the following filled function problem (for short ): where is given by (3).

Let be an obtained local minimizer of problem on ; then, by Theorem 5, we have . Starting from this initial point , we can obtain a local minimizer of problem . If is a global minimizer (namely, ), then is a solution of the system ; otherwise, locally solve problem . Let be the obtained local minimizer; then we have . Repeating this process, we finally obtain either a solution of the system or a sequence with , . For such a sequence , , and when is sufficiently large, can be regarded as an approximate solution of the system .

Let and ; then is called a -approximate solution of the system if and .
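A direct way to implement such a stopping test is to check the residuals against the tolerance. Since the paper's exact inequalities are not reproduced here, the specific norm used below (a max-norm over the residuals) is our own assumption, shown only as an illustration:

```python
def is_approx_solution(f, x, eps=1e-6):
    """Accept x as an eps-approximate solution when every residual |f_i(x)|
    is within eps (an illustrative max-norm reading of the tolerance test)."""
    return max(abs(r) for r in f(x)) <= eps
```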

The corresponding filled function algorithm for the global optimization problem is described as follows. The algorithm is referred to as FFSNE (the filled function method for ).

Algorithm FFSNE

Step 0. Choose small positive numbers , , a large positive number , and an initial value for the parameter (e.g., , , , and ). Choose a positive integer (e.g., ) and directions , , which are the coordinate directions. Choose an initial point . Set .
If , then let and go to Step 6. Otherwise, let and go to Step 1.

Step 1. Find a local minimizer of the problem by local search methods starting from . If , go to Step 6.

Step 2. Let where is defined by (1). Set and .

Step 3. Consider the following:
(a) If , set , and go to Step 5; otherwise, go to (b).
(b) If , set , and go to (c); otherwise, set , , and go to (a).
(c) If , go to (d); otherwise, set , and go to (b).
(d) If , then set , , and go to Step 1; otherwise, go to Step 4.

Step 4. Search for a local minimizer of the following filled function problem starting from : Once a point with is obtained in the process of searching, set , and go to Step 1; otherwise continue the process. Let be an obtained local minimizer of problem (12). If satisfies , then set , and go to Step 1; otherwise, set , and go to Step 3(b).

Step 5. If , go to Step 2.

Step 6. Let and stop.

In this algorithm, the termination criteria for the minimization of in Step 4 can be interpreted as follows. The purpose of minimizing is to find a “better” point in the set . If this is successful, that is, the solution obtained satisfies , then we can turn to Step 1 and restart minimizing the objective function with as a new starting point. If such a point is not found in the process of searching, we obtain the local minimizer of . By Theorems 4 and 5, we know that there must exist a local minimizer of which belongs to the set . Then, we turn to Step 1 and minimize starting from the local minimizer . Obviously, if is small enough, is large enough, and the direction set is large enough, then can be obtained by algorithm FFSNE within finitely many steps.
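The overall control flow of Steps 0–6 (local descent, an escape phase, the quarter-reduction acceptance test, and restart) can be sketched as follows. Since the paper's filled function is defined by (3) above and is not reproduced here, the escape phase in this sketch is replaced by plain random sampling in the box; all names, tolerances, and the demo objective are our own illustrative choices, not the paper's method:

```python
import random

def global_search(F, local_min, x0, lower, upper, eps=1e-8, max_cycles=50):
    """Skeleton of a filled-function-style global search.

    Alternates a local descent phase with an escape phase. A trial point is
    accepted only when its locally minimized objective value is at most a
    quarter of the current one, mirroring the algorithm's reduction test.
    The escape phase here is random restarting, a placeholder for
    minimizing the filled function of problem (12).
    """
    x = local_min(x0)
    while F(x) > eps and max_cycles > 0:
        max_cycles -= 1
        # escape phase: sample a trial point in the box and descend from it
        trial = [random.uniform(lo, hi) for lo, hi in zip(lower, upper)]
        y = local_min(trial)
        if F(y) <= F(x) / 4.0:  # quarter-reduction acceptance test
            x = y
    return x
```

In the real algorithm the trial point comes from minimizing the filled function, which guarantees (Theorems 4 and 5) that the acceptance test eventually succeeds unless the current point is already a global minimizer; random restarting offers no such guarantee and is used here only to show the control flow.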

4. Numerical Experiment

In this section, several sets of numerical experiments are presented to illustrate the efficiency of algorithm FFSNE. All the numerical experiments are implemented in MATLAB R2010b. In our programs, the local minimizers of problem and problem are obtained by the SQP method. Note that is used as the termination condition.

The symbols used in Table 3 are given in Table 1.

Throughout our computational experiments, the parameters in algorithm FFSNE are set as

Problem 9 (test problem in [16]). Consider
There are nine known solutions as shown in [16] (see Table 2).

Problem 10 (test problem in [16]). Consider

where , , , , , , and .

The known solution in [16] is .

Problem 11 (test problem in [16]). Consider
The known solution is .

Problem 12 (test problem in [16]). Consider
The known solution is and .

Problem 13 (test problem in [16]). Consider
The known solution is and .

Problem 14 (test problem in [16]). Consider
There are sixteen known solutions of this problem, as given in [16].

Problem 15 (test problem 1 in [17]). Consider
The known solution is .

Problem 16 (test problem 7 in [18]). Consider
The known solution is .

The numerical results are listed in Table 3. From Table 3, it is easy to see that all problems that we tested have been solved with a small number of iterations.

5. Conclusions

In this paper, a filled function with one parameter is constructed for solving box-constrained systems of nonlinear equations, and it is proved that it satisfies the basic properties in the definition of a filled function. Promising computational results have been observed in our numerical experiments. In the future, the filled function method may be applied to other problems, such as nonlinear systems of equalities and inequalities and nonlinear feasibility problems with expensive functions.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgment

This work was supported by the Natural Science Foundation of China (no. 71171150 and no. 51275366).