Abstract

In his survey of the continuous nonlinear resource allocation problem, Patriksson pointed out that Newton-type algorithms had not been proposed for solving the problem of search theory from a theoretical perspective. In this paper, we propose a Newton-type algorithm to solve this problem. We prove that the proposed algorithm has global and superlinear convergence. Numerical results indicate that the proposed algorithm is promising.

1. Introduction

We consider the problem described by (1), which is called the theory of search by Koopman [1] and Patriksson [2]. It has the following interpretation: an object is inside one of several boxes, each box has a known probability of containing the object, and each box has a parameter proportional to the difficulty of searching inside it. If the searcher spends a certain amount of time looking inside a box, then he/she will find the object with a probability that increases with the time spent. The problem described by (1) determines the optimum search strategy when the total available search time is limited. Problems of the form (1) arise, for example, in searching for a lost object, in distributing destructive effort such as a weapons allocation problem [3], in drilling for oil, and so forth [2]. Patriksson [2] surveyed the history and applications as well as algorithms for Problem (1); see [2, Sections 2.1.4, 2.1.5, and 3.1.2]. Patriksson pointed out that Newton-type algorithms had not been theoretically analyzed for the problem described by (1) in the references listed in [2].
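For concreteness, (1) can be read as the classical Koopman search-allocation model, which in commonly used (but here assumed) notation takes the form
\[
\max_{x \in \mathbb{R}^n} \; \sum_{i=1}^{n} p_i \bigl(1 - e^{-a_i x_i}\bigr)
\qquad \text{s.t.} \qquad e^{\mathsf T} x = T, \quad x \ge 0,
\]
where $e$ is the vector of ones, $p_i$ is the prior probability that the object is in box $i$, $a_i > 0$ is a detection parameter of box $i$ (related to the difficulty of searching it), $x_i$ is the time allocated to box $i$, and $T$ is the total available search time. The symbols $p_i$, $a_i$, $x_i$, and $T$ are not necessarily those of the original display (1).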

Recently, related problems and methods have been considered in many articles, for example, [4–6]. In particular, a projected pegging algorithm was proposed in [5] for solving convex quadratic minimization problems. However, the question raised by Patriksson [2] has not been answered in the literature. In this paper, we design a Newton-type algorithm to solve the problem described by (1) and show that it has global and superlinear convergence.

By means of the Fischer-Burmeister function [7], the problem described by (1) can be transformed into a semismooth equation. Based on the framework of the algorithms in [8, 9], a smoothing Newton-type algorithm is proposed to solve this semismooth equation. It is shown that the proposed algorithm generates a bounded iteration sequence. Moreover, the iteration sequence converges superlinearly to an accumulation point that solves the problem described by (1). Numerical results indicate that the proposed algorithm performs well even for problems with 10000 variables.

The rest of this paper is organized as follows. The Newton-type algorithm is proposed in Section 2. The global and superlinear convergence is established in Section 3. Section 4 reports some numerical results. Finally, Section 5 gives some concluding remarks.

The following notation will be used throughout this paper. All vectors are column vectors, the superscript $\mathsf T$ denotes transposition, $\mathbb{R}^n$ (resp., $\mathbb{R}$) denotes the space of $n$-dimensional real column vectors (resp., real numbers), and $\mathbb{R}^n_{+}$ and $\mathbb{R}_{+}$ (resp., $\mathbb{R}^n_{++}$ and $\mathbb{R}_{++}$) denote the nonnegative (resp., positive) orthants of $\mathbb{R}^n$ and $\mathbb{R}$. The derivative of a differentiable function is denoted by a prime. For any vector $z$, $\operatorname{diag}(z)$ denotes the diagonal matrix whose $i$th diagonal element is $z_i$. The symbol $\|\cdot\|$ stands for the 2-norm. The solution set of Problem (1) is denoted by a fixed symbol. For quantities depending on a parameter, $O(\cdot)$ (resp., $o(\cdot)$) means that the quantity remains uniformly bounded (resp., tends to zero) in the indicated limit.

2. Algorithm Description

In this section, we formulate the problem described by (1) as a semismooth equation and develop a smoothing Newton-type algorithm to solve the semismooth equation.

We first briefly recall the concepts of NCP functions, semismooth functions, and smoothing functions [10–12].

Definition 1. A function $\phi : \mathbb{R}^2 \to \mathbb{R}$ is called an NCP function if $\phi(a, b) = 0$ if and only if $a \ge 0$, $b \ge 0$, and $ab = 0$.

Definition 2. A locally Lipschitz function $F$ is called semismooth at $x$ if $F$ is directionally differentiable at $x$ and $F(x + d) - F(x) - V d = o(\|d\|)$ for all $V \in \partial F(x + d)$ and $d \to 0$, where $\partial F$ is the generalized Jacobian of $F$ in the sense of Clarke [13].
$F$ is called strongly semismooth at $x$ if $F$ is semismooth at $x$ and $F(x + d) - F(x) - V d = O(\|d\|^2)$ for all $V \in \partial F(x + d)$ and $d \to 0$.

Definition 3. Let $\mu \ge 0$ be a parameter. A function $f_\mu$ is called a smoothing function of a semismooth function $f$ if $f_\mu$ is continuously differentiable everywhere for every $\mu > 0$ and there is a constant $\kappa > 0$ independent of $\mu$ such that $|f_\mu(z) - f(z)| \le \kappa\mu$ for every $z$.

The Fischer-Burmeister function [7] is one of the well-known NCP functions; it is given by (6), namely $\phi(a, b) = \sqrt{a^2 + b^2} - a - b$. Clearly, the Fischer-Burmeister function defined by (6) is not smooth, but it is strongly semismooth [14]. Let the perturbed Fischer-Burmeister function be defined by (7); it is obtained from (6) by introducing a positive smoothing parameter $\mu$ under the square root. It is obvious that, for any $\mu > 0$, the perturbed function is differentiable everywhere and, at every point, differs from the Fischer-Burmeister function by at most a constant multiple of $\mu$. In particular, the two functions coincide when $\mu = 0$. Namely, the function defined by (7) is a smoothing function of the function defined by (6).
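As an illustration, the following NumPy sketch implements the Fischer-Burmeister function and one commonly used perturbed variant; the $2\mu^2$ term under the square root is an assumption, since the exact form of (7) is not reproduced here.

```python
import numpy as np

def fb(a, b):
    # Fischer-Burmeister NCP function: fb(a, b) = 0 iff a >= 0, b >= 0, a*b = 0.
    return np.sqrt(a * a + b * b) - a - b

def fb_perturbed(a, b, mu):
    # A commonly used smoothed variant (assumed form of (7)); for mu = 0 it
    # reduces to fb, and |fb_perturbed(a, b, mu) - fb(a, b)| <= sqrt(2) * mu.
    return np.sqrt(a * a + b * b + 2.0 * mu * mu) - a - b
```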

According to the Kuhn-Tucker theorem, the problem described by (1) can be transformed into its first-order optimality system (9), which couples the complementarity conditions between the allocation variables and the multiplier of the resource constraint with the resource constraint itself.
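Under the assumed notation introduced in Section 1, the Kuhn-Tucker system (9) presumably reads
\[
x_i \ge 0, \qquad \lambda - p_i a_i e^{-a_i x_i} \ge 0, \qquad x_i \bigl(\lambda - p_i a_i e^{-a_i x_i}\bigr) = 0, \quad i = 1, \dots, n, \qquad e^{\mathsf T} x = T,
\]
where $\lambda$ is the multiplier of the resource constraint; since (1) maximizes a concave function over a polyhedron, these conditions are both necessary and sufficient for optimality. Each pair $\bigl(x_i,\ \lambda - p_i a_i e^{-a_i x_i}\bigr)$ is a complementarity pair, which is what the Fischer-Burmeister reformulation below exploits.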

Define the mapping collecting these optimality conditions. Using the Fischer-Burmeister function defined by (6), we formulate (9) as a semismooth equation. Based on the perturbed Fischer-Burmeister function defined by (7), we obtain the smooth equation (12), whose unknowns are the smoothing parameter, the allocation variables, and the multiplier. Clearly, if a point solves (12), then its smoothing parameter is zero and its remaining components yield an optimal solution to the problem described by (1).
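In the same assumed notation, the mapping in (12) can be sketched as
\[
H(\mu, x, \lambda) =
\begin{pmatrix}
\mu \\
\phi_{\mu}\bigl(x_1,\ \lambda - p_1 a_1 e^{-a_1 x_1}\bigr) \\
\vdots \\
\phi_{\mu}\bigl(x_n,\ \lambda - p_n a_n e^{-a_n x_n}\bigr) \\
e^{\mathsf T} x - T
\end{pmatrix},
\]
where $\phi_{\mu}$ denotes the perturbed Fischer-Burmeister function; setting $\mu = 0$ in the middle block recovers the semismooth reformulation of (9). The exact scaling of the $\mu$-component in the paper's display (12) may differ, so this is only an illustrative sketch.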

We give some properties of the function defined by (12) in the following lemma; they will be used in the sequel.

Lemma 4. Let the function be defined by (12). Then it is semismooth on its domain and continuously differentiable at any point whose smoothing parameter is positive, with its Jacobian given by (14). Moreover, the Jacobian matrix in (14) is nonsingular at every such point.

Proof. The function defined by (12) is semismooth due to the strong semismoothness of the Fischer-Burmeister function, and it is continuously differentiable at any point whose smoothing parameter is positive. At such a point, (14) follows from (12) by a straightforward calculation. To prove nonsingularity, take any vector annihilated by the matrix in (14). The second equality in (18) forces one block of this vector to vanish; since the diagonal coefficients involved are positive whenever the smoothing parameter is positive, the first equality in (18) then forces the remaining block to vanish as well. Therefore, the matrix defined by (14) is nonsingular whenever the smoothing parameter is positive.

We now propose a smoothing Newton-type algorithm for solving the smooth equation (12). It is a modified version of the smoothing Newton method proposed in [8]. The main difference is that we add a different perturbation term to the Newton equation, which allows the algorithm to generate a bounded iteration sequence. Two constants are fixed, and an auxiliary function built from (12) is defined and used in the perturbation term and the line search below.

Algorithm  5

Step 0. Choose the algorithm constants and an initial smoothing parameter, take arbitrary starting points, and choose the remaining parameter so that the required initial conditions hold. Set the iteration counter to zero.

Step 1. If the stopping criterion is satisfied, stop. Otherwise, compute the current perturbation quantity.

Step 2. Compute the search direction by solving the perturbed Newton equation.

Step 3. Let the step exponent be the smallest nonnegative integer for which the line search condition is satisfied, and set the corresponding step length.

Step 4. Set the next iterate and increase the iteration counter by one. Go to Step 1.
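To make the construction concrete, the following NumPy sketch implements a simplified smoothing Newton iteration for the reformulation sketched above. It is not Algorithm 5 verbatim: the perturbation of the Newton equation is replaced by a plain continuation scheme that drives $\mu$ toward zero between damped Newton solves, the line search is a simple backtracking on the residual norm, and the problem data p, a, T as well as the $2\mu^2$ perturbation are assumptions.

```python
import numpy as np

def phi_mu(a, b, mu):
    # Perturbed Fischer-Burmeister function (assumed 2*mu^2 variant).
    return np.sqrt(a * a + b * b + 2.0 * mu * mu) - a - b

def residual(x, lam, mu, p, a, T):
    # Smoothed KKT residual of the assumed model
    #   max  sum_i p_i (1 - exp(-a_i x_i))   s.t.  sum_i x_i = T,  x >= 0.
    s = lam - p * a * np.exp(-a * x)              # multiplier of x_i >= 0
    return np.concatenate([phi_mu(x, s, mu), [np.sum(x) - T]])

def jacobian(x, lam, mu, p, a):
    n = x.size
    s = lam - p * a * np.exp(-a * x)
    r = np.sqrt(x * x + s * s + 2.0 * mu * mu)
    d1 = x / r - 1.0                              # d phi / d(first argument)
    d2 = s / r - 1.0                              # d phi / d(second argument)
    ds_dx = p * a * a * np.exp(-a * x)            # d s_i / d x_i
    J = np.zeros((n + 1, n + 1))
    J[:n, :n] = np.diag(d1 + d2 * ds_dx)
    J[:n, n] = d2
    J[n, :n] = 1.0
    return J

def smoothing_newton(p, a, T, tol=1e-8, mu=1.0, shrink=0.2):
    # Continuation on mu, damped Newton on the smooth system for each mu.
    n = p.size
    x, lam = np.full(n, T / n), 1.0
    for _ in range(60):
        for _ in range(50):
            F = residual(x, lam, mu, p, a, T)
            if np.linalg.norm(F) <= 0.1 * mu:
                break
            d = np.linalg.solve(jacobian(x, lam, mu, p, a), -F)
            t = 1.0                               # backtracking on the residual norm
            while (np.linalg.norm(residual(x + t * d[:n], lam + t * d[n], mu, p, a, T))
                   > (1.0 - 1e-4 * t) * np.linalg.norm(F)) and t > 1e-12:
                t *= 0.5
            x, lam = x + t * d[:n], lam + t * d[n]
        if np.linalg.norm(residual(x, lam, 0.0, p, a, T)) <= tol:
            break
        mu *= shrink
    return np.maximum(x, 0.0), lam

# Small made-up instance: three boxes, four time units.
p = np.array([0.2, 0.3, 0.5])
a = np.array([1.0, 0.5, 2.0])
x, lam = smoothing_newton(p, a, T=4.0)
print(x, x.sum())
```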

The following theorem proves that Algorithm  5 is well defined.

Theorem 5. Algorithm 5 is well defined. If it terminates finitely at some iteration, then the final iterate yields an optimal solution to the problem described by (1). Otherwise, it generates an infinite sequence of iterates whose smoothing parameters remain positive and satisfy the bound maintained by the algorithm.

Proof. If the smoothing parameter at the current iterate is positive, then Lemma 4 shows that the Jacobian matrix is nonsingular; hence Step 2 is well defined at that iteration. For the line search, consider the trial point associated with an arbitrary step length. It follows from (21) that the auxiliary function can be bounded along the Newton direction. From Lemma 4, the mapping is continuously differentiable around the current iterate; thus (23) gives a first-order expansion of the merit function along the direction, while (20) gives the value predicted by the Newton equation. Therefore, for any sufficiently small step length, (28) holds, where the first inequality follows from (21) and (23) and the second one follows from (26) and (27). Inequality (28) implies that the line search condition is satisfied for all sufficiently small steps, which shows that Step 3 is well defined at the current iteration. In addition, by (24) and Steps 3 and 4 of Algorithm 5, the smoothing parameter remains positive at the next iterate, since the step length and the previous smoothing parameter are positive. Consequently, Algorithm 5 is well defined.
It is obvious that if Algorithm 5 terminates finitely at some iteration, then the residual vanishes there, which implies that the smoothing parameter is zero and the iterate satisfies (9). Hence, it yields an optimal solution to the problem described by (1).
Now assume that Algorithm 5 does not terminate finitely, and let the iteration sequence be generated by the algorithm; its smoothing parameters are then positive. We prove by induction that every iterate satisfies the required bound. The bound clearly holds at the initial point. Assume that it holds at the current iterate; then (24) provides an estimate for the next smoothing parameter, and it follows from (20) and (22) that the corresponding residual is controlled accordingly. Hence, combining (31), (32), and (33), we obtain the bound at the next iterate, which gives the desired result.

3. Convergence Analysis

In this section we establish the convergence properties of Algorithm 5. We show that the sequence generated by Algorithm 5 is bounded and that any of its accumulation points yields an optimal solution to the problem described by (1). Furthermore, we show that the sequence is superlinearly convergent.

Theorem 6. The sequence generated by Algorithm 5 is bounded. Moreover, at any accumulation point of this sequence the smoothing parameter equals zero, and the accumulation point yields an optimal solution of the problem described by (1).

Proof. By Theorem 5, the merit function values form a monotonically decreasing sequence and hence converge. It then follows from (22) that the sequence of smoothing parameters also converges. Consequently, the merit function values are bounded by a constant for all iterations, which implies that the allocation and multiplier components of the iterates are bounded as well; hence the whole iteration sequence is bounded. Let an accumulation point be chosen and assume, without loss of generality, that the sequence converges to it; then, by (20), the associated quantities converge as well. Suppose, for contradiction, that the residual does not vanish at the accumulation point; then the limiting smoothing parameter is positive. From Theorem 5 and (22), the quantities entering the line search remain bounded away from zero, and Step 3 of Algorithm 5 then gives inequality (40). Taking limits in (40) and combining with (21) yields (42). Since the limiting smoothing parameter is positive, (42) leads to a contradiction with the choice of the algorithm parameters. Therefore the residual vanishes at the accumulation point, the smoothing parameter is zero there, and the accumulation point yields an optimal solution of the problem described by (1).

We next analyze the rate of convergence of Algorithm 5. By Theorem 6, we know that Algorithm 5 generates a bounded iteration sequence and that this sequence has at least one accumulation point. The following lemma will be used in the sequel.

Lemma 7. Suppose that an accumulation point of the iteration sequence generated by Algorithm 5 is given, and let a matrix belong to the generalized Jacobian of the mapping at this point. Then this matrix is nonsingular.

Proof. Define two index sets according to whether or not the corresponding argument pair of the Fischer-Burmeister function vanishes at the accumulation point. By a direct computation, every matrix in the generalized Jacobian has a block structure in which two sparse blocks carry exactly one nonzero entry per row, on the first and on the second index set, respectively. Take any vector annihilated by such a matrix. The resulting relations, together with (47), force one block of the vector to vanish, and (49) then forces the remaining block to vanish as well. Hence the matrix is nonsingular.

Theorem 8. The iteration sequence generated by Algorithm 5 converges superlinearly to its accumulation point.

Proof. By Theorem 6, the iteration sequence is bounded; fix any accumulation point of it. By Lemma 7, every matrix in the generalized Jacobian at this point is nonsingular, and hence, by Proposition 3.1 in [12], the Jacobians along the sequence have uniformly bounded inverses for all iterates sufficiently close to the accumulation point. From Lemma 4, the mapping is semismooth at this point; hence the semismoothness estimate holds for all iterates sufficiently close to it and, since the mapping is locally Lipschitz continuous near the accumulation point, the residual is of the same order as the distance to the accumulation point. These estimates, together with (51) and (52), yield (55). Following the proof of Theorem 3.1 in [15] and using (55), we obtain (56) for all iterates sufficiently close to the accumulation point; it then follows from (56) that the unit step length satisfies (22) when the iterate is sufficiently close. Therefore, for all such iterates, the distance of the next iterate to the accumulation point is of smaller order than that of the current one, which, together with (55), proves that the sequence converges superlinearly to the accumulation point.

4. Computational Experiments

In this section, we report some numerical results to show the viability of Algorithm 5. First, we compare the numerical performance of Algorithm 5 and the algorithm in [5] on two classes of randomly generated problems. Second, we apply Algorithm 5 to two real-world examples. Throughout the computational experiments, the parameters of Algorithm 5 were kept fixed at the same values in all runs, and in Step 1 a small tolerance on the residual norm was used as the stopping rule. The vector of ones was used as the starting point in all tests.

First, problems of the form (1) with 100, 500, 1000, 5000, and 10000 variables were solved. In the first randomly generated example, the problem data were drawn uniformly from fixed intervals. In the second randomly generated example, the data were generated in the same way but from different intervals. Each problem was run several times, and the numerical results are summarized in Tables 1 and 2, respectively. Here, Dim denotes the number of variables, and AT [5] denotes the average run time in seconds used by the algorithm in [5]. For Algorithm 5 we list more items: Iter denotes the average number of iterations, CPU (sec.) the average run time, and Gval the average value of the residual function at the final iterate.
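For reference, a test instance of the kind described above can be generated along the following lines; the sampling intervals are placeholders, not the intervals used in the paper, and the resulting data can be passed directly to the smoothing_newton sketch given after Algorithm 5.

```python
import numpy as np

def random_instance(n, seed=0):
    # Hypothetical generator for problems of the form (1); intervals are made up.
    rng = np.random.default_rng(seed)
    p = rng.uniform(0.1, 1.0, n)
    p = p / p.sum()                    # prior probabilities
    a = rng.uniform(0.5, 2.0, n)       # detection parameters
    T = float(rng.uniform(1.0, 10.0))  # total available search time
    return p, a, T
```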

The numerical results reported in Tables 1 and 2 show that the proposed algorithm solves the test problems much faster than the algorithm in [5] when the problem size is large.

Second, we apply Algorithm 5 to two real-world problems. The first example is described in [16]. The problem is how to allocate a fixed total amount of effort among independent activities so as to maximize the total return, where the return from each activity is a saturating function of the effort devoted to it: each activity has an attainable potential and a rate at which that potential is approached as effort increases, and the return is zero when no effort is devoted to the activity. This example usually arises in marketing; the activities may correspond to different products, or to the same product in different marketing areas, different advertising media, and so forth. In this example we wish to allocate one million dollars among four activities with the potentials and rates given in Table 3.
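Under the assumed notation, the return from activity $i$ with potential $b_i$ and rate $a_i$ for an effort $x_i$ is presumably of the saturating form
\[
r_i(x_i) = b_i \bigl(1 - e^{-a_i x_i}\bigr), \qquad r_i(0) = 0,
\]
so the example asks to maximize $\sum_{i=1}^{4} r_i(x_i)$ subject to $\sum_{i=1}^{4} x_i = 1$ (million dollars) and $x \ge 0$; this is an illustrative reading of [16], not a formula quoted from it.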

For this problem, Algorithm 5 obtained the maximum total return within 0.0625 seconds of CPU time. The total effort of one million dollars was split between activities A and B.

The second example is to search for an object over several regions, where each region has a prior probability of containing the object and a known probability of the object being found there as a function of the time spent searching that region. The data of this example, listed in Table 4, come from Professor R. Jiang of the Jiaozhou Bureau of Water Conservancy. For the given amount of available search time, Algorithm 5 computed the optimal allocation; the result shows that the available time should be divided between Region II and Region IV in order to find water.

5. Conclusions

In this paper we have proposed a Newton-type algorithm to solve the problem of search theory. We have shown that the proposed algorithm has global and superlinear convergence. Some randomly generated problems and two real world problems have been solved by the algorithm. The numerical results indicate that the proposed algorithm is promising.

Acknowledgment

This work was supported by NSFC Grant (11271221).