Abstract

Determining the winners in combinatorial auctions so as to maximize the auctioneer's revenue is an NP-complete problem, and computing an optimal solution requires huge computation time for some instances. In this paper, we apply three concepts from game theory to design an approximation algorithm: the stability of the Nash equilibrium, the self-learning of the evolutionary game, and the mistake making of the trembling-hand assumption. According to our simulation results, the proposed algorithm produces near-optimal solutions in terms of the auctioneer's revenue. Moreover, reasonable computation time is another advantage of applying the proposed algorithm to real-world services.

1. Introduction

Combinatorial auctions support complementarity in bidding: a bidder can bid on a bundle of several goods with a single price. Combinatorial auctions are useful in settings where bidders value combinations of goods. For example, the Federal Communications Commission (FCC) used a combinatorial auction to sell electromagnetic spectra. Most bidders in that auction prefer to buy successive, rather than nonsuccessive, spectra. If the FCC sold a group of spectra with traditional auctions, for example, English auctions, a bidder might win some nonsuccessive spectra. Such winners would then drop out of the auction because their expected utility is not achieved [1]. Since the FCC used combinatorial auctions to sell electromagnetic spectra, combinatorial auctions have been widely applied to optimization problems, such as the study of economic performance [2] and task assignment [3].

Determining the winners in combinatorial auctions is called the winner determination problem (WDP) [4, 9]. Solving the WDP to maximize the auctioneer's revenue is an NP-complete problem [5], so the auctioneer requires huge computation time to solve some large-scale WDP instances. Currently, the approaches to solving WDPs can be classified into three categories.
(1) The optimal algorithms can find the optimal solutions, but huge computation cost is required for some problem instances, such as [5, 7].
(2) The approximation algorithms can find the winners rapidly, but they do not guarantee finding the optimal solutions for all kinds of instances, such as [8, 10, 11].
(3) The restricted algorithms can find the optimal solutions rapidly, but they can only be used for combinatorial auctions with some restricted conditions, such as [6].

Although the optimal algorithms can find the solutions with maximum auctioneer's revenue, they require long CPU time for some instances. This approach is appropriate for small-scale problems; it is not suitable for large-scale or on-line settings because of its huge computation cost. The restricted algorithms are useful for problems with special properties: they rapidly find an optimal solution in a constrained problem. If the auctioneer applies a restricted algorithm to a general problem, the solution may not reach the expected objective. Compared with the optimal algorithms, the second approach can find the winners rapidly. Although the optimal solution is not guaranteed, finding approximate solutions saves considerable computation cost. Furthermore, the auctioneer must build the auction environment to fit the requirements of a restricted algorithm, whereas this is unnecessary with the second approach. Therefore, a better approximation algorithm is more efficient and convenient for auctioneers solving WDPs.

In this paper, we formulate the WDP as a noncooperative game and apply several concepts from game theory to design an approximation algorithm for solving WDPs. We treat each auction good as a player who can determine its winner. In the proposed noncooperative game, all players use the auctioneer's revenue as their utility function. Our proposed algorithm adopts the stability of Nash equilibria (NEs), so that the obtained solutions are accepted by all players [12-14]. We utilize the evolutionary game to iteratively improve the solution quality and consider the trembling-hand assumption to increase the solution diversity. According to our simulation results, our proposed algorithm achieves 97.81% of the revenue of the optimal solutions. Moreover, our proposed algorithm is more efficient and convenient for auctioneers determining winners in combinatorial auctions.

2. Related Work

Finding the optimal solutions of general WDPs has been proved to be an NP-complete problem [5]. The approaches to solving general WDPs can be divided into two categories: algorithms that find the optimal winners and approximation algorithms. The optimal algorithms compute the winners with maximum revenue, and the research goal is to decrease the computation cost. The approximation algorithms focus on increasing the solution quality within a reasonable running time.

Sandholm et al. proposed an optimal algorithm, named Combinatorial Auction Branch On Bids (CABOB), to find the winners with maximum auctioneer's revenue [7]. A divide-and-conquer step partitions the original WDP instance into subproblems. Then, CABOB applies linear programming and branch-and-bound techniques to eliminate unnecessary calculations and thus decreases the computation cost. Although the computation efficiency is improved, the auctioneer still requires huge running time for some instances to obtain the optimal solution with CABOB. For large-scale WDP instances, decreasing the computation cost is the major consideration in designing optimal algorithms.

Another approach is to develop heuristic algorithms for increasing the solution quality. Avasarala et al. observed that the bid ranking affects the solution quality [10]. They formulate solutions as bit strings in which each element represents a bid, and a genetic algorithm (GA) is applied to solve the WDP. When generating the initial population, each gene is given a rank as the evaluation sequence. Next, the crossover and mutation operators are applied to generate various solutions. After some iterations, the solution quality is improved. However, parameter setting is a major problem when applying GAs to solve WDPs, including the population size, crossover rate, and mutation rate. Thus, the auctioneer requires some preprocessing to determine suitable parameter settings.

Hoos and Boutilier applied stochastic local search to design the Casanova [8]. The Casanova evaluates as many feasible solutions as possible. Furthermore, the Casanova uses the revenue-per-item (RPI) of each bid to determine the evaluation order and keeps track of the age of each bid. The evaluation order affects the solution quality, and many studies have proposed heuristics to determine the order [15]. Generating solutions rapidly is the advantage of the Casanova, but increasing the solution quality is its major implementation issue.

The optimal algorithms and the approximation algorithms have different advantages in solving WDPs; the advantage of one approach is the drawback of the other. In this paper, we propose an algorithm that combines the advantages of the above approaches to maximize the auctioneer's revenue and minimize the computation cost. The proposed algorithm can compute a solution rapidly and improve the solution quality iteratively. Thus, our proposed algorithm is valuable for real-world implementations.

3. Problem Model

3.1. Winner Determination Problem

The WDP consists of a bid set B = {b_1, ..., b_n} and a good set G = {g_1, ..., g_m}. Each bid b_j includes a bundle S_j ⊆ G and a bid price p_j. The bidder j pays p_j to buy all goods claimed in S_j if b_j is declared as a winner. Assume that a bidder proposes only one bid. The auctioneer's objective is to compute an assignment x_j ∈ {0, 1} for all bidders j to maximize the revenue. When x_j = 1, the bidder j is a winner, and he can buy the bundle S_j by paying p_j; he can buy nothing when x_j = 0. Therefore, the WDP can be formulated as an integer linear program shown as follows:

\max \sum_{j=1}^{n} p_j x_j, \quad (1)

\text{subject to } \sum_{j:\, g_i \in S_j} x_j \le 1 \quad \forall g_i \in G, \qquad x_j \in \{0, 1\}. \quad (2)

The WDP has one constraint, shown in (2). The candidates for winning the good g_i are the bidders whose bundles include g_i, that is, the bids b_j with g_i ∈ S_j. Equation (2) states that g_i can be assigned to at most one bidder. In other words, only one candidate of g_i can be the winner.

We use the example illustrated in Table 1 to show the WDP. There are two solutions: S_1 with revenue 8 and S_2 with revenue 7. The constraint in (2) indicates that each good can be sold to at most one bidder, so either bidder 1 or bidder 2 can be the winner of g_2. In this example, S_1 maximizes the auctioneer's revenue, selling g_2 to bidder 1 and g_3 to bidder 3.
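To make the formulation concrete, the following is a minimal Python sketch of the objective (1) and constraint (2). The bid data are hypothetical: the bundles and individual prices (bidder 1 on {1, 2} for 5, bidder 2 on {2, 3} for 7, and bidder 3 on {3} for 3) are illustrative assumptions consistent with the three-bidder example above, not values taken from Table 1.

from itertools import combinations

# bidder id -> (bundle of good ids, bid price); hypothetical data
bids = {1: ({1, 2}, 5), 2: ({2, 3}, 7), 3: ({3}, 3)}

def feasible(winners):
    """Constraint (2): each good is sold to at most one winner."""
    sold = set()
    for b in winners:
        if bids[b][0] & sold:
            return False
        sold |= bids[b][0]
    return True

def revenue(winners):
    """Objective (1): the auctioneer's revenue of a winner set."""
    return sum(bids[b][1] for b in winners)

# Brute-force optimum, viable only for tiny instances.
best = max((w for r in range(len(bids) + 1) for w in combinations(bids, r) if feasible(w)),
           key=revenue)
print(best, revenue(best))   # (1, 3) 8: bidder 1 and bidder 3 win, revenue 8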

3.2. Proposed Game Model

We formulate the WDP as a noncooperative game in which every player uses a common utility function. The elements of this game are listed as follows.
(i) A set of players N. We treat each good g_i as a player in N.
(ii) A set of all players' strategies A = A_1 × ... × A_m. For each player g_i, the strategy set is A_i = {b_j : g_i ∈ S_j} ∪ {∅}. This means that g_i can determine the winner of g_i, where g_i must appear in the bundle of each winner candidate b_j. Moreover, g_i is unsold for the strategy ∅.
(iii) A utility function u_i. Each player's utility function is the revenue of the auctioneer in the WDP, as shown in (1).
Given a profile of all players' actions a = (a_1, ..., a_m), the outcome of a is a winner set of the WDP. Each action a_i in a points out the winner of g_i, or g_i is unsold for a_i = ∅. Therefore, we can transform a solution of the game into a solution of the WDP. Considering a WDP instance, let a and x be the solutions of the game and the WDP, respectively. Supposing a_i = b_j in a, we have x_j = 1 in x. Thus, we have r(a) = Σ_{j : S_j sold} p_j, where "sold" means that the bundle S_j is sold, that is, there exists a player g_i ∈ S_j such that a_i = b_j.

We use the example shown in Table 1 to explain the proposed game. Each good is a player, so the game consists of three players. The strategy set A_i is the set of the winner candidates of g_i. Both bidders 1 and 2 bid on good 2, so player g_2 can claim bidder 1 or bidder 2 as the winner. Notice that a good does not have to be sold, so ∅ is a feasible strategy for each player. In Table 2, we have two solutions, a^1 and a^2, with revenue 8 and 7, respectively.
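The following Python sketch illustrates this game view with the same hypothetical bid data as in the previous sketch. It assumes that a bid counts as won only when every good in its bundle selects it; this is one plausible reading of the transformation above that keeps the resulting winner set feasible.

# bidder id -> (bundle of good ids, bid price); same hypothetical data as before
bids = {1: ({1, 2}, 5), 2: ({2, 3}, 7), 3: ({3}, 3)}
goods = {1, 2, 3}

def candidates(good):
    # Strategy set of the player for this good: candidate bids plus None (unsold).
    return [b for b, (bundle, _) in bids.items() if good in bundle] + [None]

def outcome_revenue(profile):
    # Common utility: a bid is won only if every good in its bundle chose it.
    winners = {b for b, (bundle, _) in bids.items()
               if all(profile.get(g) == b for g in bundle)}
    return sum(bids[b][1] for b in winners)

print(candidates(2))                            # [1, 2, None]
print(outcome_revenue({1: None, 2: 2, 3: 2}))   # 7: bidder 2 wins goods 2 and 3
print(outcome_revenue({1: 1, 2: 1, 3: 3}))      # 8: bidders 1 and 3 win

The two action profiles correspond to the solutions a^2 and a^1, with revenues 7 and 8.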

One might instead treat each bundle, rather than each good, as a player. We discuss the rationality of regarding a good as a player from two aspects: the computation complexity and the property of the WDP. In real-world applications, the number of bidders is greater than the number of goods, so n is much larger than m. Computing an NE solution in a game with more players is more complex than in a game with fewer players. Thus, treating each good as a player is more appropriate than formulating each bundle as a player in terms of computation efficiency. On the other hand, (2) specifies that each good can be sold to at most one bidder or remain unsold. Regarding a good as a player exactly matches the meaning of (2). Treating a bundle as a player can also satisfy the restriction stated in (2), but the auctioneer must then maintain additional information indicating which goods are sold. In summary, formulating each good as a player in the game is appropriate for computation efficiency and matches the property of the WDP.

4. Proposed Method

4.1. Finding Nash Equilibrium in the Normal-Form Game

Before introducing our approach, we first present the idea of finding a pure NE in a normal-form game; this is the foundation of the proposed algorithm. As shown in Table 3, the classical prisoner's dilemma includes two players. Given an initial solution, we seek an NE outcome from this solution. Suppose the initial solution is (silent, silent), in which each player serves two years. Because each player only cares about his own utility, Player 1 observes that playing "confess" is more beneficial than "silent" under (silent, silent). Thus, Player 1 chooses "confess", and the outcome becomes (confess, silent). Then, Player 2 notices that his utility has decreased from -2 to -3 and that the strategy "confess" is now more beneficial than "silent" under (confess, silent). Therefore, Player 2 also moves from "silent" to "confess", and the outcome is (confess, confess). Eventually, both Player 1 and Player 2 accept the outcome (confess, confess). If either player deviates from this outcome, his utility decreases from -1 to -3. No unilateral deviation will occur under (confess, confess), so this solution is an NE.
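The following Python sketch runs this best-response process programmatically. The payoff table is assumed from the narrative above (Table 3 itself is not reproduced here), so the off-diagonal payoffs are illustrative assumptions.

# (action of Player 1, action of Player 2) -> (utility 1, utility 2), in years of prison
payoff = {
    ("silent",  "silent"):  (-2, -2),
    ("confess", "silent"):  ( 0, -3),   # assumed: the confessor goes free
    ("silent",  "confess"): (-3,  0),
    ("confess", "confess"): (-1, -1),   # implied by the deviation from -1 to -3 above
}
actions = ["silent", "confess"]

def best_response(player, profile):
    # The action maximizing this player's own utility while the other action is fixed.
    def util(a):
        trial = list(profile)
        trial[player] = a
        return payoff[tuple(trial)][player]
    return max(actions, key=util)

profile = ("silent", "silent")
while True:
    new = tuple(best_response(i, profile) for i in range(2))
    if new == profile:        # no unilateral deviation improves any utility: an NE
        break
    profile = new
print(profile)                # ('confess', 'confess')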

4.2. Nash Equilibrium Search Algorithm

The above prisoner's dilemma suggests the following procedure for seeking an NE. Consider an initial solution a. Pick a player i and change its action from a_i to a_i'; the outcome becomes a'. If all players are satisfied with a', it is an NE. Otherwise, pick another player and check whether another action with higher utility exists. Repeat this process until all players simultaneously accept the outcome.

We use the WDP example in Table 1 to explain the manner of finding an NE. Let the initial solution be a^2, in which bidder 2 is the winner of g_2 and g_3, and the utility of each player is $7. The player g_2 has two other strategies: ∅ and bidder 1. Choosing ∅ does not improve the outcome. If g_2 instead picks the strategy of selling g_2 to bidder 1, bidder 2 loses g_2 and g_3, and bidder 3 can become the new winner of g_3. The outcome moves to a^1, and the utility is increased from $7 to $8. Selling the good to bidder 1 is more beneficial than the current action, so g_2 changes its action from bidder 2 to bidder 1, and the outcome moves from a^2 to a^1. In a^1, no player can gain more utility by choosing another strategy, so a^1 is an NE.

Based on the above principle of searching for an NE, we propose the Nash equilibrium search approach (NESA) to compute the winner set of the WDP. In addition to the idea of searching for NEs, we also consider two other ideas from game theory in designing the NESA. We adopt the learning strategy from the evolutionary game to improve the solution quality. To increase the solution diversity, the trembling-hand assumption is taken into account to simulate real-world mistake-making behavior. The NESA includes two major procedures. The first one, termed findNE(), is used to seek an equilibrium outcome. The other procedure, named longJump(), is designed to increase the solution diversity.

The main algorithm of the NESA is shown in Algorithm 1 (a Python sketch of this loop follows the pseudocode). The auctioneer should provide the stop criterion of the NESA, for example, up to ten minutes. First, the NESA randomly generates a feasible solution s. The NESA invokes findNE(s) to calculate an equilibrium solution s' from s. We apply the revenue calculation function r(.) to compute the revenue that the auctioneer can earn from a solution. If the NE solution s' is more beneficial than s for the auctioneer, s' survives to the next iteration. Simultaneously, we update the good victim count (gvc) information. The gvc is a counter that indicates the importance of each good in terms of the auctioneer's revenue. Supposing r(s') > r(s), which means the auctioneer can gain more revenue from s', we increase the gvc of each good in s' by one. Thus, a good with a higher gvc value contributes more revenue for the auctioneer when sold. In the last step, the NESA invokes the diversity increase procedure longJump() to produce a solution that is quite different from the NE. If the stop criterion is not satisfied, findNE() finds the next NE from the solution produced by longJump().

Result: the winner set s
(1) begin
(2)  generate a solution s at random
(3)  while  not meet stop condition  do
(4)   s' ← findNE(s)
(5)   if  r(s') > r(s)  then
(6)    update gvc
(7)    s ← s'
(8)   end
(9)   s ← longJump(s)
(10)  end
(11)  return  s
(12) end
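The following is a minimal Python sketch of this main loop. The helper procedures are passed in as callables (sketches of findNE() and longJump() appear later in this section), and the gvc update is indicated only as a comment because it is bookkeeping layered on top of the loop.

import time

def nesa(initial_solution, find_ne, long_jump, revenue, time_limit_s=600):
    s = initial_solution                        # line (2): a random feasible solution
    deadline = time.time() + time_limit_s       # the auctioneer's stop criterion
    while time.time() < deadline:               # line (3)
        s_prime = find_ne(s)                    # line (4): local search toward an NE
        if revenue(s_prime) > revenue(s):       # line (5)
            # line (6): increase the gvc of every good sold in s_prime by one
            s = s_prime                         # line (7): the better solution survives
        s = long_jump(s)                        # line (9): diversify the next base solution
    return s                                    # line (11)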

4.3. The Local Search

Finding NE solutions is the objective of the local search procedure findNE(). An NE represents a steady state in which no player can receive more utility by a unilateral deviation [13, 14]. Each player g_i chooses the winner from the candidates whose bundles include the good g_i. All players use the same utility function, the auctioneer's revenue r(.).

The procedure of the local search findNE() is illustrated in Algorithm 2. findNE() receives a base solution s and outputs an NE solution. For each good g_i, findNE() removes the current winner of g_i from a copy s' of s and then declares possible candidates as winners to establish a new feasible solution s' by a given ranking function. If s' contributes more auctioneer's revenue than s, findNE() returns s'; simultaneously, the gvc of each good in s' is increased by one. If we cannot find a solution better than s, all players accept s with r(s), and s is an NE. Therefore, findNE() returns the NE solution s.

Data:   s: the base solution
Result:   the NE solution
(1) begin
(2)  for each good g_i in G  do
(3)   s' ← s
(4)   remove the winner of g_i in s'
(5)   add feasible bids to s' except for the removed bid, based on a given ranking function
(6)   if  r(s') > r(s)  then
(7)    update gvc
(8)   return  s'
(9)   end
(10)  end
(11)  return  s
(12) end

Because we focus on games without any special property, findNE() must consider some techniques to prune unnecessary searches. In line (5) of Algorithm 2, there are several strategies for determining the order in which feasible bids are added. For example, the auctioneer can rank the feasible bids by criteria such as (1) the bid price, (2) the RPI, or (3) a random order. A ranking function that embeds a specific search direction can compute the new solution quickly, but it may restrict the solution quality. If the auctioneer has not investigated the properties of the instance well, the random approach is more appropriate for general cases than other heuristics. Therefore, we adopt the random process for ranking the feasible bids in this paper.
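The following is a minimal Python sketch of findNE() under the data layout used in the earlier sketches (bids maps a bidder id to a (bundle, price) pair). It uses the random ranking discussed above and omits the gvc bookkeeping.

import random

def revenue(solution, bids):
    return sum(bids[b][1] for b in solution)

def find_ne(s, bids, goods):
    # For each good, tentatively replace its winner and greedily re-fill the solution;
    # return the first improving solution, or s itself if no single change helps,
    # in which case all players accept s and s is treated as an NE.
    for g in goods:
        removed = {b for b in s if g in bids[b][0]}          # current winner of good g, if any
        s2 = set(s) - removed
        sold = set()
        for b in s2:
            sold |= bids[b][0]
        candidates = [b for b in bids if b not in removed and b not in s2]
        random.shuffle(candidates)                           # the random ranking function
        for b in candidates:
            if not (bids[b][0] & sold):
                s2.add(b)
                sold |= bids[b][0]
        if revenue(s2, bids) > revenue(s, bids):
            return s2
    return s

With the hypothetical data from Section 3, find_ne({2}, bids, {1, 2, 3}) would return the winner set {1, 3} with revenue 8, mirroring the example in Section 4.2.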

4.4. The Diversity Increase Procedure

The procedure longJump(), shown in Algorithm 3, is used to compute another base solution for findNE(). Given a base solution s, each sold good g_i is first removed according to a probability governed by what we call the good victim rate (gvr). The gvr of good g_i is its gvc normalized by the maximum gvc over all winners in the solution s:

\mathrm{gvr}_i = \frac{\mathrm{gvc}_i}{\max_{g_k \text{ sold in } s} \mathrm{gvc}_k}.

A higher gvr_i indicates that the good g_i contributes more auctioneer's revenue than other goods, and removing g_i from the solution is not expected to benefit the auctioneer. So, the removal probability of g_i is inversely proportional to gvr_i. The removal process is shown in steps 3 to 7.

Data: s: the base solution
Result: the diversity-increased solution
(1) begin
(2)  s' ← s
(3)  foreach bid b_j in s'  do
(4)   if  the random value > gvr  then
(5)    remove b_j in s'
(6)   end
(7)  end
(8)  add feasible bids to s' except for the removed bids, with trembling at
   random, based on a given ranking function
(9)  if  r(s') > r(s)  then
(10)   update gvc
(11)   return  s'
(12)  else if  trembling occurs  then
(13)   return  s'
(14)  else
(15)   return  s
(16)  end
(17) end

In step 8, we randomly declare some winners in s' to establish a new feasible solution, which is the same process as in line (5) of Algorithm 2. When choosing a winner, the trembling-hand assumption is considered to increase the solution diversity [16]. The trembling-hand assumption means that every player may make a mistake, even if the mistake probability is very small. In other words, there is a small probability that we declare another bidder as the winner.

If s' contributes more revenue than s, there may be an NE solution with higher revenue that we have not yet found; longJump() updates the gvc of each good in s' and outputs s'. Otherwise, s is returned. Based on the trembling-hand assumption, s' can still be accepted with a small probability even when it yields less auctioneer's revenue.
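The following Python sketch gives one possible reading of longJump() under the same data layout, with a gvc dictionary mapping each good to its good-victim count. Two details are assumptions rather than the authors' exact design: the removal test uses the smallest gvr among a winning bid's goods, and trembling during the re-fill is modeled as occasionally skipping a bid that the ranking would otherwise pick.

import random

TREMBLE_P = 0.05                                   # trembling probability used in Section 5

def long_jump(s, bids, gvc):
    rev = lambda sol: sum(bids[b][1] for b in sol)
    max_gvc = max((gvc.get(g, 0) for b in s for g in bids[b][0]), default=0) or 1
    s2 = set(s)
    for b in list(s2):                             # steps 3-7: gvr-driven removal
        gvr = min(gvc.get(g, 0) for g in bids[b][0]) / max_gvc
        if random.random() > gvr:                  # goods with high gvr are rarely removed
            s2.discard(b)
    sold = set()
    for b in s2:
        sold |= bids[b][0]
    candidates = [b for b in bids if b not in s]
    random.shuffle(candidates)                     # random ranking function
    for b in candidates:                           # step 8: re-fill with trembling
        if bids[b][0] & sold:
            continue
        if random.random() < TREMBLE_P:            # a "mistake": skip the bid we would pick
            continue
        s2.add(b)
        sold |= bids[b][0]
    if rev(s2) > rev(s):
        return s2                                  # better solution found (gvc update omitted)
    if random.random() < TREMBLE_P:
        return s2                                  # accept a worse solution with small probability
    return s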

5. Simulation Result

We consider three metrics to evaluate the NESA: the revenue performance, the anytime performance, and the optimal solution comparison. The simulation results are compared with a general GA and the Casanova. The GA performs well on optimization problems, such as optimizing complex network systems [17] and solving the WDP [10]. The Casanova converges rapidly, so we also combine findNE() with the Casanova, termed the Hybrid, in the anytime performance evaluation. Because the NESA outperforms the GA and the Casanova, we compare only findNE() with the optimal solution. The objective of the optimal solution comparison is to measure the effect of different initial solutions.

5.1. Simulation Environment

By referring to [4, 8, 10], we consider three bid models; each instance includes a set of goods and a set of bids.
(1) Decay: (1) randomly assign a good; (2) repeatedly add a further good with probability 0.75 (the same setting as [4, 8]) until a good is not added or the bundle includes all goods; (3) pick a price between 0 and the number of goods in this bundle (a sketch of this generator follows the list).
(2) Random: (1) determine the number of goods in the bundle at random; (2) randomly choose that many goods without replacement; (3) pick a price from a uniform distribution.
(3) Uniform: (1) choose five goods without replacement; (2) pick a random price from a uniform distribution.
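As an illustration, the following Python sketch generates one bid under the Decay model described above; the 0.75 add-probability and the price range follow steps (2) and (3), while the instance sizes match the large-scale setting described in the next paragraph.

import random

def decay_bid(num_goods, alpha=0.75):
    # Start with one random good, then keep adding distinct goods with probability alpha.
    goods = list(range(num_goods))
    random.shuffle(goods)
    bundle = {goods.pop()}
    while goods and random.random() < alpha:
        bundle.add(goods.pop())
    price = random.uniform(0, len(bundle))   # price between 0 and the bundle size
    return bundle, price

bids = {j: decay_bid(200) for j in range(2000)}   # e.g., 200 goods and 2000 bids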

We consider the large-scale model with 2000 bids and 200 goods and the small-scale model with 1000 bids and 100 goods. Table 4 shows the average number of goods in a bundle. We generate five instances for each bid model and use the average values over ten runs for each algorithm.

Table 5 shows the parameters of the GA. The population size is not as large as that applied in [10]. During our parameter tuning phase, the GA required about seven seconds on average when the population size was set to 100, whereas the computation time exceeded 100 seconds for a population size of 1000. To increase the solution diversity and decrease the computation cost, we use a population size of 100. For the Casanova, all parameters are the same as those proposed in [8], and they are shown in Table 6. The NESA includes only one parameter, the trembling probability, which is set to 5%. We observed that varying the trembling probability does not affect the solution quality dramatically.

5.2. Revenue Performance

Given 200 seconds, the revenue performance in the Decay, Random, and Uniform bid models is shown in Figures 1, 2, and 3, respectively. In each figure, the results of the large-scale model with 200 goods and 2000 bids are arranged on the left, and the small-scale results are on the right. The auctioneer's revenue is presented as a percentage, with the GA as the baseline.

The Casanova produces less revenue than the GA in all our simulations, but the difference is small. The maximum gap compared to the GA is 11.48%, which occurs in the small-scale Decay bid model. In addition, the revenue gap between the Casanova and the NESA is at most 13.05%. The Casanova performs worse than the GA in terms of the auctioneer's revenue, and we have two observations. First, the search direction of the Casanova is invariant, so it is difficult to improve the solution quality without increasing the solution diversity; for example, the NESA uses longJump() to increase the solution diversity, and the GA has the mutation operator. The other issue is the parameter setting. The Casanova has three parameters, and during the parameter training phase we did not find settings that were optimal for all bid models; we therefore applied the same settings as those used in [8] in our simulations. Thus, the Casanova does not perform well across all simulations. In summary, even though the Casanova provides worse solutions than the GA and the NESA, the gap is small. After overcoming the above issues, the Casanova could be expected to increase its solution quality.

For the overall performance, the auctioneer gains more revenue by using the NESA than the other algorithms in all our simulations. The extra revenue that the NESA can reach (compared to the GA) is proportional to the number of goods in a bundle: if bidders prefer to buy a larger number of auction goods, the auctioneer can obtain more revenue by using the NESA. For example, the NESA performs better in the Random bid model than in the Decay bid model.

On the other hand, the NESA obtains solutions with more auctioneer's revenue in the large-scale instances than in the small-scale instances. Although the NESA provides higher solution quality in the large-scale instances, the improvement is not proportional to the problem scale. For example, the auctioneer in the large-scale Decay model receives only 1.8% more revenue than in the small-scale Decay model, even though the problem scale is doubled. We therefore conjecture that the computation time is not sufficient for the NESA to compute the solution with maximum revenue. We will discuss this observation in the next section.

In summary, the NESA is an outstanding approach in terms of solution quality for solving the WDP. In particular, for large-scale instances and bundles with many goods, the NESA can provide solutions with more auctioneer's revenue.

5.3. Anytime Performance

We randomly select an instance of each bid model and use the average values of ten runs for each algorithm. The revenue is tracked every second. To capture explicit convergence information, we only consider the large-scale models. In addition to the GA, the Casanova, and the NESA, we combine findNE() with the Casanova, labeled as the Hybrid. Figures 4, 5, and 6 show the anytime performance in the various bid models.

Because the GA searches widely in each iteration, its anytime performance curve rises slowly. The GA may obtain better results given enough time for a complete search. To improve the running time of the GA, the auctioneer can supply initial solutions of high quality. Avasarala et al. used a Casanova solution to seed the initial population of the GA [10]; their experiments showed that the anytime performance improved, but not by much. So, seeding the initial population may improve the anytime performance of the GA, but how to guarantee the quality of the obtained solution is another problem.

The Casanova converges rapidly in all bid models. In the early stage, for example, the first 100 seconds in Figure 4, the Casanova produces more revenue than the other algorithms. However, more running time does not result in higher solution quality. In terms of search effort, the Casanova evaluates only one solution per iteration. Moreover, the search direction is fixed because the RPI is applied to rank each bid, so the Casanova quickly reaches its best solution. We tried to use findNE() to search for an NE from the solutions obtained by the Casanova, but the revenue improved only slightly; in Figures 4, 5, and 6, the curves of the Casanova and the Hybrid are very close. This simulation shows that low computation time is the major advantage of the Casanova, but the fixed search direction restricts the solution quality. In instances where the auctioneer must obtain solutions in a short time, the Casanova is the best approach.

The NESA considers a single solution in each iteration, but it requires more running time than the Casanova to finish an iteration: each solution stays in findNE() for a long time while checking whether it is an NE. Computing the NE solution is the bottleneck of the NESA. From the results shown in Figures 4, 5, and 6, the NESA requires more time to converge in instances where each bundle includes fewer goods, for example, the Decay bid model. Therefore, the computation efficiency of the NESA is inversely proportional to the number of goods in a bundle.

In Figure 4, the solution quality of the NESA is still increasing, so the NESA requires more running time to reach its maximum-revenue solution. To capture more details about the convergence of all algorithms, we evaluate the anytime performance in the large-scale Decay model and increase the running time from 200 to 1200 seconds. The results are shown in Figure 7. After 900 seconds, none of the algorithms significantly improves its solution quality, so we only draw the results within the first 900 seconds. The curves in Figure 7 are clearer than those in Figure 4 in terms of reaching the solutions with maximum auctioneer's revenue. Increasing the running time does not improve the auctioneer's revenue very much for the GA, the Casanova, and the Hybrid: the revenue is improved by approximately 5.689%, 2.70%, and 2.41%, respectively. The NESA requires 421 seconds to obtain its solution with maximum revenue. Although the NESA spends more running time to reach its best solution than the other algorithms, it produces 6.63% more revenue than the GA after 900 seconds. Moreover, this simulation also shows that the running time required by the NESA to reach its best solution is acceptable even in the worst case. Therefore, the NESA is appropriate for real-world services even when large-scale WDP instances are considered.

5.4. Optimal Solution Comparison

According to the above simulations, the NESA outperforms the GA and the Casanova in terms of the auctioneer's revenue and requires reasonable running time. In this section, we compare the auctioneer's revenue obtained by the NESA with the optimal solution. The WDP is formulated as an integer linear program, so we use CPLEX (http://www-01.ibm.com/software/commerce/optimization/cplex-optimizer/) to compute the optimal solutions. We only consider the small-scale problems with 100 goods and 1000 bids because the computation resources required by CPLEX for the large-scale problems exceed those available in our simulation environment.

The auctioneer's revenue obtained from findNE() may depend on the quality of the input solution: an initial solution with higher revenue may lead to a better NE solution. In addition to the random approach, we use an RPI-based approach to improve the initial solutions. For each bid b_j, the RPI-based approach computes the RPI value as

\mathrm{RPI}_j = \frac{p_j}{|S_j|},

where |S_j| is the number of goods in the bundle. We first rank the bids in decreasing order of RPI_j and then add bids to build a feasible solution based on this ranking. We use this solution as the input of findNE(). Thus, three algorithms participate in the optimal solution comparison. We take 200 rounds for each algorithm and keep track of the variance of the solution quality in each round. The results of the optimal solution comparison are shown in Figures 8, 9, and 10, where the baseline is the optimal solution.
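The following Python sketch builds such an RPI-seeded initial solution under the same data layout as the earlier sketches (bids maps a bidder id to a (bundle, price) pair); the greedy feasibility check mirrors constraint (2).

def rpi_seed(bids):
    # Rank bids by decreasing revenue-per-item and add them greedily while feasible.
    ranked = sorted(bids, key=lambda b: bids[b][1] / len(bids[b][0]), reverse=True)
    solution, sold = set(), set()
    for b in ranked:
        bundle, _ = bids[b]
        if not (bundle & sold):        # constraint (2): no good sold twice
            solution.add(b)
            sold |= bundle
    return solution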

findNE() with the RPI-based approach, drawn as the red line, produces stable results that are very close to the optimal solutions. In Figures 8, 9, and 10, we observe some break-out points whose results have more revenue than the others. By tracking the log, we find that each break-out point results from a successful tremble, although not all trembles improve the solution quality. Therefore, the trembling-hand assumption is useful for seeking better solutions.

The quality of the solutions obtained by findNE() with random input solutions, illustrated by the green line, is unstable. Except in the Random model, findNE() with random input solutions performs worse than findNE() with the RPI-based approach and the optimal solutions. Although the random input solution provides an unrestricted search direction, the quality of the resulting NE solution is outstanding in only a few instances of the Random model. So, seeding the input solution results in stable, high-quality solutions.

6. Discussion and Remark

Computing an NE in a general game has been proved to be a PPAD-complete problem [13, 18]. If the joint strategy space of all players grows exponentially, reaching an NE requires huge computation time. Theoretically, finding a steady solution via the NESA may therefore require huge computation time.

Solving the WDP is an NP-complete problem [5], which means that in general no algorithm can compute the optimal winner set in polynomial time unless P = NP. The size of the search space is the critical issue in evaluating the computation complexity of the NESA. The computation complexity is inversely proportional to the degree of intersection between bundles; the worst case occurs when each bundle includes exactly two goods. Declaring a bidder as a winner prevents many other bidders from winning their bundles simultaneously. In our simulations, we consider three bid models with different average numbers of goods per bundle to examine this claim. The simulation results show that the computation complexity of the NESA is inversely proportional to the number of goods in a bundle, which supports the claim.

The procedures in line (5) of Algorithm 2 and line (8) of Algorithm 3 deal with declaring some bidders as winners. The selection strategy for determining winners affects the solution quality. In general, there is a trade-off between convergence speed and the auctioneer's revenue. As shown in Section 5.3, the Casanova (a search with a fixed direction) rapidly finds a solution, but the auctioneer's revenue is not its major consideration. On the other hand, the GA (a comprehensive search) requires more computation time than the Casanova to generate a solution, but the auctioneer can gain more revenue. Different winner selection strategies can be applied to the NESA based on the auctioneer's considerations. In this paper, we apply the idea of searching for an NE to propose a framework for solving the WDP. The auctioneer can adjust the search direction when using the NESA to solve the WDP. For example, a search direction with a specific goal, such as the RPI ranking function, is better for larger-scale cases in terms of computation efficiency. In Section 5.4, using input solutions from the RPI-based approach is more stable and provides more revenue than using random input solutions.

In this paper, we propose a game model to represent the WDP and the NESA to find the NE solutions. The properties of the NESA can be discussed from the aspects of the game model and the solution quality. When selling several goods to multiple buyers, most sellers usually estimate the revenue improvement of selling a good to another buyer. We use this intuition to formulate the game model, conceiving each good as a player: each player (an auction good) can determine its winner and evaluate the revenue improvement of changing the winner. The translation from the WDP to the proposed game model does not violate real-world behavior, so auctioneers should accept the proposed game. On the other hand, the NESA provides solutions whose revenue is very close to the optimal solution, and the running time is reasonable. In summary, the NESA is appropriate for implementation in real-world services to solve WDPs.

7. Conclusion

We propose the NESA to solve the WDP in combinatorial auctions. The NESA utilizes the stability of the NE, the self-learning of the evolutionary game, and the mistake making of the trembling-hand assumption to seek solutions with high auctioneer's revenue. Auctioneers can earn revenue very close to the optimal solution by adopting the NESA. Moreover, the NESA requires only reasonable running time.

In our simulations, we consider bid prices drawn from specific distributions to evaluate the performance of the NESA. According to real-world data analysis [19], most bidders submit prices based on their private considerations. We only consider three major bid models in this paper, which may not be sufficient to understand the performance of the NESA in real-world services; for example, the bid profile may not follow a well-defined distribution. To apply the NESA in real-world services, taking into account the relationship between the auction items and the bid profile is our next research issue. Upon understanding the properties of the received bid profile, the auctioneer can use some techniques to improve the search efficiency and the solution quality.

In the future, we will first study data mining approaches, such as association rule mining [20] and feature extraction [21], to explore the relationship between the auction items and the bidding behaviors. Then, we will evaluate instances generated by simulating real-world bidding behaviors. We will focus on pruning the unnecessary searches of the NESA to reduce the computation complexity without decreasing the solution quality. The NESA is thus expected to provide a more efficient way to compute solutions with high auctioneer's revenue.

Conflict of Interests

The authors declare that there is no conflict of interests regarding the publication of this paper.

Acknowledgments

This work was supported in part by Taiwan NSC under Grant no. NSC 102-2221-E-274-004 and NSC 102-2221-E-194-054-. The authors would like to thank reviewers for their insightful comments which helped to significantly improve the paper.