Journal of Sensors
Volume 2017 (2017), Article ID 5757125, 14 pages
https://doi.org/10.1155/2017/5757125
Research Article

Quality-Aware Incentive Mechanism for Mobile Crowd Sensing

1College of Computer, Nanjing University of Posts and Telecommunications, Nanjing, Jiangsu 210003, China
2Jiangsu High Technology Research Key Laboratory for Wireless Sensor Networks, Nanjing, Jiangsu 210003, China
3Key Lab of Broadband Wireless Communication and Sensor Network Technology, Nanjing University of Posts and Telecommunications, Ministry of Education, Nanjing, Jiangsu 210003, China

Correspondence should be addressed to Hai-ping Huang; hhp@njupt.edu.cn

Received 4 March 2017; Revised 24 July 2017; Accepted 22 August 2017; Published 28 September 2017

Academic Editor: Javier Sedano

Copyright © 2017 Ling-Yun Jiang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

Abstract

Mobile crowd sensing (MCS) is a novel sensing paradigm which can sense human-centered daily activities and the surrounding environment. The impact of the mobility and selfishness of participants on data reliability cannot be ignored in most mobile crowd sensing systems. To address this issue, we present a universal system model based on the reverse auction framework and formulate the problem as the Multiple Quality Multiple User Selection (MQMUS) problem. The quality-aware incentive mechanism (QAIM) is proposed to meet the quality requirement of data reliability. We demonstrate that the proposed incentive mechanism achieves the properties of computational efficiency, individual rationality, and truthfulness. Meanwhile, we evaluate the performance and validate the theoretical properties of our incentive mechanism through extensive simulation experiments.

1. Introduction

A new paradigm of sensing with smartphones has emerged which is usually called people-centric mobile sensing or mobile crowd sensing [1]. Compared with traditional sensor networks, MCS is an effective way for large-scale data sensing, processing, and gathering without deploying a large number of sensor nodes. MCS has enabled numerous large-scale applications such as urban environment monitoring [2–4], traffic flow surveillance [5–7], healthcare [8], behavior and relationship discovery [9, 10], indoor localization [11], 3G/Wi-Fi discovery [12–14], activity monitoring [15, 16], and bus arrival time prediction [17].

The effectiveness of the aforementioned mobile crowd sensing applications relies heavily on the number of participants. However, ordinary individuals are not willing to share their sensing capabilities unless there are sufficient incentives. Incentive mechanism design has therefore attracted wide research attention, and considerable schemes have been put forward, which can be classified into nonmonetary incentives [18–20] and monetary incentives [21–29].

The key to any crowd sensing system is not only the number of participants but also the sensing quality they offer. However, most existing solutions assume that each sensing task (e.g., measuring air quality in a certain region) in a sensing cycle can be performed by a single participant. Intuitively, the quality of a sensing project would be higher if each sensing task were performed by multiple participants. One of the main reasons is that sensed data cannot always be trusted, because participants may intentionally (e.g., malicious participants) or unintentionally (e.g., by making mistakes) offer data contrary to the truth. Another reason comes from the recruitment system model itself. A typical MCS system consists of two roles: the recruiter, who publicizes the sensing tasks, and the participants, who provide the sensing capability and are selected by the recruiter from many candidates. The interaction between the recruiter and the candidates is modeled as a reverse auction in many existing solutions, as illustrated by Figure 1. The recruiter selects participants according to the sensing plans of the candidates. However, changes always go beyond plans: participants may be unable to complete the task according to their schedule because of unexpected incidents (e.g., a selected participant cannot reach the specific locations claimed in his sensing plan). Such participants may offer forged data or do nothing, and as a result, the tasks cannot be completed in time.

Figure 1: A typical mobile crowd sensing system as a reverse auction framework.

In this paper, we address the issue of quality-aware monetary incentive mechanism design. We design a truthful incentive mechanism satisfying the properties of computational efficiency, individual rationality, and truthfulness with low approximation ratio.

The remainder of this paper is organized as follows. In Section 2, we review the related work. In Section 3, we describe the system model and formulate the MQMUS problem. Thereafter, in Section 4 we propose the incentive mechanism, named QAIM, which consists of two phases, winner selection and payment determination, and analyze the properties of QAIM. Section 5 presents the experimental results. Finally, we draw the conclusion and discuss some possible future directions in Section 6.

2. Related Work

There are many incentive mechanisms, which can be classified into nonmonetary incentives [18–29] and monetary incentives [30–45]. Paying for sensed data in crowd sensing tasks is the most intuitive incentive. Monetary incentive mechanisms are mainly based on two kinds of schemes: the Stackelberg game and the auction.

Stackelberg game is a game where one leader player has the dominant influence over the other players [46]. Duan et al. [30] make use of the Stackelberg game to design a threshold revenue model for service providers. The system and the users interact through a two-stage process similar to that of Stackelberg game. The system announces the total reward and the threshold number of required participants. Each participant decides whether to accept the task or not. Yang et al. [31] also model the proposed platform-centric incentive mechanism as a Stackelberg game, prove that this Stackelberg game has a unique equilibrium, and design an efficient mechanism for computing it. The above two Stackelberg game solutions have theoretical guarantees. However, the premise of this kind of method is that the costs of all users or their probability distributions are assumed to be known, which limits the applicability of Stackelberg game-based mechanisms because participants may keep their costs private in the real world.

An auction-based mechanism is originally the process of buying and selling goods by negotiating monetary prices [47]. A kind of auction, called the reverse auction, is adopted to model the negotiation process in crowd sensing, as shown in Figure 1. Lee and Hoh [32] were the first to design a reverse auction-based dynamic price incentive mechanism with virtual participation credit, with the objective of minimizing and stabilizing the platform cost while maintaining the participation level. Yang et al. [31] consider two system models for smartphone crowd sensing systems: the platform-centric model with a solution based on the Stackelberg game and the user-centric model with a solution based on the reverse auction. Feng et al. [33] formulate the winning bids determination problem and present a truthful auction for location-aware collaborative sensing. Zhang et al. [34] focus on the user-centric model and study three methods which involve cooperation and competition among the services. Xu et al. [35, 36] investigate truthful incentive mechanisms for time window dependent tasks with a strong requirement of data integrity and propose two incentive mechanisms for the single time window case and the multiple time window case, respectively. Subramanian et al. [37] consider offline and online incentive mechanisms using the same bidding framework as the MSensing Auction proposed in [31]. Zhao et al. [38] investigate incentive mechanisms in the online setting based on an offline budget feasible mechanism [39], which provides a starting point for the online mechanism. Jin et al. [40] pay attention to the quality of mobile crowd sensing systems and incorporate a metric named QoI (Quality of Information) into the incentive mechanisms; SRC and MRC mechanisms with a combined criterion of QoI and price are proposed. However, the authors fail to consider the truthfulness of the MRC mechanism.
The aforementioned solutions assume that each measurement of a sensing task can be represented by a single sensor reading.

Several solutions have been proposed to ensure the quality of crowd sensing data. Tanas and Herrera-Joancomartí [48] present the first work focusing on how to validate sensing data, but the premise of their work is that multiple users submit multiple sensing readings on each task. Kazemi et al. [49] assume each worker has a reputation score and assign a sufficient number of workers to each spatial task such that the workers' aggregate reputation can satisfy the confidence of the task. However, they focus on self-incentivized spatial crowdsourcing, in which people perform the tasks voluntarily without any reward. Zhang et al. [41] propose a task management framework to match workers to merged query and sensing tasks efficiently. In their model, each task can be assigned to multiple workers, and each worker can be assigned to at most one task, although each worker may have preferences for multiple tasks. Xu et al. [42] design an incentive mechanism that considers the issue of stimulating biased requesters in a competing crowdsourcing market. Xiong et al. [43] consider k-depth coverage as an MCS data collection constraint, but every subtask is assigned the same value of k. Wang et al. [44] present a detailed quality-aware mobile crowdsourced sensing framework composed of three MCS components: crowd, crowdsourcer, and crowdsourcing platform, where the crowdsourcer is a new role who assesses the quality of posted contributions. He et al. [45] propose a recruitment strategy in vehicle-based crowdsourcing that takes full advantage of the predictable mobility patterns of vehicles, which brings new insight into improving the quality of crowd sensing systems. However, human behavior is affected by many factors, and it is far more difficult to predict the mobility patterns of human beings than those of vehicles.

In this paper, we enhance the quality-aware incentive mechanism along two main dimensions: the reputation of participants and the design of tasks.

3. Problem Statement

Different from most crowd sensing systems, the objective of this paper is to design a truthful incentive mechanism with maximum social efficiency and high sensing quality. To achieve this objective, the recruiter needs to select participants who can match the diverse requirements of the crowd sensing application with minimum social cost. Before presenting the rigorous problem definition, we give a motivating example to make the problem easier to understand.

3.1. A Participant Recruitment Example in Air Quality Monitoring

We take an urban air quality monitoring MCS task as an example. As shown in Figure 2, the MCS recruiter wants to collect the state of the air in three regions. Nine candidates are interested in performing the task and report their sensing plans, which state what they can do together with the corresponding bid price. The industrial structures vary greatly across regions: regions with more plants, which discharge waste gas, need more participants to monitor them. For example, the recruiter wants 5 participants to monitor one region but only 3 to monitor another because the former contains more chemical plants. We use squares to represent the regions, and the number above each square denotes its requirement. From the candidates' perspective, people may not stay in only one region during a sensing cycle and can fulfill multiple sensing tasks in different regions. We use disks to represent the candidates; the number above each disk denotes the corresponding bid price, and the set of regions below each disk denotes the regions that the candidate can monitor.

Figure 2: A motivating example with diversity requirements.

In this example, the mobile crowd sensing system has some requirements: (1) Every subtask should be assigned to enough participants so that their aggregate sensing results can ensure the sensing quality. (2) Every subtask has a different sensing requirement, so different numbers of participants should be recruited to satisfy the different requirements with minimum cost. (3) Every participant has a different ability in terms of task completion and should be assigned a different number of subtasks based on his particular ability.
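The three requirements above suggest a simple data model. The sketch below (the class and field names are our own, not the paper's notation) shows one way to represent subtasks with per-subtask quality factors and candidate sensing plans.

```python
# A minimal representation of the entities in the example.
# Names (Subtask, Plan, quality, bid) are ours, not the paper's notation.
from dataclasses import dataclass

@dataclass(frozen=True)
class Subtask:
    name: str
    quality: int          # number of participants required (quality factor)

@dataclass(frozen=True)
class Plan:
    candidate: str
    subtasks: frozenset   # names of subtasks the candidate can monitor
    bid: float            # price asked for performing the whole plan

# e.g. a region needing 5 monitors, another needing 3,
# and a candidate covering both regions
tasks = [Subtask("r1", 5), Subtask("r2", 3)]
plans = [Plan("u1", frozenset({"r1", "r2"}), 4.0)]
```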

3.2. System Model and Problem Formulation

We present the rigorous definition and formulation of the MQMUS problem. In this problem, the recruiter can divide the task into multiple subtasks with different quality factors and the participants can be assigned to multiple subtasks in one sensing cycle.

Suppose that a crowd sensing task can be divided into disjoint subtasks according to the sensing geographic areas, and each subtask has its own sensing quality factor (for simplicity, we use the required number of participants to represent it, as in the motivating example above). The recruiter publicizes the sensing task and the quality factor as a quality constraint for participant selection.

Consider the candidates interested in performing the sensing task. Each candidate submits a sensing plan to the recruiter, consisting of the set of subtasks that the candidate can perform and the bid price that the candidate wants to charge for performing them.

We assume that the candidate has a reputation score , which states the probability that the candidate performs a task correctly. The recruiter is responsible for maintaining and updating the reputation score of every candidate. The value of is set to 1 initially and updated by

We utilize a voting mechanism to set the reputation update value. The intuition is based on the wisdom of crowds [50]: the majority of participants can be trusted. The recruiter aggregates the different sensing results to obtain a reliable result at the end of the sensing cycle. The way of setting the update value is inspired by [44]. It is set to −1 in two cases: (1) the candidate cannot perform the subtask as claimed in his sensing plan; (2) his sensing result on a subtask is contrary to those of more than half of the participants; otherwise, it is 0. If a candidate's reputation score drops to 0 or below, the candidate will not be selected until the recruiter resets the score to 1 after a period of time (e.g., 10 sensing cycles).
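The update rule can be sketched as follows. This is a minimal illustration assuming the recruiter keeps scores in a dictionary and aggregates readings by simple majority vote; the function names and data layout are our assumptions, not the paper's.

```python
# Sketch of the reputation update described above. r starts at 1; each cycle,
# the delta is -1 if the participant missed the task or disagreed with the
# majority result, else 0. A participant whose score reaches 0 or below is
# suspended until the recruiter resets it after a cooling-off period.

def majority_result(readings):
    """Aggregate readings on one subtask by simple majority vote."""
    return max(set(readings), key=readings.count)

def update_reputation(reputation, pid, performed, reading, all_readings):
    truth = majority_result(all_readings)
    delta = -1 if (not performed or reading != truth) else 0
    reputation[pid] = reputation.get(pid, 1) + delta   # initial score is 1
    return reputation[pid]

rep = {}
# honest participant agreeing with the majority keeps score 1
update_reputation(rep, "u1", True, "polluted", ["polluted", "polluted", "clean"])
# participant contradicting the majority is penalized
update_reputation(rep, "u2", True, "clean", ["polluted", "polluted", "clean"])
```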

Assume that the number of candidates is sufficient to fulfill the sensing task with its quality constraints. This assumption is reasonable for mobile crowd sensing systems, as made in [31, 33, 35]. Each selected participant is placed into the winner list in selection order. The recruiter has to calculate the payment for each participant as the incentive. The utility of a participant can be calculated by (2), in which the real cost of the participant is known only to himself. The bid price is not less than the real cost due to the selfishness and rationality of participants (if the reputation score of a participant is set to a value less than 0 in this sensing cycle, his utility will be 0 in the next sensing cycle because he will not be selected).

The utility of the recruiter is calculated by (3). is the value to the recruiter when it has collected enough data to satisfy the quality constraints of the sensing task .

The social efficiency of the sensing task (with its quality constraints) is calculated by (4). Although the real cost is known only to the participant, we will prove that claiming a different cost cannot increase a participant's utility in our designed mechanisms. So we use the bid prices in place of the real costs when we attempt to maximize the social efficiency in the mechanisms designed below. The objective of maximizing the social efficiency is equivalent to the objective of minimizing the social cost.
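The bodies of (2)–(4) did not survive extraction. From the surrounding prose they can be reconstructed as follows; the symbols ($p_i$ for payment, $c_i$ for real cost, $V$ for the task value, $W$ for the winner set) are our choice and may differ from the paper's notation.

```latex
% (2) utility of participant i: payment minus real cost if selected
u_i =
\begin{cases}
  p_i - c_i, & i \in W,\\
  0,         & \text{otherwise.}
\end{cases}
% (3) utility of the recruiter: task value minus total payments
u_0 = V - \sum_{i \in W} p_i
% (4) social efficiency: task value minus total real cost
\mathrm{SE} = u_0 + \sum_{i \in W} u_i = V - \sum_{i \in W} c_i
```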

Given the list of selected participants, the set of remaining subtasks excludes those subtasks covered by the participants according to their sensing plans. The goal of achieving high quality crowd sensing with minimum social cost can be formulated as (5), constrained by (6).

We design a truthful incentive mechanism, QAIM, to select appropriate participants to satisfy the objective of this paper and to eliminate the fear of market manipulation (participants cannot improve their utility by submitting a bid price different from their real cost).

QAIM consists of two phases: the winner selection algorithm and the payment determination algorithm. For a given task and a set of bids, the winner selection algorithm selects a subset of participants, and the payment determination algorithm returns the payment vector for those selected participants.

We cannot find the optimal solution in polynomial time for the problem presented in (5) and (6) because this problem is NP-hard; the proof is given in the Appendix.

Our objective is to design incentive mechanisms satisfying the following four desirable properties:
(i) Computational Efficiency. A mechanism is computationally efficient if both the winner selection function and the payment decision function can be computed in polynomial time.
(ii) Individual Rationality. Each participant will have a nonnegative utility upon performing the sensing task.
(iii) Truthfulness. A mechanism is truthful if no participant can improve its utility by submitting a bid price different from its real cost, no matter what others submit. In other words, reporting the real cost is a dominant strategy for all participants.
(iv) Social Optimization. The objective is to maximize the social efficiency. We attempt to find an optimal solution, or an approximation algorithm with a low approximation ratio when no optimal solution can be computed in polynomial time.

4. Mechanism Design and Analysis

4.1. Mechanism Design

We attempt to find an approximation algorithm following a greedy approach, which runs in polynomial time, because the problem is NP-hard. The winner selection algorithm is illustrated in Algorithm 1 and the payment determination algorithm in Algorithm 2.

Algorithm 1: Winner selection.
Algorithm 2: Payment determination.

In Algorithm 1, the indices denote the candidate ID and the number of the selection round, and the remaining-subtask set excludes the subtasks in the sensing plans of the participants selected in previous rounds. The effective sensing units of a candidate in a round are calculated by (7), and the effective average sensing weight of the candidate in that round is calculated in Line () of Algorithm 1.

The main idea of the greedy approach is to select the candidate with the least effective average sensing weight, so the weights of all remaining candidates are sorted in nondecreasing order in Line () of Algorithm 1, and the candidate with the smallest weight becomes the selected participant of the round.

The trick of lies in the use of which denotes the number of participants that can be selected in the round. The nondecreasing sorting of implies that (8) is true.

Equation (9) is true; otherwise, the first participant selected in this round would already have been selected in the previous round.

The calculation method in Line () of Algorithm 1 implies that the bound cannot be exceeded, so (10) is true, which implies that the participants are selected in nondecreasing order of the effective average sensing weight.

Let denote the IDs of the selected participants in the selection round; the set of remaining subtasks would possibly be changed only at the end of the selection round, so (11) is true.

Equation (12) is true by derivation from (10) and (11).
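A simplified sketch of the winner selection idea follows, assuming subtask requirements and sensing plans stored in plain Python dictionaries. It picks one minimum-weight candidate at a time and recomputes the effective units, rather than batching selections into rounds as the paper's Algorithm 1 does; all names are ours.

```python
# Greedy selection by least effective average sensing weight (simplified,
# one candidate per iteration). q[j]: remaining quality requirement of
# subtask j; plans[i] = (subtasks_i, bid_i).

def select_winners(q, plans):
    need = dict(q)                      # remaining coverage per subtask
    remaining = dict(plans)             # unselected candidates
    winners = []
    while any(v > 0 for v in need.values()) and remaining:
        best, best_w = None, None
        for i, (subtasks, bid) in remaining.items():
            eff = [j for j in subtasks if need.get(j, 0) > 0]  # effective units
            if not eff:
                continue
            w = bid / len(eff)          # effective average sensing weight
            if best_w is None or w < best_w:
                best, best_w = i, w
        if best is None:                # no candidate covers any remaining unit
            break
        subtasks, _ = remaining.pop(best)
        winners.append(best)
        for j in subtasks:
            if need.get(j, 0) > 0:
                need[j] -= 1
    return winners

# toy instance: subtask t1 needs 1 reading, t2 needs 2
winners = select_winners({"t1": 1, "t2": 2},
                         {"a": ({"t1", "t2"}, 4),
                          "b": ({"t2"}, 3),
                          "c": ({"t1"}, 1)})
# c (weight 1) is taken first, then b (weight 3), then a (weight 4)
```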

In Algorithm 2, the candidate ID plays the same role as in Algorithm 1, another index denotes the number of the payment determination round, and the remaining-subtask set excludes the subtasks in the sensing plans of the participants selected in previous rounds. The effective sensing units of a candidate in a round are calculated with the same method used in (7), and the effective average sensing weight of the candidate in that round is calculated in Line () of Algorithm 2.

To compute the payment for each winner in the winner list, we consider the set of candidates excluding that winner and reselect appropriate participants into a new list with the same method used in the winner selection phase. With the participants so selected, the set of remaining subtasks excludes their effective subtasks according to their sensing plans, and (13) is true for the same reason as (12).
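We could not recover the paper's exact per-round payment formula from the text, so the sketch below illustrates the equivalent characterization from Myerson's theorem instead: the truthful payment for a winner is its critical bid, the largest bid with which the monotone greedy rule would still select it. The greedy rule is restated compactly so the block is self-contained; all names are ours.

```python
# Critical-bid payment for a monotone greedy selection rule (illustration of
# the idea behind Algorithm 2, not the paper's exact per-round formula).

def select_winners(q, plans):
    """Greedy selection by least effective average sensing weight."""
    need, rem, winners = dict(q), dict(plans), []
    while any(v > 0 for v in need.values()) and rem:
        scored = [(bid / sum(need.get(j, 0) > 0 for j in s), i)
                  for i, (s, bid) in rem.items()
                  if any(need.get(j, 0) > 0 for j in s)]
        if not scored:
            break
        _, best = min(scored)           # smallest weight wins the step
        for j in rem.pop(best)[0]:
            if need.get(j, 0) > 0:
                need[j] -= 1
        winners.append(best)
    return winners

def critical_payment(q, plans, winner, bid_grid):
    """Largest bid on the grid for which `winner` is still selected."""
    payment = None
    for b in sorted(bid_grid):
        trial = dict(plans)
        trial[winner] = (plans[winner][0], b)   # replace only the bid
        if winner in select_winners(q, trial):
            payment = b
    return payment

plans = {"a": ({"t1", "t2"}, 4), "b": ({"t2"}, 3), "c": ({"t1"}, 1)}
pay_c = critical_payment({"t1": 1, "t2": 2}, plans, "c",
                         [0.5, 1.0, 1.5, 2.0, 2.5, 3.0])
```

On this toy instance, candidate c keeps winning up to a bid of 1.5 on the grid (at 2.0 candidate a's weight ties and a is selected instead), so c is paid more than its bid of 1, which is what individual rationality requires.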

4.2. A Walk-Through Example

To better understand the algorithm, we use the example in Figure 2 to illustrate how QAIM works.

With regard to the aforementioned example, the crowd sensing task is divided into 3 subtasks: is set to 3, is set to 4, and is set to 5. There are 9 candidates who want to participate in the task and report their sensing plan: the bid price of is shown above it, and the subtask that can fulfill is given below it in Figure 2. Take , for example, , which can also be represented as , and . The effective sensing units of in the first round are denoted by which is calculated by (7) (i.e., ), and the effective average sensing weight of in the first round is denoted by which is calculated in Line () of Algorithm 1 (i.e., ).

We first assume that all participants are trustworthy and can fulfill the sensing units as they had claimed in their sensing plan.

In the first selection round, . of each candidate in the first round is listed in Table 1. According to , is the first winner and then , and is the third one in the first round. The selected list , , , and .

Table 1: in the first selection round.

In the second selection round, , , , , and . of each candidate in the second round is listed in Table 2. According to , is the first and only winner. The selected list and .

Table 2: in the second selection round.

In the third selection round, , , , , and . of each candidate in the third round is listed in Table 3. According to , is the first winner and is the second one. The selected list , , and .

Table 3: in the third selection round.

In the fourth selection round, the set of remaining subtasks is empty, which implies that the selected participants can satisfy the sensing requirements.

If a participant is malicious and lies in the results of all sensing units, his reputation becomes 0 as calculated by (1), which means he will not be selected in the next sensing recruitment cycle.

Owing to space limitations and the similarity of the algorithm process, we only give the payment example of the first selected winner; the payment determination of the other participants is similar. The payment is initialized to 0.

In the first payment determination round, , , , , and . of each candidate in the first round is listed in Table 4. According to , is the first winner; then . is the second winner; then . is the third winner; then .

Table 4: in the first payment determination round considering .

In the second payment determination round, , , , , and . of each candidate in the second round is listed in Table 5. According to , is the first and only winner; then .

Table 5: in the second payment determination round considering .

In the third payment determination round, , , , , and . of each candidate in the third round is listed in Table 6. According to , is the first and only winner; then .

Table 6: in the third payment determination round considering .

In the fourth payment round, , so the payment to is .

4.3. Properties of QAIM

In this section, we analyze the properties of QAIM theoretically to show that QAIM is computationally efficient, individually rational, and truthful. The approximation ratio is also discussed at the end. We use to denote the number of candidates, to denote the number of subtasks, to denote all the sensing units, and to denote the effective sensing units of the selected participants, and for ease of analysis they can be calculated by

(1) QAIM Is Computationally Efficient. We analyze Algorithm 1 and Algorithm 2, respectively, each of which takes polynomial time in the worst case.

The nested for-loop (Lines ()–()) of Algorithm 1 will be executed times. The maximal value of is , which is obviously less than in the worst case, when the effective sensing unit of every candidate contains only one subtask. is less than and is obviously far less than , so Algorithm 1 takes in the worst case.

takes in the worst case because there are similar processes in both (Lines ()–()) and (Lines ()–()), which will be executed times less than .

(2) QAIM Is Individually Rational. When considering the set of candidates excluding a given winner, let the replacement be the participant which appears in the same place in the selected list. Equation (16) is true according to the main idea of winner selection. The replacement will not be selected in that place if the original winner is considered, so (17) is true.

Equation (18) is true based on the derivation from (16) and (17).

Equation (19) is true according to the main idea of payment calculation in Line () of .

From the analysis of (18) and (19), we know .

(3) QAIM Is Truthful. According to Myerson's Theorem [51], an auction is truthful if and only if the selection rule is monotone and each winner is paid the critical value: if a participant wins the auction with a certain bid, he also wins by bidding any smaller price but loses by bidding more than the critical value.

The monotonicity of the selection rule is obvious: if a participant bids a smaller price, he will also be selected, according to (12).

Suppose ; (20) is true if is greater than .

Equation (20) shows that the participant will not be selected before the previous participants are selected. But if the previous participants are selected, there is no reason to select him, because the previously selected participants can already satisfy the different sensing requirements.

(4) The Approximation Factor to Optimal Solution Is . Let denote the minimal social cost computed by optimal solution, denote the effective sensing units of the selected participants calculated by (15), and denote the social cost of the selected participant .

Because the participants are selected in nondecreasing order of the effective average sensing weight and the average cost of the remaining uncovered sensing units is not greater than , (21) is true.

Hence the total cost of can be calculated by

5. Performance Evaluation

5.1. Data Preparation before the Simulation Setup

Because there is no real data set consistent with the proposed system model, we mine patterns of human mobility from Gowalla [52] and Brightkite [53], which come from location-based social networking websites where users share their locations by checking in. The details of the data sets are listed in Table 7.

Table 7: Facts about studied traces.

We consider the variation law of users' mobility preferences because sensing tasks are location-dependent in most crowd sensing systems. Observing a user's visiting history helps discover the user's ability to fulfill subtasks in different locations.

We divide the region into square blocks, each indexed by its horizontal and vertical locations. For each user, we count the number of check-ins during a time period in each block. If this count is greater than zero, the block is called a reachable region of the user during that time period.

The reachable regions of a user can be viewed as the subtasks in different locations that the user can fulfill. The number of reachable regions, which can be viewed as the number of subtasks that the user can fulfill, is calculated by

With different partition granularities in the different data sets, the number of reachable regions is smaller than 1.25% of the number of region blocks for more than 95% of the users; the complementary cumulative distribution of the number of reachable regions follows the same power-law distribution formulated by (24), as can be seen in Figures 3 and 4, respectively.

Figure 3: The complementary cumulative distribution of in Gowalla.
Figure 4: The complementary cumulative distribution of in Brightkite.

The power-law distribution is consistent with our life experience: the activity scope of most people is limited to several specific regions in their daily life. This result tells us how to set the number of subtasks that a user can fulfill.
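The reachable-region statistic described above can be computed from raw check-ins as sketched below; the record layout (user, latitude, longitude) and the grid parameters are our assumptions about the Gowalla/Brightkite data, not fields documented in the paper.

```python
# Grid the map into square blocks and count, per user, the number of
# distinct blocks with at least one check-in (the reachable regions).
from collections import defaultdict

def reachable_regions(checkins, lat0, lon0, block_size):
    """Map user -> number of distinct grid blocks with >= 1 check-in."""
    blocks_per_user = defaultdict(set)
    for user, lat, lon in checkins:
        block = (int((lat - lat0) // block_size),
                 int((lon - lon0) // block_size))
        blocks_per_user[user].add(block)
    return {u: len(blocks) for u, blocks in blocks_per_user.items()}

# toy check-in records: u1 visits three distinct blocks, u2 visits one
regions = reachable_regions(
    [("u1", 40.01, 116.30), ("u1", 40.02, 116.31), ("u1", 40.30, 116.90),
     ("u2", 40.01, 116.30)],
    lat0=40.0, lon0=116.0, block_size=0.1)
```

From the per-user counts one can then tabulate the complementary cumulative distribution and fit the power law reported in Figures 3 and 4.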

5.2. Performance Evaluation

In order to evaluate QAIM, we first introduce two baseline algorithms with similar ideas of using redundancy:
(i) The first baseline is derived from the K-depth coverage objective solution proposed in [43]. No matter how different the sensing quality factor of each subtask is, the factor is set to the maximal value among these quality factors.
(ii) The second baseline is derived from the idea proposed in [41]. No matter how many subtasks one participant can do, he is assigned only one subtask at a time, so participants are selected in nondecreasing order of bid price.

The performance metrics include the social cost, the number of winners, the running time, and the truthfulness; the importance of the reputation score is also examined.

Simulation parameters are shown in Table 8. Each measurement is averaged over 100 instances.

Table 8: Simulation settings.

(1) Impact of the Number of Candidates. Figures 5–7 show the performance of QAIM with different numbers of candidates when the number of sensing subtasks is set to 100. As shown in Figures 5 and 6, the social cost and the number of winners of QAIM are both less than those of the baselines. The variation does not monotonically decrease with the number of candidates but stays within a certain range, because the sensing plans and bid prices are generated randomly. QAIM has superiority in achieving high quality crowd sensing with minimum social cost. The running time of QAIM lies between those of the two baselines, as shown in Figure 7. The variation trend of the running time is consistent with the theoretical analysis: it increases with the number of candidates.

Figure 5: The social cost with different candidates.
Figure 6: The number of winners with different candidates.
Figure 7: The running time with different candidates.

(2) Impact of the Number of Subtasks. Figures 8–10 show the performance with a fixed number of 1400 candidates when the number of sensing subtasks varies from 100 to 140 in increments of 10. As shown in Figures 8 and 9, both the social cost and the number of winners of QAIM increase with the number of subtasks and remain less than those of the other algorithms. The running time of QAIM lies between those of the two baselines and increases with the number of subtasks, as shown in Figure 10, which is consistent with the theoretical analysis.

Figure 8: The social cost with different number of sensing subtasks.
Figure 9: The number of winners with different number of sensing subtasks.
Figure 10: The running time with different number of sensing subtasks.

(3) Truthfulness. We verified the truthfulness of QAIM with different numbers of candidates when the number of subtasks is set to 100. We randomly selected the 78th participant and changed its bid price; once the bid exceeded a threshold, the 78th participant was no longer selected. The running time is recorded in Figure 11, which shows the time cost of verifying truthfulness. The running time is bounded by 80 and increases with the number of candidates except when the number of candidates is 1300, which is a reasonable phenomenon since the running time is related not only to the number of candidates but also to the number of winners.

Figure 11: The running time of with different candidates.

(4) The Effect of the Reputation Value. Finally, we verified the importance of the reputation value calculation. We first set the 78th participant as a malicious user who intentionally offers sensing results contrary to the correct ones on all subtasks; we find that it is not selected after the second test. Then we reset its reputation score to 1 and let the 78th participant be selected but fail to fulfill one of the subtasks; we find that it is still selected after the second test but is not selected after the third test.

6. Conclusion

In this paper, we address a fundamental research issue: how can we achieve high-quality crowd sensing with the minimum social cost? To answer this question, we study different conditions of the recruiter and candidates in a crowd sensing system. Based on the findings, we formulate the sensing quality assurance problem as an optimization problem (MQMUS) and prove it to be NP-hard. We design a polynomial-time greedy approximation algorithm, QAIM, which consists of two phases: it selects appropriate participants so as to approximate the optimal solution within a bounded factor, and it eliminates the fear of market manipulation. Through rigorous theoretical analysis, we demonstrate that the proposed mechanism achieves computational efficiency, individual rationality, and truthfulness, and we evaluate our algorithm using synthetic data with the features of real data sets. Evaluations show that our algorithms outperform existing approaches. In future work, we will explore quality-aware incentive mechanisms in more complex scenarios, for example, preventing co-cheating by using the history of mobility traces and the completed-task lists of participants.

Appendix

The MQMUS Problem Is NP-Hard

Demonstration. In order to prove that the MQMUS problem is NP-hard, we first prove that a special case of it is NP-hard. We define this special case as the instance of MQMUS in which every quality requirement is equal to one. Thereafter, we conclude that the MQMUS problem itself is NP-hard.

This special case can be illustrated as a set cover problem with weights.

Given a universe of elements and a collection of subsets of the universe, each subset with an associated cost, the problem is to find a subcollection whose union equals the universe at the least total cost. This is the weighted set cover problem, which is known to be NP-hard; hence the special case of MQMUS is NP-hard.
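The weighted set cover problem just described also underlies the greedy selection idea used in the paper: repeatedly pick the most cost-effective choice. A minimal sketch of the classic greedy approximation (our own illustration, not the paper's exact algorithm) follows.

```python
# Greedy weighted set cover: at each step pick the subset minimizing
# cost per newly covered element. This achieves a logarithmic
# approximation ratio for weighted set cover.

def greedy_set_cover(universe, subsets, costs):
    """Return indices of chosen subsets whose union covers the universe."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        # Most cost-effective subset: lowest cost per new element covered.
        best = min(
            (i for i in range(len(subsets)) if subsets[i] & uncovered),
            key=lambda i: costs[i] / len(subsets[i] & uncovered),
        )
        chosen.append(best)
        uncovered -= subsets[best]
    return chosen

U = {1, 2, 3, 4, 5}
S = [{1, 2, 3}, {2, 4}, {3, 4, 5}, {5}]
c = [3.0, 2.0, 3.0, 1.0]
picked = greedy_set_cover(U, S, c)
assert set().union(*(S[i] for i in picked)) == U
```

In MQMUS terms, elements play the role of sensing subtasks and subsets the role of candidates (each covering the subtasks it can perform at its bid cost); when every quality requirement equals one, participant selection reduces to exactly this covering problem.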

This weighted set cover formulation is a special instance of MQMUS in which every quality requirement is fixed at one, whereas in general the requirement varies with the sensing quality demanded. Therefore, MQMUS is also NP-hard.

Conflicts of Interest

The authors declare that there are no conflicts of interest regarding the publication of this paper.

Acknowledgments

This paper was sponsored by the National Natural Science Foundation of China (nos. 61572261, 61472193, and 61373138), the Natural Science Foundation of Jiangsu Province (no. BK20141429), Postdoctoral Foundation (no. 2016T90485), the Sixth Talent Peaks Project of Jiangsu Province (nos. DZXX-017 and JNHB-062), and Open Project of Jiangsu High Technology Research Key Laboratory for Wireless Sensor Networks (WSNLBZY201518).

References

  1. R. K. Ganti, F. Ye, and H. Lei, “Mobile crowdsensing: current state and future challenges,” IEEE Communications Magazine, vol. 49, no. 11, pp. 32–39, 2011.
  2. V. Sivaraman, J. Carrapetta, K. Hu, and B. G. Luxan, “HazeWatch: a participatory sensor system for monitoring air pollution in Sydney,” in Proceedings of the IEEE 38th Conference on Local Computer Networks Workshops (LCN '13), pp. 56–64, IEEE Computer Society, Sydney, Australia, October 2013.
  3. R. K. Rana, C. T. Chou, S. S. Kanhere, N. Bulusu, and W. Hu, “Ear-phone: an end-to-end participatory urban noise mapping system,” in Proceedings of the 9th ACM/IEEE International Conference on Information Processing in Sensor Networks (IPSN '10), pp. 105–116, Stockholm, Sweden, April 2010.
  4. N. Maisonneuve, M. Stevens, M. E. Niessen, and L. Steels, “NoiseTube: measuring and mapping noise pollution with mobile phones,” in Information Technologies in Environmental Engineering, Environmental Science and Engineering, pp. 215–228, Springer Berlin Heidelberg, Heidelberg, Germany, 2009.
  5. P. Mohan, V. N. Padmanabhan, and R. Ramjee, “Nericell: rich monitoring of road and traffic conditions using mobile smartphones,” Tech. Rep. 323-336, Microsoft Technical Report, Raleigh, NC, USA, 2008.
  6. E. Koukoumidis, L.-S. Peh, and M. R. Martonosi, “SignalGuru: leveraging mobile phones for collaborative traffic signal schedule advisory,” in Proceedings of the 9th International Conference on Mobile Systems, Applications, and Services, pp. 127–140, ACM, Bethesda, Md, USA, July 2011.
  7. A. Thiagarajan, L. Ravindranath, K. LaCurts et al., “VTrack: accurate, energy-aware road traffic delay estimation using mobile phones,” in Proceedings of the 7th ACM Conference on Embedded Networked Sensor Systems (SenSys '09), pp. 85–98, November 2009.
  8. S. Reddy, A. Parker, J. Hyman, J. Burke, D. Estrin, and M. Hansen, “Image browsing, processing, and clustering for participatory sensing: lessons from a DietSense prototype,” in Proceedings of the 4th Workshop on Embedded Networked Sensors, EmNets 2007, pp. 13–17, Ireland, June 2007.
  9. E. Miluzzo, N. D. Lane, K. Fodor et al., “Sensing meets mobile social networks: the design, implementation and evaluation of the CenceMe application,” in Proceedings of the 6th ACM Conference on Embedded Networked Sensor Systems (SenSys '08), pp. 337–350, ACM, New York, NY, USA, November 2008.
  10. W. Dong, B. Lepri, and S. Pentland, “Tracking co-evolution of behavior and relationships with mobile phones,” Tsinghua Science and Technology, vol. 17, no. 2, pp. 136–151, 2012.
  11. Z. Yang, C. Wu, and Y. Liu, “Locating in fingerprint space: wireless indoor localization with little human intervention,” in Proceedings of the 18th Annual International Conference on Mobile Computing and Networking (Mobicom '12), pp. 269–280, ACM, Istanbul, Turkey, August 2012.
  12. C. Costa, C. Laoudias, D. Zeinalipour-Yazti, and D. Gunopulos, “SmartTrace: finding similar trajectories in smartphone networks without disclosing the traces,” in Proceedings of the 2011 IEEE 27th International Conference on Data Engineering, ICDE 2011, pp. 1288–1291, April 2011.
  13. S. Matyas, C. Matyas, C. Schlieder, P. Kiefer, H. Mitarai, and M. Kamata, “Designing location-based mobile games with a purpose: collecting geospatial data with CityExplorer,” in Proceedings of the 2008 International Conference on Advances in Computer Entertainment Technology, ACE 2008, pp. 244–247, December 2008.
  14. Sensorly. http://www.sensorly.com.
  15. S. B. Eisenman, E. Miluzzo, N. D. Lane, R. A. Peterson, G.-S. Ahn, and A. T. Campbell, “The BikeNet mobile sensing system for cyclist experience mapping,” in Proceedings of the 5th International Conference on Embedded Networked Sensor Systems (SenSys '07), pp. 87–101, Sydney, Australia, November 2007.
  16. Y. Liu, Y. Zhao, L. Chen, J. Pei, and J. Han, “Mining frequent trajectory patterns for activity monitoring using radio frequency tag arrays,” IEEE Transactions on Parallel and Distributed Systems, vol. 23, no. 11, pp. 2138–2149, 2012.
  17. P. Zhou, Y. Zheng, and M. Li, “How long to wait?: predicting bus arrival time with mobile phone based participatory sensing,” in Proceedings of the 10th International Conference on Mobile Systems, Applications, and Services (MobiSys '12), pp. 379–392, ACM, June 2012.
  18. L. Barkhuus, M. Chalmers, P. Tennent et al., “Picking pockets on the lawn: the development of tactics and strategies in a mobile game,” in Proceedings of UbiComp, pp. 358–374, 2005.
  19. C. Schlieder, “Representing the meaning of spatial behavior by spatially grounded intentional systems,” Lecture Notes in Computer Science, vol. 3799, pp. 30–44, 2005.
  20. M. Bell, M. Chalmers, L. Barkhuus et al., “Interweaving mobile games with everyday life,” in Proceedings of the CHI 2006: Conference on Human Factors in Computing Systems, pp. 417–426, April 2006.
  21. P. Kiefer, S. Matyas, and C. Schlieder, “Playing on a line: location-based games for linear trips,” in Proceedings of the 4th International Conference on Advances in Computer Entertainment Technology, ACE 2007, pp. 250–251, June 2007.
  22. N. Bulusu, C. T. Chou, S. Kanhere et al., “Participatory sensing in commerce: using mobile phones to track market price dispersion,” in Proceedings of UrbanSense, pp. 6–10, 2008.
  23. L. Deng and L. P. Cox, “LiveCompare: grocery bargain hunting through participatory sensing,” in Proceedings of the 10th Workshop on Mobile Computing Systems and Applications (HotMobile '09), pp. 1–6, ACM, New York, NY, USA, February 2009.
  24. B. L. Sullivan, C. L. Wood, M. J. Iliff, R. E. Bonney, D. Fink, and S. Kelling, “eBird: a citizen-based bird observation network in the biological sciences,” Biological Conservation, vol. 142, no. 10, pp. 2282–2292, 2009.
  25. M. Bell, S. Reeves, B. Brown et al., “EyeSpy,” in Proceedings of the SIGCHI Conference, p. 123, Boston, MA, USA, April 2009.
  26. E. Kanjo, “NoiseSPY: a real-time mobile phone platform for urban noise monitoring and mapping,” Mobile Networks and Applications, vol. 15, no. 4, pp. 562–574, 2010.
  27. K. Han, E. A. Graham, D. Vassallo, and D. Estrin, “Enhancing motivation in a mobile participatory sensing project through gaming,” in Proceedings of the 2011 IEEE International Conference on Privacy, Security, Risk and Trust, PASSAT 2011 and 2011 IEEE International Conference on Social Computing, SocialCom 2011, pp. 1443–1448, October 2011.
  28. B. Hoh, T. Yan, D. Ganesan, K. Tracton, T. Iwuchukwu, and J.-S. Lee, “TruCentive: a game-theoretic incentive platform for trustworthy mobile crowdsourcing parking services,” in Proceedings of the 15th International IEEE Conference on Intelligent Transportation Systems (ITSC '12), pp. 160–166, IEEE, Anchorage, Alaska, USA, September 2012.
  29. K. O. Jordan, I. Sheptykin, B. Grüter, and H.-R. Vatterrott, “Identification of structural landmarks in a park using movement data collected in a location-based game,” in Proceedings of the 1st ACM SIGSPATIAL International Workshop on Computational Models of Place, COMP 2013, pp. 1–8, November 2013.
  30. L. Duan, T. Kubo, K. Sugiyama, J. Huang, T. Hasegawa, and J. Walrand, “Incentive mechanisms for smartphone collaboration in data acquisition and distributed computing,” in Proceedings of the IEEE Conference on Computer Communications (INFOCOM '12), pp. 1701–1709, Orlando, Fla, USA, March 2012.
  31. D. Yang, G. Xue, X. Fang et al., “Crowdsourcing to smartphones: incentive mechanism design for mobile phone sensing,” in Proceedings of the 18th Annual International Conference on Mobile Computing and Networking (MobiCom '12), pp. 173–184, ACM, August 2012.
  32. J.-S. Lee and B. Hoh, “Sell your experiences: a market mechanism based incentive for participatory sensing,” in Proceedings of the 8th IEEE International Conference on Pervasive Computing and Communications, PerCom 2010, pp. 60–68, April 2010.
  33. Z. Feng, Y. Zhu, Q. Zhang, L. M. Ni, and A. V. Vasilakos, “TRAC: truthful auction for location-aware collaborative sensing in mobile crowdsourcing,” in Proceedings of the 33rd IEEE Conference on Computer Communications, IEEE INFOCOM 2014, pp. 1231–1239, May 2014.
  34. X. Zhang, G. Xue, R. Yu, D. Yang, and J. Tang, “Truthful incentive mechanisms for crowdsourcing,” in Proceedings of the 34th IEEE Annual Conference on Computer Communications and Networks, IEEE INFOCOM 2015, pp. 2830–2838, May 2015.
  35. J. Xu, J. Xiang, and D. Yang, “Incentive mechanisms for time window dependent tasks in mobile crowdsensing,” IEEE Transactions on Wireless Communications, vol. 14, no. 11, pp. 6353–6364, 2015.
  36. J. Xu, J. Xiang, and Y. Li, “Incentivize maximum continuous time interval coverage under budget constraint in mobile crowd sensing,” Wireless Networks, vol. 23, no. 5, pp. 1–14, 2017.
  37. A. Subramanian, G. S. Kanth, and R. Vaze, “Offline and online incentive mechanism design for smart-phone crowd-sourcing,” Computer Science, pp. 1–12, 2013.
  38. D. Zhao, X.-Y. Li, and H. Ma, “How to crowdsource tasks truthfully without sacrificing utility: online incentive mechanisms with budget constraint,” in Proceedings of the 33rd IEEE Conference on Computer Communications, IEEE INFOCOM 2014, pp. 1213–1221, May 2014.
  39. L. G. Jaimes, I. Vergara-Laurens, and M. Labrador, “A location-based incentive mechanism for participatory sensing systems with budget constraints,” Dissertations Theses-Gradworks, vol. 25, no. 3, pp. 103–108, 2012.
  40. H. Jin, L. Su, D. Chen, K. Nahrstedt, and J. Xu, “Quality of information aware incentive mechanisms for mobile crowd sensing systems,” in Proceedings of the 16th ACM International Symposium on Mobile Ad Hoc Networking and Computing (MobiHoc '15), pp. 167–176, ACM, Hangzhou, China, June 2015.
  41. X. Zhang, Z. Yang, Y. Gong, Y. Liu, and S. Tang, “SpatialRecruiter: maximizing sensing coverage in selecting workers for spatial crowdsourcing,” IEEE Transactions on Vehicular Technology, vol. 66, no. 6, 2017.
  42. J. Xu, H. Li, Y. Li, D. Yang, and T. Li, “Incentivizing the biased requesters: truthful task assignment mechanisms in crowdsourcing,” in Proceedings of the 2017 14th Annual IEEE International Conference on Sensing, Communication, and Networking (SECON), pp. 1–9, San Diego, Calif, USA, June 2017.
  43. H. Xiong, D. Zhang, G. Chen, L. Wang, V. Gauthier, and L. E. Barnes, “iCrowd: near-optimal task allocation for piggyback crowdsensing,” IEEE Transactions on Mobile Computing, vol. 15, no. 8, pp. 2010–2022, 2016.
  44. Y. Wang, X. Jia, Q. Jin, and J. Ma, “QuaCentive: a quality-aware incentive mechanism in mobile crowdsourced sensing (MCS),” Journal of Supercomputing, vol. 72, no. 8, pp. 2924–2941, 2016.
  45. Z. He, J. Cao, and X. Liu, “High quality participant recruitment in vehicle-based crowdsourcing using predictable mobility,” in Proceedings of the 34th IEEE Annual Conference on Computer Communications and Networks, IEEE INFOCOM 2015, pp. 2542–2550, May 2015.
  46. D. Fudenberg and J. Tirole, Game Theory, MIT Press, 1991.
  47. V. Krishna, Auction Theory, Academic Press, 2009.
  48. C. Tanas and J. Herrera-Joancomartí, “When users become sensors: can we trust their readings?” International Journal of Communication Systems, vol. 28, no. 4, pp. 601–614, 2015.
  49. L. Kazemi, C. Shahabi, and L. Chen, “GeoTruCrowd: trustworthy query answering with spatial crowdsourcing,” in Proceedings of the 21st ACM SIGSPATIAL International Conference on Advances in Geographic Information Systems, ACM SIGSPATIAL GIS 2013, pp. 304–313, ACM, November 2013.
  50. J. Surowiecki, The Wisdom of Crowds: Why the Many Are Smarter than the Few and How Collective Wisdom Shapes Business, Economies, Societies and Nations, Doubleday, 2004.
  51. R. B. Myerson, “Optimal auction design,” Mathematics of Operations Research, vol. 6, no. 1, pp. 58–73, 1981.
  52. Gowalla. http://snap.stanford.edu/data/loc-gowalla.html.
  53. Brightkite. http://snap.stanford.edu/data/loc-brightkite.html.