Journal of Sensors, Volume 2017, Article ID 5757125, https://doi.org/10.1155/2017/5757125
Research Article, Open Access
Special Issue: Emerging Technologies: IoT, Big Data, and CPS with Sensory Systems

Ling-Yun Jiang, Fan He, Yu Wang, Li-Juan Sun, Hai-ping Huang, "Quality-Aware Incentive Mechanism for Mobile Crowd Sensing", Journal of Sensors, vol. 2017, Article ID 5757125, 14 pages, 2017. https://doi.org/10.1155/2017/5757125

Quality-Aware Incentive Mechanism for Mobile Crowd Sensing

Academic Editor: Javier Sedano
Received: 04 Mar 2017
Revised: 24 Jul 2017
Accepted: 22 Aug 2017
Published: 28 Sep 2017

Abstract

Mobile crowd sensing (MCS) is a novel sensing paradigm which can sense human-centered daily activities and the surrounding environment. In most mobile crowd sensing systems, the impact of participants' mobility and selfishness on data reliability cannot be ignored. To address this issue, we present a universal system model based on the reverse auction framework and formulate the problem as the Multiple Quality Multiple User Selection (MQMUS) problem. The quality-aware incentive mechanism (QAIM) is proposed to meet the quality requirement of data reliability. We demonstrate that the proposed incentive mechanism achieves the properties of computational efficiency, individual rationality, and truthfulness. Meanwhile, we evaluate the performance and validate the theoretical properties of our incentive mechanism through extensive simulation experiments.

1. Introduction

A new paradigm of sensing with smartphones has emerged which is usually called people-centric mobile sensing or mobile crowd sensing [1]. Compared with traditional sensor networks, MCS is an effective way for large-scale data sensing, processing, and gathering without deploying a large number of sensor nodes. MCS has enabled numerous large-scale applications such as urban environment monitoring [2–4], traffic flow surveillance [5–7], healthcare [8], behavior and relationship discovery [9, 10], indoor localization [11], 3G/Wi-Fi discovering [12–14], activity monitoring [15, 16], and bus arrival time prediction [17].

The effectiveness of the aforementioned mobile crowd sensing applications relies heavily on the number of participants. However, ordinary individuals are not willing to share their sensing capabilities unless there are sufficient incentives. Incentive mechanism design has therefore attracted wide research attention, and considerable schemes have been put forward, which can be classified into nonmonetary incentives [18–20] and monetary incentives [21–29].

The key to any crowd sensing system is not only the number of participants but also the sensing quality they offer. However, most existing solutions assume that each sensing task (e.g., air quality in a certain region) in a sensing cycle can be performed by a single participant. Intuitively, the quality of a sensing project would be higher if each sensing task were performed by multiple participants. One of the main reasons is that the sensed data cannot always be trusted, because participants may intentionally (e.g., malicious participants) or unintentionally (e.g., by making mistakes) report data contrary to the truth. Another reason comes from the recruitment system model itself. A typical MCS system consists of two roles: the recruiter, who publicizes the sensing tasks, and the participants, who provide the sensing capability and are selected by the recruiter from many candidates. In many existing solutions, the interaction between the recruiter and the candidates is modeled as a reverse auction, as illustrated in Figure 1. The recruiter always selects participants according to the sensing plans of the candidates. However, changes always go beyond plans: participants may be unable to complete the task according to their schedule due to unexpected incidents (e.g., a selected participant cannot reach the specific locations claimed in his sensing plan). These participants may offer forged data or do nothing, and as a result the tasks cannot be completed in time.

In this paper, we address the issue of quality-aware monetary incentive mechanism design. We design a truthful incentive mechanism satisfying the properties of computational efficiency, individual rationality, and truthfulness with low approximation ratio.

The remainder of this paper is organized as follows. In Section 2, we review the related work. In Section 3, we describe the system model and formulate the MQMUS problem. Thereafter, in Section 4 we propose the incentive mechanism, named QAIM, which consists of two phases, winner selection and payment determination, and analyze the properties of QAIM. Section 5 presents the experimental results. Finally, we draw the conclusion and discuss some possible future directions in Section 6.

2. Related Work

Many incentive mechanisms have been proposed, which can be classified into nonmonetary incentives [18–29] and monetary incentives [30–45]. Paying for sensed data in crowd sensing tasks is the most intuitive incentive. Monetary incentive mechanisms are mainly based on two kinds of schemes: the Stackelberg game and the auction.

A Stackelberg game is a game in which one leader player has dominant influence over the other players [46]. Duan et al. [30] use the Stackelberg game to design a threshold revenue model for service providers. The system and the users interact through a two-stage process similar to a Stackelberg game: the system announces the total reward and the threshold number of required participants, and each participant then decides whether to accept the task or not. Yang et al. [31] also model their platform-centric incentive mechanism as a Stackelberg game, prove that this game has a unique equilibrium, and design an efficient mechanism for computing it. The above two Stackelberg game solutions have theoretical guarantees. However, the premise of this kind of method is that the costs of all users, or their probability distributions, are assumed to be known, which limits the applicability of Stackelberg game-based mechanisms because participants may keep their costs private in the real world.
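The two-stage interaction described above can be sketched in a few lines. The equal reward-sharing rule and the acceptance condition below are illustrative assumptions for the sketch, not the exact model of Duan et al. [30]:

```python
# Sketch of a two-stage Stackelberg-style interaction. The equal reward
# sharing rule and the acceptance test are illustrative assumptions.

def stackelberg_round(total_reward, threshold, costs):
    """Stage 1: the leader announces (total_reward, threshold).
    Stage 2: each user accepts iff its reward share covers its private cost."""
    share = total_reward / threshold          # per-participant share if exactly
                                              # `threshold` users are required
    accepted = [i for i, c in enumerate(costs) if c <= share]
    # The sensing task runs only if enough users accept.
    return accepted if len(accepted) >= threshold else []

print(stackelberg_round(12.0, 3, [2.0, 3.0, 3.5, 5.0]))  # [0, 1, 2]
```

Note that the leader must know (or assume) the cost distribution to choose a workable reward, which is exactly the limitation pointed out above.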

An auction is originally the process of buying and selling goods by negotiating monetary prices [47]. A kind of auction, called a reverse auction, is adopted to model the negotiation process in crowd sensing, as shown in Figure 1. Lee and Hoh [32] were the first to design a reverse auction-based dynamic price incentive mechanism with virtual participation credit, with the objective of minimizing and stabilizing the platform cost while maintaining the participation level. Yang et al. [31] consider two system models for smartphone crowd sensing: a platform-centric model solved with the Stackelberg game and a user-centric model solved with the reverse auction. Feng et al. [33] formulate the winning-bid determination problem and present a truthful auction for location-aware collaborative sensing. Zhang et al. [34] focus on the user-centric model and study three methods which involve cooperation and competition among the services. Xu et al. [35, 36] investigate truthful incentive mechanisms for time-window-dependent tasks with a strong requirement of data integrity and propose two incentive mechanisms for the single time window case and the multiple window case, respectively. Subramanian et al. [37] consider offline and online incentive mechanisms using the same bidding framework as the MSensing Auction proposed in [31]. Zhao et al. [38] investigate incentive mechanisms in the online setting based on an offline budget-feasible mechanism [39], which provides a starting point for the online mechanism. Jin et al. [40] pay attention to the quality of mobile crowd sensing systems and incorporate a metric named QoI (Quality of Information) into the incentive mechanisms; SRC and MRC mechanisms with the criterion of combinatorial QoI and price are proposed. However, the authors fail to consider the truthfulness of the MRC mechanism.
The aforementioned solutions assume that each measurement of sensing task can be represented by a single sensor reading.

Several solutions have been proposed to ensure the quality of crowd sensing data. Tanas and Herrera-Joancomartí [48] present the first work focusing on how to validate sensing data, but the premise of their work is that multiple users submit multiple sensing readings on each task. Kazemi et al. [49] assume each worker has a reputation score and assign a sufficient number of workers to each spatial task such that the workers' aggregate reputation can satisfy the confidence of the task. However, they focus on self-incentivized spatial crowdsourcing, in which people perform the tasks voluntarily without any reward. Zhang et al. [41] propose a task management framework to match workers to the merged query and sensing tasks efficiently. In their model, each task can be assigned to multiple workers, and each worker can be assigned to at most one task, although each worker may have a preference for multiple tasks. Xu et al. [42] design an incentive mechanism that considers the issue of stimulating biased requesters in a competing crowdsourcing market. Xiong et al. [43] consider k-depth coverage as an MCS data collection constraint, but every subtask is assigned the same value of k. Wang et al. [44] present a detailed quality-aware mobile crowdsourced sensing framework composed of three MCS components: the crowd, the crowdsourcer, and the crowdsourcing platform; the crowdsourcer is a new role who assesses the quality of the posted contributions. He et al. [45] propose a recruitment strategy for vehicle-based crowdsourcing that takes full advantage of the predictable mobility patterns of vehicles, which brings new insight into improving the quality of crowd sensing systems. However, human behavior is affected by many factors, and it is far more difficult to predict the mobility patterns of human beings than those of vehicles.

In this paper, we try to enhance the quality-aware incentive mechanism along two main dimensions: the reputation of participants and the design of tasks.

3. Problem Statement

Different from most crowd sensing systems, the objective of this paper is to design a truthful incentive mechanism with maximum social efficiency and high sensing quality. To achieve this objective, the recruiter needs to select participants who can match the diverse requirements of the crowd sensing application with minimum social cost. Before giving the rigorous problem definition, we present a motivating example to make the problem better understood.

3.1. A Participant Recruitment Example in Air Quality Monitoring

We take an urban air quality monitoring MCS task as an example. As shown in Figure 2, the MCS recruiter wants to collect the state of the air in three regions. Nine candidates are interested in performing the task and report their sensing plans, which include what they can do along with the corresponding bid price. The industrial structures vary greatly across regions: regions with more plants discharging waste gas need more participants to monitor them. For example, the recruiter may want 5 participants to monitor one region but only 3 for another, because the former contains more chemical plants. We use squares to represent the regions, and the number above each square denotes its requirement. From the perspective of the candidates, people may not stay in a single region during one sensing cycle and can fulfill multiple sensing tasks in different regions. We use disks to represent the candidates; the number above each disk denotes its bid price, and the set of regions below each disk denotes the regions the candidate can monitor.

In this example, the mobile crowd sensing system has the following requirements: (1) Every subtask should be assigned to enough participants so that their aggregate sensing results can ensure the sensing quality. (2) Every subtask has a different sensing requirement, so different numbers of participants should be recruited to satisfy these requirements with minimum cost. (3) Every participant has a different ability in terms of task completion and should be assigned a different number of subtasks based on his particular ability.
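As a concrete encoding, the instance above can be written down as plain data. The bid prices follow the walk-through in Section 4.2; the per-candidate subtask sets below are illustrative assumptions, since the original figure did not survive extraction:

```python
# Quality factors per region follow the Section 4.2 walk-through (3, 4, 5);
# candidate plans are illustrative assumptions, bids follow the walk-through.

quality = {"r1": 3, "r2": 4, "r3": 5}      # required participants per region

candidates = {
    # name: (regions the candidate can cover, bid price)
    "u1": ({"r1", "r2", "r3"}, 4.0),
    "u2": ({"r1", "r2"}, 3.0),
    "u3": ({"r2", "r3"}, 1.0),
    "u4": ({"r1", "r2", "r3"}, 2.0),
    "u5": ({"r2", "r3"}, 3.5),
    "u6": ({"r1", "r3"}, 3.6),
    "u7": ({"r2", "r3"}, 3.7),
    "u8": ({"r3"}, 2.0),
    "u9": ({"r1", "r2", "r3"}, 6.0),
}

def feasible(quality, candidates):
    """Requirement (1): every subtask must be coverable by enough candidates."""
    return all(sum(r in plan for plan, _ in candidates.values()) >= need
               for r, need in quality.items())

print(feasible(quality, candidates))  # True
```

The feasibility check mirrors the assumption, made in Section 3.2, that the candidate pool is large enough to meet every quality constraint.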

3.2. System Model and Problem Formulation

We present the rigorous definition and formulation of the MQMUS problem. In this problem, the recruiter can divide the task into multiple subtasks with different quality factors, and the participants can be assigned to multiple subtasks in one sensing cycle.

Suppose that a crowd sensing task can be divided into disjoint subtasks according to the sensing geographic areas, and each subtask has its own sensing quality factor (for simplicity, we use the required number of participants to represent it, as in the motivating example above). The recruiter publicizes the sensing task and the quality factor as a quality constraint for participant selection.

Suppose a set of candidates is interested in performing the sensing task. Each candidate submits a sensing plan to the recruiter, consisting of the set of subtasks that the candidate can perform and the bid price that the candidate wants to charge for performing them.

We assume that each candidate has a reputation score, which states the probability that the candidate performs a task correctly. The recruiter is responsible for maintaining and updating the reputation score of every candidate. The reputation score is set to 1 initially and updated by (1).

We utilize a voting mechanism to set the update value. The intuition is based on the idea of the wisdom of crowds [50]: the majority of the participants can be trusted. The recruiter aggregates the different sensing results to obtain a reliable result at the end of the sensing cycle. The way of setting the update value is inspired by [44]. It is set to −1 in two cases: (1) the candidate cannot perform the subtask as claimed in his sensing plan; (2) his sensing result on a subtask is contrary to more than half of the participants' results; otherwise, it is 0. If a candidate's reputation score falls below the threshold, he will not be selected until the recruiter resets the score to 1 after a period of time (e.g., 10 sensing cycles).
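The majority-vote idea can be sketched as follows. The exact update rule of equation (1) was lost in extraction, so the simple additive update below is an illustrative stand-in, not the paper's formula:

```python
from collections import Counter

# Illustrative sketch of the majority-vote reputation update; the additive
# rule stands in for equation (1), which is not reproduced here.

def vote_penalty(result, all_results, performed):
    """Return -1 if the participant skipped the claimed subtask or disagrees
    with the majority reading on it; 0 otherwise."""
    if not performed:
        return -1
    majority, _ = Counter(all_results).most_common(1)[0]
    return -1 if result != majority else 0

def update_reputation(rho, penalties):
    # A participant whose reputation drops to 0 is excluded until the
    # recruiter resets the score to 1 after some sensing cycles.
    return max(rho + sum(penalties), 0)

readings = ["high", "high", "low", "high"]            # one subtask, four readings
print(vote_penalty("low", readings, performed=True))  # -1: against the majority
```

The design choice here is that disagreement is judged per subtask against the aggregated result, so a single mistaken reading is penalized without permanently excluding a mostly honest participant.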

Assume that the number of candidates is sufficient to fulfill the sensing task under its quality constraints. This assumption is reasonable for mobile crowd sensing systems, as made in [31, 33, 35]. Each selected participant is placed into a list according to the selection order. The recruiter has to calculate a payment for each participant as the incentive. The utility of a participant can be calculated by (2), in which the real cost of performing the plan is known only to the participant himself. The bid price is not less than the real cost due to the selfishness and rationality of participants (if a participant's reputation score is set to a value less than 0 in this sensing cycle, his utility in the next sensing cycle will be 0 because he will not be selected).

The utility of the recruiter is calculated by (3), where the value term denotes the worth to the recruiter of having collected enough data to satisfy the quality constraints of the sensing task.

The social efficiency of the sensing task (with its quality constraints) is calculated by (4). Although the real cost is known only to each participant, we will prove that claiming a different cost cannot increase a participant's utility under our designed mechanisms. Therefore, we use the bid prices when maximizing social efficiency in the mechanisms designed below. The objective of maximizing social efficiency is equivalent to minimizing the social cost.

Given the list of selected participants, the set of remaining subtasks is obtained by excluding the subtasks covered by the selected participants according to their sensing plans. The goal of achieving high-quality crowd sensing with minimum social cost can be formulated as (5), constrained by (6).

We design a truthful incentive mechanism, QAIM, to select appropriate participants to satisfy the objective of this paper and to eliminate the fear of market manipulation (participants cannot improve their utility by submitting a bid price different from their real cost).

QAIM consists of two phases: a winner selection algorithm and a payment determination algorithm. Given the task and a set of bids, the winner selection algorithm selects a subset of participants, and the payment determination algorithm returns the payment vector for the selected participants.

We cannot find the optimal solution in polynomial time for the problem presented in (5) and (6) because this problem is NP-hard; the proof is given in the Appendix.

Our objective is to design incentive mechanisms satisfying the following four desirable properties:
(i) Computational Efficiency. A mechanism is computationally efficient if both the winner selection function and the payment decision function can be computed in polynomial time.
(ii) Individual Rationality. Each participant has a nonnegative utility upon performing the sensing task.
(iii) Truthfulness. A mechanism is truthful if no participant can improve its utility by submitting a bid price different from its real cost, no matter what others submit. In other words, reporting the real cost is a dominant strategy for all participants.
(iv) Social Optimization. The objective function is to maximize social efficiency. We attempt to find an optimal solution, or an approximation algorithm with a low approximation ratio when no optimal solution can be computed in polynomial time.

4. Mechanism Design and Analysis

4.1. Mechanism Design

Since the problem is NP-hard, we seek an approximation algorithm following a greedy approach which runs in polynomial time. The winner selection algorithm is illustrated in Algorithm 1 and the payment algorithm in Algorithm 2.

Input: , set of bids
() ; ; ; for if ;
() while
()   for each if else ;
()   for each Sort in non-decreasing order;
()   ; ; ;
()    while
()      ; ; ; ;
()      for each
        if
()         ;
()         if
()   
() return
Input: , set of bids , list of selected participants
() for all do for if ;
() for all do
()    ;; ; ;
()    while
()      for if else
()       for
()       ; ; ;
()      while
()         ; ;
           ; ; ;
()          for
()          if
()            ;
()            if
()    
() return

In Algorithm 1, the outer index identifies the candidates, the round counter tracks the selection rounds, and the remaining set contains the subtasks not yet covered by the sensing plans of participants selected in previous rounds. The effective sensing units of a candidate in a round are calculated by (7), and the effective average sensing weight of a candidate in a round is calculated in Line () of Algorithm 1.

The main idea of the greedy approach is to select the candidate with the least effective average sensing weight. The weights of all remaining candidates are therefore sorted in nondecreasing order in Line () of Algorithm 1, and the candidate at the head of this order becomes the participant selected in that round.
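The greedy idea can be condensed into a single self-contained loop (a simplification of Algorithm 1, whose round structure is more elaborate). The quality factors and bid prices follow the walk-through example of Section 4.2; the candidate-to-subtask assignments are illustrative assumptions:

```python
# Quality factors from the Section 4.2 walk-through; candidate plans are
# illustrative assumptions (the original figure did not survive extraction).
quality = {"r1": 3, "r2": 4, "r3": 5}
candidates = {
    "u1": ({"r1", "r2", "r3"}, 4.0), "u2": ({"r1", "r2"}, 3.0),
    "u3": ({"r2", "r3"}, 1.0),       "u4": ({"r1", "r2", "r3"}, 2.0),
    "u5": ({"r2", "r3"}, 3.5),       "u6": ({"r1", "r3"}, 3.6),
    "u7": ({"r2", "r3"}, 3.7),       "u8": ({"r3"}, 2.0),
    "u9": ({"r1", "r2", "r3"}, 6.0),
}

def greedy_select(quality, candidates):
    """Repeatedly pick the candidate with the smallest effective average
    sensing weight: bid price divided by the number of still-unsatisfied
    subtasks in the candidate's plan."""
    remaining = dict(quality)                 # participants still needed per subtask
    pool = dict(candidates)
    winners = []
    while any(v > 0 for v in remaining.values()):
        best, best_w = None, float("inf")
        for name, (plan, bid) in pool.items():
            eff = sum(1 for r in plan if remaining[r] > 0)  # effective units
            if eff and bid / eff < best_w:
                best, best_w = name, bid / eff
        if best is None:                      # nobody can still contribute
            break
        plan, _ = pool.pop(best)
        winners.append(best)
        for r in plan:                        # one coverage slot per subtask
            if remaining[r] > 0:
                remaining[r] -= 1
    return winners

print(greedy_select(quality, candidates))  # ['u3', 'u4', 'u1', 'u2', 'u8', 'u5']
```

On this instance the sketch selects six winners whose plans jointly satisfy every quality factor, matching the outcome of the walk-through in Section 4.2.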

The trick lies in the use of a counter denoting the number of participants that can be selected in each round. The nondecreasing sorting of the weights implies that (8) holds.

Equation (9) holds; otherwise the first participant selected in the current round would have been selected in an earlier round.

The calculation in Line () of Algorithm 1 bounds this counter, so (10) holds, which implies that participants are selected in nondecreasing order of the effective average sensing weight:

Considering the IDs of the participants selected in a given round, the set of remaining subtasks can change only at the end of that selection round, so (11) holds.

Equation (12) follows by derivation from (10) and (11).

In Algorithm 2, the candidate index plays the same role as in Algorithm 1, the round counter tracks the payment determination rounds, and the remaining set contains the subtasks not yet covered by the sensing plans of the participants selected in previous rounds. The effective sensing units in each round are calculated with the same method used in (7), and the effective average sensing weight of each candidate in a round is calculated in Line () of Algorithm 2.

To compute the payment for each winner in the list, we consider the set of candidates without that winner and reselect appropriate participants with the same method used in the winner selection. The set of remaining subtasks again excludes the effective subtasks of the reselected participants according to their sensing plans, and (13) holds for the same reason as (12).

4.2. A Walk-Through Example

To better understand the algorithm, we use the example in Figure 2 to illustrate how QAIM works.

With regard to the aforementioned example, the crowd sensing task is divided into 3 subtasks whose quality factors are set to 3, 4, and 5, respectively. There are 9 candidates who want to participate in the task and report their sensing plans: the bid price of each candidate is shown above it, and the subtasks it can fulfill are given below it in Figure 2. The effective sensing units of a candidate in the first round are calculated by (7), and its effective average sensing weight in the first round is calculated in Line () of Algorithm 1.

We first assume that all participants are trustworthy and can fulfill the sensing units as they had claimed in their sensing plan.

In the first selection round, the effective average sensing weight of each candidate is listed in Table 1. According to these weights, three winners are selected in the first round and appended to the selected list.

Table 1: Effective average sensing weights in the first selection round: 4/3, 3/2, 1/2, 2/3, 3.5/2, 3.6/2, 3.7/2, 2/1, 6/3.

In the second selection round, the remaining requirements are updated and the effective average sensing weight of each remaining candidate is listed in Table 2. According to these weights, one winner is selected and appended to the selected list.

Table 2: Effective average sensing weights in the second selection round: 3/1, 3.5/2, 3.6/2, 3.7/2, 2/1, 6/2.

In the third selection round, the effective average sensing weight of each remaining candidate is listed in Table 3. According to these weights, two winners are selected and appended to the selected list.

Table 3: Effective average sensing weights in the third selection round: 3/1, 3.6/1, 3.7/1, MAX, 6/1.

In the fourth selection round, the set of remaining subtasks is empty, which implies that the selected participants can satisfy the sensing requirements.

If a selected participant is malicious and lies in the results of all his sensing units, his reputation drops to 0 as calculated by (1), which means he will not be selected in the next sensing recruitment cycle.

Owing to space limitations and the similarity of the algorithm process, we only give the payment example of the first selected winner, which is similar to the payment determination of the other participants. The payment is initialized to 0.

In the first payment determination round, the first winner is excluded from the candidate set, and the effective average sensing weight of each remaining candidate is listed in Table 4. Three winners are selected in this round, and the tentative payment is updated after each selection.

Table 4: Effective average sensing weights in the first payment determination round: 4/3, 3/2, 2/3, 3.5/2, 3.6/2, 3.7/2, 2/1, 6/3.

In the second payment determination round, the effective average sensing weight of each remaining candidate is listed in Table 5. One winner is selected, and the tentative payment is updated accordingly.

Table 5: Effective average sensing weights in the second payment determination round: 3.5/2, 3.6/2, 3.7/2, 2/1, 6/2.

In the third payment determination round, the effective average sensing weight of each remaining candidate is listed in Table 6. One winner is selected, and the tentative payment is updated accordingly.

Table 6: Effective average sensing weights in the third payment determination round: 3.6/1, 3.7/1, MAX, 6/1.

In the fourth payment determination round, the remaining requirements are satisfied, so the payment to the first winner is finalized as the accumulated value.
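The payment determination follows the MSensing-style critical-payment idea: rerun the selection without the winner and pay the maximum bid at which he could still have won some slot. The sketch below is a simplified stand-in for Algorithm 2, using the same illustrative instance as the Figure 2 example (candidate plans are assumptions; bids follow the Section 4.2 walk-through):

```python
# Simplified critical-payment sketch (not the exact Algorithm 2).
quality = {"r1": 3, "r2": 4, "r3": 5}
candidates = {
    "u1": ({"r1", "r2", "r3"}, 4.0), "u2": ({"r1", "r2"}, 3.0),
    "u3": ({"r2", "r3"}, 1.0),       "u4": ({"r1", "r2", "r3"}, 2.0),
    "u5": ({"r2", "r3"}, 3.5),       "u6": ({"r1", "r3"}, 3.6),
    "u7": ({"r2", "r3"}, 3.7),       "u8": ({"r3"}, 2.0),
    "u9": ({"r1", "r2", "r3"}, 6.0),
}

def critical_payment(winner, quality, candidates):
    """Rerun the greedy selection without `winner`; at each step compute the
    bid that would have let `winner` take that slot instead, and pay the
    maximum of those bids (an MSensing-style critical payment)."""
    others = {k: v for k, v in candidates.items() if k != winner}
    plan_i, _ = candidates[winner]
    remaining = dict(quality)
    payment = 0.0
    while any(v > 0 for v in remaining.values()):
        best, best_w = None, float("inf")
        for name, (plan, bid) in others.items():
            eff = sum(1 for r in plan if remaining[r] > 0)
            if eff and bid / eff < best_w:
                best, best_w = name, bid / eff
        if best is None:
            break
        eff_i = sum(1 for r in plan_i if remaining[r] > 0)
        if eff_i:
            # bidding at most best_w * eff_i would beat this round's winner
            payment = max(payment, best_w * eff_i)
        plan, _ = others.pop(best)
        for r in plan:
            if remaining[r] > 0:
                remaining[r] -= 1
    return payment

print(round(critical_payment("u3", quality, candidates), 2))  # 3.6
```

Note that the computed payment (3.6) is not less than u3's bid (1.0), consistent with the individual rationality property analyzed in Section 4.3.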

4.3. Properties of QAIM

In this section, we analyze the properties of QAIM theoretically to show that it is computationally efficient, individually rational, and truthful. The approximation ratio is also discussed at the end. In the analysis, we use the number of candidates, the number of subtasks, the total number of sensing units, and the effective sensing units of the selected participants, which can be calculated from their sensing plans.

(1) QAIM Is Computationally Efficient. We analyze the winner selection algorithm and the payment determination algorithm, respectively, and bound their worst-case running times.

The nested for-loop (Lines ()–()) of the winner selection algorithm is executed once per remaining candidate in each round. The number of rounds is maximal in the worst case, when the effective sensing unit of every candidate contains only one subtask; even then, the total work is polynomial, so the winner selection algorithm runs in polynomial time.

The payment determination algorithm is also polynomial in the worst case, because it repeats a process similar to the winner selection (Lines ()–() and Lines ()–()) once for each selected winner.

(2) QAIM Is Individually Rational. When considering the set of candidates without a given winner, let the replacement participant be the one that appears in that winner's place in the reselected list. Equation (16) holds according to the main idea of winner selection. The replacement would not be selected in that place if the original winner were considered, so (17) holds.

Equation (18) is true based on the derivation from (16) and (17).

Equation (19) holds according to the main idea of payment calculation in Line () of the payment determination algorithm.

From the analysis of (18) and (19), we know that each winner's payment is not less than his bid price, so his utility is nonnegative and QAIM is individually rational.

(3) QAIM Is Truthful. According to Myerson's Theorem [51], an auction is truthful if and only if the selection rule is monotone and each winner is paid the critical value: if a participant wins the auction with a given bid, he also wins with any smaller bid but loses with any bid above the critical value.

The monotonicity of the selection rule is obvious: if a winner bids a smaller price, he will still be selected according to (12).

Suppose a winner bids above his critical value; then (20) holds.

Equation (20) shows that this participant will not be selected before the previously selected participants. But once those participants are selected, there is no reason to select him, because they already satisfy the different sensing requirements. Hence bidding above the critical value makes a winner lose, and QAIM is truthful.

(4) The Approximation Factor to the Optimal Solution. Let the minimal social cost be that computed by the optimal solution, let the effective sensing units of the selected participants be calculated by (15), and let the social cost of the selected participants be defined accordingly.

Because participants are selected in nondecreasing order of the effective average sensing weight by the winner selection algorithm, and the average cost of the remaining uncovered sensing units is not greater than the corresponding average cost under the optimal solution, (21) holds.

Hence the total cost of the selected participants can be bounded accordingly.

5. Performance Evaluation

5.1. Before the Simulation Setup

Because there is no real data set consistent with the proposed system model, we mine human mobility patterns from Gowalla [52] and Brightkite [53], two location-based social networking services where users share their locations by checking in. The details of the data sets are listed in Table 7.


Table 7: Details of the data sets.

Trace source: Brightkite | Gowalla
Time/duration of trace: 2008/4–2010/10 | 2009/2–2010/10
Number of users: 58,228 | 196,591
Number of check-ins: 4,491,143 | 6,442,890

We consider the variation law of users' mobility preferences because the sensing task is location dependent in most crowd sensing systems. Observing a user's visiting history helps discover the user's ability to fulfill subtasks in different locations.

We divide the region into square blocks, each indexed by its horizontal and vertical positions. For each user and time period, we count the number of check-ins in each block; if this count is greater than zero, the block is called a reachable region of the user during that time period.

The reachable regions of a user can be viewed as the subtasks in different locations that the user can fulfill, so the number of reachable regions can be viewed as the number of subtasks that the user can fulfill.
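The block-counting procedure can be sketched in a few lines. The 0.1-degree block size below is an illustrative choice, not the partition granularity used in the paper:

```python
from collections import defaultdict

# Grid the map into square blocks and count, per user, the distinct blocks
# visited; the 0.1-degree block size is an illustrative assumption.

def reachable_regions(checkins, block=0.1):
    """checkins: iterable of (user, lat, lon) tuples. Returns a map from
    user to the number of distinct grid blocks the user checked in at."""
    visited = defaultdict(set)
    for user, lat, lon in checkins:
        visited[user].add((int(lat // block), int(lon // block)))
    return {u: len(blocks) for u, blocks in visited.items()}

demo = [("a", 40.01, -74.02), ("a", 40.02, -74.03), ("a", 40.31, -74.02),
        ("b", 51.50, -0.12)]
print(reachable_regions(demo))  # {'a': 2, 'b': 1}
```

User "a" has three check-ins but only two distinct blocks, since the first two fall into the same grid cell at this granularity.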

With different partition granularities in the two data sets, more than 95% of users have a number of reachable regions smaller than 1.25% of the number of region blocks; the complementary cumulative distribution of the number of reachable regions follows the same power-law distribution formulated by (24), as can be seen in Figures 3 and 4, respectively.

The phenomenon of power-law distribution is consistent with our life experience: the activity scope of most people has a limited range in several specific regions in their daily life. The above-mentioned result tells us how to set the number of subtasks that the user can fulfill.
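The power-law tail can be checked numerically. Below we draw synthetic "reachable region" counts from a Pareto distribution and recover the tail exponent with the standard maximum-likelihood (Hill) estimator; the value 1.75 mirrors the Pareto parameter in Table 8 and is an illustrative choice here:

```python
import math
import random

# Estimate the tail exponent a in P(X >= x) ~ x^(-a) with the Hill/MLE
# estimator: a = n / sum(ln(x / xmin)) over the samples with x >= xmin.

def ccdf_exponent(samples, xmin=1.0):
    """MLE of the tail exponent of a power-law CCDF, for x >= xmin."""
    xs = [x for x in samples if x >= xmin]
    return len(xs) / sum(math.log(x / xmin) for x in xs)

random.seed(0)
data = [random.paretovariate(1.75) for _ in range(100_000)]
print(round(ccdf_exponent(data), 2))  # close to 1.75
```

On synthetic Pareto data the estimator recovers the generating exponent closely, which is the same kind of fit used to produce the straight lines in Figures 3 and 4.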

5.2. Performance Evaluation

In order to evaluate QAIM, we first introduce two baseline algorithms built on similar ideas of using redundancy:
(i) The first baseline is derived from the k-depth coverage solution proposed in [43]. No matter how different the sensing quality factors of the subtasks are, every quality factor is set to the maximal value among them.
(ii) The second baseline is derived from the idea proposed in [41]. No matter how many subtasks one participant can perform, he is assigned only one subtask at a time, so participants are selected in nondecreasing order of bid price.

The performance metrics include the social cost, the number of winners, the running time, and truthfulness; the importance of the reputation score is also examined.

Simulation parameters are shown in Table 8. Each measurement is averaged over 100 instances.


Table 8: Simulation parameters and settings.

Two of the parameters are uniformly distributed over their respective ranges. The number of subtasks each candidate can fulfill is random but between 3 and 10, following a Pareto distribution with parameters 1 and 1.75.

(1) Impact of the Number of Candidates. Figures 5–7 show the performance of the mechanisms with different numbers of candidates when the number of sensing subtasks is set to 100. As shown in Figures 5 and 6, the social cost and the number of winners of QAIM are both less than those of the two baselines. The variation does not strictly decrease as the number of candidates increases but stays within a certain range, because the bid prices and sensing plans are generated randomly.