Discrete Dynamics in Nature and Society
Special Issue: Analysis, Control and Applications of Passivity in Complex Networks

Research Article | Open Access


Xuemei Sun, Xiaorong Yang, Caiyun Wang, Jiaxin Wang, "A Novel User Selection Strategy with Incentive Mechanism Based on Time Window in Mobile Crowdsensing", Discrete Dynamics in Nature and Society, vol. 2020, Article ID 2815073, 13 pages, 2020. https://doi.org/10.1155/2020/2815073

A Novel User Selection Strategy with Incentive Mechanism Based on Time Window in Mobile Crowdsensing

Academic Editor: Jianquan Lu
Received: 25 Mar 2020
Revised: 08 May 2020
Accepted: 13 May 2020
Published: 01 Jun 2020

Abstract

With the rapid development of smartphones and wireless communication, mobile crowdsensing has become an efficient way to acquire environmental data and to accomplish large-scale, highly complex sensing tasks. In many current applications, task publishers want to collect continuous data over a period of time, yet the number of available participants varies widely across time periods. Against this background, this paper proposes a new incentive mechanism with extra rewards, the premium and jackpot incentive mechanism (PJIM), and a new time-window-based participant selection method, participant selection for time-window-dependent tasks (PS-TWDT). In PJIM, the platform divides the time period of the sensing task according to the time distribution of task participants and adopts different incentive strategies for different situations; it also introduces a prize pool mechanism to attract more participants to the periods with fewer participants. In PS-TWDT, we design a participant selection method based on a dynamic programming algorithm whose goal is to maximize the data benefit while the sensing time of the selected participants covers the task time period. In addition, an updating strategy for participants' credit values is added, which updates each credit value according to the participant's willingness to participate in the task and data quality. Finally, simulation experiments verify that the proposed incentive mechanism and participant selection method perform well.

1. Introduction

With the development of wireless networks and the progress of embedded sensor technology, many sensors are embedded in people's smart devices, such as microphones, cameras, temperature sensors, light sensors, and positioning sensors. In addition, the popularity of smartphones has given birth to a new sensing network model: mobile crowdsensing. In a mobile crowdsensing network, people with smart devices can use their phones to collect data and complete various sensing tasks. As shown in Figure 1, we consider a mobile crowdsensing network consisting of task publishers, a task platform, and task participants with smart devices. Task publishers publish their tasks on the platform and pay it a certain fee. The task platform selects qualified people to participate in the sensing task and gives them a certain reward as motivation. Finally, the task platform transmits the collected data to the task publisher, and the sensing task ends. Compared with traditional sensor networks, a mobile crowdsensing network has prominent advantages due to its wide coverage and low sensing cost [1]. Therefore, as a new sensing mode it has great potential for completing large-scale and complex sensing tasks, and it also has great advantages in social networking [2–4], medical treatment, environmental monitoring of a certain area, and transportation [5–9].

Although mobile crowdsensing networks save the cost of purchasing professional sensors and deploying professionals compared with traditional sensing networks, participating in sensing tasks still costs smartphone users time and energy. At the same time, sharing their sensing data may reveal participants' privacy, including their location, interests, and identity. Therefore, how to motivate participants to actively join a task, complete it, and successfully submit sensing data is essential: the task platform must offer adequate compensation to the participants to drive the crowdsensing task.

In practical applications of mobile crowdsensing, many sensing tasks have requirements on the continuity of sensing data, for example, pollution, traffic, and noise monitoring in a certain area over days or weeks. This kind of sensing task needs long, continuous monitoring; it cannot be completed by a single participant alone, so multiple participants must cooperate, and they participate during different time periods of the day. Since most participants in a region share similar living habits and rest patterns, the periods in which they participate are relatively concentrated: some periods have many participants and others have few. For example, in a multiday sensing task, the number of participants during the daytime is redundant while the number at night is insufficient. However, existing incentive mechanism research does not specifically analyze the participation time period. To solve this problem, the platform needs a reasonable incentive strategy that ensures sufficient participants over a continuous period of time. Furthermore, once enough participants are engaged, the task platform imposes certain value requirements on the collected data, and these tasks require participants to collect continuous sensing data over a period of time. Such time-window-related tasks place higher requirements on the participant selection method.

The key contributions of our work are summarized as follows:
(1) The sensing time of a task is divided into specific periods, and a prize pool mechanism is proposed that attracts participants to under-covered periods by offering high rewards. An optimization algorithm based on dynamic programming is proposed to select participants, and an extra reward mechanism applies different incentive strategies based on each participant's participation time.
(2) A time-window-related participant selection strategy is proposed. It includes a participant selection method based on a dynamic programming algorithm whose goal is to maximize the data benefit while the sensing time of the selected participants covers the task time period. In addition, an updating strategy for participants' credit values is added, which updates each credit value according to the participant's willingness to participate in the task and the data quality.
(3) The proposed incentive mechanism ensures that enough participants join the sensing task, and the participant selection strategy then chooses, from those participants, the ones that meet the task requirements. Together, these two parts provide a complete working mode for the task platform.

2. Related Work

Research on incentive mechanisms in crowdsensing: In [10], Zhao et al. propose a privacy-preserving and data-quality-aware incentive scheme, called PACE, which preserves the privacy of task participants without ignoring their data quality; in particular, data quality consists of the reliability and deviation of the data. In [11], Nie et al. apply a two-stage Stackelberg game to analyze the participation level of mobile users and the optimal incentive mechanism of the crowdsensing service provider using backward induction; to motivate the participants, the incentive mechanism takes into account social network effects from the underlying mobile social domain. In [12], an incentive mechanism based on reputation and trust (RTM) is proposed to counter the negative impact of selfish nodes in mobile crowdsensing; the paper analyzes reputation and trust incentives and constructs an incentive model divided into a user selection model and a reward incentive model. In [13], Zhong et al. study staged incentive mechanisms for mobile crowdsensing and divide the incentive process into two stages: a recruitment stage and a sensing stage. In [14], Chen et al. propose an incentive mechanism based on a stochastic game, addressing the fact that existing mechanisms do not consider the uncertainty and probability of user behavior. In [15], two incentive mechanisms are designed, from the perspectives of the participants and the task release platform, using a bidding game, and two different system models are used to achieve user incentives. The paper [16] aims to design an incentive mechanism that maximizes the utility of the platform under budget constraints. In [17], Feng et al. propose an incentive mechanism called TRAC, which focuses on the positions of participants when assigning tasks. In [18], Li et al. design two incentive mechanisms, QUAC-F and QUAC-I, based on differences in participants' sensing information levels while ensuring maximal platform utility. In [19], Zhan et al., considering the time characteristics of data collection, propose a reputation-based incentive mechanism that maximizes the reward for data collectors. In [20], two auction incentive mechanism frameworks based on privacy protection are designed to achieve approximate minimization of the social cost.

Research on participant selection in crowdsensing: In [21], Li et al. study dynamic (real-time) and heterogeneous (with different time and space coverage requirements) sensing tasks and propose offline and online algorithms for dynamic participant selection. According to the model proposed in [22], a group-based method is adopted for selection, and the participants in a group complete the sensing task using the combined functions of their smartphones. Literature [23] proposes a reputation-aware data collection mechanism that divides participants into two types, direct sending and indirect sending, and dynamically updates participants' reputations to select task participants reasonably. Literature [24] aims to optimize the number of participants and reduce task cost while meeting the task coverage requirements. In [25], Guo et al. propose a visual crowdsensing framework called UtiPay, which selects participants and data from both macro and micro perspectives. In [26], Liu et al. select appropriate participants in the sensing area based on participants' historical activity track data. In [27], Li et al. study the privacy of bidding in time and space and propose a privacy-preserving participant selection scheme with scalable grouping. In [28], a participant selection scheme based on location information in urban areas, using a vehicular ad hoc network, is proposed to complete the sensing task in minimum execution time.

The literature above contains extensive research on incentive mechanisms and participant selection, but it rests on the premise that participants are unaffected by the time at which they perform tasks. For incentive mechanisms, the above work considers neither the impact of participants' work and rest patterns on task completion time nor incentives for periods with insufficient participants. For participant selection, this paper requires participants to collect continuous sensing data over a period of time, but the above work cannot be effectively applied to such time-window-related tasks. To solve these problems, this paper divides the sensing time of the task into specific periods and proposes a time-window-based incentive mechanism that attracts participants to cover the task time by offering high rewards for periods with insufficient participants. Because participants differ in credibility, this paper also proposes a time-window-related participant selection strategy whose goal is to maximize the platform's data benefit while covering the task time window.

3. System Model and Problem Formulation

In this scenario, there are a task publishing platform and many participants. The task publishing platform wants to collect sensing data over a continuous period of time. This paper assumes that the platform releases a sensing task that requires continuous monitoring for several days. After the platform publishes the task, all participants report their sensing time periods and bid prices on the first day. According to these reports, the platform selects appropriate participants for the sensing task. It is not difficult to understand that most participants are more willing to participate during the day, and only a few are willing to participate at night, so an incentive mechanism is needed to stimulate more people to join nighttime data collection. This paper proposes an incentive mechanism that rewards nighttime participants; through continuous bonus accumulation and iteration, more and more people are gradually stimulated to participate in nighttime sensing. Table 1 lists the common parameters.


Table 1: Common parameters.

S: set of selected winners
Sn: set of participants at night
Sd: set of participants in the daytime
Night backup set
mi: the median of the time period reported by participant i
s: start time of participating in the sensing task
e: end time of participating in the sensing task
W: a unit time period of the sensing task
Ts: the start time of a unit time period
Te: the end time of a unit time period
The length of a time window
si: start time reported by participant i
ei: end time reported by participant i
Weight: the ratio of participants in the reported daytime period to all participants
D: price pool reward
pi: rewards received by participant i
bi: bid price of participant i
ai: extra rewards received by participant i
di: price pool reward received by participant i
U, ui: set of participants, participant i
Sensing time of participant i in the k-th time window
Bid price of participant i in the k-th time window
Reliability of participant i in the k-th time window
ri: credit value of participant i
Sensing data volume of participant i in the k-th time window
Device battery of participant i in the k-th time window
Data benefit of the k-th time window
Trust status feedback value of participant i in the k-th time window
Degree of willingness to participate of participant i in the k-th time window
Data quality uploaded by participant i in the k-th time window
Rewards received by participant i in the k-th time window
Timeliness, completeness, accuracy, value

In this paper, the sensing time period of a task is defined as 24 hours a day, expressed as [0, 24]. W = [Ts, Te] is a unit time period of the sensing task, where Ts refers to the start time of the unit time period and Te to its end time. Suppose that a certain number of participants U = {u1, u2, u3, …, un} are interested in the sensing task, and each participant i is represented as a two-tuple ([si, ei], bi). [si, ei] is the time when participant i participates in the sensing task, where si is the start time and ei is the end time, and bi is the bid price of participant i for the sensing task. As shown in Figure 2, when collecting continuous sensing data in time windows, each time window has a fixed length. Suppose that, after the incentive, the platform has enough participants; in the k-th time window, each participant i is then represented as ([si^k, ei^k], bi^k), where [si^k, ei^k] is the sensing time of participant i in the k-th time window, with start time si^k and end time ei^k, and bi^k is the bid price of participant i in the k-th time window.
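The two-tuple notation above can be sketched as a small data structure. This is a minimal illustration, not code from the paper; the field names follow Table 1, and the validity check is an assumption about how reports relate to the unit period [Ts, Te].

```python
from dataclasses import dataclass

# Each participant i reports a sensing interval [s_i, e_i] and a bid price b_i.
@dataclass
class Report:
    s: float  # start time reported by participant i
    e: float  # end time reported by participant i
    b: float  # bid price of participant i

# The unit time period W = [Ts, Te]; here the whole day [0, 24].
TS, TE = 0.0, 24.0

def is_valid(report: Report) -> bool:
    """A report is usable only if its interval lies inside [Ts, Te]."""
    return TS <= report.s < report.e <= TE
```

For example, a daytime report such as `Report(s=8.0, e=17.5, b=12.0)` is valid, while one that runs past the unit period is not.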

3.1. Task Publisher

Task publishers put forward the requirements of sensing tasks, for example, to collect sensing data of a certain kind within a certain range; they then send these requirements to the mobile task platform and upload the reward to the platform.

3.2. Sensing Participants

After the participants obtain the task information in the task area, they can decide whether to accept the subtasks according to their own situations. When a sensing participant accepts a sensing task, the sensing data is collected and processed by the smart device and then uploaded to the mobile task platform.

3.3. Mobile Task Platform

According to the requirements of sensing tasks, the mobile task platform divides the tasks reasonably and selects the appropriate participants to send them the sensing tasks. At the same time, the data collected by the participants are processed, analyzed, and integrated; then the sensing data is returned to the task publisher; and the participants who participate in the sensing task are rewarded.

Considering the regularity of participants' work and rest, a unit time period of the task is divided into a daytime period and a nighttime period. This paper designs an incentive mechanism that sets the amount paid to participants according to the time period and the participants' participation. To compensate participants for participating in tasks and to attract more participants during the nighttime, the mechanism not only pays participants according to their bid prices but also gives them extra rewards. If the nighttime participants are not enough to cover the night sensing period of the task, the price pool reward is announced before participants next report their sensing time periods, and it is paid to the nighttime participants after the task is successfully executed.

The goal of this paper is to minimize the sum of rewards paid to participants on the premise that the task is successfully performed, which can be described as the following optimization problem:

min Σ_{i∈S} (pi + ai + di)  (1)

s.t. ∪_{i∈S} [si, ei] ⊇ [Ts, Te]  (2)

In the objective function, pi represents the reward paid to participant i according to the bid price, ai is the additional reward paid to participants in the nighttime period of the task, and di is the price pool reward paid to participants in the nighttime period of the task. Participants in the daytime period of the task receive no additional rewards and no price pool rewards, so their ai and di values are 0. The constraint is shown in (2), which requires that the time periods of the selected participants cover the whole task period.
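The objective and its coverage constraint can be sketched directly. This is an illustrative sketch, not the paper's implementation; the function names and the dictionary keys (`p`, `a`, `d` for the three reward components) are assumptions.

```python
def total_payment(selected):
    """Objective (1): sum p_i + a_i + d_i over the selected set S.
    selected: list of dicts with base reward p, extra reward a, pool reward d."""
    return sum(u["p"] + u["a"] + u["d"] for u in selected)

def covers(intervals, ts, te):
    """Constraint (2): the union of [s, e] intervals must contain [ts, te]."""
    reach = ts
    for s, e in sorted(intervals):
        if s > reach:          # a gap before the next interval starts
            return False
        reach = max(reach, e)
        if reach >= te:
            return True
    return reach >= te
```

For example, the intervals [0, 6] and [5, 12] together cover [0, 12], while [0, 5] and [6, 12] leave a gap and do not.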

When there are enough participants for the sensing task, the platform must select among them. Task publishers require continuous sensing data in time windows, so the sensing time of the participants selected for each time window must continuously cover that window; otherwise, the task requirements are not met. To obtain more accurate sensing data, the platform needs to select appropriate participants for each time window, maximizing the reliability of the selected participants and minimizing the sensing cost while meeting the task requirements, that is, maximizing the data benefit of the platform. For each time window, the objective can be described as the following optimization problem:

max φk = (Σ_{i∈Sk} ci^k) / (Σ_{i∈Sk} bi^k)  (3)

s.t. ∪_{i∈Sk} [si^k, ei^k] ⊇ Wk  (4)

In the objective function (3), ci^k is the sensing data reliability of participant i in the k-th time window of the task, and bi^k is the bid price of participant i in the k-th time window. In this paper, the ratio of the total data reliability to the total bid price of the selected participants in a time window is defined as the data benefit of that window; φk in (3) represents the data benefit of the k-th time window. Constraint (4) requires that the time periods of the selected participants cover the whole k-th time window.
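The data benefit of a window, the ratio of total reliability to total bid price, can be computed in a few lines. This is a sketch under the definition above; the function name and the pair representation are illustrative.

```python
def data_benefit(winners):
    """Data benefit of one time window: sum of reliabilities c_i^k divided by
    sum of bid prices b_i^k over the winners of that window.
    winners: list of (reliability, bid) pairs."""
    total_c = sum(c for c, _ in winners)
    total_b = sum(b for _, b in winners)
    return total_c / total_b if total_b > 0 else 0.0
```

For example, winners with reliabilities 0.8 and 0.6 and bids 2 and 2 give a benefit of 1.4 / 4 = 0.35.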

4. PJIM and PS-TWDT Algorithm Design

4.1. PJIM Incentive Mechanism Design

In the process of participant selection, most participants tend to participate in sensing tasks during the daytime, so this paper assumes there are enough daytime participants. At night, most participants give up the sensing task and choose to rest, so there may not be enough nighttime participants. For convenience of understanding, this paper rescales one unit time period of the task from [0, 24] to [0, 12]. In summary, a unit time period of the task is divided into daytime and nighttime: in this paper, the nighttime period is from 0 to 6 o'clock, and the daytime period is from 6 to 12 o'clock.
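This day/night split, which Algorithm 1 below applies using the median mi of each reported interval, can be sketched as follows. The function name is illustrative; the rule (median in [0, 6] means nighttime) follows the algorithm's classification step.

```python
def classify(reports):
    """Split participants into nighttime and daytime sets by the median
    m_i = (s_i + e_i) / 2 of each reported interval [s_i, e_i]."""
    night, day = [], []
    for i, (s, e) in enumerate(reports):
        m = (s + e) / 2  # median of the reported time period
        (night if m <= 6 else day).append(i)
    return night, day
```

For example, a report [1, 5] (median 3) is classified as nighttime, while [4, 10] (median 7) and [7, 12] (median 9.5) are classified as daytime.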

4.1.1. Extra Reward Mechanism

Because the nighttime of the sensing task does not coincide with the active time of most participants, there are not enough nighttime participants. To encourage more participants to join the nighttime period of the task, this paper sets up an extra reward, as shown in (5), for participants who complete the nighttime period of the task:

ai = Weight × vmin × li  (5)

where ai is the extra reward paid to participant i; vmin is the lowest single value among participants in the daytime period of the task, the single value being the ratio of a participant's bid price to the sensing duration; li is the length of time participant i participates in the sensing task at night; and Weight is the proportion of daytime participants among all participants. The extra reward for the nighttime period is thus directly proportional to the number of daytime participants: the more participants there are during the daytime, the higher the extra reward paid to nighttime participants.
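The extra reward can be sketched as below, assuming the multiplicative form ai = Weight × vmin × li implied by the quantities named in the text; the function and parameter names are illustrative.

```python
def extra_reward(day_bids, l_i, weight):
    """Extra reward (5) for one nighttime participant.
    day_bids: list of (bid, duration) for daytime participants;
    l_i: nighttime participation length of participant i;
    weight: proportion of daytime participants among all participants."""
    v_min = min(b / d for b, d in day_bids)  # lowest single value at daytime
    return weight * v_min * l_i
```

For example, with daytime single values 10/5 = 2 and 6/2 = 3, a 4-hour nighttime participant and Weight = 0.8 receive 0.8 × 2 × 4 = 6.4.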

4.1.2. Price Pool Reward Mechanism

If there are not enough participants in the nighttime period within the first unit time period of the task, the task for that unit time period will not be completed. When selecting participants, the participation time periods are reported in advance, and the goal is that the selected participants cover the whole time period. To ensure enough participants complete the task in the next unit time period, this paper sets up a price pool mechanism that encourages participation in the nighttime period and announces the price pool reward amount before participants report their next sensing time periods. The reward amount of the price pool is calculated by the following formula:

pond = Weight × vmin × tn  (6)

where pond is the price pool reward amount; vmin is the lowest single value among participants in the daytime period of the task; tn is the duration of the task's nighttime period; and Weight is the proportion of daytime participants among all participants. If the task is still not completed in the second unit time period, that is, nighttime participants are still insufficient, the price pool reward amount is accumulated until the task can be completed, at which point the price pool reward is paid to the participants in the nighttime period of the task.
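The pool and its accumulation across unit periods can be sketched as below, assuming the multiplicative form pond = Weight × vmin × tn implied by the quantities named in the text and a simple additive accumulation each uncovered period; the names are illustrative.

```python
def price_pool(v_min, t_night, weight, previous_pond=0.0):
    """Price pool reward (6), accumulated on top of any unpaid previous pool.
    v_min: lowest single value (bid / duration) among daytime participants;
    t_night: duration of the nighttime period; weight: daytime share."""
    return previous_pond + weight * v_min * t_night

pond = 0.0
for _ in range(3):  # three consecutive unit periods with an uncovered night
    pond = price_pool(v_min=2.0, t_night=6.0, weight=0.7, previous_pond=pond)
```

Here each uncovered period adds 0.7 × 2 × 6 = 8.4 to the pool, so after three periods the announced pool is 25.2.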

This paper introduces an important parameter, the private target threshold gi of participant i. The paper assumes that, when the published prize pool amount is greater than a participant's private target threshold, the participant is satisfied and will be willing to participate in the nighttime sensing period in the next unit time period of the task. Since most participants are not greedy in real life, this assumption is reasonable [29]. All participants whose private target threshold is less than the price pool amount form the night backup set.

According to the work and rest patterns of the participants, the unit time of the task is divided into daytime and nighttime, and the incentive mechanism, consisting of the extra reward and the price pool reward, is set up to attract enough participants to the nighttime tasks and ensure the successful execution of the task. The goal is to minimize the compensation paid to participants on the premise that tasks are successfully executed. First, it is determined whether the nighttime period can be covered by the reported participation time periods. If so, the MST algorithm from literature [30] is used to select the set of participants with the minimum total payment, and the extra reward for nighttime participants is calculated. If the nighttime participants are insufficient, the price pool reward amount is calculated and announced before participants next report their sensing time periods; it accumulates until there are enough nighttime participants, after which the MST algorithm is used to select the appropriate participants and the extra rewards and price pool rewards are paid to the nighttime participants. The steps of Algorithm 1 are as follows:
(1) Initialize the participant set U with the corresponding participation time periods L and bid prices B, and set the price pool reward D.
(2) Rank participants in ascending order of their reported sensing start times.
(3) Judge whether the start time of the first participant covers the start time of the task. If so, perform the next step; otherwise, calculate the price pool reward, end the algorithm, and return the price pool reward pond of the day.
(4) Determine the task duration of each participant u in the participant set.
(5) Judge whether the reported time periods of the nighttime participants can continuously cover the nighttime task period. If so, perform the next step; otherwise, calculate the price pool reward, end the algorithm, and return the price pool reward pond of the day.
(6) Use the MST algorithm to select the participant set S with the lowest total bid price.
(7) Calculate the extra rewards of the selected nighttime participants, and give different rewards to the selected participants according to their participation time periods.
(8) End the algorithm, and return the selected participant set S and the paid rewards P.

Input: participant set U, participation time periods L, participant bid prices B, initial price pool reward D, participant private target thresholds G
Output: the selected participant set S and the paid rewards P, or the price pool amount D
(1) S ← Ø, Sn ← Ø, Sd ← Ø, Sb ← Ø
(2) Rank the participants in ascending order of their reported start times
(3) if the earliest reported start time covers the task start time
(4)  for i = 1 to n do
(5)   mi = (si + ei)/2
(6)   if mi ∈ [0, 6]
(7)    Sn ← Sn ∪ {ui}
(8)   else if mi ∈ [6, 12]
(9)    Sd ← Sd ∪ {ui}
(10)  end for
(11)  Calculate whether the participants in Sn can cover the nighttime period of the task
(12)  if the participation time of Sn cannot cover the night task
(13)   D ← calculate the prize pool amount
(14)   Sb ← calculate the satisfied participant set from U
(15)   return (D, Sb)
(16)  else
(17)   Use the dynamic programming algorithm to select the participant set S and calculate the cost
(18)   for all i ∈ U do
(19)    pi ← 0
(20)   end for
(21)   for all i ∈ S do
(22)    pi ← cost(U\{i}) − (cost(U) − bi)
(23)   end for
(24)   for all i ∈ S ∩ Sn do
(25)    ai ← calculate the extra reward
(26)    di ← calculate the prize pool reward
(27)    pi ← pi + ai + di
(28)   end for
(29)   return (S, P)
(30) else
(31)  D ← calculate the prize pool amount
(32)  Sb ← calculate the satisfied participant set from U
(33)  return (D, Sb)
4.2. PS-TWDT Participant Selection

When the nighttime participants meet our requirements, we need to select participants based on differences in their credit, participation time, and other aspects. Therefore, this paper proposes two components. The first is a participant selection method based on a dynamic programming algorithm, which aims to maximize the data benefit while covering the task time window. The second is a credit value updating mechanism, which updates the credit value of each participant according to the participant's willingness to perform tasks and the quality of the collected data.

4.2.1. Data Reliability Definition

Because the sensor nodes in a mobile crowdsensing network are no longer sensor devices deployed by professionals but ordinary people carrying smart devices, the accuracy of the sensing data provided by participants is low and there are individual differences. The accuracy of the sensing data directly affects the results of sensing tasks, so the platform should select participants reasonably in order to meet the task requirements and obtain more reliable sensing data. The reliability of a participant is mainly related to two factors. The first is the credibility of the participant, that is, the credit value: the higher the credit value, the higher the reliability of the sensed data. The second is the amount of sensing data the participant can provide: the larger the data volume, the more high-quality data there is and the higher the data reliability. Accordingly, the reliability of a participant is defined as follows:

ci^k = ri × qi^k  (7)

where ci^k represents the reliability of participant i in the k-th time window; ri represents the credit value of the participant, which reflects past task performance; and qi^k represents the amount of sensing data the participant can provide in the k-th time window, determined by the amount of data perceived per unit time and mainly limited by the battery of the smart device the participant carries [26]. The specific relationship between data volume and battery is given by formula (8), where ei^k is the initial battery level of the device when the participant joins the k-th time window of the task and α and β are the parameters of the functional relationship, with α = 8.179 and β = 0.4633.
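The reliability score and its use for ranking candidates can be sketched as below, assuming the product form ci^k = ri × qi^k described above; the battery-to-volume curve from [26], with its fitted constants 8.179 and 0.4633, is not reproduced here, and all names are illustrative.

```python
def reliability(credit, data_volume):
    """Reliability (7): credit value r_i in [0, 1] times data volume q_i^k."""
    return credit * data_volume

# Rank candidate participants for one time window by reliability.
candidates = {"u1": (0.9, 10.0), "u2": (0.5, 30.0), "u3": (0.7, 12.0)}
ranked = sorted(candidates, key=lambda u: reliability(*candidates[u]), reverse=True)
```

A high data volume can outweigh a lower credit value: here u2 (0.5 × 30 = 15) ranks above u1 (0.9 × 10 = 9).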

4.2.2. Credit Value Update Mechanism

In this paper, the credit value of a participant is defined as ri and is quantified as a value in the range [0, 1]. A credit value of 0 indicates complete untrustworthiness, 0.5 indicates uncertainty, and 1 indicates complete credibility. The initial credit value of a participant joining a sensing task for the first time is set to 0.5, reflecting this initial uncertainty. To make the credit value reflect a participant's credibility more accurately, this paper designs a credit value update mechanism that updates the credit value according to the participant's performance after each task time window. Before introducing the update mechanism, we first introduce the trust state feedback value, whose definition has two parts: the degree of willingness to participate and the data quality.

(1) Degree of Willingness to Participate. The degree of willingness to participate, shown in (9), indicates how actively a participant engages in sensing tasks; the time proportion of the participant in the k-th time window is one component, and the larger this proportion, the higher the participant's enthusiasm. To avoid the one-sidedness of measuring willingness by a single factor, this paper also includes the current battery level of the mobile device when the participant joins the k-th time window: the more battery there is, the more active the participant is in the sensing task.

(2) Data Quality. The data quality is shown in (10) and represents the quality of the data provided by participant i in the k-th time window, consisting of timeliness, completeness, accuracy, and value. In this paper, these four factors are simply quantified as values in [0, 1]: 0 represents data that is untimely, incomplete, inaccurate, and without value, so the data quality is totally unreliable; 1 means the collected data is timely, complete, accurate, and valuable, so the data quality is completely reliable.

(3) Trust State Feedback Value. The trust state feedback value, given in (11), is obtained by comparing a participant's trust status in the task with the average trust status of the other selected participants; it is the feedback obtained after the participant completes the k-th time window. The higher a participant's willingness and data quality, the more accurately and reliably the sensing task is performed, and the higher the trust state feedback value. In addition, the trust state is inversely related to the reward the participant receives: a higher reward raises the platform's data collection cost, which is not conducive to completing the sensing task, so the trust state is correspondingly lower.

(4) Credit Value Update. At the end of each task time window, the platform updates a participant's current credit value according to his or her trust state feedback value in that window, as shown in (12) [31]; the updated credit value becomes the participant's credit value for the next time window.
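The update pipeline in steps (1)–(4) can be sketched as follows. The function names, weights, and functional forms below are illustrative assumptions rather than the paper's exact equations (9)–(12); the sketch only mirrors their qualitative behavior: willingness and quality raise the feedback, a higher-than-average reward lowers it, and the credit value is smoothed toward the clamped feedback.

```python
# Hedged sketch of the trust-state feedback and credit update described above.
# Weights (0.5/0.5 mix, smoothing factor alpha) are illustrative assumptions.

def willingness(time_fraction, battery_level):
    """Degree of willingness: mixes the fraction of the time window the
    participant senses with the device's remaining battery (both in [0, 1])."""
    return 0.5 * time_fraction + 0.5 * battery_level

def data_quality(timeliness, completeness, accuracy, value):
    """Data quality as the mean of the four factors, each in [0, 1]."""
    return (timeliness + completeness + accuracy + value) / 4.0

def trust_feedback(w, q, reward, avg_w, avg_q, avg_reward):
    """Trust-state feedback (assumed form): the participant's
    willingness * quality per unit reward, relative to the group average."""
    own = w * q / max(reward, 1e-9)
    avg = max(avg_w * avg_q / max(avg_reward, 1e-9), 1e-9)
    return own / avg

def update_credit(credit, feedback, alpha=0.3):
    """Smooth the credit value toward the feedback, clamped into [0, 1]."""
    f = min(max(feedback, 0.0), 1.0)  # clamp feedback before smoothing
    return (1 - alpha) * credit + alpha * f
```

Starting from the initial credit value 0.5, repeated high feedback moves a participant's credit toward 1, and repeated low feedback moves it toward 0, matching the intent of the update mechanism.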

In this paper, data reliability parameters are defined from the quantity of sensing data and the credit values of participants when they participate in tasks. The goal is to ensure that the sensing time of the selected participants continuously covers each task time window while maximizing the data benefit within that window. Finally, compensation is paid according to the bid prices of the selected participants. The pseudocode for participant selection is shown in Algorithm 2; the detailed steps are as follows:
(1) Initialize the participant set U with each participant's participation time window, bid price, mobile phone battery level, and credit value.
(2) Calculate each participant's sensing data quantity and sensing data reliability.
(3) Sort all participants incrementally by end time.
(4) Use the dynamic programming algorithm to select the participant set with the greatest data benefit.
(5) Calculate the data benefit.
(6) Reward the selected participants according to their bid prices.
(7) End the algorithm and return the selected participant set and the data benefit.

Input: set of participants U, where each participant i reports a sensing interval [s_i, e_i] and a bid price b_i; set of initial mobile phone battery levels E; set of initial credit values R; task time window [T_s, T_e]
Output: selected participant set S, data benefit V
(1) for i = 1 to N do
(2)  q_i ← calculate the amount of sensing data of participant i
(3)  d_i ← calculate the sensing data reliability of participant i; p_i ← q_i · d_i
(4) end for
(5) sort participants incrementally by end time e_i
(6) for i = 1 to N do
(7)  if T_s ∈ [s_i, e_i] then
(8)   pre(i) ← −1, P(i) ← p_i, B(i) ← b_i
(9)  else
(10)  pre(i) ← arg max_{j < i, e_j ≥ s_i} (P(j) + p_i)/(B(j) + b_i)
(11)  P(i) ← P(pre(i)) + p_i, B(i) ← B(pre(i)) + b_i
(12) end if
(13) end for
(14) i ← arg max_{j ∈ U, T_e ∈ [s_j, e_j]} P(j)/B(j)
(15) V ← P(i)/B(i)
(16) while i ≠ −1 do
(17)  S ← S ∪ {i}, i ← pre(i)
(18) end while
(19) for all i ∈ U do
(20)  x_i ← 0
(21) end for
(22) for all i ∈ S do
(23)  x_i ← b_i
(24) end for
(25) for all i ∈ S do
(26)  calculate the degree of willingness to participate
(27)  calculate the data quality
(28)  calculate the trust state feedback value
(29)  r_i ← update the participant's credit value
(30) end for
(31) return (S, V)
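The dynamic programming selection of Algorithm 2 can be sketched in Python as below. The `Participant` fields and function name are illustrative; `benefit` stands for the reliability-weighted data quantity of a participant. The code follows the algorithm's structure: base cases are participants whose intervals cover the window start, each later participant links to the predecessor maximizing the benefit-to-bid ratio, and the chain is recovered by backtracking from the best participant covering the window end.

```python
from dataclasses import dataclass
from typing import Dict, List, Tuple

@dataclass
class Participant:
    start: float    # reported sensing start time
    end: float      # reported sensing end time
    bid: float      # bid price b_i
    benefit: float  # reliability-weighted data amount p_i

def select_participants(users: List[Participant], t_start: float,
                        t_end: float) -> Tuple[List[int], float]:
    """Sketch of Algorithm 2: chain participants whose intervals continuously
    cover [t_start, t_end], maximizing total benefit / total bid."""
    order = sorted(range(len(users)), key=lambda i: users[i].end)
    pre: Dict[int, int] = {}
    P: Dict[int, float] = {}   # accumulated benefit along the chain
    B: Dict[int, float] = {}   # accumulated bid along the chain
    for pos, i in enumerate(order):
        u = users[i]
        if u.start <= t_start:
            pre[i], P[i], B[i] = -1, u.benefit, u.bid
        else:
            # earlier-ending, already-chained participants that overlap u
            cands = [j for j in order[:pos]
                     if j in P and users[j].end >= u.start]
            if not cands:
                continue  # cannot extend coverage back to t_start
            j = max(cands, key=lambda j: (P[j] + u.benefit) / (B[j] + u.bid))
            pre[i], P[i], B[i] = j, P[j] + u.benefit, B[j] + u.bid
    finals = [i for i in P if users[i].end >= t_end]
    if not finals:
        return [], 0.0  # the window cannot be covered
    i = max(finals, key=lambda i: P[i] / B[i])
    value = P[i] / B[i]
    selected = []
    while i != -1:          # backtrack through predecessor links
        selected.append(i)
        i = pre[i]
    return selected[::-1], value
```

Note that maximizing the ratio locally at each link is the greedy rule stated in the pseudocode; it keeps the selection efficient but, like the paper's algorithm, evaluates the ratio incrementally rather than enumerating all covering chains.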

5. Performance Evaluation

5.1. PJIM and PS-TWDT Experimental Settings

In this paper, we propose an incentive mechanism based on dividing the time period of the sensing task. Considering the time periods reported by the mobile participants, we use a dynamic programming algorithm to select participants and apply different incentive strategies to participants in different time periods. To verify the effectiveness of the proposed mechanism, simulation experiments are carried out in the MATLAB R2016a environment, and the results are compared and analyzed. To verify the effectiveness of the proposed participant selection mechanism, the experimental results are compared with MST [30] and random participant selection (Random) in terms of data reliability, data benefit, and sensing cost.

The random participant selection baseline works as follows. In this paper, the length of a task time window is set to 12 hours, with 100 candidate participants per window. If the task runs from 0 o'clock to 12 o'clock, the selected participants must jointly cover this interval. The random method first sorts all participants incrementally by start time and then selects forward from 0 o'clock. As a small example, suppose there are three participants whose reported sensing intervals are 0 to 2 o'clock, 1 to 3 o'clock, and 2 to 4 o'clock. The first selection must cover 0 o'clock; suppose the first participant is picked at random, so coverage now extends to 2 o'clock. The next randomly picked participant must cover 2 o'clock, and so on, until coverage reaches 12 o'clock and the selection process is complete.
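The baseline just described can be sketched as follows. The function name and interval representation are illustrative; the method simply extends coverage from the window's start by uniform random picks among the intervals containing the current coverage frontier.

```python
import random
from typing import List, Optional, Tuple

def random_select(intervals: List[Tuple[float, float]], t_start: float,
                  t_end: float, rng=random.Random(0)) -> Optional[List[int]]:
    """Random baseline: repeatedly pick, uniformly at random, any participant
    whose interval contains the current frontier, and extend coverage until
    t_end is reached. Returns selected indices, or None on a coverage gap."""
    frontier = t_start
    selected: List[int] = []
    remaining = set(range(len(intervals)))
    while frontier < t_end:
        cands = [i for i in remaining
                 if intervals[i][0] <= frontier < intervals[i][1]]
        if not cands:
            return None  # no interval covers the frontier: window not coverable
        i = rng.choice(cands)
        selected.append(i)
        frontier = intervals[i][1]
        remaining.discard(i)
    return selected
```

On the three-participant example above (intervals (0, 2), (1, 3), (2, 4) over [0, 4]), the first pick is forced to the participant covering 0 o'clock and the chain always ends at the participant covering 4 o'clock, whatever the random draws in between.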

To verify the effectiveness of the PJIM mechanism, m participants were simulated in the MATLAB environment. Because crowdsensing participants are nonprofessionals, the data sensing process is highly random. To simulate an actual sensing task, this paper assumes that each participant's bid price is a random value drawn from the uniform distribution on [1, 10], and that each participant's private target threshold is likewise drawn uniformly from [1, 10]. We then verify the effectiveness of PS-TWDT. Suppose the task publisher requires collecting air quality data in a region from 6:00 to 18:00 for ten consecutive days. To simulate this, we set a 12-hour time window and execute 10 windows consecutively, with 100 candidate participants per window. The sensing times reported by participants are set randomly; bid prices follow the uniform distribution on [1, 10], the battery power of participants' devices follows the uniform distribution on [1, 100], and the timeliness, completeness, accuracy, and value of each participant's data follow the uniform distribution on [0, 1].
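Under the stated distributions, a simulated candidate pool can be generated as in the following sketch; the field names and the RNG seed are illustrative choices, not part of the paper's setup.

```python
import random

def simulate_participants(n=100, window=(6, 18), rng=random.Random(42)):
    """Generate simulated participants per the experimental settings:
    bid ~ U[1, 10], battery ~ U[1, 100], four quality factors ~ U[0, 1],
    and a random sensing sub-interval of the 12-hour task window."""
    users = []
    for _ in range(n):
        a = rng.uniform(window[0], window[1])
        b = rng.uniform(window[0], window[1])
        users.append({
            "start": min(a, b),
            "end": max(a, b),
            "bid": rng.uniform(1, 10),
            "battery": rng.uniform(1, 100),
            # timeliness, completeness, accuracy, value
            "quality": [rng.uniform(0, 1) for _ in range(4)],
        })
    return users
```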

5.2. Experimental Results
5.2.1. Number of People in Backup Set at Night

Set the task time T = 12 and the task quantity to 1. The number of night backup participants is affected by both the number of mobile participants u and the task price pool. As u increases from 10 to 60, the size of the corresponding night backup set increases: with more mobile participants, a wider range of private target thresholds is represented, so for the same price pool more participants join the sensing task. Likewise, when the number of participants is held fixed and the price pool grows, the night backup set also grows, because the cumulatively increasing price pool gradually meets the participants' private target thresholds, as shown in Figure 3.

5.2.2. Price Pool Reward

Set the task time T = 12 and the task quantity to 1. The price pool amount is affected by the number of mobile participants u and the number of sensing task days. As the number of days with unfinished tasks increases, the price pool amount increases: the initial amount is small when the task is released, and the pool accumulates over time until it meets the private target thresholds of all participants. As the number of participants in the sensing task increases from 30 to 60, the price pool amount generally decreases, as shown in Figure 4.

5.2.3. Night Participants’ Compensation and Day Participants’ Compensation

Figure 5 shows how the rewards given to night and day participants change as the number of mobile participants increases. The payment for night participants is significantly higher than that for day participants, because extra compensation is offered for tasks performed during the nighttime period to attract more participants. The average payment for night participants is 17.2067, while the average payment for day participants is 12.

5.2.4. Ratio of the Number of People in the Backup Set at Night to the Total Number of People

Figure 6 shows how the ratio of night backup participants to total participants changes as the number of days of unfinished tasks increases. With more days of unfinished tasks, the price pool reward grows and meets the private target thresholds of more people; eventually, the ratio of the night backup set to the total number of people reaches 1.

5.2.5. Data Reliability

To verify data reliability, Figures 7 and 8 show two trend charts of data reliability as the number of task time windows increases. In Figure 7, the ordinate is the average data reliability of the selected participants in a single time window. As the number of time windows increases from 1 to 10, the proposed PS-TWDT selection method outperforms MST and Random in average data reliability in every window, because its selection process fully considers participants' data reliability: in each window, participants with high data reliability are selected by the dynamic programming algorithm. In Figure 8, the ordinate is the cumulative data reliability of the selected participants. As the number of time windows increases from 1 to 10, the cumulative data reliability of PS-TWDT grows significantly faster than that of the other two selection methods.

5.2.6. Data Benefit

In terms of data benefit, Figures 9 and 10 show two trend charts as the number of task time windows increases. In Figure 9, the ordinate is the average data benefit of the selected participants in a single time window. Although the average data benefit of PS-TWDT fluctuates considerably as the number of windows increases from 1 to 10, it is better than that of MST and Random in every window. This is because PS-TWDT adds the trust state update mechanism, so each participant selected during continuous task execution has a high credit value and high data reliability; moreover, the dynamic programming algorithm maximizes the ratio of total reliability to total bid price in each window, so the finally selected participants yield the highest data benefit. In Figure 10, the ordinate is the cumulative data benefit. As the number of windows increases from 1 to 10, the cumulative data benefit of PS-TWDT grows significantly faster than that of the other two methods, and the gap widens gradually.

5.2.7. The Average Cost

In terms of average data cost, Figure 11 shows the trend of the average cost of the selected participants in a single time window as the number of task time windows increases. As the number of windows increases from 1 to 10, the average data costs of PS-TWDT and MST differ little and are much lower than that of the Random method. This is because both the proposed selection mechanism and MST use dynamic programming and take the bid price into account when selecting participants, whereas the Random method does not; consequently, Random's average data cost is higher than the other two methods' and fluctuates over the widest range.

6. Conclusions

For crowdsensing scenarios in which the sensing task depends on the time period, this paper divides the task time and proposes an incentive mechanism, PJIM, which gives different rewards to participants in different time periods and sets additional rewards and a price pool to attract participants to the periods with fewer participants. Simulation results show that this mechanism offers higher rewards for time periods with few participants and can attract more participants to the sensing task. For participant selection, this paper proposes PS-TWDT to solve the time-window-dependent participant selection problem in mobile crowdsensing. The selection takes participants' own factors into account and introduces a credit value updating mechanism that dynamically updates participants' credit values after each task is performed, thus improving the reliability and benefit of the collected data.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work was supported by the Natural Science Foundation of Tianjin under Grant no. 19JCYBJC15400, the Program for the Science and Technology Plans of Tianjin, China, under Grant 19JCTPJC48300, and the National Natural Science Foundation of China under Grant 61702366.

References

  1. W. Z. Khan, Y. Xiang, M. Y. Aalsalem, and Q. Arshad, "Mobile phone sensing systems: a survey," IEEE Communications Surveys & Tutorials, vol. 15, no. 1, pp. 402–427, 2013.
  2. Z. Wang, C. Xia, Z. Chen et al., "Epidemic propagation with positive and negative preventive information in multiplex networks," IEEE Transactions on Cybernetics, vol. 23, pp. 1–9.
  3. C. Xia, Z. Wang, C. Zheng et al., "A new coupled disease-awareness spreading model with mass media on multiplex networks," Information Sciences, vol. 34, pp. 185–200, 2019.
  4. Z. Wang, Q. Guo, S. Sun, and C. Xia, "The impact of awareness diffusion on SIR-like epidemics in multiplex networks," Applied Mathematics and Computation, vol. 349, pp. 134–147, 2019.
  5. N. Lane, E. Miluzzo, H. Lu, D. Peebles, T. Choudhury, and A. Campbell, "A survey of mobile phone sensing," IEEE Communications Magazine, vol. 48, no. 9, pp. 140–150, 2010.
  6. Z. Duan, W. Li, and Z. Cai, "Distributed auctions for task assignment and scheduling in mobile crowdsensing systems," in Proceedings of the IEEE International Conference on Distributed Computing Systems, pp. 635–644, Atlanta, GA, USA, June 2017.
  7. Y. Wang, Z. Cai, G. Yin, Y. Gao, X. Tong, and G. Wu, "An incentive mechanism with privacy protection in mobile crowdsourcing systems," Computer Networks, vol. 102, pp. 157–171, 2016.
  8. Z. Duan, M. Yan, Z. Cai, X. Wang, M. Han, and Y. Li, "Truthful incentive mechanisms for social cost minimization in mobile crowdsourcing systems," Sensors, vol. 16, no. 4, p. 481, 2016.
  9. L. Zhang, X. Wang, J. Lu, P. Li, and Z. Cai, "An efficient privacy preserving data aggregation approach for mobile sensing," Security and Communication Networks, vol. 9, no. 16, pp. 3844–3853, 2016.
  10. B. Zhao, S. Tang, X. Liu et al., "PACE: privacy-preserving and quality-aware incentive mechanism for mobile crowdsensing," IEEE Transactions on Mobile Computing, vol. 34, 2020.
  11. J. Nie, J. Luo, Z. Xiong, D. Niyato, and P. Wang, "A Stackelberg game approach toward socially-aware incentive mechanisms for mobile crowdsensing," IEEE Transactions on Wireless Communications, vol. 18, no. 1, pp. 724–738, 2019.
  12. H. Wang, C. Liu, Y. Wang, and D. Sun, "A novel incentive mechanism based on reputation and trust for mobile crowd sensing network," in Proceedings of the International Conference on Control, Automation and Information Sciences, pp. 526–530, Hangzhou, China, 2018.
  13. S. Zhong, D. Tao, H. Luo, M. S. Obaidat, and T. Wu, "Staged incentive mechanism for mobile crowd sensing," in Proceedings of the IEEE International Conference on Communications (ICC), pp. 1–5, Kansas City, MO, USA, March 2018.
  14. X. Chen, Y. Zhao, Z. Li, and Y. Chen, "Incentive mechanism design based on stochastic game for multi-modality crowd sensing," in Proceedings of the IEEE International Conference on Smart Internet of Things (IEEE SmartIoT 2018), pp. 222–228, Xi'an, China, August 2018.
  15. D. Yang, G. Xue, X. Fang, and J. Tang, "Incentive mechanisms for crowdsensing: crowdsourcing with smartphones," IEEE/ACM Transactions on Networking, vol. 24, no. 3, pp. 1732–1744, 2016.
  16. X. Zhang, Z. Yang, Z. Zhou, H. Cai, L. Chen, and X. Li, "Free market of crowdsourcing: incentive mechanism design for mobile sensing," IEEE Transactions on Parallel and Distributed Systems, vol. 25, no. 12, pp. 3190–3200, 2014.
  17. Z. Feng, Y. Zhu, Q. Zhang, L. M. Ni, and A. V. Vasilakos, "TRAC: truthful auction for location-aware collaborative sensing in mobile crowdsourcing," in Proceedings of the IEEE INFOCOM 2014 - IEEE Conference on Computer Communications, pp. 1231–1239, Toronto, ON, Canada, 2014.
  18. M. Li, J. Lin, D. Yang, D. Xue, and J. Tang, "QUAC: quality-aware contract-based incentive mechanisms for crowdsensing," in Proceedings of the 14th IEEE International Conference on Mobile Ad hoc and Sensor Systems (MASS), pp. 72–80, Orlando, FL, USA, October 2017.
  19. Y. Zhan, Y. Xia, J. Zhang, and Y. Wang, "Incentive mechanism design in mobile opportunistic data collection with time sensitivity," IEEE Internet of Things Journal, vol. 5, no. 1, pp. 246–256, 2018.
  20. J. Lin, D. Yang, M. Li, J. Xu, and G. Xue, "Frameworks for privacy-preserving mobile crowdsensing incentive mechanisms," IEEE Transactions on Mobile Computing, vol. 17, no. 8, pp. 1851–1864, 2018.
  21. H. Li, T. Li, W. Wang, and Y. Wang, "Dynamic participant selection for large-scale mobile crowd sensing," IEEE Transactions on Mobile Computing, vol. 18, no. 12, pp. 2842–2855, 2019.
  22. M. E. Barachi, A. Lo, S. S. Mathew, and K. Afsari, "A novel quality and reliability-based approach for participants' selection in mobile crowdsensing," IEEE Access, vol. 7, pp. 30768–30791, 2019.
  23. J. Yang, P. Li, and J. Yan, "MCS data collection mechanism for participants' reputation awareness," Chinese Journal of Engineering, vol. 39, no. 12, pp. 1922–1934, 2017.
  24. H. Xiong, D. Zhang, L. Wang et al., "CrowdRecruiter: selecting participants for piggyback crowdsensing under probabilistic coverage constraint," in Proceedings of the 2014 ACM International Joint Conference on Pervasive and Ubiquitous Computing, pp. 703–714, Berlin, Germany, 2014.
  25. B. Guo, H. Chen, Q. Han, Z. Yu, D. Zhang, and Y. Wang, "Worker-contributed data utility measurement for visual crowdsensing systems," IEEE Transactions on Mobile Computing, vol. 16, no. 8, pp. 2379–2391, 2017.
  26. C. H. Liu, B. Zhang, X. Su, J. Ma, W. Wang, and K. K. Leung, "Energy-aware participant selection for smartphone-enabled mobile crowd sensing," IEEE Systems Journal, vol. 11, no. 3, pp. 1435–1446, 2017.
  27. T. Li, T. Jung, H. Li et al., "Scalable privacy-preserving participant selection in mobile crowd sensing," in Proceedings of the 2019 IEEE International Conference on Pervasive Computing and Communications (PerCom), Kyoto, Japan, March 2019.
  28. Y. Xu, J. Tao, Y. Gao, and L. Zeng, "Location-aware worker selection for mobile opportunistic crowdsensing in VANETs," in Proceedings of the IEEE Global Communications Conference (GLOBECOM), Abu Dhabi, UAE, December 2017.
  29. Y. Sun, "Game theory based strategic decision making of participants in participatory sensing systems," 2015.
  30. J. Xu, J. Xiang, and D. Yang, "Incentive mechanisms for time window dependent tasks in mobile crowdsensing," IEEE Transactions on Wireless Communications, vol. 14, no. 11, pp. 6353–6364, 2015.
  31. J. Champaign, R. Cohen, J. Zhang et al., "The validation of an annotations approach to peer tutoring through simulation incorporating the modeling of reputation," 2011.

Copyright © 2020 Xuemei Sun et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

