Abstract

The existing incentive mechanisms of crowdsourcing construct the expected utility function based on the rational-person assumption of traditional economics. A large number of studies in behavioral economics have demonstrated the defects of the traditional utility function and introduced a new parameter, the loss aversion coefficient, to calculate an individual's utility when it suffers a loss. In this paper, behavioral economics is combined with crowdsensing, and a payment algorithm based on loss aversion is proposed. Compared with the usual incentive mechanisms, the node utility function is redefined according to the loss aversion characteristic of the node. Experimental results show that the proposed algorithm achieves a higher cooperation rate at a lower payment price and has good scalability compared with the traditional incentive mechanisms.

1. Introduction

Crowdsensing, which can be described as people/human-centric sensing, has gradually become the preferred method for large-scale data collection [1]. The collection of sensed data relies on every single node that participates in the perception. Each node is a natural person holding smart devices such as smartphones or computers. Crowdsensing systems usually require support from a large amount of sensed data [2-5], while participating in crowdsensing imposes certain costs on smart devices (such as spatial movement and consumption of memory and power). Nodes do not perceive selflessly; they need a certain amount of compensation to be motivated. On the other hand, the requester of the sensing tasks weighs the reward paid against the value of the published data. A sensing task usually offers a limited reward, so it is desirable to obtain more valuable sensed data with less reward.

At present, there is a series of incentive mechanisms that seek benefit maximization mainly through a utility function grounded in the expected utility theory of traditional economics. However, the development of economics itself has gradually revised this theory. Behavioral economics shows that individual decision-making must consider the influence of psychological factors. The traditional expected utility function is no longer completely reasonable once psychology-related parameters are involved.

Loss aversion is an important branch of the prospect theory of behavioral economics. It describes the fact that a loss is more unbearable than a gain of the same value; therefore, individuals are more willing to voluntarily prevent possible losses. In order to motivate the nodes, this paper analyzes and models the decision-making process from the point of view of the nodes, introduces the loss aversion coefficient into their utility function, and adjusts the payment mode of the traditional incentive mechanisms, thereby effectively stimulating nodes to participate in perception and enhancing the performance of the crowdsensing system.

The main innovations of this paper are as follows:
(i) The paper uses loss aversion to build the incentive mechanism, which revises the cooperative-behavior research based on traditional economics, so as to make up for the insufficiency of the basic assumptions of traditional economics about human rationality, self-interest, complete information, utility maximization, and preference consistency.
(ii) By using the influence of loss aversion psychology on decision-making, a reward payment algorithm based on loss aversion is proposed for crowdsensing.

2.1. The Incentive Mechanisms of Crowdsensing

In order to increase the number of participants in sensing tasks and ensure data quality, a series of incentive mechanisms has been put forward, including monetary incentives, entertainment and gamification incentives, and social connection incentives [6]. References [7, 8] pointed out that the attributes of human beings are diverse and that individuals' decision-making behaviors are influenced not only by their own cognition, thinking, preferences, and other factors, but also by the surrounding environment. According to the motivation mechanism, this paper makes use of the individual's individuality and sociality and divides the common incentive mechanisms into individual incentive mechanisms and social incentive mechanisms.

(1) Individual Incentive Mechanisms. The individual incentive mechanisms mainly utilize the inherent pursuit of interests of the nodes, including the desire for money, the motivation to maintain and manage its own reputation, the pursuit of more virtual points, and a better entertainment experience.

(i) Mechanisms Based on Monetary Payment. In incentive mechanisms based on monetary payment, the platform motivates potential participants to join the sensing task and provide the required sensed data by giving workers a certain monetary reward. This kind of incentive mechanism is usually a combination of economics and computer science. The most common auction mechanisms include reverse auctions [9-12], portfolio auctions [13], multiattribute auctions [14], all-pay auctions [15], double auctions [16], and VCG (Vickrey-Clarke-Groves) auctions [17]. In monetary incentive mechanisms, game-theoretic mechanisms provide good mathematical models for resolving conflict and decision problems between the server and the participants, while providing a sufficient theoretical basis for analyzing participants' behavior. Monetary incentive mechanisms can effectively stimulate the enthusiasm of participants [18] and have a good theoretical foundation. However, they also have obvious shortcomings: the system can usually hardly establish a suitable price architecture. Most importantly, current pricing schemes cannot solve the dilemma between the requesters and the workers: if paid in advance, the workers can get the reward without working, which is called free-riding; if paid afterwards, the requesters can refuse to pay after getting the required information, which is called false-reporting [19].

(ii) Mechanisms Based on Entertainment and Gamification. Incentive mechanisms based on entertainment and gamification turn sensing tasks into sensing games, so that workers contribute to the sensing tasks during play. These mechanisms usually motivate workers to complete the sensing tasks through in-game rankings, task points, the intrinsic fun of the game, and so on. The authors in [20] used a ranking scheme and a badge scheme to motivate workers to participate. The authors in [21] designed a collection game called Treasure that collects information in the gaming area to draw a Wi-Fi coverage map. The author in [22] used players' text or photo tags in the play area to generate a series of recognizable geographic information that supports route navigation.

Individual motivation mechanisms neglect the environment in which the nodes are located. In the crowdsensing, individual nodes have the ability to interact with other nodes, and their behaviors and connections are mutually influential. Some researchers found that, in the incentive mechanisms of crowdsensing, the position of the workers’ social structure will affect their resources and access to information, as well as the degree of completion of the sensing tasks and the amount of needed compensation [23]. Therefore, a series of social incentive mechanisms for nodes are proposed.

(2) Social Incentive Mechanisms. Social incentive mechanisms consider the social aspects of nodes. Crowdsensing is made up of a large number of nodes, so the choices and behaviors among nodes are not completely isolated. Nodes adjust self-cognition at any time by the influence of other nodes and they draw up their behavioral strategy based on the information in social networks.

(i) Mechanisms Based on Social Connections. Incentive mechanisms based on social connections focus on the interactions and relationships among individuals. The authors in [24] established a Stackelberg-based social network among the participants in order to maximize the participants' utility. Based on social networks, the authors in [25] used a penalty mechanism to detect dishonest participants and built a trust system to improve existing incentive mechanisms. The authors in [26] improved participant selection and remuneration payment, enhancing individual integrity through supervision and reporting among closely related participants. Incentive mechanisms based on social connections motivate the participants to a certain extent, but because of the nature of the network relations themselves, the reliability and credibility of social networks are still badly in need of improvement [18].

(ii) Mechanisms Based on Service. Incentive mechanisms based on service are designed using the principle of mutual benefit. In crowdsensing, service consumers can also be considered service providers; that is, if the nodes want to get the services provided by the system, the nodes must also contribute to the system. For example, in the Parking Information System designed by [27], nodes play the roles of both consumers and contributors. The authors in [28] designed two incentive mechanisms under this framework, Incentive with Demand Fairness (IDF) and Iterative Tank Filling (ITF), used respectively to maximize the fairness of the nodes and the social welfare of the system. Some researchers consider service incentives at the group level. The authors in [29], inspired by blood donation, illustrate that contributors are driven not only by their own utility but also by the effects on their relatives and friends, and this group incentive has proved to be effective in practice.

The above utility-based incentive mechanisms are put forward to maximize the utilities of both the platform and the participants. The model of those incentive mechanisms can be expressed as the following formula [30]:

$$s_i^* = \arg\max_{s_i} U_i(s_i), \qquad (1)$$

where $s_i$ is the strategy of participant $i$ and $U_i$ is its utility.

This is reflected in the classic monetary incentive mechanisms [9-17]: each node makes its decisions to maximize its payment at the lowest cost, and all the nodes maintain their rationality during the bidding with the server in pursuit of more benefits. In entertainment and gamification incentive mechanisms [20-22], virtual credits and ranks take the place of money, so that nodes still make the decisions that maximize their interests. Moreover, considering the interactions among the nodes, social incentive mechanisms aim at maximizing the utilities of groups instead of individuals [18, 24-29].

All in all, as shown in formula (2), most incentive mechanisms still use the traditional expected utility function to describe the decision-making of individuals:

$$U_i = \sum_{k} p_k u_k. \qquad (2)$$

Formula (2) describes the utility of a node $i$: $p_k$ is the probability of choosing event $k$ and $u_k$ is the utility of event $k$ (the value of $u_k$ can be positive or negative). Each individual calculates its expected utility according to formula (2) and uses the result for its decision-making.

Formula (2) is based on the two hypotheses from traditional economics: source independence and the invariance of risk preference [31].

Source independence means the fungibility of wealth. In traditional economic theory, the value of wealth does not depend on how it is acquired, nor is wealth labeled by its source [32]. That is, nodes measure their gains and losses in a similar way, and benefits and costs can be freely offset against one another. This is obvious in the mechanisms that involve punishment [25, 26]: the cost of punishment and the benefit of cooperation can be superimposed without any difference.

The invariance of risk preference means that the risk preference of individuals is constant, objective, and consistent, which has been called process invariance in traditional economics [33]. Under this assumption, individuals in crowdsensing would never change their risk preference, no matter how the information environment in the system changes. In the mechanisms based on social networks, individuals' attitudes towards the risk of pursuing benefits or avoiding losses are constant, although they may change their decisions according to their opponents.

However, incentive mechanisms other than monetary incentive mechanisms [9-17] (e.g., entertainment and gamification incentive mechanisms [20-22] and social incentive mechanisms [18, 24-26]) consider not only the economic gains and losses but also the psychological factors of the individuals, so it is not reasonable to calculate individuals' utilities with the methods of the monetary incentive mechanisms. Even in the monetary incentive mechanisms, the impacts of some factors (such as the value of money, the sources of money, and the risk tendencies around money) on decision-making cannot be ignored. In addition, the two hypotheses, the independence of sources and the invariance of risk preference, have been questioned [34]. A number of studies have demonstrated that individuals' irrational behaviors may be reflected in the decision process and lead to irrational decisions [35].

2.2. Incentive Mechanism from the Perspective of Behavioral Economics

Behavioral economics put forward a well-known theory called loss aversion, which states that, when people face benefits and losses of the same amount, the losses are even more unbearable [36]. Loss aversion has been proved to be a common feature embodied in individual decisions.

Loss aversion changes the expected utility function of traditional economics and introduces a new parameter, the loss aversion coefficient $\lambda$, to describe the loss aversion characteristics of individuals; it is used to calculate the utility of individuals when they suffer losses, as in the following formula [37]:

$$v(x) = \begin{cases} x, & x \ge 0, \\ \lambda x, & x < 0, \end{cases} \qquad \lambda > 1. \qquad (3)$$

Formula (3) is a simplified form of the loss-aversion-type value function, with 0 as the reference point for expressing individual gains and losses. After introducing the loss aversion coefficient, this function overturns the traditional expected utility function and can be used to explain a series of phenomena that the expected utility function cannot, such as the Allais paradox [38], the reflection effect [39], and preference reversal [40].

Therefore, actual individual decision-making must use the utility function adjusted by the loss aversion coefficient:

$$U_i = \sum_{u_k \ge 0} p_k u_k + \sum_{u_k < 0} p_k \lambda u_k. \qquad (4)$$

Formula (4) specifies that, when the utility $u_k$ corresponding to event $k$ is negative, its value changes under the influence of the loss aversion coefficient $\lambda$, which affects the overall utility function.
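
As an illustration, the following Python sketch implements the simplified value function of formula (3) and the loss-adjusted expected utility of formula (4). The function names and the example gamble are our own, and the coefficient value 2.25 is only the classical reference value, not a parameter fixed by this section.

LAMBDA = 2.25  # loss aversion coefficient (classical reference value, assumed here)

def value(x, lam=LAMBDA):
    """Simplified loss-aversion value function with 0 as the reference point (formula (3))."""
    return x if x >= 0 else lam * x

def expected_utility(outcomes, lam=LAMBDA):
    """Loss-adjusted expected utility: sum of p_k * v(u_k) over all events (formula (4))."""
    return sum(p * value(u, lam) for p, u in outcomes)

# A fair gamble: +10 with probability 0.5, -10 with probability 0.5.
# The traditional expected utility is 0; with loss aversion it becomes negative.
print(expected_utility([(0.5, 10.0), (0.5, -10.0)]))  # -> -6.25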

Besides, loss aversion also overturns the views of source independence and the invariance of risk preference in traditional economics and considers that money is nonfungible and that individual risk preference is variable [41].

(1) Nonfungibility. Unlike the view of source independence, behavioral economics holds that, when external information enters the individual's cognitive mental accounting, it can effectively reflect the trade-off between expected return and possible loss and confirm whether the threshold boundary has been reached [42]. Because of this effective boundary, money cannot flow freely among subaccounts. In other words, due to the existence of loss aversion, when individuals make decisions, the value of funds is different and irreplaceable according to their different sources and expenditures. The value curve of traditional economics is like Figure 1(a), while the actual value function should be like Figure 1(b).

(2) The Variability of Risk Preference. An individual's risk preference plays an important guiding role in individual decision-making. Behavioral economics considers that loss aversion leads to changes in individuals' risk preference: individuals are risk-averse when facing certain gains, while they tend to seek risk when facing certain losses [43]. When making economic decisions, individuals worry more about losing money than they hope to gain profits, which in turn gives them more motivation to stop the loss; this is also a sign of loss aversion. The crowdsensing system is fuzzy and uncertain for the nodes. In addition to considering the proceeds, the nodes in the system are even more worried about their losses, which greatly affects their decision-making behavior.

To sum up, the crowdsensing system is an environment with unequal information, uncertainty, and ambiguity. Nodes are inevitably affected by this environment and cannot maintain individual rationality, and the utility function is bound to be influenced by psychological factors. The loss aversion theory has been successfully applied in economics [44], game theory [45], biology [46], environmental ecology [47], and other fields, which proves its feasibility. However, to our knowledge, loss aversion has not been used in the field of crowdsensing, so we believe that the incentive mechanisms of previous crowdsensing systems did not make full use of the characteristics of the nodes to target the incentives. After loss aversion is introduced, the structure of the incentive mechanism can be extended to formula (5), where $\psi_i$ represents the psychological factors of node $i$:

$$s_i^* = \arg\max_{s_i} U_i(s_i, \psi_i). \qquad (5)$$

3. Our Mechanism

There is a famous grape experiment [48] verifying loss aversion, which clearly shows how loss aversion works at the psychological level. We establish a mapping between the experiment and the crowdsensing system and propose a different payment algorithm through reasonable analysis and model construction. In this algorithm, the nodes' loss aversion is aroused so that they become more proactive in perception, improving system efficiency.

3.1. The Introduction of the Loss Aversion

Monkeys were used as subjects in the grape experiment, and the experimenter designed two game schemes for the monkeys:
(i) Option 1: the experimenter first puts one grape in front of a monkey and tosses a coin. If the coin lands face down, the monkey only gets that one grape; if it lands face up, the monkey gets an extra grape. The expected number of grapes each monkey receives is 1 + 50% × 1 = 1.5.
(ii) Option 2: the experimenter places two grapes in front of a monkey and then tosses a coin. If the coin lands face down, the experimenter takes one of the grapes away; if it lands face up, the monkey keeps both grapes. The expected number of grapes each monkey receives is 2 - 50% × 1 = 1.5.

By design, the expected benefit of both options is 1.5 grapes. The difference is that, in the first option, each monkey has a 50% chance to get an extra grape on top of a guaranteed grape, while in the second option it has a 50% chance to lose one of the two grapes it could have obtained.

If the monkey were a completely rational individual, its preference for the two options should be the same. However, the experimental results show that, once the monkeys understand that they may lose one of the two grapes in option 2, they all tend to choose option 1. This experiment shows that the pain brought to the monkeys by the loss of a grape is greater than the happiness brought by gaining one.

We simulate the grape experiment and propose a new mechanism for enhanced cooperation based on the loss aversion, which is different from the traditional incentive mechanisms, as shown in Table 1.

Theorem 1. When measuring gains and losses, the individual's pain from a loss is greater than the value of the actual loss; that is, for any loss amount $x > 0$, $|v(-x)| > |{-x}|$.

Proof. Traditional economics argues that a gain of $x$ is equal in value to a loss of $x$, expressed as $|u(x)| = |u(-x)| = x$. However, due to the impact of loss aversion on the value curve, the value that the individual actually perceives for the loss part should be adjusted to $v(-x) = -\lambda x$. Since $\lambda > 1$, when $x > 0$ we have $\lambda x > x$, which is $|v(-x)| > |u(-x)|$. That is, the perceived pain of the loss exceeds the value of the actual loss.

Based on the above analysis, the new loss aversion mechanism enlarges the value of the potential loss at the psychological level, so that limited-rational nodes tend to choose cooperation rather than lose the value already credited to them, thus promoting the enthusiasm of the nodes.
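
A small numeric check of Theorem 1 under the simplified value function; the coefficient value is only the classical reference value, used here purely for illustration.

lam = 2.25           # loss aversion coefficient, lambda > 1 (assumed reference value)
loss = -10.0         # an objective loss of 10 units
perceived = lam * loss
print(abs(perceived) > abs(loss))  # True: a loss of 10 "feels like" a loss of 22.5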

3.2. System Modeling

We use the classic crowdsensing architecture to build the system; the typical architecture is shown in Figure 2. The system includes the server platform and the task participants (data providers). The server in the cloud receives service requests from the data requesters (the data requesters can themselves be data providers; they are the same group), assigns the sensing tasks to the participants, processes the collected sensed data, and performs other administrative functions. After a participant receives a sensing task, it senses the required data and uploads the data to the server. The server returns the data to the data requesters after processing.
(i) The finite set $T = \{t_1, t_2, \ldots, t_M\}$ represents the service requests from the data users, that is, the sensing tasks that need to be completed in the system.
(ii) $N$ represents the number of workers in the system; in practical applications these workers are people holding intelligent devices. A worker can only join one sensing process at a time.

3.2.1. Platform

Each of the sensing tasks has its own attributes. We extract the main attributes needed in this paper, denoted as $t_j = \langle r_j, B_j, C_j \rangle$, where $r_j$ means its payoff, $B_j$ means its budget, and $C_j$ means its congestion level; the congestion level indicates the current degree of participation in the task.

There are various reward pricing schemes in the crowdsensing system. A budget-limited sensing task has a fixed total remuneration. In order to obtain plenty of high-quality sensed data within a defined budget, this paper defines that each sensing task $t_j$ pays each participating worker a reward of $r_j(s)$, where $s$ represents the current strategy profile of the workers.

In some cases, a worker may refuse to provide sensed data; we define this behavior as sleeping, in which the node does not pay any perception cost and does not receive any remuneration. We manually introduce a virtual task $t_0$ to describe it; $t_0$'s reward and congestion degree are always zero. The expanded task set is $T' = T \cup \{t_0\}$.

3.2.2. Worker

We describe each worker as a quaternion $\langle g_i, c_{ij}, R_i, V_i \rangle$. Each worker has his own selfish threshold, and only when the external conditions reach this threshold will the worker be motivated to work; we define the variable $g_i$ to represent this intrinsic property of the worker. $c_{ij}$ means the objective cost for a node $w_i$ of participating in a task $t_j$. In addition, we have learned in the second section that the worker is not entirely rational and is unlikely to remain rational at every decision stage. Due to cognition, experience, reference points, and other psychological factors, the worker reaches conclusions that do not exactly match the objective facts when he analyzes and judges an external condition. So we define two functions, $R_i(t_j)$ and $V_i(t_j)$. The former indicates the objective reward that a worker $w_i$ can actually obtain when he participates in a sensing task $t_j$; the latter means his gain as he perceives it. When a worker makes a decision, the reward that matters in his cognition is the latter.
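
A minimal Python sketch of this worker quaternion; the attribute names (gate, cost, objective_reward, perceived_reward) are illustrative, since the paper specifies the roles of these quantities but not their identifiers.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Worker:
    gate: float                                # selfish threshold of the worker (g_i)
    cost: Callable[[str], float]               # objective cost of joining task j (c_ij)
    objective_reward: Callable[[str], float]   # reward actually obtainable from task j (R_i)
    perceived_reward: Callable[[str], float]   # reward as the worker perceives it (V_i)

    def satisfied_by(self, task_id: str) -> bool:
        """The worker acts on the perceived reward, not the objective one."""
        return self.perceived_reward(task_id) - self.cost(task_id) >= self.gate

# Illustrative usage with constant cost/reward functions.
w = Worker(gate=5.0,
           cost=lambda j: 2.0,
           objective_reward=lambda j: 6.0,
           perceived_reward=lambda j: 8.0)
print(w.satisfied_by("t1"))  # True: 8 - 2 >= 5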

We define that each worker has two kinds of behaviors within the system: (a) choosing a sensing task and participating in perception and (b) sleeping. The main reason preventing a worker from participating in a sensing task is the cost that must be paid in the process. Obviously, only when the worker thinks the reward he can get meets his own selfish threshold will he choose to participate in the sensing task; otherwise he would rather sleep to prevent a meaningless loss of cost.

The worker maintains a real-time connection with the server in order to receive task pushes at any time and chooses a satisfying task to perform according to his own conditions. The worker node sends its current decision information to the server platform, and the server platform updates the overall strategy information in real time, updates the participation of each task, and obtains the latest overall strategy and task congestion information.

3.3. The Construction of Loss Aversion in Crowdsensing
3.3.1. Basic Definitions

This paper needs to use the following concepts.

Definition 2 (the benefits of user nodes). The objective benefit of a user node $w_i$'s involvement in a sensing task $t_j$ is the difference between the reward $r_j$ it receives for the sensing task and the cost $c_{ij}$ of its participation in perception, as follows:

$$u_{ij} = r_j - c_{ij}.$$

Definition 3 (the congestion degree of a task). The sum of all nodes' contributions to a task is its congestion degree, as in formula (12). The congestion degree represents the extent to which a task is currently being executed by nodes:

$$C_j = \sum_{i=1}^{N} x_{ij}. \qquad (12)$$

We artificially define $x_{ij}$ in formulas (12)-(15) as the contribution of a node $w_i$ to a task $t_j$; its value is 0 or 1. A value of 1 indicates that the node completes the task, and a value of 0 indicates that the node does not complete the task.

Definition 4 (social welfare). The social welfare in crowdsensing is the sum of the benefits of all workers, as follows:

$$W = \sum_{i=1}^{N} \sum_{j=1}^{M} x_{ij} u_{ij}. \qquad (13)$$

Definition 5 (the average value of each data). The average value of each data is the ratio of the total reward paid for all tasks in the system to the total number of data, as follows:

$$\bar{v} = \frac{\sum_{j=1}^{M} r_j C_j}{\sum_{j=1}^{M} C_j}. \qquad (14)$$

Definition 6 (cooperation rate). The ratio of the total number of cooperators $N_c$ to the total number of nodes $N$:

$$\rho = \frac{N_c}{N}. \qquad (15)$$

In order to measure the value of profit or loss relative to the reference point, and to describe the behavioral characteristics successfully, the concrete expression of the value function is given as formula (16). This function reflects three behavioral characteristics of the limited rational person: (a) most people are risk-averse when facing profits; (b) most people are risk-seeking when facing losses; (c) people are more sensitive to losses than to profits.

$$v(x) = \begin{cases} (x - x_0)^{\alpha}, & x \ge x_0, \\ -\lambda (x_0 - x)^{\beta}, & x < x_0, \end{cases} \qquad (16)$$

where $x_0$ represents the reference point of the decision-maker; if the gain is greater than the reference point, the decision-maker perceives a profit, or else a loss is perceived. $\alpha$ and $\beta$ are the risk attitude coefficients, and $\lambda$ is the loss aversion coefficient.
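
The following Python sketch implements the value function of formula (16); the classical parameter values (0.88, 0.88, 2.25) are the reference values of prospect theory and are used here only as assumed defaults.

def value(x, x0=0.0, alpha=0.88, beta=0.88, lam=2.25):
    """Prospect-theory value of outcome x relative to reference point x0 (formula (16))."""
    gain = x - x0
    if gain >= 0:
        return gain ** alpha                 # concave in the gain domain (risk-averse)
    return -lam * ((-gain) ** beta)          # convex and amplified in the loss domain

print(value(10))   # ~= 7.59
print(value(-10))  # ~= -17.07  (losses loom larger than gains)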

3.3.2. Payoff Matrix

This paper sets $r_j(s)$ as the objective reward that a node participating in a task $t_j$ can get, determined by the budget $B_j$ and the current congestion level $C_j$ of the task, as follows:

The payoff matrix of nodes and platform is shown in Table 2. The platform selects cooperation, that is, provides task information and rewards for the nodes; in the crowdsensing system the platform needs to do this all the time, which means the platform always cooperates. A node that selects cooperation performs sensing tasks and feeds back data as specified; this behavior requires paying the cost but also receives the corresponding reward. A node that selects noncooperation does not participate in the perception of any task and keeps sleeping, paying nothing and receiving nothing.
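
A hedged Python sketch of the node payoff summarized in Table 2. The exact division of a task's budget over its congestion level is not fully reproduced in the text, so the per-worker reward below (the budget split evenly over the current participants) is an illustrative assumption.

def per_worker_reward(budget: float, congestion: int) -> float:
    """Reward a task pays to one participant, given its current congestion (assumed split)."""
    return budget / max(congestion, 1)

def node_payoff(cooperate: bool, budget: float, congestion: int, cost: float) -> float:
    """Cooperating yields reward minus cost; sleeping (noncooperation) yields nothing."""
    if not cooperate:
        return 0.0
    return per_worker_reward(budget, congestion) - cost

print(node_payoff(True, budget=50.0, congestion=5, cost=4.0))   # 6.0
print(node_payoff(False, budget=50.0, congestion=5, cost=4.0))  # 0.0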

3.4. The Reward Payment Algorithm Based on the Loss Aversion

The decision-making process of this paper focuses on the loss aversion in the decision-making of the nodes. In this way, we adjust the traditional pipelined payment model and divide the payment process into three stages: the release stage, the selection stage, and the settlement stage.

(a) The First Stage: The Release. In the release phase, we first establish the highest control authority for the server. This is reasonable: although crowdsensing is a distributed system with no centralized control over the behavior of each individual, the establishment of common reputation mechanisms and virtual credit systems is premised on the server having the highest control authority.

As shown in Figure 3, in our system, each node registered at the platform obtains a system-specific part of its account, which allows the platform to pay rewards to, or deduct rewards from, the node after it joins the crowdsensing system.

Before a sensing task is settled, this part of the account of each node participating in the task is under the supervision of the server, and if the node exits the system midway, this part is reclaimed by the platform. Only when a sensing task is settled is this part of the reward truly transferred to the node's account and placed under its control.

At this stage, as described in Algorithm 1, the platform and the workers perform the following operations. The server platform collects sensing tasks from the requesters, generates the task set, and extracts the properties of each task. The platform sorts and counts the registered users and generates the worker set. The platform pushes the task information to all users, gives them payment vouchers, and declares the rules to the nodes at the same time. The nodes that are able to perceive register on the platform and define their threshold attribute and cost attribute; the former indicates the demand of a node, and the latter indicates the cost for the node of participating in task perception. The nodes need to keep real-time communication with the platform to receive the sensing tasks pushed by the platform, and they understand the rules of the platform and are able to determine their own behavior.

(1)   for each worker $w_i$ do
(2)     collect its threshold $g_i$ and cost $c_i$
(3)     if $w_i$ is available do
(4)       Register on the platform
(5)       Create a temporary account which is under the supervision of the system
(6)     else quit the system
(7)     end if
(8)     add $w_i$ to the worker set
(9)   end for
(10) for each task $t_j$ do
(11)   extract its attributes $\langle r_j, B_j, C_j \rangle$ and push the task information to all workers
(12) end for
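
A minimal, runnable Python sketch of this release stage, assuming simple dictionaries for workers and tasks; all field names and the random ranges are illustrative, not the paper's exact settings.

import random

def release_stage(num_workers: int, tasks: dict) -> dict:
    """Register workers, open supervised temporary accounts, and initialize task congestion."""
    workers = {}
    for i in range(num_workers):
        gate = random.uniform(0, 20)        # selfish threshold of the worker (assumed range)
        cost = random.uniform(0, gate)      # perception cost, related to the threshold
        workers[i] = {
            "gate": gate,
            "cost": cost,
            "temp_account": 0.0,            # reward held under platform supervision
            "task": None,                   # no task chosen yet
        }
    for task in tasks.values():
        task["congestion"] = 0              # no participants at release time
    return workers

workers = release_stage(5, {0: {"budget": 50.0}, 1: {"budget": 30.0}})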

(b) The Second Stage: The Assignment. In Section 2 we have already seen that nodes are not infinitely greedy and that different nodes have different target thresholds because of their own factors. The selfish threshold of a node is the standard by which the node makes decisions.

(1) The Node Selects Noncooperation. From common sense, it is easy to see that, if the reward a sensing task provides to a worker node is not higher than the node's selfish threshold, the node will not generate sufficient motivation to participate in perception. Its psychological state at this time is described as follows:

(2) The Node Selects Cooperation

(i) In the Traditional Incentive Mechanisms. When the node judges that the external conditions meet its own selfish threshold, the node will participate in perception. Its psychological state at this time is described as follows:

(ii) In the Loss Aversion Mechanism. The introduction of loss aversion not only retains the rational judgment, but also considers the limited rationality of the nodes. Due to loss aversion, the node makes not only the rational judgment, but also an additional judgment of whether it can afford to lose the reward that the platform has placed in the system-specific part of its account. If the pain of this loss reaches a certain threshold, the node will choose to participate in the sensing task to avoid the pain, thus contributing to cooperative behavior. Its psychological state at this time is described as follows:
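
The extracted text does not preserve the exact inequalities of these conditions, so the following Python sketch is only our reading of them: the rational condition compares the net reward with the threshold, and the loss-averse condition additionally weighs the lambda-amplified pain of losing the pre-placed reward. Function names and the amplification form are assumptions.

def cra_cooperates(reward: float, cost: float, gate: float) -> bool:
    """Completely rational node: cooperate only if the net reward meets the threshold."""
    return reward - cost >= gate

def laa_cooperates(reward: float, cost: float, gate: float, lam: float = 2.25) -> bool:
    """Loss-averse node: also cooperate if losing the pre-placed reward would hurt enough."""
    if reward - cost >= gate:                 # the rational judgment is kept
        return True
    pain_of_loss = lam * reward               # the held reward would be reclaimed if it sleeps
    return pain_of_loss - cost >= gate        # the amplified loss can still tip the decision

print(cra_cooperates(reward=8, cost=3, gate=6))   # False
print(laa_cooperates(reward=8, cost=3, gate=6))   # True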

As described in Algorithm 2, in the selection phase the node selects a task that satisfies it according to the above conditions; if no such task exists, it puts itself to sleep. While the nodes are selecting, the platform updates the congestion degree of the tasks and the unit price of the compensation in real time, and each node then judges whether to participate in a task according to the changed situation. The stage stops when no node in the system can find a task that further improves its reward.

When there are users and tasks in the system do
  for every worker $w_i$ when he is unsatisfied do
    Search every open task $t_j$
    Get from the platform the estimated price if he participates in this task
    if the rational condition is satisfied do
      Add $t_j$ to the candidate set
    else if the reward for $t_j$ has been placed in the supervised account do
      if the loss aversion condition is satisfied do
        Add $t_j$ to the candidate set
      end if
    end if
    Choose the best task in the candidate set and update the strategy
  end for
Until every user cannot find a valuable task
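
A compact Python sketch of this selection loop, assuming the per-worker reward falls as congestion rises and taking the decision rule as a pluggable function (for example, one of the two rules sketched above); the data structures are illustrative only.

def selection_stage(workers: dict, tasks: dict, cooperates) -> None:
    """Iteratively let unsatisfied workers pick tasks while congestion is updated."""
    changed = True
    while changed:
        changed = False
        for w in workers.values():
            if w["task"] is not None:
                continue                                     # already satisfied
            for j, task in tasks.items():
                reward = task["budget"] / (task["congestion"] + 1)
                if cooperates(reward, w["cost"], w["gate"]):
                    w["task"] = j
                    task["congestion"] += 1                  # platform updates in real time
                    changed = True
                    break
        # workers that still have task == None are left sleeping

# Illustrative run with a completely rational rule.
workers = {0: {"gate": 3.0, "cost": 1.0, "task": None}}
tasks = {0: {"budget": 10.0, "congestion": 0}}
selection_stage(workers, tasks, lambda r, c, g: r - c >= g)
print(workers[0]["task"])  # 0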

(c) The Third Stage: The Settlement. Each task has a fixed opening time; the platform settles the task when that time arrives. The settlement phase, as sketched in Algorithm 3, is mainly carried out as follows.

The platform checks whether the task has reached the settlement stage
  for every task $t_j$ that has reached the settlement stage do
    Platform settles the task and issues a response command
    for every node $w_i$ participating in $t_j$ do
      if it does not respond in time do
        punish the node
      else do
        finish this assignment
      end if
    end for
    Platform processes the data and feeds it back
  end for

Platform
(i) The platform determines whether the task has reached the settlement phase.
(ii) When a task reaches the settlement phase, the platform sends a settlement signal to the nodes that accepted the task; tasks that have not reached the settlement phase are not settled.
(iii) The platform collects the sensed data fed back, finishes the transaction with the nodes that completed the task on time, and reclaims the rewards from the nodes that failed to finish on time.
(iv) The platform continues to push unfinished tasks.
(v) The platform sends the collected sensed data to the requesters. This round ends.

Worker
(i) The nodes that decided to participate in a task do their work and send the sensed data back to the platform; the nodes that decided not to participate give the held rewards back.
(ii) The nodes figure out their reward in the current round.
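
A minimal Python sketch of the settlement step for a single task, assuming the same illustrative worker and task dictionaries as in the earlier sketches; the field names are ours, not the paper's.

def settle(task: dict, workers: dict, responded: set) -> None:
    """Finish the transaction for one task whose opening time has expired."""
    reward = task["budget"] / max(task["congestion"], 1)
    for wid, w in workers.items():
        if w["task"] != task["id"]:
            continue
        if wid in responded:
            w["temp_account"] += reward       # the held reward becomes the worker's own
        else:
            w["temp_account"] = 0.0           # the platform reclaims the held reward
        w["task"] = None                      # the worker is free for new tasks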

In incentive mechanisms such as virtual credit and reputation mechanisms, where the security of paying in advance need not be considered, the introduction of loss aversion should effectively improve the enthusiasm of the participants. In monetary payment incentives, this paper sets up the supervised account structure in the release stage, so that the security of unsettled tasks is effectively guaranteed. We believe that, because monetary payment is the most direct incentive, the effect of loss aversion there may be the most obvious.

The start of a task requires the requester to provide the reward in advance; requesters that want to get the data but refuse to pay cannot start a task at all, so false-reporting behavior cannot happen. And because of the settlement phase, malicious nodes that want to take the reward without working return empty-handed, which puts an end to free-riding behavior.

4. Simulation

4.1. Parameters Setup

In order to evaluate the incentive effect of the loss aversion on nodes, we set up two simulation scenarios, including the loss aversion algorithm (LAA) and the completely rational algorithm (CRA) [49].

We set the CRA as a control group; in this set of simulations, the node is completely rational and its judgment is mechanical. In the LAA, the node has limited rationality and its loss aversion is easily aroused. We mainly analyze algorithm performance through the cooperation rate and the average value of each data. The cooperation rate intuitively describes the incentive effect of the algorithm, while the average value of each data shows whether the system can acquire sensed data at a lower average price. We evaluate the scalability of the system by varying the number of online nodes and the number of sensing tasks, evaluate the influence of nodes with different attributes by varying the nodes' thresholds and costs, and evaluate the effect of the level of loss aversion on the algorithm by varying the risk attitude coefficients and the loss aversion coefficient.

We assume that some tasks within the system require large amounts of data, so their reward budgets are high, while some require small amounts of data, so their reward budgets are low; to simulate this, the reward budget of each sensing task is drawn at random. The reward a node receives when it completes a sensing task is $r_j$, whose value is defined in Section 3.3.2; with this setting the task reward is allocated reasonably, neither so high as to cause waste nor so low as to fail to attract workers. The node requirement $g_i$ is a random number within a given range; a node may have different requirements for each task or the same requirement, as determined by the node itself, and the cost $c_{ij}$ is treated in a similar manner. In formula (16), the parameters $\alpha$ and $\beta$ are the risk attitude coefficients and $\lambda$ is the loss aversion coefficient. Their reference values are usually derived from the experiments of the proposers of loss aversion, that is, $\alpha = \beta = 0.88$ and $\lambda = 2.25$. We take these classical values as references and discuss the influence of changes in these values on the results. Every reported result is the average of 50 runs.
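
A small Python sketch of this parameter setup; the classical coefficients are the reference values named above, while the uniform distributions and the helper name are purely illustrative assumptions.

import random

params = {
    "alpha": 0.88,            # risk attitude coefficient for gains (classical reference value)
    "beta": 0.88,             # risk attitude coefficient for losses
    "lambda": 2.25,           # loss aversion coefficient
    "runs": 50,               # every reported point is the average of 50 runs
}

def random_worker(gate_upper: float, cost_coefficient: float) -> dict:
    """Draw a worker's threshold and cost; the exact distributions are assumptions."""
    gate = random.uniform(0, gate_upper)
    cost = cost_coefficient * random.uniform(0, gate)
    return {"gate": gate, "cost": cost}

print(random_worker(gate_upper=20, cost_coefficient=0.5))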

4.2. Analysis
4.2.1. Dynamic Changes of the Task Participation

We fix the number of nodes $N$ and the number of tasks $M$ and observe the dynamic changes of the system. Task selection is a dynamic process: the system constantly adjusts according to the different requirements of the nodes so that each node eventually finds a sensing task meeting its requirements. In this process, because the congestion level of the tasks changes (rising as a whole, since nodes are constantly being added, as shown in Figure 4(b)), the reward paid to each node also changes. This may cause a situation in which some nodes participate in a task when its congestion degree is low but are no longer willing to participate once the congestion degree increases; these nodes then exit the task to look for other tasks that meet their demands, and the congestion degrees decrease. Eventually every node either finds a satisfying task or, if no satisfying task exists, chooses to sleep, and the congestion curve of each task becomes stable. Overall, the number of cooperators in the system keeps increasing, possibly with small fluctuations because some nodes need to adjust their strategies, but in the end a stable situation is reached.

4.2.2. The Number of Nodes and Number of Tasks

We compare the cooperation rates of the CRA and the LAA under different conditions to analyze the expansibility and stability of the system. We set and compare the lower demand for nodes with and and the higher demand with and . Figure 5(a) shows that increasing the online nodes can gain more data when the given sensing tasks are unchanged in the system. Still as shown in Figure 5(a), the superiority of the LAA is more pronounced when the congestion degree of the system is high. When and , the cooperation rate of the CRA is , while the cooperation rate of the LAA is . When and , the cooperation rate of the CRA was only , while the LAA remains at . This is due to the fact that the increase of the number of the nodes will cause the increase of the congestion degree of tasks. The available resources are fewer relative to the number of the nodes. The nodes in the LAA are more inclined to accept the tasks which are slightly lower than their own expectations because of their own psychological factors. This is more obvious in the case of nodes with higher demands. When there are fewer tasks meeting their own needs, the nodes in the LAA will make themselves willing to make some concessions under the strong psychological effect.

The average value of each dataset represents the average reward that the system needs to pay for an effective sensed data. The lower the value is, the more “cost-effective” the system is. Figure 5(b) shows the change in the average value of each dataset in the same situation. Since our tasks are budget-limited, their total budget is established. Therefore, when the number of participants is small, the reward assigned to each node will be very high; when the number of participants is too large, the reward is low. Too high unit reward causes waste, and too low reward cannot attract sufficient numbers of the nodes to work. When we fix the number of sensing tasks within the system and increase the number of the online nodes, at the beginning, the average value of each dataset is too high because the number of nodes is insufficient. In this case, the performance of the CRA and the LAA is similar, because nodes do not have to worry about the fact that they would find no suitable work when there is a surplus of resources. When the number of online nodes increases, the resources become insufficient. The effect of the LAA is very obvious; it can almost get the data at half the price of the CRA algorithm.

When the number of online nodes in the system does not change, we observe in Figure 6 how the number of cooperators changes as the number of sensing tasks increases. Obviously, when there are fewer tasks in the system, there will be fewer participants. However, when system resources are extremely insufficient and we slowly add new resources (new sensing tasks) to the system, we observe that the nodes in the LAA are encouraged more. This shows that, under resource constraints, putting in the same amount of new resources provokes more nodes in the LAA than in the CRA. Similarly, when the number of online nodes is fixed and the resources are changed, it can be seen that the LAA keeps a lower average value of each dataset when resources are extremely tight (when $M$ is between 5 and 20).

4.2.3. Upper Bound of the Node Demand Threshold and the Cost

When the number of online nodes and the number of sensing tasks are unchanging, the selfish threshold of the node itself and the cost of its participation in the task also affect the cooperation rate. We compared the two cases where resources are more abundant (, and ) and resources are more insufficient (, and ). The nodes will participate in the tasks that meet their demands; this dynamic searching task process could give priority to low-demand nodes to find the right task, so as to ensure that the two algorithms will certainly make nodes cooperate. On one hand, the LAA’s cooperation curve is significantly better than the CRA’s. On the other hand, in Figure 7(a), comparing two curves when with those when , we found when , the CRA cooperation rate is , while the LAA cooperation rate is . In the case of , the CRA cooperation rate is , and the LAA cooperation rate is . The difference between the CRA and the LAA is more obvious in the latter case. With the increase of , the decrease trend of the LAA curve slows down, which proves that the high demand node is more sensitive to the loss aversion.

The influence of the node threshold on the average value of each dataset is also obvious; Figure 7(c) compares the lower-demand and higher-demand cases. Since the individuals and the number of participants in the system are changing (the cost of participating in the same task also differs across individuals), it is normal for the data to vary within a small range, so we only need to compare the differences between the two algorithms in the same case. In the lower-demand case, the two sets of results are similar. When the node demand is increased, although the congestion degree does not change, fewer tasks can satisfy the nodes and the average value of each dataset becomes higher, which is reasonable; even in this case the LAA can still obtain data at a relatively low price, reflecting the superiority of the LAA.

Besides the selfish threshold of the nodes, the cost for the nodes of performing a task also has a significant impact. In order to facilitate the analysis, we fix the remaining parameters to compare the situations; the cost of a node is a random number within a given range. Figures 8(a) and 8(b) show that as the cost increases, the cooperation rate decreases. In the lower-cost case, the cooperation rate of the LAA is about 15% higher than that of the CRA, and the LAA still achieves the higher cooperation rate in the higher-cost case.

Interestingly, we found that, as the cost grows, the average value of each dataset under the CRA keeps rising, while that under the LAA decreases, as shown in Figure 9. This is because, in the CRA, the main factor a node considers is not its cost alone but the net reward (the reward minus the cost). As the cost increases, fewer tasks can meet the needs of the nodes, fewer nodes participate, and the average value of each dataset increases. For an LAA node, due to loss aversion, in order to avoid the pain caused by the loss, the node accepts a task as long as it does not want to lose the reward that the task "has already paid." So the number of cooperators within the system can be maintained at a high level, and the average value of each dataset is reduced.

We compared the influences of the threshold and the cost on the cooperation rate with the same numbers of nodes and tasks as above. It can be seen that, in the LAA, the effect of the cost is larger than that of the threshold: for the same increment, the cost reduces the cooperation rate more markedly. This is because the cost is objective and the LAA cannot reduce it, whereas the selfish threshold is relatively variable; the LAA in effect lowers the node's selfish threshold by enlarging its loss part, so it is easier for nodes to compromise when resources become limited and to choose a suboptimal task instead.

4.2.4. The Risk Attitude Coefficient and the Loss Aversion Coefficient

There have been many discussions of the risk attitude coefficients and the loss aversion coefficient since loss aversion was proposed. In the experiments above, we used the classical values $\alpha = \beta = 0.88$ and $\lambda = 2.25$ as the node's loss aversion attributes. In this section we vary these values, that is, we examine how the degree of a node's loss aversion impacts the algorithm. The analysis is carried out with the numbers of nodes and tasks fixed.

In order to compare with the completely rational algorithm, we take $\alpha = \beta = 1$ and $\lambda = 1$ as the reference point (when the loss aversion coefficient is 1, the model returns to the general model).

When and , the cooperation rate can still be maintained at a higher and more stable level as shown in Figure 10(a); the value is about , and when and , the cooperation rate will be reduced by about in the same case. For the average value of each dataset, it can be maintained below 1.6 when and , and when and , it needs to pay 2.4 units to get the data as shown in Figure 10(b).

Considering the above two graphs, one parameter setting can maintain a higher cooperation rate for the system, but the average value of each dataset is also higher, which is not cost-effective for the system; under the other setting, although the average value of each dataset is low, the cooperation rate is not high, so the system might not collect enough data. Only when the loss aversion parameters of a node lie between these two settings can the system both collect enough data and purchase it at a lower price, guaranteeing the quality and the price of the sensed data at the same time.

5. Conclusion

Guaranteeing cooperation is always a hotspot in crowdsensing systems. For greedy nodes, the higher the benefits the better, of course, but the limited budget means that the resources in crowdsensing are in most cases insufficient. This is the basic contradiction in crowdsensing. In order to alleviate this contradiction, traditional incentive mechanisms, under the premise of rational individuals, have designed many external mechanisms to regulate the behaviors of nodes. These incentive mechanisms promote the cooperative behaviors of the nodes from a variety of angles, but how to design a reasonable internal mechanism for cooperation is still an unresolved problem.

This paper presents an incentive mechanism, the LAA, based on the premise of limited rationality, which makes full use of the psychological activity that people cannot ignore in the decision-making process. It emphasizes the nodes' sensitivity to loss, which makes a node irrationally enlarge the value of the lost part in its cognition. By adjusting the architecture and the process of the payment algorithm, we stimulate the loss aversion of the nodes, making them more active in the sensing tasks. Finally, we use the experimental data to analyze the efficiency of the algorithm.

We have realized that considering irrational factors may have a multiplier effect in solving this problem, and the nature of cooperation still deserves further discussion. In the next step, we plan to compare the loss aversion algorithm with more recent incentive mechanisms to further improve the performance of crowdsensing systems, and to consider more irrational factors, stimulating the cooperative psychology of the nodes through framing bias, the endowment effect, choice architecture, and so on. We believe that thinking about irrationality can open up new ideas for crowdsensing systems.

Parameters

$N$: the number of online nodes in the system
$M$: the number of sensing tasks in the system
$B_j$: the reward budget of each sensing task
$r_j$: the reward paid by a sensing task to a single participating node
$g_{\max}$: the upper bound of the node demand threshold; we set it to 20, 40, 60
$c_i$: the node cost, related to the node demand through the coefficient $k$
$k$: the coefficient relating the cost to the demand; its value is 0.5, 1, 2
$\alpha, \beta$: the risk attitude coefficients
$\lambda$: the loss aversion coefficient.

Conflicts of Interest

The authors declare that they have no conflicts of interest.

Acknowledgments

This work is supported by the National Natural Science Foundation of China under Grant 61572528.