Scientific Programming

Special Issue: Big Data Mining and Applications in Smart Cities

Research Article | Open Access

Volume 2021 | Article ID 4600764 | https://doi.org/10.1155/2021/4600764

Zhao Huiqi, Abdullah Khan, Xu Qiang, Shah Nazir, Yasir Ali, Farhad Ali, "MCDM Approach for Assigning Task to the Workers by Selected Features Based on Multiple Criteria in Crowdsourcing", Scientific Programming, vol. 2021, Article ID 4600764, 12 pages, 2021. https://doi.org/10.1155/2021/4600764

MCDM Approach for Assigning Task to the Workers by Selected Features Based on Multiple Criteria in Crowdsourcing

Academic Editor: Zhu Xiao
Received: 08 May 2021
Accepted: 09 Jun 2021
Published: 18 Jun 2021

Abstract

Crowdsourcing, in simple words, is the outsourcing of a task to an online market so that it can be performed by a diverse crowd, thereby utilizing human intelligence. Because tasks are carried out in online labor markets and in parallel, crowdsourcing is time- and cost-efficient. During a crowdsourcing activity, selecting properly labeled tasks and assigning them to appropriate workers is a challenge. A mechanism has been proposed in the current study for assigning tasks to workers. The proposed mechanism is a multicriteria-based task assignment (MBTA) mechanism for assigning a task to the most suitable worker. The mechanism uses two multicriteria decision-making (MCDM) methods for weighting the criteria and ranking the workers: Criteria Importance Through Intercriteria Correlation (CRITIC) and the Technique for Order Preference by Similarity to Ideal Solution (TOPSIS). Criteria for the workers have been defined based on features identified in the literature. Weights have been assigned to these selected features/criteria with the help of the CRITIC method. The TOPSIS method has been used for the evaluation of workers, through which the workers are ranked in order to find the most suitable worker for the selected tasks. The proposed work is novel in several ways; for example, the existing methods are mostly based on a single criterion or some specific criteria, while this work is based on multiple criteria covering all the important features. Furthermore, the literature indicates that MCDM methods had not been used for task assignment in crowdsourcing before this research.

1. Introduction

The term crowdsourcing refers to the outsourcing of different tasks to a large group of people, known as the crowd, in order to utilize collective human intelligence. Crowdsourcing was first defined by J. Howe as the outsourcing of tasks or work to a network of undefined people by means of an open call format; it represents the act of an institution taking a function once performed by employees and outsourcing it to an undefined network of individuals [1]. The crowdsourcing process involves three main actors: the requester or client, who requests the work or task to be performed; the crowd, who perform the requested task; and the platform, which acts as a broker between the clients and the crowd [2]. Crowdsourcing platforms can be paid or unpaid: on paid platforms the crowd performs tasks in return for monetary rewards, while on unpaid platforms volunteers perform the tasks [3]. The use of crowdsourcing is steadily increasing, as it is time- and cost-efficient for software development and many other tasks. It has been applied in several domains such as design-based apps, text translation to different languages, and labelling datasets [2].

The term CSE (crowdsourced software engineering) is derived from crowdsourcing. By means of an open call, it recruits online workers globally to perform several software engineering tasks such as requirements elicitation, coding, designing, and testing, and it reduces time to market through parallelism. CSE has rapidly gained interest in both industry and academia [4], and crowdsourcing continues to attract attention from these communities. Task assignment, however, is a challenging phase: selecting an appropriately labeled task from the client or requester and assigning it to an appropriate worker is a difficult issue in crowdsourcing. During the crowdsourcing process, some workers select irrelevant tasks in order to obtain the rewards even though they do not have the ability to perform them. As a result, they submit low-quality results and thus decrease the client's trust, which directly affects the crowdsourcing process. Task assignment is therefore an important step in any crowdsourcing activity [5], and a solution is required to fix the task assignment problem. The main contributions of the proposed study are as follows:
(i) A mechanism has been presented for addressing the issue of assigning different tasks during a crowdsourcing activity.
(ii) The proposed work identified different features of workers and selected the important ones for building the task assignment criteria.
(iii) The existing task assignment methods are mostly based on a single criterion, while this work is based on multiple criteria.
(iv) Two MCDM methods, CRITIC and TOPSIS, have been used to weight the selected features and to evaluate and rank the workers so that tasks can be assigned to the most appropriate workers.

The remainder of the article is organized as follows: Section 2 describes the existing/related work, Section 3 describes the research method, and the conclusion is given in Section 4.

2. Related Work

In the existing literature, task assignment models and methods have been proposed with the help of different techniques. A competitor's history of participation, such as participation frequency and recency, winning frequency and recency, tenure, and last performance, is derived in order to construct a model [6]. The literature also proposes a framework for task recommendation comprising task-based preference modelling and preference-based task recommendation, the goal of which is to recommend tasks to the crowd [7]. A recommendation algorithm for personalized tasks is also advocated by the authors, who suggest approaching the issue of designing a mechanism for personalized task recommendation. Three task selection methods, a heuristic-based method, a bound-based method, and an active learning method, have also been identified in the literature [8]. The SmartCrowd framework focuses on optimizing the crowd-to-task assignment in knowledge-intensive crowdsourcing; it targets knowledge production rather than simple tasks and integrates multiple human factors such as crowd expertise, the required wage, and the acceptance ratio in the assignment process [9]. Task offloading has also been considered a significant area of research [10].

A bandit formulation for task assignment in heterogeneous crowdsourcing, known as bandit-based task assignment, has also been proposed; each worker is represented by an arm of the bandit. It mainly focuses on the worker selection strategy in heterogeneous crowdsourcing, the goal being to select the worker that is most suitable for the task [11]. A dynamic solution has also been presented for the task assignment problem on crowdsourcing platforms. The CrowdSelect approach offers an algorithm for assigning workers to tasks in a cost-efficient way while ensuring the accuracy of the tasks; its two main components are worker selection and worker error rate prediction [12]. A learning algorithm has been described that groups historical tasks into clusters and then derives from every cluster the worker features that optimize the quality of the contribution; these features are then used by the algorithm to select the appropriate worker for a task [13]. To measure the effect of personality on task selection, an experiment was conducted based on task characteristics such as type, money, and time. The personality type of the workers was measured with the Myers-Briggs Type Indicator, which distinguishes sixteen personality types, and a four-round experiment determined which worker belongs to which category. The personality types in rounds 1 and 3 were interested in prize and money rather than complexity, while those in the other two rounds preferred prize and money over deadlines [14]. The Skills Improvement Aware Framework has also been proposed to recommend crowd workers for tasks in crowdsourced software development, and a study of developer performance on the TopCoder platform was conducted [15].

The task assignment problem has also been explored under a budget constraint with a variety of skill levels and different required quality levels. An algorithm was designed to generate outcomes for a many-to-one matching problem with upper and lower bounds and the skill level of the workers [16]. A sensitive task assignment has also been proposed: the sensitive task is first partitioned and then assigned to workers. To avoid colluding participants who could exchange data, a three-step task assignment method, known as sensitive task assignment, is proposed; the steps are collusion estimation, worker selection, and task partitioning [17]. A data-driven learning approach was also proposed in which supervised learning and reinforcement learning are combined to enable agents to imitate task allocation strategies that show good performance [18]. TOP-K-T and TOP-K-W are two real-time recommendation algorithms proposed by Safran and Che; the first computes the most appropriate tasks for a worker and the second computes the most appropriate workers for a task [19]. Expertise prediction heuristics have also been proposed to identify experts automatically and to filter out nonexperts during a crowdsourcing activity. An experiment was performed based on four such heuristics: evaluator demographics, evaluation reaction time, mechanical reasoning aptitude, and an easy version of the evaluation tasks [20]. The Learning Automata Based Task Assignment (LEATask) algorithm, which exploits the similarity of workers in their performances, has also been proposed; it has an exploration stage and an exploitation stage [21]. Different areas can also be considered in task assignment and allocation, such as joint information and energy cooperation for heterogeneous networks [22].

A batch allocation technique has also been proposed for crowdsourcing tasks with overlapping skill requirements; the designed heuristic approaches include core-based batch allocation and layered batch allocation, and the experiments were conducted on an Upwork dataset [23]. Two online task assignment mechanisms have also been developed that dynamically assign sets of tasks to incoming crowd workers so that each worker obtains the maximum expected gain and the maximum expected potential gain. The authors divide the tasks into clusters and propose a Latent Topic Model to describe the topic structure and the expertise of the workers [24]. Based on deep learning, the existing literature also proposes a Tag-Semantic Task Recommendation model: the similarity of word vectors is computed, a Semantic Tag Matrix Database is established using Word2vec deep learning, and a task recommendation model is then built on the semantic tags to achieve task recommendation in crowdsourcing, with task-worker relevancy obtained by computing tag similarity [25]. The Dynamic Utility Task Allocation (DUTA) algorithm has also been proposed. The worker's initial value is estimated from the attributes given by the worker at registration time, the worker's development capability is calculated from the history of completed tasks, task complexity, and the quality and efficiency of the results, and the matching degree is then calculated from the weights of the crowd skills and the posted task requirements [26].

The active time of the worker has also been used to obtain a solution for multitask multiworker allocation. The three factors considered are the ability of the worker, the active time of the worker, and the complexity of the task module; the individuals are divided into collaborative groups, and the Hungarian algorithm is then used for the optimal selection of a worker to perform a task [27]. Automatic detection of improper tasks in the crowdsourcing process has also been proposed. By analyzing the estimated classifier, a variety of effective measures for detecting improper tasks are observed, including the words that appear in the task information, the reward the workers will receive after performing the task, and the workers' qualification for performing the task [5]. Various research fields can benefit from the proposed approach, such as IoT underlying heterogeneity [28], data aggregation in mobile sensor networks for IIoT [29], resource sharing in heterogeneous vehicular networks [30], and many other fields.

3. Methodology

As the assignment of tasks during a crowdsourcing activity is challenging, the current study proposes a task assignment mechanism based on multiple criteria, the multicriteria-based task assignment (MBTA) mechanism. Two methods have been used: the CRITIC method assigns weights to the selected features, and the TOPSIS method ranks the workers. The same work could also be performed manually, but manual processing is error-prone, whereas using predefined and well-tested methods increases the quality and appropriateness of the work; therefore, these two methods have been selected for the proposed study. The details of the study are discussed in the following sections.

3.1. Criteria for Task Assignment

To define the criteria for task assignment, a variety of features were first identified from the existing literature. Thirty-three of the most prominent and important features were identified during the literature study. These features were then analyzed, and the most important ones were selected for developing the task assignment mechanism. Weights were then assigned to these features by the CRITIC method, which is discussed in the next section. The list of identified features is shown in Table 1.


Table 1: Features identified from the literature.

S. no. | Features | Citations
1 | Profile management | [31]
2 | Flexibility | [32]
3 | Worker history | [7]
4 | Worker performance history | [7]
5 | Worker task searching history | [7]
6 | Task completion ratio | [7]
7 | Period of time for task | [7]
8 | Participation frequency | [6]
9 | Participation recency | [6]
10 | Winning frequency | [6]
11 | Winning recency | [6]
12 | Tenure | [6]
13 | Reliability | [33]
14 | Worker qualification | [5, 34]
15 | Quality of task | [5]
16 | Knowledge | [35]
17 | Skills/expertise | [9, 35]
18 | Cheap/cost-effective/cost-efficient | [1, 9, 12]
19 | Software worker behavior | [36]
20 | Task similarity | [37]
21 | Delivery time | [38]
22 | Task acceptance ratio | [9]
23 | Accuracy ratio | [12]
24 | Response ratio/quality of response | [12]
25 | Quality of task | [12]
26 | Trustworthy/honesty | [12]
27 | Relevant experience | [39]
28 | Interest | [40]
29 | Reaction time | [20]
30 | Personality type | [14]
31 | Skill level | [16]
32 | Active time | [27]
33 | Development efficiency | [26]

Table 1 lists all the features identified and analyzed during the systematic literature review. The most important features are then selected from this list in order to develop the mechanism.

3.2. Case Study

To complete the data collection for building the criteria, a case study was performed. In this case study, the issues regarding task assignment were highlighted and the gaps were discussed briefly. A comprehensive observation was carried out in order to select the features for the criteria from those identified in the literature. Experts were asked different questions in order to select the most important features, and each feature was then scaled from 1 to 10 with the help of the experts. A group of experts scaled these features so that important features receive more weight than the others, making it easy to rank the crowd workers with good qualities at the top. These features were then used for building the criteria as well as for evaluating and ranking the workers for task assignment. As all the features were identified from the existing literature, the experts were asked questions about the selected features in order to further analyze their importance. The questions asked of the experts are shown in Table 2.


Table 2: Questions asked of the experts.

Q1 | What is the importance of worker history while assigning a task?
Q2 | What is the importance of trustworthiness in task assignment?
Q3 | How much does worker qualification matter during task assignment?
Q4 | Is the reliability of the worker important for assigning a task?
Q5 | What is the role of response ratio in assigning a task?
Q6 | Does skill level matter for task assignment?
Q7 | Is the quality of task important for clients?
Q8 | What is the importance of delivery time in crowdsourcing?
Q9 | What is the role of cost in assigning a task?

A list of the selected features is shown in Table 3.


Table 3: Selected features.

S. no. | Features | Citations
1 | Worker history | [7]
2 | Trustworthiness/honesty | [12]
3 | Worker qualification | [5, 34]
4 | Reliability | [33]
5 | Response ratio/quality of response | [12]
6 | Skill level | [16]
7 | Quality of task | [5]
8 | Delivery time | [38]
9 | Cheap/cost-effective/cost-efficient | [1, 9, 12]

3.3. Weight of Selected Features

The features have been analyzed by experts in the relevant field. Scaling was given to each criterion/feature, ranging from 1 to 10, by these experts in order to get the most important criteria. Weights have been assigned to all these selected features with the help of the CRITIC method. The final weights have been obtained by applying equations (1)–(4), respectively. The final weights have been shown in Table 4 and Figure 1. Table 5 describes the scales of the selected features.


Table 4: Final weights of the selected criteria obtained with the CRITIC method (rows correspond to the criteria C1-C9 defined in Section 3.4.2).

Criterion | Sum | Standard deviation | Cj | Wj
C1 | 8.888 | 0.332 | 2.953 | 0.114
C2 | 8.879 | 0.313 | 2.778 | 0.107
C3 | 8.286 | 0.389 | 3.226 | 0.125
C4 | 8.690 | 0.314 | 2.731 | 0.105
C5 | 8.768 | 0.359 | 3.149 | 0.122
C6 | 9.070 | 0.281 | 2.549 | 0.098
C7 | 8.830 | 0.359 | 3.172 | 0.122
C8 | 7.810 | 0.334 | 2.611 | 0.101
C9 | 8.087 | 0.337 | 2.728 | 0.105
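As a quick arithmetic check of equation (3), the weight of the first criterion (worker history) follows directly from the Cj column of Table 4:

$$w_1 = \frac{C_1}{\sum_{j=1}^{9} C_j} = \frac{2.953}{25.897} \approx 0.114,$$

which matches the Wj value reported for worker history.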


Table 5: Expert scores (1-10) of each worker on the selected features.

Features | Worker history | Trustworthiness/honesty | Qualification | Reliability | Response ratio/quality of response | Skill level | Quality of task | Delivery time | Cheap/cost-effective/cost-efficient
Worker 1 | 6 | 7 | 5 | 8 | 4 | 3 | 9 | 7 | 8
Worker 2 | 3 | 4 | 8 | 6 | 5 | 7 | 6 | 9 | 7
Worker 3 | 9 | 7 | 8 | 5 | 7 | 7 | 6 | 4 | 5
Worker 4 | 5 | 7 | 4 | 8 | 9 | 6 | 5 | 8 | 7
Worker 5 | 7 | 5 | 3 | 8 | 9 | 6 | 4 | 2 | 6
Worker 6 | 4 | 8 | 7 | 5 | 3 | 6 | 9 | 1 | 5
Worker 7 | 8 | 5 | 7 | 3 | 2 | 8 | 6 | 2 | 8
Worker 8 | 2 | 7 | 7 | 6 | 8 | 9 | 3 | 4 | 5
Worker 9 | 7 | 6 | 3 | 1 | 8 | 8 | 9 | 5 | 4
Worker 10 | 1 | 7 | 5 | 9 | 6 | 8 | 2 | 4 | 3

The weights for each criterion are shown in Figure 2.

3.4. Proposed Mechanism for Task Assignment

The MBTA mechanism has been proposed, which is based on multiple criteria. This mechanism has been developed based upon two methods. The CRITIC method is used to assign weights to the selected features, and then the TOPSIS method is used for ranking the workers. The details are discussed in the following sections.

3.4.1. CRITIC Approach for Allocating Weights to Features

CRITIC is a correlation-based method that was first introduced in 1995. It is a multicriteria decision-making approach used for assigning weights to features or criteria. In this method, the weights are assigned to the criteria objectively rather than through pairwise comparison or decision-makers' judgments [41].

Let "m" be the number of possible alternatives A_i, i = 1, 2, 3, …, m, and "n" the number of evaluation criteria C_j, j = 1, 2, 3, …, n, in a problem. The following steps are followed in the approach.

Step-1. Building a Decision Matrix. A decision matrix "X" is created in the first step:

$$X = [x_{ij}]_{m \times n}, \quad i = 1, \ldots, m, \; j = 1, \ldots, n. \tag{1}$$

In equation (1), x_ij shows the performance value of the ith alternative on the jth criterion.

Step-2. Decision Matrix Normalization. The process of normalization is done through the following equation:

$$\bar{x}_{ij} = \frac{x_{ij} - x_j^{\text{worst}}}{x_j^{\text{best}} - x_j^{\text{worst}}}, \tag{2}$$

where $\bar{x}_{ij}$ is the normalized performance value of the ith alternative on the jth criterion.

Step-3. Calculating Standard Deviation and Its Correlation. In the third step, the weight of the jth criterion can be found with the following equation:

$$w_j = \frac{C_j}{\sum_{j'=1}^{n} C_{j'}}. \tag{3}$$

In equation (3), C_j is the amount of information contained in the jth criterion. C_j is calculated as follows:

$$C_j = \sigma_j \sum_{j'=1}^{n} \left(1 - r_{jj'}\right), \tag{4}$$

where $\sigma_j$ is the standard deviation of the jth criterion and $r_{jj'}$ is the correlation coefficient between the two criteria [41].
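The short sketch below illustrates how the CRITIC steps above could be implemented. It is an illustrative NumPy implementation, not the authors' code; the function name critic_weights, the optional best/worst arguments, and the use of the sample standard deviation are our assumptions.

```python
# Illustrative sketch of CRITIC weighting (equations (1)-(4)); not the authors' code.
import numpy as np

def critic_weights(X, best=None, worst=None):
    """Return objective criteria weights for a decision matrix X (alternatives x criteria)."""
    X = np.asarray(X, dtype=float)
    if best is None:
        best = X.max(axis=0)            # ideal (best) value per criterion
    if worst is None:
        worst = X.min(axis=0)           # anti-ideal (worst) value per criterion

    # Equation (2): normalise each criterion to [0, 1] using its best/worst values
    R = (X - worst) / (best - worst)

    # Standard deviation of each normalised criterion (sample standard deviation assumed)
    sigma = R.std(axis=0, ddof=1)

    # Correlation between criteria and the conflict measure sum_j' (1 - r_jj')
    r = np.corrcoef(R, rowvar=False)
    conflict = (1.0 - r).sum(axis=0)

    # Equation (4): amount of information C_j = sigma_j * sum_j' (1 - r_jj')
    C = sigma * conflict

    # Equation (3): weights w_j = C_j / sum_j C_j
    return C / C.sum()
```

Because the paper does not state which criteria were scored in a beneficial or nonbeneficial direction, the weights produced by this generic sketch may differ slightly from those reported in Table 4.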

3.4.2. Numerical Work of the CRITIC Method

Weights are assigned to the criteria using the CRITIC method. The aim of this study was to find the top worker for the offered task based upon the selected features. The workers that will perform the tasks are used as alternatives A1, A2, A3, A4, A5, A6, A7, A8, A9, and A10, and the features are used as criteria: worker history (C1), trustworthiness/honesty (C2), worker qualification (C3), reliability (C4), response ratio/quality of response (C5), skill level (C6), quality of task (C7), delivery time (C8), and cheap/cost-effective/cost-efficient (C9). The decision matrix established for these 10 workers (alternatives) with respect to the defined features/criteria is shown in Table 6, and the normalized decision matrix obtained by applying equation (2) is given in Table 7. Figure 2 shows the steps followed in this method.


Table 6: Decision matrix.

Alternative | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9
A1 | 6 | 7 | 5 | 8 | 4 | 3 | 9 | 7 | 8
A2 | 3 | 4 | 8 | 6 | 5 | 7 | 6 | 9 | 7
A3 | 9 | 7 | 8 | 5 | 7 | 7 | 6 | 4 | 5
A4 | 5 | 7 | 4 | 8 | 9 | 6 | 5 | 8 | 7
A5 | 7 | 5 | 3 | 8 | 9 | 6 | 4 | 2 | 6
A6 | 4 | 8 | 7 | 5 | 3 | 6 | 9 | 1 | 5
A7 | 8 | 5 | 7 | 3 | 2 | 8 | 6 | 2 | 8
A8 | 2 | 7 | 7 | 6 | 8 | 9 | 3 | 4 | 5
A9 | 7 | 6 | 3 | 1 | 8 | 8 | 9 | 5 | 4
A10 | 1 | 7 | 5 | 9 | 6 | 8 | 2 | 4 | 3
Best | 9 | 8 | 8 | 9 | 9 | 9 | 9 | 9 | 8
Worst | 1 | 4 | 3 | 1 | 2 | 3 | 2 | 1 | 3

The calculations of the CRITIC method now follow step by step. Table 6 shows the CRITIC method decision matrix.

Table 7 shows the CRITIC method normalized decision matrix.


Table 7: Normalized decision matrix.

Alternative | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9
A1 | 0.375 | 0.750 | 0.600 | 0.125 | 0.714 | 1.000 | 0.000 | 0.250 | 0.000
A2 | 0.750 | 0.000 | 0.000 | 0.375 | 0.571 | 0.333 | 0.429 | 0.000 | 0.200
A3 | 0.000 | 0.750 | 0.000 | 0.500 | 0.286 | 0.333 | 0.429 | 0.625 | 0.600
A4 | 0.500 | 0.750 | 0.800 | 0.125 | 0.000 | 0.500 | 0.571 | 0.125 | 0.200
A5 | 0.250 | 0.250 | 1.000 | 0.125 | 0.000 | 0.500 | 0.714 | 0.875 | 0.400
A6 | 0.625 | 0.000 | 0.200 | 0.500 | 0.857 | 0.500 | 0.000 | 1.000 | 0.600
A7 | 0.125 | 0.750 | 0.200 | 0.750 | 1.000 | 0.167 | 0.429 | 0.875 | 0.000
A8 | 0.875 | 0.250 | 0.200 | 0.375 | 0.143 | 0.000 | 0.857 | 0.625 | 0.600
A9 | 0.250 | 0.500 | 1.000 | 1.000 | 0.143 | 0.167 | 0.000 | 0.500 | 0.800
A10 | 1.000 | 0.250 | 0.600 | 0.000 | 0.429 | 0.167 | 1.000 | 0.625 | 1.000
Standard deviation | 0.332 | 0.313 | 0.389 | 0.314 | 0.359 | 0.281 | 0.359 | 0.334 | 0.337

The correlation coefficients between the criteria have been calculated as shown in Table 8.


Table 8: Correlation coefficients between the criteria.

Criteria | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9
C1 | 1.000 | −0.655 | −0.116 | −0.462 | −0.020 | −0.238 | 0.452 | −0.206 | 0.357
C2 | −0.655 | 1.000 | 0.178 | 0.117 | −0.046 | 0.242 | −0.166 | −0.159 | −0.389
C3 | −0.116 | 0.178 | 1.000 | −0.143 | −0.538 | 0.217 | −0.007 | −0.026 | 0.149
C4 | −0.237 | 0.117 | −0.143 | 1.000 | 0.213 | −0.424 | −0.529 | 0.241 | 0.073
C5 | −0.020 | −0.046 | −0.543 | 0.213 | 1.000 | 0.189 | −0.420 | 0.221 | −0.362
C6 | 0.000 | 0.242 | 0.217 | −0.424 | 0.189 | 1.000 | −0.503 | −0.266 | −0.523
C7 | 0.452 | −0.166 | −0.007 | −0.529 | −0.420 | −0.503 | 1.000 | 0.059 | 0.283
C8 | −0.206 | −0.159 | −0.026 | 0.241 | 0.221 | −0.266 | 0.059 | 1.000 | 0.325
C9 | 0.357 | −0.389 | 0.149 | 0.073 | −0.362 | −0.523 | 0.283 | 0.325 | 1.000

The measure of conflict, 1 − rjj′, between each pair of criteria has been calculated as shown in Table 9; together with the standard deviations, it is used to compute the criteria weights.


Table 9: Measure of conflict (1 − rjj′) between the criteria.

Criteria | C1 | C2 | C3 | C4 | C5 | C6 | C7 | C8 | C9
C1 | 0.000 | 1.655 | 1.116 | 1.462 | 1.020 | 1.238 | 0.548 | 1.206 | 0.643
C2 | 1.655 | 0.000 | 0.822 | 0.883 | 1.046 | 0.758 | 1.166 | 1.159 | 1.389
C3 | 1.116 | 0.822 | 0.000 | 1.143 | 1.538 | 0.783 | 1.007 | 1.026 | 0.851
C4 | 1.237 | 0.883 | 1.143 | 0.000 | 0.787 | 1.424 | 1.529 | 0.759 | 0.927
C5 | 1.020 | 1.046 | 1.543 | 0.787 | 0.000 | 0.811 | 1.420 | 0.779 | 1.362
C6 | 1.000 | 0.758 | 0.783 | 1.424 | 0.811 | 0.000 | 1.503 | 1.266 | 1.523
C7 | 0.548 | 1.166 | 1.007 | 1.529 | 1.420 | 1.503 | 0.000 | 0.941 | 0.717
C8 | 1.206 | 1.159 | 1.026 | 0.759 | 0.779 | 1.266 | 0.941 | 0.000 | 0.675
C9 | 0.643 | 1.389 | 0.851 | 0.927 | 1.362 | 1.523 | 0.717 | 0.675 | 0.000
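Summing the first row of Table 9 and multiplying by the corresponding standard deviation from Table 7 reproduces, up to rounding, the amount of information C1 reported in Table 4 (equation (4)):

$$C_1 = \sigma_1 \sum_{j'=1}^{9} \left(1 - r_{1j'}\right) \approx 0.332 \times 8.888 \approx 2.95.$$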

For each worker, all 9 features/criteria have been scaled from 1 to 10, as shown in Table 5.

3.4.3. TOPSIS Approach for Ranking of Workers

The TOPSIS approach deals with achieving ideal solutions. It adopts simple computation procedures and is therefore reliable and well established. In the TOPSIS method, the selected alternative should have the minimum distance from the positive ideal solution and the maximum distance from the negative ideal solution [41]. In this work, the TOPSIS method is applied to rank the alternatives. This section first presents the TOPSIS method along with its steps and then describes how it has been used in this research. The following steps are used in the TOPSIS method in order to select and rank the best workers among the different alternatives.

Step-1. Determining Weights and Building the Decision Matrix. The decision matrix D is constructed in the first step from the alternatives and the criteria. For m alternatives A1, A2, A3, …, Am and n criteria C1, C2, C3, …, Cn, the decision matrix is

$$D = [x_{ij}]_{m \times n}. \tag{5}$$

Step-2. Normalized Decision Matrix. As the input data of the decision matrix originates from several different sources, it is converted into a dimensionless matrix by normalization; the comparison between different criteria is carried out on this dimensionless matrix. Using formula (6), the normalized decision matrix is built:

$$r_{ij} = \frac{x_{ij}}{\sqrt{\sum_{i=1}^{m} x_{ij}^{2}}}, \quad i = 1, \ldots, m, \; j = 1, \ldots, n. \tag{6}$$

Step-3. Weighted Normalized Decision Matrix. As the attributes are not necessarily of equal importance, the weighted normalized decision matrix is obtained by multiplying the elements of the normalized decision matrix by the corresponding criterion weights:

$$v_{ij} = w_j r_{ij}. \tag{7}$$

Step-4. Finding the Positive and Negative Ideal Solutions. In this step, A+ denotes the positive ideal solution and A− denotes the negative ideal solution; both are obtained from the weighted normalized decision matrix:

$$A^{+} = \left\{ \left(\max_i v_{ij} \mid j \in J\right), \left(\min_i v_{ij} \mid j \in J'\right) \right\}, \tag{8}$$

$$A^{-} = \left\{ \left(\min_i v_{ij} \mid j \in J\right), \left(\max_i v_{ij} \mid j \in J'\right) \right\}, \tag{9}$$

where J denotes the beneficial attributes and J′ denotes the nonbeneficial attributes.

Step-5. Separation Measures. The ideal and nonideal separations are calculated with the following formulas:

$$S_i^{+} = \sqrt{\sum_{j=1}^{n} \left(v_{ij} - v_j^{+}\right)^{2}}, \tag{10}$$

$$S_i^{-} = \sqrt{\sum_{j=1}^{n} \left(v_{ij} - v_j^{-}\right)^{2}}. \tag{11}$$

Step-6. Finding Relative Closeness. The relative closeness to the ideal solution is determined by the following equation:

$$C_i = \frac{S_i^{-}}{S_i^{+} + S_i^{-}}. \tag{12}$$

Step-7. Ranking the Alternatives. Using the Ci value, the ranking is prepared; a high Ci value indicates a top-ranked alternative, which can be labeled as superior in terms of efficiency. Descending order can be adopted for the comparison of performance [41].
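The sketch below illustrates the TOPSIS steps above. It is an illustrative NumPy implementation, not the authors' code; the function name topsis_rank and the default assumption that all criteria are beneficial are ours.

```python
# Illustrative sketch of TOPSIS ranking (equations (5)-(12)); not the authors' code.
import numpy as np

def topsis_rank(X, weights, benefit=None):
    """Return the relative closeness P_i of each alternative (row of X) to the ideal solution."""
    X = np.asarray(X, dtype=float)
    weights = np.asarray(weights, dtype=float)
    if benefit is None:
        benefit = np.ones(X.shape[1], dtype=bool)   # assumption: every criterion is beneficial

    # Equation (6): vector normalisation of each criterion column
    R = X / np.sqrt((X ** 2).sum(axis=0))

    # Equation (7): weighted normalised matrix
    V = R * weights

    # Equations (8)-(9): positive (A+) and negative (A-) ideal solutions
    a_pos = np.where(benefit, V.max(axis=0), V.min(axis=0))
    a_neg = np.where(benefit, V.min(axis=0), V.max(axis=0))

    # Equations (10)-(11): separation measures from the ideal and anti-ideal solutions
    s_pos = np.sqrt(((V - a_pos) ** 2).sum(axis=1))
    s_neg = np.sqrt(((V - a_neg) ** 2).sum(axis=1))

    # Equation (12): relative closeness; a higher value means a better-ranked worker
    return s_neg / (s_pos + s_neg)
```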

3.4.4. Numerical Work of the TOPSIS Method

In this section, the workers are evaluated and ranked based upon the 9 identified features using the TOPSIS method. The data has been collected from questionnaires answered by several experts in the relevant fields. The decision matrix constructed from the data obtained from the panel of experts is shown in Table 10.

All the work is done step by step as shown in Figure 3.

By using equation (6), the normalized decision matrix is obtained. The results are listed in Table 11 along with the criteria weights.


Table 10: Input data.

Attributes | Worker history | Trustworthiness/honesty | Qualification | Reliability | Response ratio/quality of response | Skill level | Quality of task | Delivery time | Cheap/cost-effective/cost-efficient
W1 | 6 | 7 | 5 | 8 | 4 | 3 | 9 | 7 | 8
W2 | 3 | 4 | 8 | 6 | 5 | 7 | 6 | 9 | 7
W3 | 9 | 7 | 8 | 5 | 7 | 7 | 6 | 4 | 5
W4 | 5 | 7 | 4 | 8 | 9 | 6 | 5 | 8 | 7
W5 | 7 | 5 | 3 | 8 | 9 | 6 | 4 | 2 | 6
W6 | 4 | 8 | 7 | 5 | 3 | 6 | 9 | 1 | 5
W7 | 8 | 5 | 7 | 3 | 2 | 8 | 6 | 2 | 8
W8 | 2 | 7 | 7 | 6 | 8 | 9 | 3 | 4 | 5
W9 | 7 | 6 | 3 | 1 | 8 | 8 | 9 | 5 | 4
W10 | 1 | 7 | 5 | 9 | 6 | 8 | 2 | 4 | 3

The weighted normalized decision matrix is obtained by using equation (7). The positive ideal and negative ideal solutions are calculated by using equations (8) and (9), and their values are given in Table 12.


Table 11: Normalized decision matrix and criteria weights.

Attributes | Worker history | Trustworthiness/honesty | Qualification | Reliability | Response ratio/quality of response | Skill level | Quality of task | Delivery time | Cheap/cost-effective/cost-efficient
W1 | 0.33 | 0.35 | 0.26 | 0.40 | 0.19 | 0.14 | 0.45 | 0.42 | 0.42
W2 | 0.16 | 0.20 | 0.42 | 0.30 | 0.24 | 0.32 | 0.30 | 0.54 | 0.37
W3 | 0.49 | 0.35 | 0.42 | 0.25 | 0.34 | 0.32 | 0.30 | 0.24 | 0.26
W4 | 0.27 | 0.35 | 0.21 | 0.40 | 0.43 | 0.27 | 0.25 | 0.48 | 0.37
W5 | 0.38 | 0.25 | 0.16 | 0.40 | 0.43 | 0.27 | 0.20 | 0.12 | 0.32
W6 | 0.22 | 0.39 | 0.37 | 0.25 | 0.14 | 0.27 | 0.45 | 0.06 | 0.26
W7 | 0.44 | 0.39 | 0.37 | 0.25 | 0.14 | 0.27 | 0.45 | 0.06 | 0.26
W8 | 0.11 | 0.25 | 0.37 | 0.15 | 0.10 | 0.36 | 0.30 | 0.12 | 0.42
W9 | 0.38 | 0.35 | 0.37 | 0.30 | 0.39 | 0.41 | 0.15 | 0.24 | 0.26
W10 | 0.05 | 0.30 | 0.16 | 0.05 | 0.39 | 0.36 | 0.45 | 0.30 | 0.21
Weights | 0.114 | 0.107 | 0.125 | 0.105 | 0.122 | 0.098 | 0.122 | 0.101 | 0.105

For the alternatives W1, W2, W3, W4, W5, W6, W7, W8, W9, and W10, the relative closeness (Pi) of each worker to the ideal solution is computed through equation (12), as shown in Table 13.


Table 12: Weighted normalized data with ideal solutions and separation measures.

Attributes | Worker history | Trustworthiness/honesty | Qualification | Reliability | Response ratio/quality of response | Skill level | Quality of task | Delivery time | Cheap/cost-effective/cost-efficient | S+ | S−
W1 | 0.037 | 0.037 | 0.033 | 0.042 | 0.023 | 0.013 | 0.055 | 0.042 | 0.044 | 0.050 | 0.078
W2 | 0.019 | 0.021 | 0.053 | 0.031 | 0.029 | 0.031 | 0.037 | 0.055 | 0.039 | 0.054 | 0.074
W3 | 0.056 | 0.037 | 0.053 | 0.026 | 0.041 | 0.031 | 0.037 | 0.024 | 0.028 | 0.045 | 0.078
W4 | 0.031 | 0.037 | 0.026 | 0.042 | 0.053 | 0.027 | 0.030 | 0.049 | 0.039 | 0.080 | 0.080
W5 | 0.044 | 0.026 | 0.020 | 0.042 | 0.053 | 0.027 | 0.024 | 0.012 | 0.033 | 0.067 | 0.070
W6 | 0.025 | 0.042 | 0.046 | 0.026 | 0.018 | 0.027 | 0.055 | 0.006 | 0.028 | 0.073 | 0.059
W7 | 0.050 | 0.042 | 0.046 | 0.026 | 0.018 | 0.027 | 0.055 | 0.006 | 0.028 | 0.066 | 0.071
W8 | 0.012 | 0.026 | 0.046 | 0.016 | 0.012 | 0.036 | 0.037 | 0.012 | 0.044 | 0.082 | 0.047
W9 | 0.044 | 0.037 | 0.046 | 0.031 | 0.047 | 0.040 | 0.018 | 0.024 | 0.028 | 0.054 | 0.073
W10 | 0.006 | 0.032 | 0.020 | 0.005 | 0.047 | 0.036 | 0.055 | 0.030 | 0.022 | 0.078 | 0.061
Positive ideal | 0.056 | 0.042 | 0.053 | 0.042 | 0.053 | 0.040 | 0.055 | 0.055 | 0.044 | — | —
Negative ideal | 0.006 | 0.021 | 0.020 | 0.005 | 0.012 | 0.013 | 0.018 | 0.006 | 0.022 | — | —

The positive ideal solution and negative ideal solution are used to find the ideal and nonideal separation measures, calculated with equations (10) and (11). The ideal separation measures (S+) for W1, W2, W3, W4, W5, W6, W7, W8, W9, and W10 are 0.050, 0.054, 0.045, 0.080, 0.067, 0.073, 0.066, 0.082, 0.054, and 0.078, respectively. Similarly, the nonideal separation measures (S−) calculated with equation (11) for W1, W2, W3, W4, W5, W6, W7, W8, W9, and W10 are 0.078, 0.074, 0.078, 0.080, 0.070, 0.059, 0.071, 0.047, 0.073, and 0.061, respectively.
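To illustrate equation (12) with the reported separations, the relative closeness of the first worker is

$$P_1 = \frac{S_1^{-}}{S_1^{+} + S_1^{-}} = \frac{0.078}{0.050 + 0.078} \approx 0.609,$$

which agrees with the Pi value for W1 in Table 13.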

Ranking is based on the value of Pi, and a high value of Pi indicates a top alternative. After the relative closeness has been calculated, the workers are ranked according to their Pi values. In this research, alternative W3 had the highest Pi value among the alternatives and thus received the first rank, while W1 had the second highest value and received rank 2, and so on. As W3 had the highest Pi value and was ranked 1, it was the most reliable among all the workers and the most appropriate for the selected task. The details are shown in Table 13.


Table 13: Separation measures, relative closeness, and ranking of the workers.

Workers | S+ | S− | S+ + S− | Pi | Rank
W1 | 0.050 | 0.078 | 0.128 | 0.6087 | 2
W2 | 0.054 | 0.074 | 0.129 | 0.5776 | 3
W3 | 0.045 | 0.078 | 0.123 | 0.6353 | 1
W4 | 0.080 | 0.080 | 0.159 | 0.5000 | 6
W5 | 0.067 | 0.070 | 0.137 | 0.5085 | 7
W6 | 0.073 | 0.059 | 0.132 | 0.4481 | 8
W7 | 0.066 | 0.071 | 0.137 | 0.5180 | 5
W8 | 0.082 | 0.047 | 0.129 | 0.3650 | 10
W9 | 0.054 | 0.073 | 0.127 | 0.5758 | 4
W10 | 0.078 | 0.061 | 0.140 | 0.4389 | 9

From Table 13, ranking of the workers is clearly presented, and tasks will be assigned to the most suitable workers according to their ranking. Graphical representation of workers’ ranking is shown in Figure 4.

As the figure shows, workers W1, W2, W3, W4, W5, W6, W7, W8, W9, and W10 are ranked as 2, 3, 1, 6, 7, 8, 5, 10, 4, and 9, respectively. These rankings are directly dependent upon the Pi values of these workers, which have been calculated with the help of the TOPSIS method. Higher Pi value indicates higher rank, while lower Pi value indicates lower rank, as shown in Table 13.
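For completeness, the sketch below shows how the two hypothetical helpers from the earlier sketches (critic_weights and topsis_rank) could be chained on the Table 6 scores to mimic the overall MBTA pipeline. Because the paper does not fully specify the scoring direction of every criterion, the exact Pi values and ranking may deviate from Table 13; this is a usage illustration, not the authors' computation.

```python
# Hypothetical end-to-end use of the CRITIC and TOPSIS sketches on the Table 6 scores.
import numpy as np

scores = np.array([
    [6, 7, 5, 8, 4, 3, 9, 7, 8],   # A1 / W1
    [3, 4, 8, 6, 5, 7, 6, 9, 7],   # A2 / W2
    [9, 7, 8, 5, 7, 7, 6, 4, 5],   # A3 / W3
    [5, 7, 4, 8, 9, 6, 5, 8, 7],   # A4 / W4
    [7, 5, 3, 8, 9, 6, 4, 2, 6],   # A5 / W5
    [4, 8, 7, 5, 3, 6, 9, 1, 5],   # A6 / W6
    [8, 5, 7, 3, 2, 8, 6, 2, 8],   # A7 / W7
    [2, 7, 7, 6, 8, 9, 3, 4, 5],   # A8 / W8
    [7, 6, 3, 1, 8, 8, 9, 5, 4],   # A9 / W9
    [1, 7, 5, 9, 6, 8, 2, 4, 3],   # A10 / W10
])

w = critic_weights(scores)        # CRITIC: objective criteria weights
p = topsis_rank(scores, w)        # TOPSIS: relative closeness of each worker
ranking = np.argsort(-p) + 1      # worker numbers ordered from best to worst
print(w.round(3), p.round(3), ranking)
```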

4. Conclusion

Assigning a task to the most appropriate worker is very important in crowdsourcing, because assigning a task to an inappropriate worker affects the crowdsourcing activity in several ways, such as wasted time and money and reduced client trust. The proposed research presents a mechanism for assigning a task to a worker that is based on multiple criteria. Worker features, namely, worker history, trustworthiness/honesty, worker qualification, reliability, response ratio/quality of response, skill level, quality of task, delivery time, and cheap/cost-effective/cost-efficient, were selected from the identified features. Two MCDM methods, CRITIC and TOPSIS, have been used: weights were assigned to these features by the CRITIC method, and the workers were then evaluated and ranked by the TOPSIS method in order to assign the task to the most appropriate worker. As existing task assignment methods are mostly based on a single criterion, the proposed work is novel in assigning workers based on multiple criteria as well as in using MCDM methods for this purpose in crowdsourcing.

Data Availability

No data were used to support this study.

Conflicts of Interest

The authors declare that there are no conflicts of interest.

References

1. M. Hosseini, A. Shahri, K. Phalp, J. Taylor, and R. Ali, "Crowdsourcing: a taxonomy and systematic mapping study," Computer Science Review, vol. 17, pp. 43–69, 2015.
2. A. Sarı, A. Tosun, and G. I. Alptekin, "A systematic literature review on crowdsourcing in software engineering," Journal of Systems and Software, vol. 153, pp. 200–219, 2019.
3. R. M. Borromeo, T. Laurent, and M. Toyama, "The influence of crowd type and task complexity on crowdsourced work quality," in Proceedings of the 20th International Database Engineering & Applications Symposium, Montreal, Canada, July 2016.
4. K. Mao, L. Capra, M. Harman, and Y. Jia, "A survey of the use of crowdsourcing in software engineering," Journal of Systems and Software, vol. 126, pp. 57–84, 2017.
5. Y. Baba, H. Kashima, K. Kinoshita, G. Yamaguchi, and Y. Akiyoshi, "Leveraging non-expert crowdsourcing workers for improper task detection in crowdsourcing marketplaces," Expert Systems with Applications, vol. 41, no. 6, pp. 2678–2687, 2014.
6. H. J. Khasraghi and A. Aghaie, "Crowdsourcing contests: understanding the effect of competitors'," Behaviour & Information Technology, vol. 33, no. 12, pp. 1383–1395, 2014.
7. M. C. Yuen, I. King, and K. S. Leung, "Task recommendation in crowdsourcing systems," in Proceedings of the First International Workshop on Crowdsourcing and Data Mining, Beijing, China, August 2012.
8. G. Li, J. Wang, Y. Zheng, and M. Franklin, "Crowdsourced data management: a survey," in Proceedings of the IEEE 33rd International Conference on Data Engineering (ICDE), San Diego, CA, USA, April 2017.
9. S. B. Roy, I. Lykourentzou, S. Thirumuruganathan, S. Amer-Yahia, and G. Das, "Task assignment optimization in knowledge-intensive crowdsourcing," The VLDB Journal, vol. 24, no. 4, pp. 467–491, 2015.
10. Z. Xiao, X. Dai, H. Jiang et al., "Vehicular task offloading via heat-aware MEC cooperation using game-theoretic method," IEEE Internet of Things Journal, vol. 7, pp. 2038–2052, 2019.
11. H. Zhang and M. Sugiyama, "Task selection for bandit-based task assignment in heterogeneous crowdsourcing," in Proceedings of the 2015 Conference on Technologies and Applications of Artificial Intelligence (TAAI), pp. 164–171, Tainan, Taiwan, November 2015.
12. C. Qiu, A. C. Squicciarini, B. Carminati, J. Caverlee, and D. R. Khare, "CrowdSelect: increasing accuracy of crowdsourcing tasks through behavior prediction and user selection," in Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, Indianapolis, IN, USA, October 2016.
13. T. Awwad, N. Bennani, K. Ziegler, V. Sonigo, L. Brunie, and H. Kosch, "Efficient worker selection through history-based learning in crowdsourcing," in Proceedings of the IEEE 41st Annual Computer Software and Applications Conference (COMPSAC), pp. 923–928, Turin, Italy, July 2017.
14. M. Z. Tunio, H. Luo, W. Cong et al., "Impact of personality on task selection in crowdsourcing software development: a sorting approach," IEEE Access, vol. 5, pp. 18287–18294, 2017.
15. Z. Wang, H. Sun, Y. Fu, and L. Ye, "Recommending crowdsourced software developers in consideration of skill improvement," in Proceedings of the 32nd IEEE/ACM International Conference on Automated Software Engineering, Urbana, IL, USA, November 2017.
16. Y. Xiaoyan, C. Yanjiao, and L. Baochun, "Task assignment with guaranteed quality for crowdsourcing platforms," in Proceedings of the IEEE/ACM 25th International Symposium on Quality of Service (IWQoS), pp. 1–10, Vilanova i la Geltrú, Spain, June 2017.
17. H. Sun, B. Dong, B. Zhang, W. H. Wang, and M. Kantarcioglu, "Sensitive task assignments in crowdsourcing markets with colluding workers," in Proceedings of the IEEE 34th International Conference on Data Engineering (ICDE), pp. 377–388, Paris, France, April 2018.
18. L. Cui, X. Zhao, L. Liu, H. Yu, and Y. Miao, "Learning complex crowdsourcing task allocation strategies from humans," in Proceedings of the 2nd International Conference on Crowd Science and Engineering, Beijing, China, July 2017.
19. M. Safran and D. Che, "Real-time recommendation algorithms for crowdsourcing systems," Applied Computing and Informatics, vol. 13, no. 1, pp. 47–56, 2017.
20. A. Burnap, R. Gerth, R. Gonzalez, and P. Y. Papalambros, "Identifying experts in the crowd for evaluation of engineering designs," Journal of Engineering Design, vol. 28, no. 5, pp. 317–337, 2017.
21. A. Moayedikia, K. L. Ong, Y. L. Boo, and W. G. S. Yeoh, "Task assignment in microtask crowdsourcing platforms using learning automata," Engineering Applications of Artificial Intelligence, vol. 74, pp. 212–225, 2018.
22. Z. Xiao, F. Li, H. Jiang et al., "A joint information and energy cooperation framework for CR-enabled macro–femto heterogeneous networks," IEEE Internet of Things Journal, vol. 7, pp. 2828–2839, 2019.
23. J. Jiang, B. An, Y. Jiang, P. Shi, Z. Bu, and J. Cao, "Batch allocation for tasks with overlapping skill requirements in crowdsourcing," IEEE Transactions on Parallel and Distributed Systems, vol. 30, no. 8, pp. 1722–1737, 2019.
24. Y. Du, Y. E. Sun, H. Huang, L. Huang, H. Xu, and X. Wu, "Quality-aware online task assignment mechanisms using latent topic model," Theoretical Computer Science, vol. 803, pp. 130–143, 2019.
25. Q. Pan, H. Dong, Y. Wang, Z. Cai, and L. Zhang, "Recommendation of crowdsourcing tasks based on Word2vec semantic tags," Wireless Communications and Mobile Computing, vol. 2019, Article ID 2121850, 10 pages, 2019.
26. D. Yu, Y. Wang, and Z. Zhou, "Software crowdsourcing task allocation algorithm based on dynamic utility," IEEE Access, vol. 7, pp. 33094–33106, 2019.
27. D. Yu, Z. Zhou, and Y. Wang, "Crowdsourcing software task assignment method for collaborative development," IEEE Access, vol. 7, pp. 35743–35754, 2019.
28. H. Jiang, Z. Xiao, Z. Li, J. Xu, F. Zeng, and D. Wang, "An energy-efficient framework for internet of things underlaying heterogeneous small cell networks," IEEE Transactions on Mobile Computing, vol. 99, p. 1, 2020.
29. Z. Qin, D. Wu, Z. Xiao, B. Fu, and Z. Qin, "Modeling and analysis of data aggregation from convergecast in mobile sensor networks for industrial IoT," IEEE Transactions on Industrial Informatics, vol. 14, no. 10, pp. 4457–4467, 2018.
30. Z. Xiao, X. Shen, F. Zeng et al., "Spectrum resource sharing in heterogeneous vehicular networks: a noncooperative game-theoretic approach with correlated equilibrium," IEEE Transactions on Vehicular Technology, vol. 67, no. 10, pp. 9449–9458, 2018.
31. R. Khazankin, H. Psaier, D. Schall, and S. Dustdar, "QoS-based task scheduling in crowdsourcing environments," in Proceedings of Service-Oriented Computing, pp. 297–311, Berlin, Germany, December 2011.
32. D. Schall, B. Satzger, and H. Psaier, "Crowdsourcing tasks to social networks in BPEL4People," World Wide Web, vol. 17, no. 1, pp. 1–32, 2014.
33. A. Tarasov, S. J. Delany, and B. M. Namee, "Dynamic estimation of worker reliability in crowdsourcing for regression tasks: making it work," Expert Systems with Applications, vol. 41, no. 14, pp. 6190–6210, 2014.
34. L. Machado, R. Prikladnicki, F. Meneguzzi, C. R. B. D. Souza, and E. Carmel, "Task allocation for crowdsourcing using AI planning," in Proceedings of the 3rd International Workshop on CrowdSourcing in Software Engineering, Austin, TX, USA, May 2016.
35. D. Geiger and M. Schader, "Personalized task recommendation in crowdsourcing information systems–current state of the art," Decision Support Systems, vol. 65, pp. 3–16, 2014.
36. R. L. Saremi and Y. Yang, "Dynamic simulation of software workers and task completion," in Proceedings of the Second International Workshop on CrowdSourcing in Software Engineering, Florence, Italy, May 2015.
37. W. Maalej and M. Ellmann, "On the similarity of task contexts," in Proceedings of the Second International Workshop on Context for Software Development, Florence, Italy, May 2015.
38. A. Ellero, P. Ferretti, and G. Furlanetto, "Realtime crowdsourcing with payment of idle workers in the retainer model," Procedia Economics and Finance, vol. 32, pp. 20–26, 2015.
39. A. Carvalho, S. Dimitrov, and K. Larson, "How many crowdsourced workers should a requester hire?" Annals of Mathematics and Artificial Intelligence, vol. 78, no. 1, pp. 45–72, 2016.
40. A. S. Fonteles, S. Bouveret, and J. Gensel, "Trajectory recommendation for task accomplishment in crowdsourcing–a model to favour different actors," Journal of Location Based Services, vol. 10, no. 2, pp. 125–141, 2016.
41. L. Ning, Y. Ali, H. Ke, S. Nazir, and Z. Huanli, "A hybrid MCDM approach of selecting lightweight cryptographic cipher based on ISO and NIST lightweight cryptography security requirements for internet of health things," IEEE Access, vol. 8, Article ID 220187, 2020.

Copyright © 2021 Zhao Huiqi et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
