Special Issue: Complexity and Robustness Trade-Off for Traditional and Deep Models
Research Article | Open Access
Yongjun Huang, Shah Nazir, Jiyu Wu, Fida Hussain Khoso, Farhad Ali, Habib Ullah Khan, "An Efficient Decision Support System for the Selection of Appropriate Crowd in Crowdsourcing", Complexity, vol. 2021, Article ID 5518878, 11 pages, 2021. https://doi.org/10.1155/2021/5518878
An Efficient Decision Support System for the Selection of Appropriate Crowd in Crowdsourcing
Crowdsourcing is a complex task-solving model that utilizes humans to solve organization-specific problems. Before a crowdsourced task is assigned to an online crowd, crowd selection is carried out to identify the crowd best suited to accomplish the task. The efficiency and effectiveness of crowdsourcing may fail if an irrelevant crowd is selected to perform a task, while early decisions regarding the selection of a crowd can ultimately lead to the successful completion of tasks. To select the most appropriate crowd in crowdsourcing, this paper presents a decision support system (DSS) for crowd selection. The system has been implemented in the Superdecision tool by plotting a hierarchy of goals, criteria, and alternatives and performing various pairwise computations. The results of the study reveal that the proposed system is effective and efficient for the selection of a crowd in crowdsourcing.
Crowds are online people who have the abilities to accomplish different types of tasks. These crowds may be newcomers who are accomplishing tasks for the first time, or they may be experienced members who have completed various tasks previously. Crowdsourcing is a practice that acquires the services of a huge group of people to obtain information or complete a project. It is an Internet-enabled collaborative activity that solves organizational problems by collecting the knowledge of online communities. The contributing editor Jeff Howe first used the word “crowdsourcing” in June 2006 in the article “The Rise of Crowdsourcing,” published in Wired magazine. “Crowdsourcing is a type of participative online activity in which an individual, an institution, a non-profit organization, or a company proposes to a group of individuals of varying knowledge, heterogeneity, and number, via a flexible open call, the voluntary undertaking of a task. The undertaking of the task, of variable complexity and modularity, and in which the crowd should participate bringing their work, money, knowledge and/or experience, always entails mutual benefit. The user will receive the satisfaction of a given type of need, be it economic, social recognition, self-esteem, or the development of individual skills, while the crowdsourcer will obtain and utilize to their advantage what the user has brought to the venture, whose form will depend on the type of activity undertaken”.
Crowdsourcing is widely applied in software testing, usability testing, machine learning processes, and decision making. The productivity of large organizations has been enhanced by crowdsourcing. Crowds comprise participants of diverse backgrounds who possess skills relevant to tasks, experience in the field, and expertise in carrying out crowdsourced tasks or tackling complex problems. Organizations commonly use crowdsourcing to address challenges simultaneously with the large-scale involvement of crowds. Crowdsourcing is an effective way to mitigate organizational dilemmas.
Crowdsourcing helps business organizations recruit global, cheap, and skilled workers from different platforms [12, 13]. The new era of Web 3.0 is driven by innovations in ICT and social networking, and as a result the organizational decision-making process has also changed. Modern corporations use the Internet to recruit a massive crowd. The Internet is a medium of contact between crowds and businesses, and they work together using gadgets such as iPads, mobile phones, laptops, and wearable watches [15–17]. Crowds are recruited from social or global communities to complete different tasks. By spending only a small amount of management cost and time, an organization can achieve appropriate solutions through the participation of multiple crowd workers [19, 20]. As crowdsourcing is an online activity, it may entail certain risks, such as the announcement of tasks on websites and the selection of a suitable and qualified team. The increased interest in crowdsourcing makes the selection of crowd workers a challenge. Crowd workers may be untrustworthy, and their work may be followed by different errors. The choice of the right and proper workers would boost the efficacy of crowdsourcing [22, 23]. Different business organizations employ suitable workers to complete tasks. The contributions of the proposed study are as follows:
(i) A DSS is presented for the appropriate selection of a crowd
(ii) The proposed system is implemented in the Superdecision tool
(iii) A hierarchy of goals, criteria, and alternatives is plotted, with various pairwise comparisons, to perform the proposed research
(iv) Results of the study reveal that the proposed approach is effective and efficient
The organization of the paper is as follows. Section 2 presents the related work on the various aspects of the crowd and crowdsource concepts. Section 3 shows the details of the methodology with a description of the decision support system and the selection of features from literature. Results and discussion are given in Section 4. The paper is concluded in Section 5.
2. Literature Review
Crowdsourcing is an online process that can be linked to various challenges, such as the crowd selection problem. Various strategies, approaches, and models have been presented in the past to address the crowd selection problem. Selection of a crowd is based on the characteristics its members possess, which include personal characteristics such as gender, age, qualification, education, language, and nationality; behavioral characteristics such as sociolect, left-/right-handedness, and personality traits; and cognitive or perceptual characteristics such as memory capacity, vision, and hearing, as well as skills, capabilities, past service, expertise, experience, and majors. Based on these characteristics, crowds are selected [25, 26]. A crowd targeting framework was implemented that automatically discovers and targets a specific crowd to improve data quality. The targeted crowd is selected by worker characteristics such as nationality, education level, gender, and major, or even personality test scores and other screening measures. Information gain, a new characteristic measure for worker selection, was also introduced. The framework selects workers in three main stages. The first is the probing stage, in which tasks are distributed to the whole crowd population and crowds are allowed to complete them; worker characteristics such as gender and location are also gathered from their profiles for future use in this stage. The second is the discovery stage, in which the best workers are discovered and unbiased worker samples of the entire crowd population are identified; workers are evaluated using criteria such as good, bad, and available. The third is the targeting stage, in which the remaining and upcoming tasks are assigned to the discovered groups. The targeting stage improves data quality and increases budget performance.
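The information-gain measure used for worker selection can be sketched as follows. This is a minimal illustration of the standard entropy-based definition with hypothetical worker data, not the cited framework's implementation:

```python
from collections import Counter
from math import log2

def entropy(labels):
    """Shannon entropy of a label distribution."""
    total = len(labels)
    return -sum((c / total) * log2(c / total) for c in Counter(labels).values())

def information_gain(characteristic, quality):
    """Information gain of a worker characteristic with respect to answer
    quality: IG = H(quality) - H(quality | characteristic).

    characteristic: list of attribute values, one per worker
    quality: parallel list of quality labels (e.g. 'good'/'bad')
    """
    total = len(quality)
    base = entropy(quality)
    conditional = 0.0
    for value in set(characteristic):
        subset = [q for c, q in zip(characteristic, quality) if c == value]
        conditional += len(subset) / total * entropy(subset)
    return base - conditional

# Hypothetical data: here location predicts quality perfectly, gender barely.
loc = ["US", "US", "IN", "IN", "US", "IN"]
gen = ["M", "F", "M", "F", "F", "M"]
qual = ["good", "good", "bad", "bad", "good", "bad"]
```

With these toy values, `information_gain(loc, qual)` is 1.0 while `information_gain(gen, qual)` is close to zero, so location would be the more useful characteristic to target.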
Workers are selected by various organizations based on their capabilities for generating ideas or solving technology-related problems. Before allowing a worker to participate in difficult tasks, an organization assesses the worker's ability, and workers are selected on that basis. For verifying a worker's ability, the Borda ranking algorithm can be utilized. A worker's skills may also be an indicator for selection, as skills reflect the ability to perform a task. Crowds are judged on the basis of their skills; skills are, therefore, one of the main considerations in selecting the right participants. Workers possess various skills such as writing, IT, problem solving, process management, time management, communication, creativity, e-skills, business thinking, and enterprise. Skills assessments or tests are used to evaluate various worker skills and are helpful in task matching. Organizations offer certifications attesting that workers possess sufficient skills [33–35]. Such certification is used in the selection of workers, and the crowd that possesses the essential skills completes the task. Trust is another major factor in considering workers for a task [23, 33].
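The Borda ranking mentioned above aggregates several rankings of the same workers into one overall order. A minimal sketch of the classic Borda count, with hypothetical worker IDs and rankings:

```python
from collections import defaultdict

def borda_rank(rankings):
    """Aggregate several per-task rankings of workers via the Borda count.

    rankings: list of rankings, each a list of worker ids ordered best-first.
    A worker in position i of an n-worker ranking earns n - 1 - i points.
    Returns worker ids sorted by total points, best first.
    """
    scores = defaultdict(int)
    for ranking in rankings:
        n = len(ranking)
        for i, worker in enumerate(ranking):
            scores[worker] += n - 1 - i
    return sorted(scores, key=scores.get, reverse=True)

# Hypothetical rankings of three workers from three evaluation tasks.
rankings = [["w1", "w2", "w3"], ["w2", "w1", "w3"], ["w1", "w3", "w2"]]
# borda_rank(rankings) ranks w1 first (5 points), then w2 (3), then w3 (1)
```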
Organizations select crowd workers for the accomplishment of various tasks based on their trustworthiness. For evaluating trust values, the "Trust-Based Access Control (TBAC)" model is utilized, and for deciding whether a worker is to be trusted or not, a discrete model was implemented. CrowdTrust was proposed as a context-aware model for evaluating trust related to the task type ("TaTrust") and trust associated with the task reward amount ("RaTrust"). For the selection of trustworthy workers under these two context-aware trusts, "MOWS GA," an evolutionary algorithm based on NSGA-II, was introduced; dishonest workers can be identified using the CrowdTrust model. A recruitment process was introduced in spatial mobile crowdsourcing that automatically selects trustworthy workers by utilizing the services of IoT: a huge group of workers is reduced to potentially trustworthy workers using the Louvain community detection algorithm, and the optimal set of crowd workers is selected with an integer linear program. Using machine learning approaches, the prediction of trustworthiness was improved by exploring endorsement (inter-worker relationships).
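Selecting workers under the two context-aware trust values is a two-objective problem. As a simple illustration of the underlying idea (the cited work uses the NSGA-II evolutionary algorithm; the worker IDs and trust scores below are hypothetical), a non-dominated (Pareto) filter keeps every worker who is not beaten on both trust dimensions by someone else:

```python
def pareto_front(workers):
    """Return workers not Pareto-dominated under two trust objectives.

    workers: dict mapping worker id -> (ta_trust, ra_trust), both
    higher-is-better scores in [0, 1].
    """
    front = []
    for w, (ta, ra) in workers.items():
        dominated = any(
            (ta2 >= ta and ra2 >= ra) and (ta2 > ta or ra2 > ra)
            for w2, (ta2, ra2) in workers.items() if w2 != w
        )
        if not dominated:
            front.append(w)
    return front

# Hypothetical (TaTrust, RaTrust) scores: w4 is dominated by w2.
workers = {"w1": (0.9, 0.4), "w2": (0.6, 0.7), "w3": (0.3, 0.9), "w4": (0.5, 0.5)}
```

With these values, `pareto_front(workers)` keeps w1, w2, and w3 and drops w4, since w2 is at least as good on both trust values and strictly better on each.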
Workers may also be selected on the basis of their experience with tasks. Experience is considered a crucial factor for crowd selection. The crowd consists of huge masses of people, and the best workers are selected according to their level of experience [25, 42]. For selecting an experienced participant for a task, an experience strategy is utilized. Selection of a crowd also relies greatly on its expertise: a crowd is selected based on its expertise level, and only workers having the requisite expertise are allowed to carry out the task. To ensure the workers' expertise level, filtering [28, 33] is performed, and workers are judged according to their varied expertise using expertise-estimation approaches. A task can be performed considerately by a worker having expertise. Qualified workers are judged by means of qualification tests, and these assessments are superior filters for quality enhancement. Work quality can be controlled by conducting qualification tests, in which a worker has to answer various questions provided by the organization. Workers must pass the qualification test before engaging with projects or tasks, and workers are assessed based on their qualification level.
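The qualification-test filter described above can be sketched as a simple scoring step; the answer key, worker pool, and pass threshold below are hypothetical:

```python
def qualify(workers, answer_key, threshold=0.8):
    """Keep only workers whose qualification-test score meets the threshold.

    workers: dict mapping worker id -> list of answers, in question order
    answer_key: the correct answers, in question order
    Returns a dict of qualified worker id -> fraction of correct answers.
    """
    qualified = {}
    for worker, answers in workers.items():
        score = sum(a == k for a, k in zip(answers, answer_key)) / len(answer_key)
        if score >= threshold:
            qualified[worker] = score
    return qualified

# Hypothetical five-question test.
key = ["A", "C", "B", "D", "A"]
pool = {
    "w1": ["A", "C", "B", "D", "B"],  # 4/5 = 0.8 -> qualifies
    "w2": ["A", "B", "B", "C", "A"],  # 3/5 = 0.6 -> filtered out
}
```

Here `qualify(pool, key)` admits only w1, so only w1 would be allowed to take on the organization's tasks.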
Profile-based selection is also carried out for worker selection, as the profile represents the personal features of the worker that can be directly observed. The profile contains worker details such as sex, age, education, and a history of accomplished tasks [31, 47]. Exploiting workers' profiles improves assessment, assignment, and task quality. Workers are responsible for maintaining and modifying their profiles in order to get work from organizations. For the selection of workers based on their profiles, a personality-based tool may be utilized. A profile-based approach was implemented for the effective selection of workers in crowdsourcing, reducing time overhead and budget by replacing an offline learning process with an online probing stage; this was done to learn profile features, which are then used by an online targeting algorithm to select effective workers for different tasks. Profile-based selection of a crowd can thus enhance the decision-making process of crowd selection.
DSS is a discipline of the information systems area that supports and enhances the decision-making process of an organization. It is difficult for decision makers to state preferences when high volumes of data regarding crowds are available. A DSS is implemented to broaden human information-processing capabilities and to enhance decision making when dealing with large amounts of data. Crowdsourcing can itself play a role in the organizational decision-making process: a complex problem can easily be solved by a crowd, as its members provide ideas, solicit opinions, give predictions, accumulate knowledge, etc. There is, however, a lack of research suggesting a DSS for the selection of suitable crowds. Existing research studies were analyzed to identify the multifeatures of a crowd; Table 1 presents these features, which are used by our DSS for the appropriate selection of a crowd. A crowdsourcing activity entails three entities, shown in Figure 1: the crowdsourcer/requestor, i.e., the organizations, individuals, or institutions who initiate the crowdsourcing process and seek out people's abilities to complete tasks; the crowd, consisting of a large group of people having enactive, cognitive, and perceptual abilities for solving tasks; and the platform or market, an online website or place where workers acquire and accomplish tasks.
3. Methodology
The reason for choosing a DSS for the proposed study was to support an early decision on the crowd drawn from the crowdsource. Various features of the crowd were identified in the literature; keeping in view the suitability of the crowd, the following key features were identified as the most suitable. Table 1 shows the identified features of the crowd based on the literature.
3.1. Experimental Setup
The process of implementation and experimentation was carried out in the Superdecision software. The features were given as input to the software and plotted as a hierarchy of goals, criteria, and alternatives. Figure 2 shows the process of building the hierarchy of features along with the alternative crowds, with the goal of selecting a crowd from the available options.
After plotting the features and crowds, each feature was compared with respect to each crowd. For brevity, only one comparison is shown here; the same process was carried out for all features and all crowds. Figure 3 graphically represents the comparison process.
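Each such pairwise-comparison matrix yields a priority vector for the compared items. As an illustration (the feature names and Saaty-scale judgment values below are hypothetical, not the paper's actual comparisons), the row geometric-mean method, a common approximation to the principal-eigenvector priorities computed by such tools, can be sketched as:

```python
import numpy as np

def priorities(pairwise):
    """Derive a priority vector from a reciprocal pairwise-comparison
    matrix using the row geometric-mean method, a standard approximation
    to the principal-eigenvector priorities."""
    m = np.asarray(pairwise, dtype=float)
    geo = m.prod(axis=1) ** (1.0 / m.shape[0])
    return geo / geo.sum()

# Hypothetical Saaty-scale judgments for three features.
pairwise = [
    [1, 3, 5],          # feature A vs. A, B, C
    [1 / 3, 1, 2],      # feature B
    [1 / 5, 1 / 2, 1],  # feature C
]
w = priorities(pairwise)  # normalized weights, largest for feature A
```

For this matrix the weights come out roughly (0.65, 0.23, 0.12), summing to 1, with feature A dominating, which mirrors how the tool turns one comparison table into the column entries of the supermatrix.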
Values were then assigned to each feature and crowd with the support of the tool. Figure 4 shows the weights assigned to each feature.
After assigning the relevant weights to each feature and crowd, the comparison process was completed and the unweighted, weighted, and limit matrices were obtained for making the crowd selection decision.
4. Results and Discussion
Crowdsourcing is a complex task-solving model that utilizes the efforts of humans to solve organization-specific issues. Before a crowdsourced task is assigned to an online crowd, crowd selection is carried out to find a suitable crowd for the given task, and making this decision early can ultimately lead to the successful completion of tasks. To select the best and right crowd in crowdsourcing, this research presents a DSS for the appropriate selection of a crowd. The approach has been implemented in the Superdecision tool by plotting the hierarchy of goals, criteria, and alternatives. Various comparisons of the identified features with respect to the crowds, and of the crowds with respect to each feature, were carried out; relevant weights were assigned, and after completion of the comparison process, the results shown in the following tables and figures were obtained. Figure 5 graphically represents the priorities of the features and available crowds, normalized by cluster and by limiting.
As shown in Section 3, all pairwise comparisons were performed in the software; for ease of understanding, only one representation is given, in Figures 3 and 4. The same process was repeated in the software for the rest of the attributes and alternatives. Once all comparisons were completed, the normalized values of each criterion and alternative were brought into the unweighted and weighted supermatrices. Where the column sums of the unweighted matrix exceed 1, the columns are normalized again, converting it into the weighted supermatrix.
After the pairwise comparisons, all resulting comparisons for features and crowds were integrated into the unweighted matrix, which is the collection of all pairwise comparisons performed in the proposed research. Table 2 shows the unweighted matrix.
The unweighted matrix was then normalized to obtain the weighted matrix. Table 3 shows the weighted matrix.
The weighted matrix was then converted to the limit matrix, the final matrix for making the decision. The limit matrix was obtained by raising the weighted matrix to successive powers until its entries stabilize. Table 4 represents the limit matrix, from which the decision regarding the crowd can be made.
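The two matrix steps described above, column-normalizing the unweighted supermatrix and then raising the weighted supermatrix to successive powers until convergence, can be sketched as follows. The 3x3 matrix is a toy example, not the paper's Table 2 data:

```python
import numpy as np

def weighted_matrix(unweighted):
    """Normalize each column of the unweighted supermatrix to sum to 1,
    giving the column-stochastic weighted supermatrix."""
    m = np.asarray(unweighted, dtype=float)
    return m / m.sum(axis=0, keepdims=True)

def limit_matrix(weighted, tol=1e-9, max_iter=1000):
    """Raise the weighted supermatrix to successive powers (by repeated
    squaring) until the entries stabilize; the resulting columns all hold
    the limiting priority vector used for the final decision."""
    m = np.asarray(weighted, dtype=float)
    for _ in range(max_iter):
        nxt = m @ m
        if np.abs(nxt - m).max() < tol:
            return nxt
        m = nxt
    return m

# Toy 3x3 supermatrix (hypothetical values).
u = [[0.0, 0.5, 0.3],
     [0.6, 0.0, 0.7],
     [0.4, 0.5, 0.0]]

L = limit_matrix(weighted_matrix(u))
# Every column of L converges to (approximately) the same priority vector.
```

Reading any column of the converged limit matrix gives the final priorities from which the highest-scoring alternative is picked, which corresponds to reading the crowd scores off Table 4.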
Figure 6 graphically shows the ranking of the available crowds. Among the alternatives, crowd2 obtained the highest score and is therefore given the highest priority, followed by crowd1, and so on. From this figure, one can make a decision regarding the selection of the best crowd among the available alternatives.
5. Conclusion
Crowds are online people who have the capabilities to complete diverse types of tasks and projects. These crowds may be newcomers who are accomplishing tasks for the first time, or they may be experienced members who have already finished various tasks in preceding projects. Crowdsourcing is a composite task-solving approach utilizing humans to solve organization-specific problems. Before assigning a crowdsourced task to an online crowd, crowd selection is carried out to choose an optimal and appropriate crowd for achieving the task; an early, on-time decision on the selection of the crowd can advance the successful completion of tasks. To select the most appropriate crowd from the crowdsource, the present study devised a DSS for crowd selection. The proposed DSS was executed in the Superdecision tool: a hierarchy of the goal, criteria, and alternatives was defined, and a process of pairwise comparisons was carried out, with each pairwise-comparison table normalized in order to achieve optimal results for the selection of the appropriate crowd. The experimental results of the study show that the proposed DSS is efficient and effective for the appropriate selection of a crowd in crowdsourcing. In the future, the applicability of the proposed DSS will be tested with various parameters against the robustness of the system, and its effectiveness will be checked for practical usage in crowdsource projects.
Data Availability
No data were used to support the study.
Conflicts of Interest
The authors declare no conflicts of interest.
References
- N. Lazar, “The big picture: crowdsourcing your way to big data,” Chance, vol. 32, 2019.
- J. Howe, “The rise of crowdsourcing,” 2006, https://www.wired.com/2006/06/crowds/.
- E. Estellés-Arolas and F. González-Ladrón-de-Guevara, “Towards an integrated crowdsourcing definition,” Journal of Information Science, vol. 38, no. 2, pp. 189–200, 2012.
- D. C. Brabham, Crowdsourcing, Mit Press, Cambridge, MA, USA, 2013.
- M. Yan, H. Sun, and X. Liu, “iTest: testing software with mobile crowdsourcing,” in Proceedings of the 1st International Workshop on Crowd-Based Software Development Methods and Technologies, pp. 19–24, Hong Kong, China, November 2014.
- D. Liu, R. G. Bias, M. Lease, and R. Kuipers, “Crowdsourcing for usability testing,” Proceedings of the American Society for Information Science and Technology, vol. 49, no. 1, pp. 1–10, 2012.
- E. D. Simpson, S. Reece, M. Venanzi et al., “Language understanding in the wild: combining crowdsourcing and machine learning,” in Proceedings of the 24th International Conference on World Wide Web, pp. 992–1002, Florence, Italy, May 2015.
- A. Slivkins and J. W. Vaughan, “Online decision making in crowdsourcing markets,” ACM SIGecom Exchanges, vol. 12, no. 2, pp. 4–23, 2014.
- A. Afuah and C. L. Tucci, “Crowdsourcing as a solution to distant search,” Academy of Management Review, vol. 37, no. 3, pp. 355–375, 2012.
- B. Morschheuser, J. Hamari, and A. Maedche, “Cooperation or competition-when do people contribute more? A field experiment on gamification of crowdsourcing,” International Journal of Human-Computer Studies, vol. 127, pp. 7–24, 2019.
- I. Dissanayake, J. Zhang, and B. Gu, “Virtual team performance in crowdsourcing contest: a social network perspective,” in Proceedings of the 2015 48th Hawaii International Conference on System Sciences, vol. 5, pp. 4894–4897, Kauai, HI, USA, January 2015.
- P. Shi, M. Zhao, W. Wang et al., “Best of both worlds: mitigating imbalance of crowd worker strategic choices without a budget,” Knowledge-Based Systems, vol. 163, pp. 1020–1031, 2019.
- K.-J. Stol, B. Caglayan, and B. Fitzgerald, “Competition-based crowdsourcing software development: a multi-method study from a customer perspective,” IEEE Transactions on Software Engineering, vol. 45, no. 3, pp. 237–260, 2019.
- D. R. Soriano, F. J. Garrigos‐Simon, R. L. Alcamí, and T. B. Ribera, “Social networks and Web 3.0: their impact on the management and marketing of organizations,” Management Decision, vol. 50, 2012.
- A. Ghezzi, D. Gabelloni, A. Martini, and A. Natalicchio, “Crowdsourcing: a review and suggestions for future research,” International Journal of Management Reviews, vol. 20, no. 2, pp. 343–363, 2018.
- M. Alsayyari and S. Alyahya, “Supporting coordination in crowdsourced software testing services,” in Proceedings of the 2018 IEEE Symposium on Service-Oriented System Engineering (SOSE), pp. 69–75, Bamberg, Germany, March 2018.
- L. Zhai, H. Wang, and X. Li, “Optimal task partition with delay requirement in mobile crowdsourcing,” Wireless Communications and Mobile Computing, vol. 2019, Article ID 5216495, 12 pages, 2019.
- M. Alhamed and T. Storer, “Estimating software task effort in crowds,” in Proceedings of the 2019 IEEE International Conference on Software Maintenance and Evolution (ICSME), pp. 281–285, Cleveland, OH, USA, September 2019.
- A. Dwarakanath, “CrowdBuild: a methodology for enterprise software development using crowdsourcing,” in Proceedings of the Second International Workshop on CrowdSourcing in Software Engineering, Florence, Italy, May 2015.
- C. Qiu, A. C. Squicciarini, B. Carminati, J. Caverlee, and D. R. Khare, “CrowdSelect: increasing accuracy of crowdsourcing tasks through behavior prediction and user selection,” in Proceedings of the 25th ACM International on Conference on Information and Knowledge Management, Indianapolis, IN, USA, May 2016.
- E. Mourelatos and M. Tzagarakis, “An investigation of factors affecting the visits of online crowdsourcing and labor platforms,” NETNOMICS: Economic Research and Electronic Networking, vol. 19, no. 3, pp. 95–130, 2018.
- A. Dubey, G. Virdi, M. S. Kuriakose, and V. Arora, “Towards adopting alternative workforce for software engineering,” in Proceedings of the 2016 IEEE 11th International Conference on Global Software Engineering (ICGSE), pp. 16–23, August 2016, Orange County, CA, USA.
- Y. Zhao, G. Liu, K. Zheng, A. Liu, Z. Li, and X. Zhou, “A context-aware approach for trustworthy worker selection in social crowd,” World Wide Web, vol. 20, no. 6, pp. 1211–1235, 2017.
- Q. Cui, J. Wang, G. Yang, M. Xie, Q. Wang, and M. Li, “Who should be selected to perform a task in crowdsourced testing?” in Proceedings of the 2017 IEEE 41st Annual Computer Software and Applications Conference (COMPSAC), vol. 1, pp. 75–84, Turin, Italy, July 2017.
- N. Leicht, I. Blohm, and J. M. Leimeister, “Leveraging the power of the crowd for software testing,” IEEE Software, vol. 34, no. 2, pp. 62–69, 2017.
- D. Archambault, H. Purchase, and T. Hoßfeld, “Evaluation in the crowd: crowdsourcing and human-centered experiments,” in Proceedings of the Dagstuhl Seminar 15481, vol. 2017, Springer, Dagstuhl Castle, Germany, November 2015.
- H. Li, B. Zhao, and A. Fuxman, “The wisdom of minority: discovering and targeting the right group of workers for crowdsourcing,” in Proceedings of the 23rd International Conference on World Wide Web, pp. 165–176, Seoul, South Korea, April 2014.
- K. L. Jeffcoat, T. J. Eveleigh, and B. Tanju, “A conceptual framework for increasing innovation through improved selection of specialized professionals,” Engineering Management Journal, vol. 31, no. 1, pp. 22–34, 2019.
- F. Kamoun, D. Alhadidi, and Z. Maamar, “Weaving risk identification into crowdsourcing lifecycle,” Procedia Computer Science, vol. 56, pp. 41–48, 2015.
- B. Zhang, C. H. Liu, J. Lu et al., “Privacy-preserving QoI-aware participant coordination for mobile crowdsourcing,” Computer Networks, vol. 101, pp. 29–41, 2016.
- K. Abhinav, G. K. Bhatia, A. Dubey, S. Jain, and N. Bhardwaj, “TasRec: a framework for task recommendation in crowdsourcing,” in Proceedings of the 15th International Conference on Global Software Engineering, pp. 86–95, Seoul, South Korea, June 2020.
- S.-A. Barnes, A. Green, and M. de Hoyos, “Crowdsourcing and work: individual factors and circumstances influencing employability,” New Technology, Work and Employment, vol. 30, no. 1, pp. 16–31, 2015.
- A. Sarı, A. Tosun, and G. I. Alptekin, “A systematic literature review on crowdsourcing in software engineering,” Journal of Systems and Software, vol. 153, pp. 200–219, 2019.
- M. Modaresnezhad, L. Iyer, P. Palvia, and V. Taras, “Information technology (IT) enabled crowdsourcing: a conceptual framework,” Information Processing & Management, vol. 57, no. 2, Article ID 102135, 2020.
- Z. Peng, X. Gui, J. An, R. Gui, and Y. Ji, “TDSRC: a task-distributing system of crowdsourcing based on social relation cognition,” Mobile Information Systems, vol. 2019, Article ID 7413460, 12 pages, 2019.
- M. Christoforaki and P. G. Ipeirotis, “A system for scalable and reliable technical-skill testing in online labor markets,” Computer Networks, vol. 90, pp. 110–120, 2015.
- H. Amintoosi, S. S. Kanhere, and M. Allahbakhsh, “Trust-based privacy-aware participant selection in social participatory sensing,” Journal of Information Security and Applications, vol. 20, pp. 11–25, 2015.
- O. Folorunso and O. A. Mustapha, “A fuzzy expert system to Trust-Based Access Control in crowdsourcing environments,” Applied Computing and Informatics, vol. 11, no. 2, pp. 116–129, 2015.
- B. Ye, Y. Wang, and L. Liu, “Crowd trust: a context-aware trust model for worker selection in crowdsourcing environments,” in Proceedings of the 2015 IEEE International Conference on Web Services, pp. 121–128, IEEE, New York, NY, USA, July 2015.
- A. Khanfor, A. Hamrouni, H. Ghazzai, Y. Yang, and Y. Massoud, “A trustworthy recruitment process for spatial mobile crowdsourcing in large-scale social IoT,” in Proceedings of the 2020 IEEE Technology & Engineering Management Conference, Novi, MI, USA, June 2020.
- C. Wu, T. Luo, F. Wu, and G. Chen, “An endorsement-based reputation system for trustworthy crowdsourcing,” in Proceedings of the 2015 IEEE Conference on Computer Communications Workshops (INFOCOM WKSHPS), pp. 89-90, IEEE, Hong Kong, China, April 2015.
- H. J. Pongratz, “Of crowds and talents: discursive constructions of global online labour,” New Technology, Work and Employment, vol. 33, no. 1, pp. 58–73, 2018.
- N. Luz, N. Silva, and P. Novais, “A survey of task-oriented crowdsourcing,” Artificial Intelligence Review, vol. 44, no. 2, pp. 187–213, 2015.
- A. Moayedikia, W. Yeoh, K.-L. Ong, and Y. L. Boo, “Improving accuracy and lowering cost in crowdsourcing through an unsupervised expertise estimation approach,” Decision Support Systems, vol. 122, Article ID 113065, 2019.
- J. T. Bush and R. M. Balven, “Catering to the crowd: an HRM perspective on crowd worker engagement,” Human Resource Management Review, vol. 31, 2018.
- F. R. Assis Neto and C. A. S. Santos, “Understanding crowdsourcing projects: a systematic review of tendencies, workflow, and quality management,” Information Processing & Management, vol. 54, no. 4, pp. 490–506, 2018.
- X. Peng, “CrowdService: serving the individuals through mobile crowdsourcing and service composition,” in Proceedings of the 2016 31st IEEE/ACM International Conference on Automated Software Engineering (ASE), pp. 214–219, Singapore, September 2016.
- J. Mtsweni, E. K. Ngassam, and L. Burge, “A profile-aware microtasking approach for improving task assignment in crowdsourcing services,” in Proceedings of the 2016 IST-africa Week Conference, pp. 1–10, IEEE, Durban, South Africa, May 2016.
- T. Awwad, N. Bennani, K. Ziegler, V. Sonigo, L. Brunie, and H. Kosch, “Efficient worker selection through history-based learning in crowdsourcing,” in Proceedings of the 2017 IEEE 41st Annual Computer Software and Applications Conference (COMPSAC), vol. 1, pp. 923–928, Turin, Italy, July 2017.
- A. Smirnov, A. Ponomarev, and N. Shilov, “Hybrid crowd-based decision support in business processes: the approach and reference model,” Procedia Technology, vol. 16, pp. 376–384, 2014.
- D. Arnott and G. Pervan, “A critical analysis of decision support systems research revisited: the rise of design science,” Enacting Research Methods in Information Systems, vol. 3, pp. 43–103, 2016.
- M. Rhyn and I. Blohm, “Combining collective and artificial intelligence: towards a design theory for decision support in crowdsourcing,” in Proceedings of the Twenty-Fifth European Conference on Information Systems (ECIS), Guimarães, Portugal, June 2017.
- C.-M. Chiu, T.-P. Liang, and E. Turban, “What can crowdsourcing do for decision support?” Decision Support Systems, vol. 65, pp. 40–49, 2014.
- F. Saab, I. H. Elhajj, A. Kayssi, and A. Chehab, “Modelling cognitive bias in crowdsourcing systems,” Cognitive Systems Research, vol. 58, pp. 1–18, 2019.
- N. Mazlan, S. S. Syed Ahmad, and M. Kamalrudin, “Volunteer selection based on crowdsourcing approach,” Journal of Ambient Intelligence and Humanized Computing, vol. 9, no. 3, pp. 743–753, 2018.
- I. Dissanayake, N. Mehta, P. Palvia, V. Taras, and K. Amoako-Gyampah, “Competition matters! Self-efficacy, effort, and performance in crowdsourcing teams,” Information & Management, vol. 56, no. 8, Article ID 103158, 2019.
- O. Tokarchuk, R. Cuel, and M. Zamarian, “Analyzing crowd labor and designing incentives for humans in the loop,” IEEE Internet Computing, vol. 16, no. 5, pp. 45–51, 2012.
- U. Gadiraju, G. Demartini, R. Kawase, and S. Dietze, “Crowd anatomy beyond the good and bad: behavioral traces for crowd worker modeling and pre-selection,” Computer Supported Cooperative Work (CSCW), vol. 28, no. 5, pp. 815–841, 2019.
- A. L. Zanatta, L. Machado, and I. Steinmacher, “Competence, collaboration, and time management: barriers and recommendations for crowdworkers,” in Proceedings of the 2018 IEEE/ACM 5th International Workshop on Crowd Sourcing in Software Engineering (CSI-SE), pp. 9–16, Gothenburg, Sweden, May 2018.
- G. Montelisciani, D. Gabelloni, G. Tazzini, and G. Fantoni, “Skills and wills: the keys to identify the right team in collaborative innovation platforms,” Technology Analysis & Strategic Management, vol. 26, no. 6, pp. 687–702, 2014.
- T. D. LaToza, W. Ben Towne, A. van der Hoek, and J. D. Herbsleb, “Crowd development,” in Proceedings of the 2013 6th International Workshop on Cooperative and Human Aspects of Software Engineering (CHASE), pp. 85–88, San Francisco, CA, USA, May 2013.
- L. Machado, R. Prikladnicki, F. Meneguzzi, C. R. B. d. Souza, and E. Carmel, “Task allocation for crowdsourcing using AI planning,” in Proceedings of the 2016 IEEE/ACM 3rd International Workshop on Crowd Sourcing in Software Engineering (CSI-SE), pp. 36–40, Austin, TX, USA, May 2016.
- X. Wang, H. J. Khasraghi, and H. Schneider, “Towards an understanding of participants’ sustained participation in crowdsourcing contests,” Information Systems Management, vol. 37, no. 3, pp. 213–226, 2019.
- J. Lee and D. Seo, “Crowdsourcing not all sourced by the crowd: an observation on the behavior of Wikipedia participants,” Technovation, vol. 55-56, pp. 14–21, 2016.
- E. Schenk, C. Guittard, and J. Pénin, “Open or proprietary? Choosing the right crowdsourcing platform for innovation,” Technological Forecasting and Social Change, vol. 144, pp. 303–310, 2019.
- S. Standing and C. Standing, “The ethical use of crowdsourcing,” Business Ethics: A European Review, vol. 27, no. 1, pp. 72–80, 2018.
- G. D. Saxton, O. Oh, and R. Kishore, “Rules of crowdsourcing: models, issues, and systems of control,” Information Systems Management, vol. 30, no. 1, pp. 2–20, 2013.
- M. Hosseini, K. Phalp, J. Taylor, and R. Ali, “The four pillars of crowdsourcing: a reference model,” in Proceedings of the 2014 IEEE Eighth International Conference on Research Challenges in Information Science (RCIS), pp. 1–12, IEEE, Marrakesh, Morocco, May 2014.
- T. Erickson, “Some thoughts on a framework for crowdsourcing,” in Proceedings of the Workshop on Crowdsourcing and Human Computation, pp. 1–4, Washington, DC, USA, July 2011.
- S. Faradani, B. Hartmann, and P. G. Ipeirotis, “What’s the right price? pricing tasks for finishing on time,” in Proceedings of the Workshops at the Twenty-Fifth AAAI Conference on Artificial Intelligence, Citeseer, San Francisco, CA, USA, August 2011.
Copyright © 2021 Yongjun Huang et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.