Abstract
The success of crowdsourcing projects relies critically on motivating a crowd to contribute. One particularly effective method for incentivising participants to perform tasks is to run contests in which participants compete against each other for rewards. However, such contests can be implemented in numerous ways that vary in how performance is evaluated, how participants are rewarded, and how large the prizes are. Moreover, finding the best way to implement contests in a particular project remains an open challenge, as the effectiveness of each contest implementation (henceforth, incentive) is unknown in advance. Hence, in a crowdsourcing project, a practical approach to maximising the requester's overall utility (measured, for example, by the total number of completed tasks or the quality of the task submissions) is to choose a set of incentives suggested by previous studies in the literature or by the requester's experience. An effective mechanism can then be applied to automatically select appropriate incentives from this set over different time intervals, so as to maximise the cumulative utility within a given financial budget and time limit. To this end, we present a novel approach to this incentive selection problem. Specifically, we formalise it as an online decision-making problem, in which each action corresponds to offering a specific incentive. We then detail and evaluate a novel algorithm to solve the incentive selection problem efficiently and adaptively. In theory, we show that when all the estimates maintained by the algorithm (except the estimates of the effectiveness of each incentive) are correct, the algorithm achieves a regret bound of $\mathcal{O}(\sqrt{B/c})$, where B denotes the financial budget and c is the average cost of the incentives. In experiments, the performance of the algorithm is about 93% (up to 98%) of the optimal solution and about 9% (up to 40%) better than state-of-the-art algorithms across a broad range of settings, which vary in budget size, time limit, number of incentives, standard deviation of the incentives' utilities, and group size of the contests (i.e., the number of participants in a contest).
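The formalisation above (each action offers one incentive, each offer consumes part of the budget B, and the mechanism learns incentive effectiveness online) can be illustrated with a minimal budget-limited bandit sketch. This is not the paper's algorithm, whose details are not given here; it is a hypothetical example assuming a UCB-style utility-per-cost index and simulated Gaussian utilities, purely to make the problem setting concrete.

```python
import math
import random


def select_incentives(true_means, costs, budget, seed=0):
    """Illustrative budget-limited bandit for incentive selection.

    Repeatedly offers one of k incentives until the financial budget is
    exhausted, estimating each incentive's mean utility online and
    picking the affordable incentive with the highest UCB-style
    utility-per-cost index. Utilities are simulated as Gaussian draws
    around hypothetical true means (an assumption for this sketch).
    """
    rng = random.Random(seed)
    k = len(true_means)
    counts = [0] * k          # times each incentive has been offered
    means = [0.0] * k         # running mean utility estimates
    total_utility, spent, t = 0.0, 0.0, 0

    while True:
        affordable = [i for i in range(k) if spent + costs[i] <= budget]
        if not affordable:    # budget cannot cover any further incentive
            break
        t += 1
        untried = [i for i in affordable if counts[i] == 0]
        if untried:
            i = untried[0]    # try every incentive at least once
        else:
            # exploration bonus shrinks as an incentive is offered more often
            i = max(affordable, key=lambda j: (
                means[j] + math.sqrt(2 * math.log(t) / counts[j])) / costs[j])
        reward = rng.gauss(true_means[i], 0.1)  # simulated contest utility
        counts[i] += 1
        means[i] += (reward - means[i]) / counts[i]
        spent += costs[i]
        total_utility += reward
    return total_utility, counts


utility, counts = select_incentives(
    true_means=[0.3, 0.6, 0.5], costs=[1.0, 1.0, 1.0], budget=200)
```

Under this toy model, the selection loop concentrates offers on the incentive with the best estimated utility per unit cost while the exploration bonus guards against settling on a poor incentive too early, which mirrors the exploration/exploitation trade-off the incentive selection problem poses.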
Funder
Bộ Giáo dục và Đào tạo (Vietnam's Ministry of Education and Training)
Engineering and Physical Sciences Research Council
Publisher
Springer Science and Business Media LLC
Cited by
2 articles.