Efficient and adaptive incentive selection for crowdsourcing contests
Truong, Nhat Van-Quoc, Dinh, Le Cong, Stein, Sebastian, Tran-Thanh, Long and Jennings, Nicholas R. (2023) Efficient and adaptive incentive selection for crowdsourcing contests. Applied Intelligence, 53. pp. 9204-9234. doi:10.1007/s10489-022-03593-2. ISSN 0924-669X.
PDF: WRAP-Efficient-adaptive-incentive-selection-crowdsourcing-22.pdf (Published Version, 3749 KB). Available under a Creative Commons Attribution 4.0 License.
Official URL: https://doi.org/10.1007/s10489-022-03593-2
Abstract
The success of crowdsourcing projects relies critically on motivating a crowd to contribute. One particularly effective method for incentivising participants to perform tasks is to run contests where participants compete against each other for rewards. However, there are numerous ways to implement such contests in specific projects, which vary in how performance is evaluated, how participants are rewarded, and the sizes of the prizes. Moreover, the best way to implement contests in a particular project is still an open challenge, as the effectiveness of each contest implementation (henceforth, incentive) is unknown in advance. Hence, in a crowdsourcing project, a practical approach to maximise the overall utility of the requester (which can be measured by the total number of completed tasks or the quality of the task submissions) is to choose a set of incentives suggested by previous studies from the literature or from the requester's experience. Then, an effective mechanism can be applied to automatically select appropriate incentives from this set over different time intervals so as to maximise the cumulative utility within a given financial budget and a time limit. To this end, we present a novel approach to this incentive selection problem. Specifically, we formalise it as an online decision-making problem, where each action corresponds to offering a specific incentive. We then detail and evaluate a novel algorithm, HAIS, to solve the incentive selection problem efficiently and adaptively. In theory, in the case that all the estimates in HAIS (except the estimates of the effectiveness of each incentive) are correct, we show that the algorithm achieves a regret bound of O(√(B/c)), where B denotes the financial budget and c is the average cost of the incentives.
In experiments, the performance of HAIS is about 93% (up to 98%) of the optimal solution and about 9% (up to 40%) better than state-of-the-art algorithms in a broad range of settings, which vary in budget sizes, time limits, numbers of incentives, values of the standard deviation of the incentives’ utilities, and group sizes of the contests (i.e., the numbers of participants in a contest).
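To illustrate the online decision-making framing described in the abstract (this is a generic sketch, not the authors' HAIS algorithm): each incentive can be treated as an arm of a budget-limited bandit, where offering an incentive consumes part of the financial budget and yields a noisy utility observation. A minimal ε-greedy baseline under these assumptions, with hypothetical utilities and costs:

```python
import random

def select_incentives(utilities, costs, budget, epsilon=0.1, seed=0):
    """Simulate budget-limited incentive selection with an epsilon-greedy
    bandit policy (illustrative only; HAIS itself is more sophisticated).
    Each round, offer one affordable incentive (arm), pay its cost, and
    observe a noisy utility sample; stop when the budget is exhausted.
    Returns the cumulative utility and the per-incentive offer counts.
    """
    rng = random.Random(seed)
    k = len(utilities)
    counts = [0] * k      # times each incentive was offered
    means = [0.0] * k     # running mean of observed utility per incentive
    total = 0.0
    while budget >= min(costs):
        affordable = [i for i in range(k) if costs[i] <= budget]
        # explore with probability epsilon, otherwise exploit best estimate
        if rng.random() < epsilon:
            i = rng.choice(affordable)
        else:
            i = max(affordable, key=lambda j: means[j])
        reward = utilities[i] + rng.gauss(0, 0.1)  # noisy utility observation
        budget -= costs[i]
        counts[i] += 1
        means[i] += (reward - means[i]) / counts[i]  # incremental mean update
        total += reward
    return total, counts
```

With two hypothetical incentives of true mean utilities 1.0 and 0.2 at unit cost and a budget of 50, the policy concentrates its offers on the higher-utility incentive after a short exploration phase. The regret bound quoted above captures how the gap to the best fixed incentive grows with the budget B and average cost c in the authors' analysis.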
Item Type: Journal Article
Subjects: Q Science > QA Mathematics > QA76 Electronic computers. Computer science. Computer software
Divisions: Faculty of Science, Engineering and Medicine > Science > Computer Science
Library of Congress Subject Headings (LCSH): Crowdsourcing, Incentive (Psychology)
Journal or Publication Title: Applied Intelligence
Publisher: Springer New York LLC
ISSN: 0924-669X
Official Date: April 2023
Volume: 53
Page Range: pp. 9204-9234
DOI: 10.1007/s10489-022-03593-2
Status: Peer Reviewed
Publication Status: Published
Access rights to Published version: Open Access (Creative Commons)
Date of first compliant deposit: 30 August 2022
Date of first compliant Open Access: 30 August 2022