University of Warwick Publications service & WRAP
How clumpy is my image? Evaluating crowdsourced annotation tasks

Hutt, Hugo, Everson, Richard, Grant, Murray, Love, John and Littlejohn, George (2013) How clumpy is my image? Evaluating crowdsourced annotation tasks. In: 13th UK Workshop on Computational Intelligence (UKCI), Guildford, 9-11 Sep 2013. Published in: 2013 13th UK Workshop on Computational Intelligence (UKCI) pp. 136-143. ISBN 9781479915668. doi:10.1109/UKCI.2013.6651298

Research output not available from this repository.

Request a copy directly from the author via Request-a-Copy, or use the Library's local Get It For Me service.

Official URL: http://dx.doi.org/10.1109/UKCI.2013.6651298


Abstract

The use of citizen science to obtain annotations from multiple annotators has been shown to be an effective method for annotating datasets in which computational methods alone are not feasible. The way in which the annotations are obtained is an important consideration that affects the quality of the resulting consensus estimates. In this paper, we examine three separate approaches to obtaining scores for instances rather than merely classifications. To obtain a consensus score, annotators were asked to make annotations in one of three paradigms: classification, scoring and ranking. A web-based citizen science experiment is described which implements the three approaches as crowdsourced annotation tasks. The tasks are evaluated in relation to the accuracy and agreement among the participants using both simulated and real-world data from the experiment. The results show a clear difference in performance between the three tasks, with the ranking task obtaining the highest accuracy and agreement among the participants. We show how a simple evolutionary optimiser may be used to improve the performance by reweighting the importance of annotators.
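
The full text is not available from this record, so the following Python snippet is a purely illustrative sketch rather than the authors' implementation. It shows one way a consensus score could be formed as a weighted average of annotator scores, with the per-annotator weights tuned by a simple (1+1) evolutionary strategy against instances whose true scores are known. All names (weighted_consensus, evolve_weights, the toy data) are hypothetical.

    import numpy as np

    rng = np.random.default_rng(0)

    def weighted_consensus(scores, weights):
        # scores: (n_annotators, n_instances); weights: (n_annotators,)
        w = np.clip(weights, 1e-6, None)      # keep weights positive
        return w @ scores / w.sum()           # weighted average per instance

    def fitness(weights, scores, truth):
        # Negative mean absolute error of the consensus against known scores.
        return -np.mean(np.abs(weighted_consensus(scores, weights) - truth))

    def evolve_weights(scores, truth, generations=500, sigma=0.1):
        # (1+1) evolution strategy: mutate the weights, keep the child if fitter.
        parent = np.ones(scores.shape[0])     # start from equal importance
        parent_fit = fitness(parent, scores, truth)
        for _ in range(generations):
            child = parent + rng.normal(0.0, sigma, size=parent.shape)
            child_fit = fitness(child, scores, truth)
            if child_fit >= parent_fit:
                parent, parent_fit = child, child_fit
        return parent

    # Toy data: 5 annotators score 20 images for "clumpiness" in [0, 1];
    # some annotators are noisier than others.
    truth = rng.uniform(size=20)
    noise = rng.uniform(0.05, 0.5, size=5)
    scores = truth + rng.normal(0.0, noise[:, None], size=(5, 20))

    weights = evolve_weights(scores, truth)
    print("learned annotator weights:", np.round(weights, 2))
    print("consensus MAE:", np.mean(np.abs(weighted_consensus(scores, weights) - truth)))

In the setting described by the abstract, the known scores used for fitness would presumably come from the simulated data or from expert-labelled instances; the learned weights could then be applied when aggregating annotations for unlabelled images.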

Item Type: Conference Item (Paper)
Divisions: Faculty of Science, Engineering and Medicine > Science > Life Sciences (2010- )
Journal or Publication Title: 2013 13th UK Workshop on Computational Intelligence (UKCI)
Publisher: IEEE
ISBN: 9781479915668
Book Title: 2013 13th UK Workshop on Computational Intelligence (UKCI)
Official Date: 2013
Dates: 2013 (Published)
Page Range: pp. 136-143
DOI: 10.1109/UKCI.2013.6651298
Status: Peer Reviewed
Publication Status: Published
Access rights to Published version: Restricted or Subscription Access
Conference Paper Type: Paper
Title of Event: 13th UK Workshop on Computational Intelligence (UKCI)
Type of Event: Conference
Location of Event: Guildford
Date(s) of Event: 9-11 Sep 2013
