
University of Warwick
Publications service & WRAP


Holding AI to account: challenges for the delivery of trustworthy AI in healthcare


Procter, Rob, Tolmie, Peter and Rouncefield, Mark (2022) Holding AI to account: challenges for the delivery of trustworthy AI in healthcare. ACM Transactions on Computer-Human Interaction. doi:10.1145/3577009. ISSN 1073-0516. (In Press)

PDF: WRAP-Holding-AI-account-challenges-delivery-trustworthy-AI-healthcare-22.pdf (Accepted Version, 941Kb)
Official URL: https://doi.org/10.1145/3577009


Abstract

The need for AI systems to provide explanations for their behaviour is now widely recognised as key to their adoption. In this paper, we examine the problem of trustworthy AI and explore what delivering this means in practice, with a focus on healthcare applications. Work in this area typically treats trustworthy AI as a problem of Human-Computer Interaction involving the individual user and an AI system. However, we argue here that this overlooks the important part played by organisational accountability in how people reason about and trust AI in socio-technical settings. To illustrate the importance of organisational accountability, we present findings from ethnographic studies of breast cancer screening and cancer treatment planning in multidisciplinary team meetings to show how participants made themselves accountable both to each other and to the organisations of which they are members. We use these findings to enrich existing understandings of the requirements for trustworthy AI and to outline some candidate solutions to the problems of making AI accountable both to individual users and organisationally. We conclude by outlining the implications of this for future work on the development of trustworthy AI, including ways in which our proposed solutions may be re-used in different application settings.

Item Type: Journal Article
Divisions: Faculty of Science, Engineering and Medicine > Science > Computer Science
Journal or Publication Title: ACM Transactions on Computer-Human Interaction
Publisher: ACM
ISSN: 1073-0516
Official Date: 22 December 2022
Dates:
  • 22 December 2022: Available
  • 8 November 2022: Accepted
DOI: 10.1145/3577009
Status: Peer Reviewed
Publication Status: In Press
Reuse Statement (publisher, data, author rights): © ACM, 2023. This is the author's version of the work. It is posted here by permission of ACM for your personal use. Not for redistribution. The definitive version was published in ACM Transactions on Computer-Human-Interaction, {VOL#, ISS#, (DATE)} http://doi.acm.org/10.1145/3577009
Access rights to Published version: Restricted or Subscription Access
Description:

Special issue on Human-centred AI in healthcare: Challenges appearing in the wild

Date of first compliant deposit: 30 November 2022
Date of first compliant Open Access: 6 March 2023
Related URLs:
  • Publisher
