Visual explanation for Open-domain question answering with BERT
Shao, Zekai; Sun, Shuran; Zhao, Yuheng; Wang, Siyuan; Wei, Zhongyu; Gui, Tao; Turkay, Cagatay and Chen, Siming (2023) Visual explanation for Open-domain question answering with BERT. IEEE Transactions on Visualization and Computer Graphics, pp. 1-18. doi:10.1109/tvcg.2023.3243676. ISSN 1941-0506. (In Press)
PDF: WRAP-Visual-explanation-Open-domain-question-answering-BERT-23.pdf - Accepted Version (14 MB)
Official URL: https://doi.org/10.1109/tvcg.2023.3243676
Abstract
Open-domain question answering (OpenQA) is an essential but challenging task in natural language processing that aims to answer questions posed in natural language on the basis of large-scale unstructured passages. Recent research has taken performance on benchmark datasets to new heights, especially when these datasets are combined with machine reading comprehension techniques based on Transformer models. However, as identified through our ongoing collaboration with domain experts and our review of the literature, three key challenges limit further improvement: (i) complex data with multiple long texts, (ii) complex model architecture with multiple modules, and (iii) a semantically complex decision process. In this paper, we present VEQA, a visual analytics system that helps experts understand the decision rationale of OpenQA models and provides insights for model improvement. The system summarizes the data flow within and between modules of the OpenQA model as the decision process unfolds, at the summary, instance, and candidate levels. Specifically, it guides users from a summary visualization of the dataset and module responses to the exploration of individual instances with a ranking visualization that incorporates context. Furthermore, VEQA supports fine-grained exploration of the decision flow within a single module through a comparative tree visualization. We demonstrate the effectiveness of VEQA in promoting interpretability and providing insights into model enhancement through a case study and expert evaluation.
Item Type: Journal Article
Subjects: Q Science > Q Science (General); Q Science > QA Mathematics > QA76 Electronic computers. Computer science. Computer software
Divisions: Faculty of Social Sciences > Centre for Interdisciplinary Methodologies
SWORD Depositor: Library Publications Router
Library of Congress Subject Headings (LCSH): Question-answering systems, Machine learning, Visual analytics
Journal or Publication Title: IEEE Transactions on Visualization and Computer Graphics
Publisher: Institute of Electrical and Electronics Engineers (IEEE)
ISSN: 1941-0506
Official Date: 28 February 2023
Page Range: pp. 1-18
DOI: 10.1109/tvcg.2023.3243676
Status: Peer Reviewed
Publication Status: In Press
Reuse Statement (publisher, data, author rights): © 2023 Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Access rights to Published version: Restricted or Subscription Access
Date of first compliant deposit: 16 March 2023
Date of first compliant Open Access: 16 March 2023