University of Warwick
Publications service & WRAP

Visually-aware acoustic event detection using heterogeneous graphs

Shirian, Amir, Somandepalli, Krishna, Sanchez Silva, Victor and Guha, Tanaya (2022) Visually-aware acoustic event detection using heterogeneous graphs. In: 23rd INTERSPEECH Conference, Incheon, Korea, 18-22 Sep 2022. Published in: INTERSPEECH proceedings pp. 2428-2432. doi:10.21437/Interspeech.2022-10670

PDF (Accepted Version): WRAP-visually-aware-acoustic-event-detection-heterogeneous-graphs-Amir-2022.pdf
Download (3473KB)
Official URL: https://doi.org/10.21437/Interspeech.2022-10670


Abstract

Perception of auditory events is inherently multimodal, relying on both audio and visual cues. A large number of existing multimodal approaches process each modality with a modality-specific model and then fuse the embeddings to encode the joint information. In contrast, we employ heterogeneous graphs to explicitly capture the spatial and temporal relationships between the modalities and to represent detailed information about the underlying signal. We use a heterogeneous graph approach to address the task of visually-aware acoustic event classification, since graphs offer a compact, efficient and scalable way to represent the data. Through heterogeneous graphs, we efficiently model intra- and inter-modality relationships at both spatial and temporal scales, and the model can easily be adapted to events of different scales through the relevant hyperparameters. Experiments on AudioSet, a large benchmark, show that our model achieves state-of-the-art performance. Our code is available at github.com/AmirSh15/VAED_HeterGraph.
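
To illustrate the kind of structure the abstract describes, the sketch below builds a toy heterogeneous graph with audio and visual node types, intra-modality temporal edges and inter-modality co-occurrence edges, and applies one heterogeneous message-passing layer. It assumes PyTorch Geometric (HeteroData, HeteroConv, SAGEConv) as the toolkit; the node counts, feature dimensions, edge patterns and relation names are placeholders for illustration, not the authors' implementation (see the linked repository for that).

    import torch
    from torch_geometric.data import HeteroData
    from torch_geometric.nn import HeteroConv, SAGEConv

    # Toy sizes (placeholders, not the paper's settings).
    num_audio, num_video = 10, 10          # e.g. one node per temporal segment
    audio_dim, video_dim, hidden = 128, 512, 64

    data = HeteroData()
    data['audio'].x = torch.randn(num_audio, audio_dim)   # audio segment features
    data['video'].x = torch.randn(num_video, video_dim)   # visual frame/region features

    # Intra-modality temporal edges: connect consecutive segments i -> i+1.
    idx = torch.arange(num_audio - 1)
    temporal = torch.stack([idx, idx + 1])
    data['audio', 'follows', 'audio'].edge_index = temporal
    data['video', 'follows', 'video'].edge_index = temporal

    # Inter-modality edges: link co-occurring audio and video segments.
    co = torch.arange(num_audio)
    data['audio', 'co_occurs', 'video'].edge_index = torch.stack([co, co])
    data['video', 'co_occurs', 'audio'].edge_index = torch.stack([co, co])

    # One heterogeneous graph-convolution layer: a separate SAGEConv per relation,
    # with messages aggregated per destination node type.
    conv = HeteroConv({
        ('audio', 'follows', 'audio'): SAGEConv((-1, -1), hidden),
        ('video', 'follows', 'video'): SAGEConv((-1, -1), hidden),
        ('audio', 'co_occurs', 'video'): SAGEConv((-1, -1), hidden),
        ('video', 'co_occurs', 'audio'): SAGEConv((-1, -1), hidden),
    }, aggr='sum')

    out = conv(data.x_dict, data.edge_index_dict)
    print({k: v.shape for k, v in out.items()})  # updated 'audio' and 'video' embeddings

A pooled readout over the updated node embeddings could then feed an event classifier; how the edges are built and weighted at different spatial and temporal scales is exactly what the paper's hyperparameters control.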

Item Type: Conference Item (Paper)
Subjects: Q Science > QA Mathematics > QA76 Electronic computers. Computer science. Computer software
T Technology > TA Engineering (General). Civil engineering (General)
T Technology > TK Electrical engineering. Electronics Nuclear engineering
Divisions: Faculty of Science, Engineering and Medicine > Science > Computer Science
Library of Congress Subject Headings (LCSH): Neural networks (Computer science), Graph theory, Deep learning (Machine learning), Heterogeneous computing, Computer vision, Multimedia communications
Journal or Publication Title: INTERSPEECH proceedings
Official Date: September 2022
Dates:
  • September 2022: Published
  • 14 June 2022: Accepted
Page Range: pp. 2428-2432
DOI: 10.21437/Interspeech.2022-10670
Status: Peer Reviewed
Publication Status: Published
Access rights to Published version: Open Access (Creative Commons)
Copyright Holders: Copyright © 2022 ISCA
Date of first compliant deposit: 1 July 2022
Date of first compliant Open Access: 14 October 2022
Conference Paper Type: Paper
Title of Event: 23rd INTERSPEECH Conference
Type of Event: Conference
Location of Event: Incheon, Korea
Date(s) of Event: 18-22 Sep 2022
Related URLs:
  • Organisation
  • Publisher
Open Access Version:
  • ArXiv
  • Publisher
