An efficient approach for geo-multimedia cross-modal retrieval


Zhu, Lei, Long, Jun, Zhang, Chengyuan, Yu, Weiren, Yuan, Xinpan and Sun, Longzhi (2019) An efficient approach for geo-multimedia cross-modal retrieval. IEEE Access, 7, pp. 180571-180589. doi:10.1109/ACCESS.2019.2940055

PDF (Accepted Version): WRAP-efficient-approach-geo-multimedia-cross-modal-Yu-2019.pdf (12 MB)
Official URL: http://dx.doi.org/10.1109/ACCESS.2019.2940055


Abstract

Due to the rapid development of mobile Internet techniques, such as online social networking and location-based services, massive amounts of multimedia data with geographical information are generated and uploaded to the Internet. In this paper, we propose a novel type of cross-modal multimedia retrieval, called geo-multimedia cross-modal retrieval, which aims to find a set of geo-multimedia objects according to geographical proximity and semantic concept similarity. Previous studies of cross-modal retrieval and spatial keyword search cannot address this problem effectively because they do not consider multimedia data with geo-tags (geo-multimedia). Firstly, we present the definition of the kNN geo-multimedia cross-modal query and introduce relevant concepts such as spatial distance and semantic similarity measurement. As the key notion of this work, a cross-modal semantic representation space is formulated for the first time. A novel framework for geo-multimedia cross-modal retrieval is proposed, which includes multi-modal feature extraction, cross-modal semantic space mapping, geo-multimedia spatial indexing and cross-modal semantic similarity measurement. To bridge the semantic gap between different modalities, we also propose a method named cross-modal semantic matching (CoSMat for short), which contains two important components, CorrProj and LogsTran, and aims to build a common semantic representation space for cross-modal semantic similarity measurement. In addition, to implement semantic similarity measurement, we employ a deep learning based method to learn multi-modal features that contain more high-level semantic information. Moreover, a novel hybrid index, the GMR-Tree, is carefully designed, which combines signatures of semantic representations with an R-Tree. An efficient GMR-Tree based kNN search algorithm called kGMCMS is developed. Comprehensive experimental evaluations on real and synthetic datasets clearly demonstrate that our approach outperforms the...
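The combined ranking at the heart of the kNN geo-multimedia cross-modal query can be sketched as follows. This is a minimal illustration under assumed simplifications, not the paper's method: a linear weight alpha between spatial proximity and semantic similarity, planar Euclidean distance standing in for true geographical distance, cosine similarity in the shared space, and a brute-force scan where the paper's GMR-Tree index and kGMCMS algorithm would prune candidates. All names here (knn_geo_cross_modal, alpha, d_max) are hypothetical; in the paper's framework the semantic vectors would come from the CoSMat mapping (CorrProj plus LogsTran) applied to each modality's deep features.

```python
# Hedged sketch of a kNN geo-multimedia cross-modal query: rank objects by a
# weighted combination of geographical proximity and semantic similarity in a
# shared representation space. The linear weighting and brute-force scan are
# illustrative assumptions; the paper's GMR-Tree/kGMCMS avoid a full scan.
import heapq
import math

def cosine_similarity(u, v):
    """Cosine similarity between two vectors in the common semantic space."""
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

def euclidean(p, q):
    """Planar distance between two (x, y) locations (a simplification of
    real geographical distance)."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def knn_geo_cross_modal(query_loc, query_vec, objects, k=10, alpha=0.5, d_max=1.0):
    """Return the k objects with the highest combined score.

    objects: iterable of (obj_id, location, semantic_vector), where vectors of
    both modalities are assumed to already live in one common space (as a
    CoSMat-style mapping would produce). alpha trades off spatial proximity
    against semantic similarity; d_max normalizes distance into [0, 1].
    """
    def score(loc, vec):
        spatial = 1.0 - min(euclidean(query_loc, loc) / d_max, 1.0)
        semantic = cosine_similarity(query_vec, vec)
        return alpha * spatial + (1.0 - alpha) * semantic

    return heapq.nlargest(k, objects, key=lambda o: score(o[1], o[2]))

# Example: a text query retrieving geo-tagged images via shared-space vectors.
db = [("img1", (0.1, 0.2), [0.9, 0.1]), ("img2", (0.8, 0.9), [0.2, 0.95])]
print(knn_geo_cross_modal((0.0, 0.0), [1.0, 0.0], db, k=1))
```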

Item Type: Journal Article
Subjects: P Language and Literature > P Philology. Linguistics
T Technology > TK Electrical engineering. Electronics Nuclear engineering
Divisions: Faculty of Science > Computer Science
Library of Congress Subject Headings (LCSH): Semantics, Video recordings, Internet, Indexing, Visualization
Journal or Publication Title: IEEE Access
Publisher: IEEE
ISSN: 2169-3536
Official Date: 9 September 2019
Dates: 9 September 2019 (Published)
Date of first compliant deposit: 31 January 2020
Volume: 7
Page Range: pp. 180571-180589
DOI: 10.1109/ACCESS.2019.2940055
Status: Peer Reviewed
Publication Status: Published
Publisher Statement: © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Access rights to Published version: Restricted or Subscription Access
RIOXX Funder/Project Grant:
Project/Grant ID | RIOXX Funder Name | Funder ID
61702560 | [NSFC] National Natural Science Foundation of China | http://dx.doi.org/10.13039/501100001809
61472450 | [NSFC] National Natural Science Foundation of China | http://dx.doi.org/10.13039/501100001809
61972203 | [NSFC] National Natural Science Foundation of China | http://dx.doi.org/10.13039/501100001809
2016JC2018 | Science and Technology Innovative Research Team in Higher Educational Institutions of Hunan Province | http://dx.doi.org/10.13039/501100012269
2018JJ3691 | Science and Technology Innovative Research Team in Higher Educational Institutions of Hunan Province | http://dx.doi.org/10.13039/501100012269
2018zzts177 | Innovation-Driven Project of Central South University | http://dx.doi.org/10.13039/501100012486
