University of Warwick
Publications service & WRAP

Employing deep part-object relationships for salient object detection

Liu, Yi, Zhang, Qiang, Zhang, Dingwen and Han, Jungong (2020) Employing deep part-object relationships for salient object detection. In: ICCV 2019. International conference on computer vision, Seoul, Korea, 27 Oct - 2 Nov 2019. Published in: 2019 IEEE/CVF International Conference on Computer Vision (ICCV) ISBN 9781728148045. ISSN 1550-5499. doi:10.1109/ICCV.2019.00132

PDF (Accepted Version): WRAP-Employing-deep-relationships-salient-object-detection-Han-2019.pdf (1592Kb)
Official URL: https://doi.org/10.1109/ICCV.2019.00132


Abstract

Although Convolutional Neural Network (CNN)-based methods have been successful in detecting salient objects, their underlying mechanism, which decides the saliency of each image part separately, cannot avoid inconsistency among parts of the same salient object. This ultimately results in an incomplete shape for the detected salient object. To solve this problem, we dig into part-object relationships and make the first attempt to employ the relationships endowed by the Capsule Network (CapsNet) for salient object detection. The entire salient object detection system is built directly on a Two-Stream Part-Object Assignment Network (TSPOANet) consisting of three algorithmic steps. In the first step, the learned deep feature maps of the input image are transformed into a group of primary capsules. In the second step, the primary capsules are fed into two identical streams, within each of which low-level capsules (parts) are assigned to their familiar high-level capsules (objects) via locally connected routing. In the final step, the two streams are integrated in the form of a fully connected layer, where related parts are clustered together to form a complete salient object. Experimental results demonstrate the superiority of the proposed salient object detection network over state-of-the-art methods.
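The part-to-object assignment described in the second step builds on CapsNet-style routing-by-agreement, where low-level capsules iteratively concentrate their coupling on the high-level capsules that agree with their predictions. The sketch below is a minimal, plain-Python illustration of that routing mechanism on toy vectors; it is not the paper's locally connected TSPOANet implementation, and the `route`/`squash` helpers and the toy inputs are illustrative assumptions.

```python
import math

def squash(v):
    # CapsNet squashing non-linearity: keeps direction, maps length into (0, 1)
    n2 = sum(x * x for x in v)
    scale = n2 / (1.0 + n2) / math.sqrt(n2 + 1e-9)
    return [scale * x for x in v]

def softmax(xs):
    m = max(xs)
    e = [math.exp(x - m) for x in xs]
    s = sum(e)
    return [x / s for x in e]

def route(predictions, iters=3):
    """Routing-by-agreement.

    predictions[i][j] is part capsule i's predicted vector for object
    capsule j; returns the resulting object capsule vectors.
    """
    n_parts = len(predictions)
    n_objs = len(predictions[0])
    dim = len(predictions[0][0])
    b = [[0.0] * n_objs for _ in range(n_parts)]  # routing logits, start uniform
    objs = []
    for _ in range(iters):
        c = [softmax(row) for row in b]  # coupling coefficients per part
        objs = []
        for j in range(n_objs):
            s = [0.0] * dim
            for i in range(n_parts):
                for d in range(dim):
                    s[d] += c[i][j] * predictions[i][j][d]
            objs.append(squash(s))
        # agreement update: a part routes more strongly to objects it agrees with
        for i in range(n_parts):
            for j in range(n_objs):
                b[i][j] += sum(p * o for p, o in zip(predictions[i][j], objs[j]))
    return objs
```

In this toy setting, two parts predicting similar vectors for the same object capsule end up coupled to it, which mirrors how TSPOANet clusters image parts into a complete salient object.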

Item Type: Conference Item (Paper)
Subjects: Q Science > QA Mathematics > QA76 Electronic computers. Computer science. Computer software
Divisions: Faculty of Science > WMG (Formerly the Warwick Manufacturing Group)
Library of Congress Subject Headings (LCSH): Neural Networks (Computer), Computer vision, Image processing -- Digital techniques, Neural networks (Computer science) -- Computer simulation
Journal or Publication Title: 2019 IEEE/CVF International Conference on Computer Vision (ICCV)
Publisher: IEEE
ISBN: 9781728148045
ISSN: 1550-5499
Official Date: 27 February 2020
Dates:
  • 27 February 2020: Published
  • 10 July 2019: Accepted
DOI: 10.1109/ICCV.2019.00132
Status: Peer Reviewed
Publication Status: Published
Publisher Statement: © 2019 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.
Access rights to Published version: Restricted or Subscription Access
RIOXX Funder/Project Grant:
  • [NSFC] National Natural Science Foundation of China, grant 61773301, funder ID: http://dx.doi.org/10.13039/501100001809
  • China Scholarship Council, grant 201806960044, funder ID: http://dx.doi.org/10.13039/501100004543
Conference Paper Type: Paper
Title of Event: ICCV 2019. International conference on computer vision
Type of Event: Conference
Location of Event: Seoul, Korea
Date(s) of Event: 27 Oct - 2 Nov 2019
