UNetFormer: A UNet-like transformer for efficient semantic segmentation of remote sensing urban scene imagery
Wang, Libo, Li, Rui, Zhang, Ce, Fang, Shenghui, Duan, Chenxi, Meng, Xiaoliang and Atkinson, Peter M. (2022) UNetFormer: A UNet-like transformer for efficient semantic segmentation of remote sensing urban scene imagery. ISPRS Journal of Photogrammetry and Remote Sensing, 190, pp. 196-214. doi:10.1016/j.isprsjprs.2022.06.008. ISSN 0924-2716.
PDF: WRAP-UNetFormer-A-UNet-like-transformer-efficient-semantic-segmentation-remote-sensing-2022.pdf (Accepted Version, 3.6 MB). Available under License Creative Commons Attribution Non-commercial No Derivatives 4.0.
Official URL: https://doi.org/10.1016/j.isprsjprs.2022.06.008
Abstract
Semantic segmentation of remotely sensed urban scene images is required in a wide range of practical applications, such as land cover mapping, urban change detection, environmental protection, and economic assessment. Driven by rapid developments in deep learning technologies, the convolutional neural network (CNN) has dominated semantic segmentation for many years. CNNs adopt hierarchical feature representation, demonstrating strong capabilities for information extraction. However, the local property of the convolution layer limits the network from capturing global context. Recently, as a hot topic in the domain of computer vision, the Transformer has demonstrated great potential in global information modelling, boosting many vision-related tasks such as image classification, object detection, and particularly semantic segmentation. In this paper, we propose a Transformer-based decoder and construct a UNet-like Transformer (UNetFormer) for real-time urban scene segmentation. For efficient segmentation, the UNetFormer selects the lightweight ResNet18 as the encoder and develops an efficient global–local attention mechanism in the decoder to model both global and local information. Extensive experiments reveal that our method not only runs faster but also produces higher accuracy compared with state-of-the-art lightweight models. Specifically, the proposed UNetFormer achieved 67.8% and 52.4% mIoU on the UAVid and LoveDA datasets, respectively, while reaching an inference speed of up to 322.4 FPS with a 512 × 512 input on a single NVIDIA GTX 3090 GPU. In further exploration, the proposed Transformer-based decoder combined with a Swin Transformer encoder also achieves the state-of-the-art result (91.3% F1 and 84.1% mIoU) on the Vaihingen dataset. The source code will be freely available at https://github.com/WangLibo1995/GeoSeg.
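The core idea in the abstract — a decoder attention block that fuses a global branch (attention over all spatial positions) with a local branch (a small-window aggregation) by summation — can be illustrated with a minimal numpy sketch. This is not the authors' implementation (see the linked GeoSeg repository for that); the function names, the use of plain self-attention for the global branch, and the k × k mean filter standing in for a local convolution branch are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def global_branch(x):
    # x: (N, C) flattened feature tokens; plain self-attention over all
    # N positions captures the global context a convolution cannot
    scores = x @ x.T / np.sqrt(x.shape[1])
    return softmax(scores, axis=-1) @ x

def local_branch(x, h, w, k=3):
    # x: (h*w, C); a k x k mean filter stands in for the local
    # convolutional branch (illustrative, not the paper's operator)
    c = x.shape[1]
    img = x.reshape(h, w, c)
    pad = k // 2
    padded = np.pad(img, ((pad, pad), (pad, pad), (0, 0)), mode="edge")
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + k, j:j + k].mean(axis=(0, 1))
    return out.reshape(h * w, c)

def global_local_attention(x, h, w):
    # parallel global and local branches fused by elementwise summation,
    # mirroring the global-local design described in the abstract
    return global_branch(x) + local_branch(x, h, w)
```

The output keeps the input's (N, C) shape, so such a block can be stacked inside a UNet-like decoder stage without changing the surrounding tensor layout.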
Item Type: Journal Article
Subjects: Q Science > QA Mathematics > QA76 Electronic computers. Computer science. Computer software; T Technology > TA Engineering (General). Civil engineering (General)
Divisions: Faculty of Science, Engineering and Medicine > Engineering > Engineering
Library of Congress Subject Headings (LCSH): Image processing -- Digital techniques; Remote sensing; Computer vision; Geographical perception
Journal or Publication Title: ISPRS Journal of Photogrammetry and Remote Sensing
Publisher: Elsevier
ISSN: 0924-2716
Official Date: August 2022
Volume: 190
Page Range: pp. 196-214
DOI: 10.1016/j.isprsjprs.2022.06.008
Status: Peer Reviewed
Publication Status: Published
Access rights to Published version: Restricted or Subscription Access
Date of first compliant deposit: 2 August 2022
Date of first compliant Open Access: 24 June 2023