- Title
VEDAM: Urban Vegetation Extraction Based on Deep Attention Model from High-Resolution Satellite Images.
- Authors
Yang, Bin; Zhao, Mengci; Xing, Ying; Zeng, Fuping; Sun, Zhaoyang
- Abstract
With the rapid development of satellite and Internet of Things (IoT) technology, it has become increasingly convenient to acquire high-resolution satellite images of the ground. Extracting urban vegetation from such images can provide valuable input for urban-management decision-making. Deep-learning semantic segmentation has become an important method for vegetation extraction, but segmentation results are often inaccurate because context and spatial information are poorly represented. Thus, Vegetation Extraction based on a Deep Attention Model (VEDAM) is proposed to strengthen the representation of context and spatial information when extracting vegetation from satellite images. Specifically, continuous convolutions are used for feature extraction, and atrous convolutions are introduced to capture multi-scale context. The extracted features are then enhanced by a Spatial Attention Module (SAM) and atrous spatial pyramid convolutions. In addition, image-level features obtained by image pooling encode global context and further improve overall performance. Experiments are conducted on the real-world Gaofen Image Dataset (GID); the comparative results show that VEDAM achieves the best mIoU (0.9136) for vegetation semantic segmentation.
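The abstract does not give implementation details, but the Spatial Attention Module it mentions is commonly realized in the CBAM style: pool the feature map across channels with both average and max, convolve the two pooled maps into a single attention map, squash it with a sigmoid, and reweight the input spatially. The sketch below is a minimal NumPy illustration of that general pattern under those assumptions, not the paper's actual code; the kernel weights and sizes are arbitrary placeholders.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def spatial_attention(feat, kernel=None):
    """CBAM-style spatial attention on a (C, H, W) feature map.

    Average- and max-pool across the channel axis, stack the two
    (H, W) maps, convolve them into one attention map, squash it
    with a sigmoid, and reweight the input at every spatial location.
    """
    avg_map = feat.mean(axis=0)             # (H, W)
    max_map = feat.max(axis=0)              # (H, W)
    stacked = np.stack([avg_map, max_map])  # (2, H, W)
    if kernel is None:
        # Illustrative 3x3 averaging kernel over both pooled maps;
        # in practice these weights would be learned.
        kernel = np.full((2, 3, 3), 1.0 / 18.0)
    H, W = avg_map.shape
    padded = np.pad(stacked, ((0, 0), (1, 1), (1, 1)))
    attn = np.zeros((H, W))
    for i in range(H):                      # naive 2-D convolution
        for j in range(W):
            attn[i, j] = np.sum(padded[:, i:i + 3, j:j + 3] * kernel)
    attn = sigmoid(attn)                    # values in (0, 1)
    return feat * attn                      # broadcast over channels

feat = np.random.default_rng(0).standard_normal((8, 16, 16))
out = spatial_attention(feat)
print(out.shape)  # (8, 16, 16)
```

Because the attention map lies in (0, 1), the module can only attenuate features; locations the map scores near 1 pass through almost unchanged, which is what lets the network emphasize vegetation regions spatially.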
- Subjects
VEDAS; REMOTE-sensing images; URBAN plants; FEATURE extraction; INTERNET of things
- Publication
Electronics (2079-9292), 2023, Vol 12, Issue 5, p1215
- ISSN
2079-9292
- Publication type
Article
- DOI
10.3390/electronics12051215