- Title
Spatial-spectral fusion of remote sensing images based on a deep recursive residual network (深度递归残差网络的遥感图像空谱融合).
- Authors
王, 芬; 郭, 擎; 葛, 小青
- Abstract
Pan-sharpening is a task in the field of remote sensing data fusion, in which multispectral (MS) images with rich spectral information but low spatial resolution and panchromatic (PAN) images with rich spatial detail but only grey-scale information are fused to yield images with both high spatial and high spectral resolution. Traditional Component Substitution (CS) methods replace a particular component of the transformed MS image with the PAN image and then apply the inverse transform to obtain the final fused image. Traditional MultiResolution Analysis (MRA) methods first extract spatial structures from the PAN image using MRA transforms, and then inject the extracted spatial structure information into the up-sampled MS image to obtain the fused image. The whole fusion process of the CS and MRA methods can be described by linear functions. However, the performance of such linear models is limited by their linearity, which often leads to spectral distortion. In recent years, many advanced nonlinear deep learning models have been proposed. However, the existing deep learning fusion models are relatively simple and have difficulty learning in-depth features. To overcome these shortcomings, we propose a deep recursive residual network specifically designed for the pan-sharpening task. Considering that the low-resolution input image and the high-resolution output image are highly similar, learning the full mapping between input and output is highly redundant and difficult. If the sparse residual between input and output is learned directly instead, network convergence improves significantly. We therefore introduce residual learning into the network structure, including both global residuals and local residuals. Such a structure is easy to train and not prone to overfitting. Moreover, the residual network alleviates the vanishing- and exploding-gradient problems of deep networks.
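The interplay of global and local residuals, together with weight-shared recursive stacking, can be sketched in a few lines of NumPy. This is a minimal illustration, not the paper's actual architecture: `local_residual_unit`, `recursive_block`, and `global_residual_net` are hypothetical stand-ins, with a toy scaled ReLU replacing the real convolutional layers.

```python
import numpy as np

def local_residual_unit(x, weight):
    # Local residual: the unit learns only a residual mapping f(x)
    # (here a toy scaled ReLU standing in for stacked conv layers)
    # and adds the identity shortcut back: y = x + f(x).
    return x + np.maximum(weight * x, 0.0)

def recursive_block(x, weight, n_units=3):
    # Recursive learning: the SAME weight is reused by every stacked
    # local residual unit, so effective depth grows while the number
    # of parameters stays fixed.
    for _ in range(n_units):
        x = local_residual_unit(x, weight)
    return x

def global_residual_net(lr_input, weight, n_blocks=2):
    # Global residual: a long shortcut carries the up-sampled
    # low-resolution input to the output, so the stacked blocks only
    # need to model the sparse difference from the target.
    h = lr_input
    for _ in range(n_blocks):
        h = recursive_block(h, weight)
    return lr_input + h
```

With `weight = 0`, every unit reduces to the identity shortcut, which is why such networks remain easy to optimize even at large depth.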
A recursive network improves accuracy by increasing the number of network layers without increasing the number of weight parameters. Specifically, on top of the global residual structure, recursive learning is introduced into residual learning by constructing recursive blocks, in which multiple local residual units are stacked with shared weights. Through this end-to-end network design, a better image fusion effect is obtained. Because no ideal fusion result is available as a label, we built a data set according to Wald's protocol: the original MS image serves as the ideal fused image, the MS image down-sampled and then up-sampled serves as the MS network input, and the down-sampled PAN image serves as the PAN network input. To analyze our experimental results comprehensively, we performed a large number of simulated and real experiments on 4-band GaoFen-1 data and 8-band WorldView-2 data with abundant land-cover types, and then generalized to 4-band GeoEye data and 8-band WorldView-3 data. The experimental results are compared with those of traditional methods and existing deep learning methods. Subjective visual analysis and objective evaluation indicators show that the proposed method reduces the spectral distortion of traditional methods and preserves the image spectrum better than existing deep learning methods do. The deep network designed in this paper learns deeper and richer image features and achieves better fusion effects than existing methods. It uses a residual network to address the vanishing-gradient, exploding-gradient, and degradation problems of deep networks. In addition, the recursive block design reduces the number of weight parameters and improves network speed. The generalization experiment shows that our network has good generalization ability.
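The data-set construction under Wald's protocol can be sketched as follows. This is a minimal NumPy illustration under stated assumptions: `downsample` uses naive block averaging and `upsample` nearest-neighbour expansion, whereas a real pipeline would use sensor-matched filters and bicubic or similar interpolation; all function names are hypothetical.

```python
import numpy as np

def downsample(img, factor):
    # Block-average decimation of an (H, W, C) array by `factor`
    # (a crude stand-in for MTF-matched low-pass filtering).
    h, w = img.shape[:2]
    return img[:h - h % factor, :w - w % factor].reshape(
        h // factor, factor, w // factor, factor, -1).mean(axis=(1, 3))

def upsample(img, factor):
    # Nearest-neighbour expansion back to the original grid.
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

def make_training_pair(ms, pan, ratio=4):
    # Wald's protocol: the original MS is kept as the ideal fused label;
    # the network inputs are the degraded (down- then up-sampled) MS and
    # the down-sampled PAN, so ground truth exists at training time.
    ms_in = upsample(downsample(ms, ratio), ratio)
    pan_in = downsample(pan[..., None], ratio)[..., 0]
    return ms_in, pan_in, ms
```

Because both inputs are degraded by the same resolution ratio as the real sensor pair, a network trained on these triples can later be applied to the original full-resolution MS and PAN images.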
- Subjects
DEEP learning; CONVOLUTIONAL neural networks; IMAGE fusion; REMOTE sensing
- Publication
Journal of Remote Sensing, 2021, Vol 25, Issue 6, p1244
- ISSN
1007-4619
- Publication type
Article
- DOI
10.11834/jrs.20219250