Document Type: Original Article
Authors
1 Babol Noshirvani University of Technology
2 Department of Communication Eng., Faculty of Electrical and Computer Eng., Babol Noshirvani University of Technology, Babol, Iran.
Abstract
In pansharpening, a low-resolution multispectral (LRMS) image and a high-resolution panchromatic (PAN) image are fused to produce a high-resolution multispectral (HRMS) image. Recent studies have shown that convolutional neural networks achieve excellent results in sharpening remote sensing images. However, two major problems remain. First, because ideal HRMS reference images are not available for training, most current methods must spend additional effort generating simulated data. Second, these methods usually ignore the rich spatial information in panchromatic images. To address these issues, we propose UMP-GAN, an unsupervised fusion framework for pansharpening based on a multiscale dense network. Built on generative adversarial networks, the framework can be trained directly on full-resolution images without any preprocessing step. First, a multiscale dense generator network extracts features from the original input images to generate HRMS images. Then, two separate discriminator networks preserve the spectral information and the spatial details of the input images in the fused result, so the proposed method trains two discriminators with distinct, complementary tasks. Finally, new loss functions are proposed to enable training in an unsupervised setting. This design deepens the exchange of gradient information between the generator and the discriminator networks. Results on WorldView-2, GaoFen-2, and QuickBird satellite images show that the proposed method outperforms other state-of-the-art models in the fusion of remote sensing images.
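To make the dual-discriminator setup described above concrete, the following PyTorch sketch shows one possible unsupervised training step. It is an illustrative assumption, not the actual UMP-GAN architecture or its loss functions: the generator, the discriminator layouts, the bicubic resampling, the band-mean intensity proxy for PAN, and the L1 consistency terms are all hypothetical choices made only to demonstrate how a spectral discriminator and a spatial discriminator can be trained with complementary tasks on full-resolution inputs.

```python
# Minimal sketch of a dual-discriminator pansharpening GAN (illustrative, not UMP-GAN).
import torch
import torch.nn as nn
import torch.nn.functional as F

class Generator(nn.Module):
    """Fuses an upsampled LRMS image with the PAN image into an HRMS estimate."""
    def __init__(self, ms_bands=4):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ms_bands + 1, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, ms_bands, 3, padding=1),
        )

    def forward(self, lrms_up, pan):
        return self.body(torch.cat([lrms_up, pan], dim=1))

def make_discriminator(in_ch):
    """PatchGAN-style discriminator returning a map of real/fake logits."""
    return nn.Sequential(
        nn.Conv2d(in_ch, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
        nn.Conv2d(64, 1, 3, padding=1),
    )

def train_step(G, D_spec, D_spat, opt_g, opt_d, lrms, pan, scale=4):
    """One unsupervised step: the spectral discriminator compares the downsampled
    fused image with the original LRMS, while the spatial discriminator compares
    the fused image's intensity with the PAN image."""
    bce = nn.BCEWithLogitsLoss()
    lrms_up = F.interpolate(lrms, scale_factor=scale, mode='bicubic', align_corners=False)
    hrms = G(lrms_up, pan)

    # --- Discriminator update ---
    hrms_lr = F.interpolate(hrms.detach(), scale_factor=1 / scale, mode='bicubic', align_corners=False)
    hrms_gray = hrms.detach().mean(dim=1, keepdim=True)  # crude intensity proxy for PAN
    real_spec, fake_spec = D_spec(lrms), D_spec(hrms_lr)
    real_spat, fake_spat = D_spat(pan), D_spat(hrms_gray)
    d_loss = (bce(real_spec, torch.ones_like(real_spec)) + bce(fake_spec, torch.zeros_like(fake_spec)) +
              bce(real_spat, torch.ones_like(real_spat)) + bce(fake_spat, torch.zeros_like(fake_spat)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # --- Generator update: adversarial terms plus spectral/spatial consistency ---
    hrms_lr = F.interpolate(hrms, scale_factor=1 / scale, mode='bicubic', align_corners=False)
    hrms_gray = hrms.mean(dim=1, keepdim=True)
    adv_spec, adv_spat = D_spec(hrms_lr), D_spat(hrms_gray)
    g_loss = (bce(adv_spec, torch.ones_like(adv_spec)) + bce(adv_spat, torch.ones_like(adv_spat)) +
              F.l1_loss(hrms_lr, lrms) + F.l1_loss(hrms_gray, pan))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
    return g_loss.item(), d_loss.item()

if __name__ == "__main__":
    G, D_spec, D_spat = Generator(4), make_discriminator(4), make_discriminator(1)
    opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
    opt_d = torch.optim.Adam(list(D_spec.parameters()) + list(D_spat.parameters()), lr=1e-4)
    lrms = torch.rand(1, 4, 32, 32)    # dummy 4-band LRMS patch
    pan = torch.rand(1, 1, 128, 128)   # dummy PAN patch at 4x spatial resolution
    print(train_step(G, D_spec, D_spat, opt_g, opt_d, lrms, pan))
```

Note that no HRMS reference appears anywhere in the step: both discriminators and the consistency terms are driven only by the original LRMS and PAN inputs, which is what allows training directly on full-resolution images.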