We propose a Dual Attention-guided Residual U-Net (DARUNet) for image denoising. The network is based on the U-Net architecture, with several dual attention residual modules embedded to remove synthetic Gaussian noise. Specifically, our method combines the reconstruction and transformation capability of U-Net, the training-stabilizing benefit of residual connections, and the guidance provided by attention mechanisms. The attention module adopts a parallel dual-attention structure consisting of spatial attention and channel attention, which effectively guides the model in suppressing noise. The model retains the fine details of the original image while removing noise, yielding results that are more natural and of higher quality. Extensive experiments show that our method achieves state-of-the-art performance both qualitatively and quantitatively.
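To make the parallel dual-attention structure concrete, the sketch below shows one minimal interpretation of a dual attention residual block: a channel-attention branch (global average pooling followed by a sigmoid gate) and a spatial-attention branch (channel pooling followed by a sigmoid gate) applied in parallel, combined with a residual skip. This is a hypothetical simplification in NumPy, not the authors' exact module; real implementations typically add learned convolutional weights in each branch.

```python
import numpy as np

def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def channel_attention(x):
    # x: (C, H, W). Global average pool over spatial dims,
    # then gate each channel (learned layers omitted for brevity).
    w = _sigmoid(x.mean(axis=(1, 2)))          # (C,)
    return x * w[:, None, None]

def spatial_attention(x):
    # Pool across channels, then gate each spatial location.
    m = _sigmoid(x.mean(axis=0))               # (H, W)
    return x * m[None, :, :]

def dual_attention_residual(x):
    # Parallel spatial and channel branches, summed,
    # with a residual connection back to the input.
    return x + channel_attention(x) + spatial_attention(x)
```

Because the two branches are summed with the identity path, the block degrades gracefully: if both attention maps saturate toward zero, the module reduces to a plain skip connection, which is what makes residual attention blocks easy to train.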
This paper proposes a novel generative adversarial network (GAN) based on the UNet++ architecture for infrared and visible image fusion. The method establishes an adversarial game between a generator that produces the fused image and a discriminator that judges whether the fused image meets the desired standard. The generator adopts the UNet++ structure, which is not deep but establishes dense connections in the shallow layers, giving it a strong ability to capture shallow features. The discriminator adopts a network structure similar to the Visual Geometry Group (VGG) network. The loss function compares the high-frequency components of the fused image with those of the two source images, so that the fused image retains more high-frequency information. For extracting high-frequency details from the source images, this paper proposes two gradient-extraction methods based on different combinations of directional extraction operators and high-frequency extraction. We compare these two methods with other fusion networks and show that the fused images generated by our network are highly similar to the infrared image while preserving much of the gradient information of the visible image.
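The high-frequency comparison in the loss can be illustrated with one common choice of directional extraction operator. The sketch below uses Sobel kernels as the directional operators and an L1 magnitude as the high-frequency map, then measures the mean absolute difference between the fused image's map and a source image's map. The Sobel/L1 choice and the function names are illustrative assumptions, not the paper's exact operators.

```python
import numpy as np

def _conv2(im, k):
    # Naive 3x3 convolution with edge padding (clarity over speed).
    H, W = im.shape
    pad = np.pad(im, 1, mode="edge")
    out = np.zeros((H, W), dtype=float)
    for i in range(H):
        for j in range(W):
            out[i, j] = np.sum(pad[i:i + 3, j:j + 3] * k)
    return out

def sobel_gradients(img):
    # Horizontal and vertical Sobel operators as the
    # directional extraction step (assumed choice).
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    gx, gy = _conv2(img, kx), _conv2(img, ky)
    return np.abs(gx) + np.abs(gy)     # L1 gradient magnitude

def gradient_loss(fused, src):
    # Mean absolute difference between high-frequency maps:
    # small when the fused image keeps the source's edges.
    return float(np.mean(np.abs(sobel_gradients(fused) - sobel_gradients(src))))
```

In a fusion loss of this style, `gradient_loss(fused, visible)` penalizes loss of the visible image's edge detail while a separate intensity term ties the fused image to the infrared source; the loss is exactly zero when the two gradient maps coincide.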