Recently, an unsupervised method [denoted as deep image prior (DIP)] that interprets images as the output of a convolutional network with randomly initialized inputs has received much attention, as the DIP can adapt its network parameters to any given noisy image without requiring training pairs, providing great flexibility for handling noisy images with complex structures and a low degree of self-similarity. However, the DIP method still suffers from a crucial limitation: only the noisy image is used as the target image, which implies that only internal prior information is exploited for noise removal and leads to poor denoising performance compared with supervised denoising models. In this work, we aim to boost the DIP method by integrating external and internal image priors. Specifically, given a noisy image, we first exploit a state-of-the-art supervised method to denoise it and obtain a corresponding denoised image, called the initial denoised image. Then the initial denoised image, containing rich external prior information, is used as an extra target image together with the given noisy image within the standard DIP framework, resulting in a highly effective unsupervised recovery process. Next, to take full advantage of the uncertainty of the DIP network, the above DIP denoising routine is executed multiple times with different random inputs to generate enough complementary denoised images (samples). Finally, with an unsupervised weight map generative network, the generated samples are fused in a pixel-wise manner, and the fused image, with better image quality, is treated as the final denoised image. We verified the denoising performance of our method on a large number of images from several benchmark databases as well as on real-world noisy images.
The comparative results show that our method outperforms the original DIP method and other unsupervised methods by a large margin, and achieves peak signal-to-noise ratio comparable to that of state-of-the-art supervised counterparts. Source code is publicly available at https://github.com/chenxiaojun0101/BDIP.
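The final fusion step admits a compact sketch: given several denoised samples from independent DIP runs and per-pixel weight maps, the fused image is a pixel-wise convex combination of the samples. The following is a minimal NumPy illustration under that assumption; in the method the weight maps are produced by an unsupervised generative network, whereas here they are simply supplied as hypothetical inputs.

```python
import numpy as np

def fuse_samples(samples, logits):
    """Pixel-wise fusion of complementary denoised samples.

    samples: (N, H, W) array of denoised images from N independent DIP runs.
    logits:  (N, H, W) array of unnormalized weight maps (in the paper these
             come from an unsupervised weight map generative network; here
             they are hypothetical inputs).
    Returns the (H, W) fused image as a per-pixel convex combination.
    """
    # Softmax over the sample axis yields per-pixel weights that sum to 1.
    w = np.exp(logits - logits.max(axis=0, keepdims=True))
    w /= w.sum(axis=0, keepdims=True)
    return (w * samples).sum(axis=0)
```

With uniform (all-zero) logits the fusion reduces to a plain per-pixel average of the samples, which is a useful sanity check for any learned weight maps.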
A shallow yet effective convolutional neural network (CNN)-based detector was proposed for automatic detection of random-valued impulse noise (RVIN) in images. We guided the proposed CNN-based detector to automatically extract the implicit statistics and learn the detection mechanism from a large number of patches and the noise labels of their center pixels. Compared with the reference RVIN detectors, the proposed CNN-based one takes advantage of the prior knowledge obtained in the training phase and shows impressive detection accuracy across different noise ratios.
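The training data described above, patches paired with noise labels for their center pixels, can be generated synthetically. The sketch below builds such pairs in NumPy under stated assumptions: the patch size, noise ratio, and sample count are illustrative choices, not values from the paper, and the corruption model simply replaces a fraction of pixels with uniform random values in the image's intensity range.

```python
import numpy as np

def make_rvin_training_patches(img, ratio=0.1, patch=7, n=1000, seed=0):
    """Build (patch, center-pixel label) training pairs for an RVIN detector.

    RVIN corrupts a fraction `ratio` of pixels with uniform random values;
    the label records whether a patch's center pixel was corrupted.
    (Patch size, ratio, and count are illustrative assumptions.)
    """
    rng = np.random.default_rng(seed)
    noisy = img.astype(float).copy()
    mask = rng.random(img.shape) < ratio              # corrupted-pixel mask
    noisy[mask] = rng.uniform(img.min(), img.max(), mask.sum())
    r = patch // 2
    H, W = img.shape
    ys = rng.integers(r, H - r, n)                    # random center rows
    xs = rng.integers(r, W - r, n)                    # random center columns
    X = np.stack([noisy[y - r:y + r + 1, x - r:x + r + 1]
                  for y, x in zip(ys, xs)])
    labels = mask[ys, xs].astype(int)                 # 1 if center corrupted
    return X, labels
```

A CNN classifier trained on such pairs then predicts, for each pixel, whether it is impulse noise.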
We proposed a noniterative principal component analysis (PCA)-based noise level estimation (NLE) algorithm that addresses the problem of estimating the noise level with a two-step scheme. First, we randomly extracted a number of raw patches from a given noisy image and took the smallest eigenvalue of the covariance matrix of the raw patches as the preliminary estimate of the noise level. Next, the final estimate was obtained directly with a nonlinear mapping (rectification) function trained on representative noisy images corrupted with different known noise levels. Experimental results show that, compared with state-of-the-art NLE algorithms, the proposed NLE algorithm reliably infers the noise level, performs robustly over a wide range of image contents and noise levels, and in general offers a good compromise between speed and accuracy.
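The first step of the two-step scheme can be sketched directly in NumPy: extract random raw patches, form their covariance matrix, and take its smallest eigenvalue as a preliminary estimate of the noise variance. This is a minimal sketch assuming additive white Gaussian noise; the learned rectification step is omitted, and the patch size and count are illustrative.

```python
import numpy as np

def estimate_noise_sigma(img, patch=8, n_patches=5000, seed=0):
    """Preliminary PCA-based noise level estimate (first step only;
    the trained rectification function is omitted). Assumes additive
    white Gaussian noise; patch size and count are illustrative."""
    rng = np.random.default_rng(seed)
    H, W = img.shape
    ys = rng.integers(0, H - patch + 1, n_patches)
    xs = rng.integers(0, W - patch + 1, n_patches)
    # Flatten each raw patch into a row vector.
    P = np.stack([img[y:y + patch, x:x + patch].ravel()
                  for y, x in zip(ys, xs)])
    C = np.cov(P, rowvar=False)            # covariance of raw patches
    lam_min = np.linalg.eigvalsh(C)[0]     # smallest eigenvalue ~ sigma^2
    return np.sqrt(max(lam_min, 0.0))
```

On a flat image with Gaussian noise the smallest eigenvalue is close to the noise variance, though finite sampling biases it slightly low, which is precisely the kind of systematic error the rectification function in the second step is trained to correct.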
The performance of conventional denoising algorithms is usually controlled by one or several parameters whose optimal settings depend on the contents of the processed images and the characteristics of the noise. Among these parameters, the noise level is a fundamental one that is always assumed to be known by most existing denoising algorithms (so-called nonblind denoising algorithms), which largely limits the applicability of these nonblind denoising algorithms in many applications. Moreover, these nonblind algorithms do not always produce denoised images of the best visual quality even when fed with the actual noise level parameter. To address these shortcomings, in this paper we propose a new quality-aware feature-based noise level estimator (NLE), which consists of quality-aware feature extraction and optimal noise level parameter prediction. First, considering that image local contrast features convey important structural information that is closely related to image perceptual quality, we utilize the marginal statistics of two local contrast operators, i.e., the gradient magnitude and the Laplacian of Gaussian (LOG), to extract quality-aware features. The proposed quality-aware features have very low computational complexity, making them well suited for time-constrained applications. Then we propose a learning-based framework in which the noise level parameter is estimated from the quality-aware features. Based on the proposed NLE, we develop a blind block matching and three-dimensional filtering (BBM3D) denoising algorithm that is capable of effectively removing additive white Gaussian noise, even coupled with impulse noise. The noise level parameter of the BBM3D algorithm is automatically tuned according to the quality-aware features, guaranteeing the best performance. As such, the classical block matching and three-dimensional filtering algorithm can be transformed into a blind one in an unsupervised manner.
Experimental results demonstrate that the BBM3D algorithm is superior to its nonblind version in both qualitative visual evaluations and quantitative comparisons across various images and noise levels.
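The quality-aware feature extraction step, marginal statistics of the gradient magnitude and Laplacian of Gaussian responses, can be sketched as below. This is an illustrative NumPy version: a plain 3x3 Laplacian stands in for the LOG operator, the marginal statistics are represented as normalized histograms, and the bin count is an assumption rather than a value from the paper.

```python
import numpy as np

def quality_aware_features(img, n_bins=10):
    """Sketch of quality-aware feature extraction: normalized marginal
    histograms of two local contrast maps. A plain 3x3 Laplacian stands
    in for the Laplacian of Gaussian; n_bins is an assumption."""
    img = img.astype(float)
    gy, gx = np.gradient(img)
    gm = np.hypot(gx, gy)                       # gradient magnitude map
    # 3x3 Laplacian response via shifted sums (borders cropped).
    lap = (img[:-2, 1:-1] + img[2:, 1:-1] + img[1:-1, :-2] + img[1:-1, 2:]
           - 4.0 * img[1:-1, 1:-1])
    feats = []
    for m in (gm.ravel(), lap.ravel()):
        h, _ = np.histogram(m, bins=n_bins)     # marginal statistics
        feats.extend(h / h.sum())               # normalize to a distribution
    return np.asarray(feats)
```

The resulting low-dimensional feature vector is what a learned regressor would map to the optimal noise level parameter for the denoiser.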
Surgical simulation is a major application of computer graphics and virtual reality, and most existing work indicates that interactive real-time cutting simulation of soft tissue is a fundamental but challenging research problem in virtual surgery simulation systems. More specifically, it is difficult to achieve a sufficiently fast graphics update rate (at least 30 Hz) on commodity PC hardware with traditional triangle-based rendering algorithms. In recent years, point-based rendering (PBR) has been shown to offer the potential to outperform traditional triangle-based rendering in speed when applied to highly complex soft tissue cutting models. Nevertheless, PBR algorithms are still limited in visual quality due to inherent contrast distortion. We propose an adaptive image contrast enhancement algorithm as a postprocessing module for PBR, providing high visual rendering quality as well as acceptable rendering efficiency. Our approach is based on a perceptible image quality technique with automatic parameter selection, resulting in visual quality comparable to that of existing conventional PBR algorithms. Experimental results show that our adaptive image contrast enhancement algorithm produces encouraging results both visually and numerically compared with representative algorithms. Experiments conducted on the latest hardware further demonstrate that the proposed PBR framework with the postprocessing module is superior to the conventional PBR algorithm and that the proposed contrast enhancement algorithm is compatible with various variants of the conventional PBR algorithm.
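To make the postprocessing idea concrete, the following is a deliberately simple stand-in for a contrast enhancement module: a percentile-based contrast stretch applied to the rendered frame. The percentile defaults are hypothetical; the method described above selects its parameters automatically from a perceptual quality criterion rather than using fixed values.

```python
import numpy as np

def contrast_stretch(img, low_pct=2.0, high_pct=98.0):
    """Minimal percentile-based contrast stretch as a stand-in for the
    adaptive enhancement module. The percentiles are hypothetical
    defaults; the paper's method chooses its parameters automatically."""
    lo, hi = np.percentile(img, [low_pct, high_pct])
    # Map [lo, hi] to [0, 1] and clip the tails.
    out = (img.astype(float) - lo) / max(hi - lo, 1e-12)
    return np.clip(out, 0.0, 1.0)
```

Applied per frame after point-based rendering, such a stretch counteracts the low-contrast appearance at negligible cost relative to the rendering itself.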