In this paper, we propose a structure-based low-rank Retinex model for simultaneous low-light image enhancement and noise removal. Building on the traditional variational Retinex framework, the proposed model imposes a smoothness prior on the illumination, while a gradient fidelity term and the weighted nuclear norm are used to suppress noise and enhance structural details in the reflectance. Considering that manifold structure similarity describes the structural features of image patches more effectively than intensity similarity, we further propose to use manifold structure similarity for image patch grouping. An alternating direction minimization algorithm is then used to solve the reflectance estimation model, and the whole model is solved by sequential optimization. The final enhancement result is obtained by combining the reflectance with the Gamma-corrected illumination. Experiments show that the proposed method can simultaneously enhance and denoise low-light images, producing results better than or comparable to those of state-of-the-art methods.
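As a rough illustration of the final composition step only, the sketch below applies Gamma correction to an estimated illumination map and recombines it with the reflectance; the value gamma = 2.2 and the clipping ranges are illustrative assumptions, and the ADMM-based decomposition itself is not reproduced here.

```python
import numpy as np

def recombine(reflectance, illumination, gamma=2.2, eps=1e-6):
    """Sketch of the final enhancement step: Gamma-correct the estimated
    illumination and recombine it with the (denoised) reflectance.
    gamma and eps are illustrative values, not the paper's settings."""
    illumination = np.clip(illumination, eps, 1.0)
    corrected = illumination ** (1.0 / gamma)   # Gamma correction brightens dark regions
    return np.clip(reflectance * corrected, 0.0, 1.0)
```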
Blind deconvolution is known as a challenging low-level vision problem due to the diverse blur scenarios in real-world imaging. This work takes a critical look at existing image priors for nonparametric blur kernel estimation and proposes an alternative approach to blind deconvolution via complementarily structure-aware image smoothing (CSIS). Like most state-of-the-art blind deblurring methods, the proposed approach partly builds on the naïve L0-based sparse model, but its core contribution is to additionally advocate a class of redescending potential (RDP) functions as a more elegant way of boosting blind deblurring. With the RDP element, the new approach easily achieves discrimination between clear and blurred images. Meanwhile, the performance of the proposed method is better ensured by the complementary smoothing behavior induced by the RDP functions. Specifically, L0-based smoothing is known to play a critical role in pursuing salient step-edges from blurry images, especially in the early period of blur kernel estimation, whereas RDP-based smoothing proves particularly significant in the later stage, when the fine pursuit of salient roof-edges or ramp-edges is critical for more precise and robust kernel estimation. In this sense, the proposed CSIS-based blind deblurring algorithm is more intuitive than previous L0-based methods. Moreover, numerous experimental results on benchmark blurred images, whether synthetic or realistic, demonstrate the comparable or even better performance of the proposed approach in terms of both effectiveness and robustness.
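To make the contrast between the two smoothing behaviors concrete, the sketch below compares a smoothed L0-style gradient penalty with one classical redescending potential (the Welsch function). The paper's specific RDP choice and parameters are not reproduced here, so both forms are illustrative assumptions.

```python
import numpy as np

def l0_approx_penalty(g, eps=1e-3):
    """Smoothed surrogate of the L0 gradient penalty ||grad u||_0, the kind of
    term dominating the early stage of kernel estimation (illustrative form)."""
    return np.sum(g**2 / (g**2 + eps))

def welsch_penalty(g, sigma=0.1):
    """One classical redescending potential (Welsch), shown only as an example
    of the RDP family; its influence function decays back toward zero for large
    gradients, which favors preserving ramp/roof edges."""
    return np.sum(1.0 - np.exp(-(g / sigma) ** 2))

# Both penalties saturate for large gradients, but they weight moderate
# gradients very differently, which is the complementarity exploited above.
grad = np.random.randn(64, 64)
print(l0_approx_penalty(grad), welsch_penalty(grad))
```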
This paper proposes a deep convolutional neural network-based low-light image enhancement method. To adaptively enhance image brightness, a convolutional neural network composed of convolutional modules is designed. The low-light image is first down-sampled into sub-images, and an illumination map is then obtained from the input image to provide additional information to the network. The network operates on a tensor consisting of the sub-images and the illumination map, achieving good performance in brightness enhancement and structure preservation. The enhanced result is reconstructed from the output sub-images. Experimental results demonstrate the effectiveness of the proposed method for low-light image enhancement.
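A minimal sketch of the input construction described above is given below, assuming a space-to-depth splitting into sub-images and a simple max-over-RGB illumination estimate; both the sub-image layout and the illumination map are illustrative choices, not the paper's exact design.

```python
import numpy as np

def build_network_input(img, scale=2):
    """Stack sub-images and an illumination map into one tensor for the CNN.
    img: float array of shape (H, W, 3) in [0, 1]."""
    h, w, c = img.shape
    h, w = h - h % scale, w - w % scale
    img = img[:h, :w]
    # space-to-depth: (h, w, c) -> (h/scale, w/scale, c*scale*scale) sub-images
    sub = img.reshape(h // scale, scale, w // scale, scale, c)
    sub = sub.transpose(0, 2, 1, 3, 4).reshape(h // scale, w // scale, -1)
    # rough illumination map: per-pixel max over RGB, pooled to sub-image size
    illum = img.max(axis=2, keepdims=True)
    illum = illum.reshape(h // scale, scale, w // scale, scale, 1).mean(axis=(1, 3))
    return np.concatenate([sub, illum], axis=2)
```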
Persons captured in real-life scenarios generally appear at non-uniform scales, yet most widely acknowledged person re-identification (Re-ID) methods emphasize matching normal-scale, high-resolution person images. To address this problem, ideas from existing image reconstruction techniques are incorporated, which are expected to contribute to recovering accurate appearance information for low-resolution person Re-ID. Specifically, this paper proposes a joint deep learning approach for Scale-Adaptive person Super-Resolution and Re-identification (SASR2). It is the first time that scale-adaptive learning is jointly implemented for super-resolution and re-identification without any extra post-processing. With the super-resolution module, high-resolution appearance information can be automatically reconstructed from low-resolution person images of various scales, bringing a direct benefit to the subsequent Re-ID thanks to the joint learning nature of the proposed approach. It is worth noting that SASR2 is not only simple but also flexible, since it is adaptable to person Re-ID on both multi-scale LR and normal-scale HR datasets. Extensive experimental analysis demonstrates that SASR2 achieves competitive performance compared with previous low-resolution Re-ID methods, especially on the realistic CAVIAR dataset.
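As a minimal sketch of what such a joint objective might look like, the code below combines a pixel-wise super-resolution reconstruction term with a Re-ID identity classification term; the L1/cross-entropy choices and the weight lam are hypothetical assumptions, not the paper's actual loss design.

```python
import torch
import torch.nn.functional as F

def joint_loss(sr_out, hr_target, id_logits, id_labels, lam=1.0):
    """Joint SR + Re-ID objective (illustrative): reconstructing appearance and
    discriminating identities are optimized together, so the SR module is
    driven to recover information useful for re-identification."""
    sr_loss = F.l1_loss(sr_out, hr_target)              # appearance reconstruction
    reid_loss = F.cross_entropy(id_logits, id_labels)   # identity discrimination
    return sr_loss + lam * reid_loss
```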
This work addresses the problem of single image dehazing, particularly towards the goal of better visibility restoration. Although extensive studies have been performed, almost all of them are heavily built on the atmospheric scattering model and usually fail to convincingly restore the visibility of densely hazy images. Inspired by the potential of deep learning, a new end-to-end approach is presented that restores a clear image directly from a hazy one, with an emphasis on real-world weather conditions. Specifically, an Encoder-Decoder is exploited as a generator for restoring the dehazed image in an attempt to preserve more image details. Interestingly, it is further found that the performance of the Encoder-Decoder can be largely boosted via the dual principles of discriminativeness advocated in this paper. On the one hand, the dark channel is re-explored in our framework, resulting in a discriminative prior formulated specifically for the dehazing problem. On the other hand, a critic is incorporated for adversarial training against the autoencoding-based generator, implemented via the Wasserstein GAN (generative adversarial network) regularized by the Lipschitz penalty. The proposed approach is trained on a synthetic dataset of hazy images and evaluated on both synthetic and real hazy images. Objective evaluation shows that the proposed approach performs competitively with state-of-the-art approaches, while outperforming them in terms of visibility restoration, especially in scenarios of dense haze.
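For reference, the sketch below computes the dark channel that is re-explored above as a discriminative prior; the 15-pixel patch size is the conventional choice rather than necessarily the paper's setting, and how the prior is injected into the training loss is not reproduced here.

```python
import numpy as np
from scipy.ndimage import minimum_filter

def dark_channel(img, patch=15):
    """Dark channel of an RGB image: per-pixel minimum over color channels,
    followed by a local minimum filter over a patch.  Haze-free outdoor images
    tend to have a near-zero dark channel, which is what makes it discriminative
    for dehazing."""
    min_rgb = img.min(axis=2)                      # min over color channels
    return minimum_filter(min_rgb, size=patch)     # min over local patch
```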
Traditional uniform blind deblurring methods have achieved great advances by utilizing various image priors that are expected to favor clean images over blurred ones and that act by regularizing the solution space. However, these methods fail in non-uniform blind deblurring because of inaccuracy in kernel estimation. Learning-based methods can generate clear images in an end-to-end way, potentially without an intermediate step of blur kernel estimation. To better deal with the non-uniform deblurring problem in dynamic scenes, this paper presents a new type of image prior complementary to the deep learning-based blind estimation framework. Specifically, inspired by the interesting discovery of dark and bright channels in dehazing, opposite-channel-based discriminative priors are developed and directly integrated into the loss of our advocated deep deblurring model so as to achieve more accurate and robust blind deblurring performance. It is worth noting that our deep model is formulated in the framework of the Wasserstein generative adversarial network regularized by the Lipschitz penalty (WGAN-LP), and the network structure is relatively simpler yet more stable than those of other deep deblurring methods. We evaluate the proposed method on a large-scale blur dataset with complex non-uniform motions. Experimental results show that it achieves state-of-the-art non-uniform blind deblurring performance both quantitatively and qualitatively.
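A hedged sketch of an opposite-channel (dark/bright) loss term is shown below: it penalizes deviations of the restored image's dark channel from 0 and of its bright channel from 1. The patch size and equal weighting of the two terms are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def extreme_channel_loss(pred, patch=15):
    """Dark/bright channel loss on a batch of restored images pred (N, 3, H, W):
    sharp natural images tend to have dark channel near 0 and bright channel
    near 1, so both deviations are penalized."""
    pad = patch // 2
    min_c = pred.min(dim=1, keepdim=True).values               # per-pixel min over RGB
    max_c = pred.max(dim=1, keepdim=True).values               # per-pixel max over RGB
    dark = -F.max_pool2d(-min_c, patch, stride=1, padding=pad)  # local min filter
    bright = F.max_pool2d(max_c, patch, stride=1, padding=pad)  # local max filter
    return dark.abs().mean() + (1.0 - bright).abs().mean()
```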
KEYWORDS: Super resolution, Visualization, Algorithm development, Image segmentation, Image processing, Telecommunications, Communication engineering, Current controlled current source, Image quality, Matrices
This paper proposes a new variational model for deblurring low-resolution images, a.k.a. single image nonparametric blind super-resolution. Specifically, a new type of adaptive heavy-tailed image prior is presented, incorporating both model discriminativeness and the effectiveness of salient-edge pursuit for accurate and reliable blur kernel estimation. With the assistance of appropriate non-blind super-resolution approaches, nonparametric blind super-resolution can be cast as a regularized functional minimization problem. An efficient numerical algorithm is derived by harnessing the alternating direction method of multipliers as well as the conjugate gradient method, with which alternating iterative estimation of the kernel and image is implemented in a multi-scale manner. Numerous experiments are conducted, with comparisons between the proposed approach and two recent state-of-the-art ones, demonstrating that the proposed approach better handles low-resolution images blurred by various possible kernels, e.g., Gaussian-shaped kernels of varying sizes, ellipse-shaped kernels of varying orientations, and curvilinear kernels of varying trajectories.
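As a rough sketch of the conjugate-gradient image update for a fixed kernel, the code below minimizes a quadratic data term plus a simple Tikhonov gradient regularizer, which stands in for the paper's adaptive heavy-tailed prior; boundary handling, the kernel update, and the multi-scale loop are all simplified away.

```python
import numpy as np
from scipy.signal import fftconvolve
from scipy.sparse.linalg import LinearOperator, cg

def nonblind_update(y, k, lam=1e-2):
    """Solve min_x ||k*x - y||^2 + lam*||grad x||^2 with conjugate gradient.
    The Tikhonov term is an illustrative stand-in for the paper's prior."""
    h, w = y.shape
    kf = k[::-1, ::-1]                         # flipped kernel approximates the adjoint

    def conv(x, f):
        return fftconvolve(x, f, mode='same')  # 'same'-size convolution

    def grad_sq(x):                            # approx. grad^T grad (Laplacian-like)
        gx = np.diff(x, axis=1, append=x[:, -1:])
        gy = np.diff(x, axis=0, append=x[-1:, :])
        return (-np.diff(gx, axis=1, prepend=gx[:, :1])
                - np.diff(gy, axis=0, prepend=gy[:1, :]))

    A = LinearOperator((h * w, h * w),
                       matvec=lambda v: (conv(conv(v.reshape(h, w), k), kf)
                                         + lam * grad_sq(v.reshape(h, w))).ravel())
    b = conv(y, kf).ravel()
    x, _ = cg(A, b, maxiter=50)
    return x.reshape(h, w)
```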
It is known that the actual performance of most previous face hallucination approaches drops dramatically when a very low-resolution tiny face is provided. Inspired by the latest progress in deep unsupervised learning, this paper works on tiny faces of size 16×16 pixels and magnifies them by a factor of 8× by exploiting the boundary equilibrium generative adversarial network (BEGAN). Besides imposing a pixel-wise L2 regularization term on the generative model, it is found that our targeted auto-encoding generator with residual blocks and skip connections is a key component for BEGAN to achieve state-of-the-art hallucination performance. The cropped CelebA face dataset is preliminarily used in our experiments. The results demonstrate that the proposed approach not only converges quickly and stably but is also robust to variations in pose, expression, illumination, and occlusion.
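The sketch below writes out the BEGAN objective with the additional pixel-wise L2 term mentioned above; the hyper-parameters (gamma, lambda_k, eta) and the L1 reconstruction measure inside the discriminator are illustrative assumptions rather than the paper's exact settings.

```python
import torch
import torch.nn.functional as F

def began_losses(ae_real, real, ae_fake, fake, sr_out, hr_target,
                 k, gamma=0.5, lambda_k=1e-3, eta=1.0):
    """BEGAN losses with an extra pixel-wise L2 term on the hallucinated faces.
    ae_real/ae_fake are the auto-encoding discriminator's reconstructions of the
    real and generated faces; k is the equilibrium control variable."""
    d_real = F.l1_loss(ae_real, real)       # reconstruction loss on real faces
    d_fake = F.l1_loss(ae_fake, fake)       # reconstruction loss on hallucinated faces
    loss_d = d_real - k * d_fake            # discriminator objective
    loss_g = d_fake + eta * F.mse_loss(sr_out, hr_target)  # generator + pixel-wise L2
    balance = gamma * d_real.item() - d_fake.item()
    k_next = float(min(max(k + lambda_k * balance, 0.0), 1.0))  # equilibrium update
    return loss_d, loss_g, k_next
```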
To reduce cupping artifacts and enhance contrast resolution in cone-beam CT (CBCT), this paper introduces a new approach that combines blind deconvolution with a level set method. The proposed method operates on the reconstructed image without requiring any additional physical equipment and is easily implemented on a single-scan acquisition. The results demonstrate that the algorithm is practical and effective at reducing cupping artifacts and enhancing contrast resolution, preserves the quality of the reconstructed image, and is very robust.
Image deconvolution is an ill-posed, low-level vision task that restores a clear image from a blurred and noisy observation. From the perspective of statistics, previous work on image deconvolution has formulated it as a maximum a posteriori or a general Bayesian inference problem, with Gaussian or heavy-tailed non-Gaussian prior image models (e.g., a Student's t distribution). We propose a Parseval frame-based nonconvex image deconvolution strategy that penalizes the l0-norm of the coefficients of multiple different Parseval frames. These frames provide flexible filtering operators that adaptively capture point singularities, curvilinear edges, and oscillating textures in natural images. The proposed optimization problem is solved by borrowing the idea of the recent penalty decomposition method, resulting in a simple and efficient iterative algorithm. Experimental results show that the proposed deconvolution scheme is highly competitive with state-of-the-art methods, in terms of both improvement in signal-to-noise ratio and visual perception.
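For concreteness, the coefficient subproblem produced by the penalty decomposition has a closed-form solution by hard thresholding, sketched below; the construction of the Parseval frames themselves and the image subproblem are not reproduced here.

```python
import numpy as np

def hard_threshold(coeffs, lam, rho):
    """Closed-form solution of min_c (rho/2)*||c - z||^2 + lam*||c||_0,
    where z = W x are the analysis coefficients under one Parseval frame W:
    keep a coefficient only if its magnitude exceeds sqrt(2*lam/rho)."""
    tau = np.sqrt(2.0 * lam / rho)
    return np.where(np.abs(coeffs) > tau, coeffs, 0.0)
```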
Many magnification algorithms have been proposed in past decades, most of which concentrate on the smooth reconstruction of edge structures. Edge reconstruction, however, can destroy corners, producing perceptually unpleasant rounded corner structures. In this work, corner shock filtering is designed for enhancing corners, complementing the well-known edge shock filtering, based on a new measure of corner strength and the theory of level-set motion under curvature. By combining directional diffusion, edge shock filtering, and corner shock filtering, a regularized partial differential equation (PDE) approach for magnification is proposed to simultaneously reconstruct edges and preserve corners. Moreover, the proposed PDE approach is also robust to random noise. Experimental results on both grayscale and color images confirm the effectiveness of our approach.
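As a point of reference for the shock-filtering components, the sketch below performs one explicit iteration of classical Osher-Rudin-style edge shock filtering; the new corner-strength measure and the curvature-driven corner shock term of the paper are not reproduced here, and the step size and smoothing scale are illustrative.

```python
import numpy as np
from scipy.ndimage import laplace, gaussian_filter

def edge_shock_step(u, dt=0.1, sigma=1.0):
    """One explicit step of edge shock filtering: u_t = -sign(L(G_sigma * u)) |grad u|,
    which sharpens edges by moving level sets toward zero-crossings of a
    smoothed Laplacian."""
    ux = 0.5 * (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1))   # central differences
    uy = 0.5 * (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0))
    grad_mag = np.hypot(ux, uy)
    edge_detector = laplace(gaussian_filter(u, sigma))            # smoothed Laplacian
    return u - dt * np.sign(edge_detector) * grad_mag
```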