Retinal image restoration by means of blind deconvolution
Andres G. Marrugo, Maria S. Millan, Michal Sorel, Filip Sroubek
Abstract
Retinal imaging plays a key role in the diagnosis and management of ophthalmologic disorders, such as diabetic retinopathy, glaucoma, and age-related macular degeneration. Because of the acquisition process, retinal images often suffer from blurring and uneven illumination. This problem may seriously affect disease diagnosis and progression assessment. Here we present a method for color retinal image restoration by means of multichannel blind deconvolution. The method is applied to a pair of retinal images acquired within a lapse of time, ranging from several minutes to months. It consists of a series of preprocessing steps to adjust the images so they comply with the considered degradation model, followed by the estimation of the point-spread function and, ultimately, image deconvolution. The preprocessing is mainly composed of image registration, uneven illumination compensation, and segmentation of areas with structural changes. In addition, we have developed a procedure for the detection and visualization of structural changes. This enables the identification of subtle developments in the retina not caused by variation in illumination or blur. The method was tested on synthetic and real images. Encouraging experimental results show that the method is capable of significant restoration of degraded retinal images.

1. Introduction

A fundus imaging device, or retinal camera, is a specialized low-power microscope with an attached camera designed to photograph the interior of the eye in conjunction with the optical system of the eye. Retinal imaging is acknowledged to be an important tool for both the detection and the monitoring of the progression of diseases affecting the eye, such as diabetic retinopathy, glaucoma, and age-related macular degeneration.1 The digital format provides a permanent record of the appearance of the retina at any point in time.2

The imaging procedure is usually carried out in two separate steps: image acquisition and diagnostic interpretation. Image quality is subjectively evaluated by the person capturing the images, who can sometimes mistakenly accept a low-quality image.3 Low-quality image occurrence rates of 3.7–19.7% have been reported in clinical studies,4, 5, 6 which is not a minor fact. A recent study by Abràmoff et al.7 using an automated system for the detection of diabetic retinopathy found that, out of 10,000 exams, 23% had insufficient image quality. Major sources of retinal image quality degradation are aberrations of the human eye, imperfections in the fundus camera optics, and improper camera adjustment, flash lighting, or focusing during the exam.8 Moreover, regardless of how well controlled the aforementioned parameters are, in practice it may not always be possible to obtain sufficiently good image quality because of additional factors, such as lens opacities in the examined eye, scattering, insufficient pupil dilation, or patient difficulty in steadily fixating a target in the camera (such as in patients suffering from amblyopia).3 Out of all possible retinal image degradations, some can be properly compensated via enhancement or restoration techniques (e.g., low contrast, nonuniform illumination, noise, and blur).2 However, this compensation also depends on the extent of the degradation. Regarding retinal image blurring, its main causes are relative camera-eye motion, inherent optical aberrations in the eye, and improper focusing.

In the past decade, many wavefront technologies with their origins in astronomy, such as adaptive optics (AO)9 and deconvolution from wavefront sensing (DWFS),10 have enabled the correction of monochromatic aberrations of the eye and have also created new opportunities to image the retina at unprecedented spatial resolution. However, AO-corrected and DWFS-based fundus imagers usually aim at resolving details at the level of individual photoreceptors, and thus have a field of view (FOV) of a couple of degrees and a high resolution on the order of 1 or 2 μm.11 Greater FOVs (∼5 deg) can be achieved12, 13 with additional hardware constraints, besides the fact that diffraction-limited imaging is not guaranteed because of an increase in aberrations.14 Nevertheless, this is still a considerably narrow FOV and a major disadvantage with clinical subjects because of the need to examine larger areas of the retina. On the other hand, regular non-AO-corrected fundus imagers used for routine checkups have a large FOV (typically 30 deg) at the expense of lower spatial resolution, which is still sufficient for the practical detection and progression assessment of observable clinical signs, such as microaneurysms, dot and blot hemorrhages, and exudates, among others. Consequently, large-FOV fundus imagers are the major imaging modality available to patients visiting an eye-care clinic. The method proposed herein aims to restore images from conventional large-FOV fundus imagers.

Among the normal retinal features, the blood vessel distribution exhibits a unique pattern in each individual and is highly stable in time. It is quite difficult to forge, and most common diseases do not change the pattern in a way that affects its topology. For that reason, much effort has been put into the development of security systems based on the blood vessel distribution as a biometric signal for authentication purposes.15 From this consideration, it is reasonable to assume the hypothesis that a pair of fundus images of the same retina, taken at different moments in time, contains enough common information to restore either of them by existing multichannel deconvolution techniques. We will demonstrate this fact later.

1.1. Overview of Proposed Approach

In this paper, we propose a new strategy for retinal image deblurring in which we consider the most general image degradation case: blurred retinal images acquired at different moments in time, ranging from minutes to months apart; hence, disease progression is also considered. The main reason for considering such long time lapses is the potential need to restore a degraded image acquired in the past, which may be the only one available from that stage of the disease. This problem arises quite often in clinical practice. A correct assessment of a patient's state evolution requires sharp images from all moments in time; the method proposed here enables such an opportunity. Disease progression characterization is embedded in the algorithm with the identification of areas of structural change (see Sec. 3.3).

Our restoration method is based on a technique called blind deconvolution (BD).16, 17 The goal of BD is to recover the original scene from a single image or a set of blurred images in the presence of a poorly determined or unknown point-spread function (PSF). The main assumption is that the blur can be described by a convolution of a sharp image with the unknown PSF. Restoration by deconvolution improves contrast and resolution of digital images (i.e., it is easier to resolve and distinguish features in the restored image). To avoid confusion with super-resolution, we briefly describe what we mean by resolution improvement. Digital deconvolution can be described as any scheme that sharpens up the PSF while the spatial frequency bandwidth remains unchanged. This means that the spatial frequency response and the two-point resolution are improved, but the cutoff frequency is unchanged;18 in the super-resolution context, the goal is to increase the cutoff frequency.

BD algorithms can take a single input [single-image blind deconvolution (SBD)] or multiple images [multichannel blind deconvolution (MBD)]. Despite the fact that SBD is one of the most ill-posed problems, there are several reliable SBD algorithms,19 although most of them require that the blurred image be governed by relatively strong edges, which is not the case here. In Sec. 4.1 we compare our approach to a recent state-of-the-art SBD method.20 The computational overhead of MBD (all of the preprocessing to adjust the time sequence of images) in comparison to SBD is practically negligible, and the robustness of MBD is far superior and worth the effort, because SBD fails to produce a valid restoration. By the same token, the additional processing enables the identification of structural changes in the retina over time, a central task in medical practice. As a result, we have chosen a multichannel approach for the restoration of blurred retinal images.

An overview of the proposed approach is described in Fig. 1. We consider as input two color retinal images acquired with a conventional fundus camera within a time lapse that can span from several minutes to months, given by routine patient checkups. The images correspond to the same retina but can differ with respect to illumination distribution, blur, and local structural changes given by pathological developments. These differences cannot solely be accounted for by the convolutional model described in Sec. 2. For that reason, the images must be preprocessed before the blind deconvolution stage can take place. We register the images and compensate for interimage illumination variation and structural changes. In fact, this preprocessing becomes a great opportunity to meet one of the main concerns of ophthalmologists when they visually compare fundus images of the same retina over time: to identify true structural or morphological changes pertaining to possible pathological damage and, consequently, to disregard other changes merely caused by variation of illumination or blur. Ours is a two-stage blind deconvolution strategy. The first stage consists of the estimation of the PSFs following a multichannel scheme, and the second stage is the image deconvolution, where we restore every image with its corresponding PSF independently. This has several advantages that will be explained in detail in Sec. 3.5. The multichannel scheme is based on the method described in Ref. 21, which has proved to work well in practice with sufficient experimental data. It is an alternating minimization scheme based on a maximum a posteriori (MAP) estimation, with the a priori distribution of blurs derived from the multichannel framework and the a priori distribution of the ideal sharp image defined by regularization with the total variation of the image.22 MAP is formulated as an optimization problem, where regularization terms are directly related to priors. Regularization involves the introduction of additional information, in the form of a penalty or restriction in the minimization routine, in order to solve an ill-posed problem (see Sec. 3.4). This provides good quality of restoration, significantly better than, for example, the Lucy–Richardson algorithm,23 which is still widely used in biomedical applications. We have modified the algorithm in Ref. 21 to leave out regions where the eye fundus has structurally changed (it only takes into account one image in these regions) with the use of a masking operator, similarly to the solution proposed in Ref. 24 within the super-resolution context. This enabled us to restore both degraded input images.

Fig. 1

Block diagram illustrating the proposed method. $\breve{z}_i$ are the unregistered degraded input images, and $\hat{u}_i$ are their restored versions. The other variables are intermediate outputs of every stage; their meaning is given in the text.


In this work, our novel contributions to the retinal image processing task are twofold. First, we propose a degradation model for time-series retinal images, which captures the underlying distortions resulting from instrument limitations and changes between patient visits; we are also able to identify and highlight such changes. Second, we propose a restoration strategy based on blind deconvolution that is able to obtain image enhancement and resolution improvement using inexpensive digital methods applied to images acquired with a conventional fundus camera.

2. Mathematical Model of Image Degradation

The unregistered input images, as shown in Fig. 1, are $\breve{z}_1$ and $\breve{z}_2$. After registration, we obtain two degraded registered images $z_1$ and $z_2$, which we model as originating from an ideal sharp image $u$. Mathematically, the degradation model is stated as

Eq. 1

\begin{eqnarray}
z_1 &=& u \ast h_1 + n_1, \nonumber\\
z_2 &=& (u\, k^{-1}) \ast h_2 + n_2,
\end{eqnarray}
where the asterisk denotes standard convolution, the $h_i$ are called convolution kernels or PSFs, and $k$ is a function accounting for relative local illumination change between images $z_1$ and $z_2$. For pixels where no illumination changes occur, $k \approx 1$. The noise $n_i$ is assumed to be additive Gaussian with zero mean in both images. In our case, the PSFs and $k$ comprise all radiometric degradations described above except structural changes in the eye, which are treated in Sec. 3.3. Despite the fact that we consider the PSFs to vary in time between the two image acquisitions, we assume them to be spatially invariant within each image. Because the FOV is 30 deg or less, this assumption can be accepted as a first approximation. The ideal sharp image $u$ is actually unknown, and its estimation is the purpose of this paper. Thus, to avoid confusion, the estimated (restored) image is denoted by $\hat{u}$. In Sec. 4.1, we test the performance of our method with synthetically degraded images, which means that we know $u$.
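As an illustration of the degradation model in Eq. 1, the following minimal Python sketch synthesizes a pair of degraded observations from a sharp image. The kernel sizes, noise level, and the smooth illumination factor k are arbitrary choices made only for this example and are not taken from the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)

def gaussian_psf(size, sigma):
    """Simple normalized Gaussian kernel standing in for an unknown PSF h_i."""
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    h = np.exp(-(xx**2 + yy**2) / (2.0 * sigma**2))
    return h / h.sum()

def degrade_pair(u, sigma_noise=1e-3):
    """Synthesize z_1 = u*h_1 + n_1 and z_2 = (u k^-1)*h_2 + n_2 as in Eq. (1)."""
    h1, h2 = gaussian_psf(15, 2.0), gaussian_psf(15, 3.0)
    ny, nx = u.shape
    # smooth illumination factor k (here a gentle horizontal gradient)
    k = 1.0 + 0.2 * np.linspace(-1.0, 1.0, nx)[None, :] * np.ones((ny, 1))
    z1 = fftconvolve(u, h1, mode="same") + sigma_noise * rng.standard_normal(u.shape)
    z2 = fftconvolve(u / k, h2, mode="same") + sigma_noise * rng.standard_normal(u.shape)
    return z1, z2, h1, h2, k
```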

3. Description of the Method

In this section, we describe every stage of the proposed method. To illustrate each stage, we use the images shown in Fig. 2. They were acquired using a nonmydriatic digital fundus camera system with a conventional xenon flash lighting source (in the visible spectrum). The fundus images are from a patient who suffered from age-related macular degeneration and were captured within a seven-month time lapse. They are color RGB 24-bit-depth fundus images of size 1500 × 1200 pixels stored in TIFF format. This is a general example in which the two images do not correspond exactly to the same object field, the illumination distribution across the images is not exactly the same, and there are some structural differences between them given by the pathological development in the macula (centered yellowish region).

Fig. 2

Color fundus images of a human eye affected by age-related macular degeneration. Images (a) and (b) were captured within a seven-month time lapse, and (a) was captured before (b).


3.1. Image Registration

Image registration is a procedure that consists of the spatial alignment of two or more images. General and application-specific image registration, such as in retinal imaging, has been investigated since the beginning of image-processing research. The interested reader is referred to the image registration review of Zitová and Flusser25 and the recent work by Lee et al.26 for objective validation of several retinal image registration algorithms. Image-registration techniques are usually divided into two groups: intensity-based and feature-based methods. Intensity-based methods have the drawback of poor performance under varying illumination conditions. Feature-based methods are robust to such effects but rely on accurate and repeatable extraction of the features. The retinal vasculature is known to provide a stable set of features for registration.

For registering the images, we use the robust dual-bootstrap iterative closest-point algorithm. We briefly describe it here; for a full description of the method, the reader is referred to Ref. 27. The vasculature in each image is automatically traced, starting from initial seed points extracted by 1-D edge detection and then recursively tracking the vessels using directional templates. The vessel branching and crossover points are used as landmarks to register the images to subpixel accuracy. The registration algorithm starts from initial low-order estimates that are accurate only in small image regions called bootstrap regions. The transformation is then refined using constraints in the region, and the bootstrap region is expanded iteratively. The algorithm stops when the bootstrap region expands to cover the overlap between the images, and it uses a 12-parameter quadratic mapping. This transformation model includes rotation, scale, translation, a shearing term, and a quadratic term that describes the spherical shape of the retina. We refer the interested reader to Ref. 28 for details on the model derivation. This registration algorithm is very robust to local changes and low overlap between images, as demonstrated by its high success rate on test images with at least one common landmark point and overlaps even as low as 35%.27 Even though the accuracy reported in Ref. 27 is subpixel, in our case of degraded images it can be slightly worse without compromising the outcome. Minor local misregistration errors may occur when landmark points do not match precisely, but they will not be taken into account in the restoration because they will be masked out before the PSF estimation and image deconvolution stages (see Sec. 3.3).
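For concreteness, the 12-parameter quadratic transformation maps a point (x, y) of one image to (x', y') of the other through second-order polynomials. The following sketch only applies such a mapping; the coefficient vectors a and b are assumed to come from the registration step and are not estimated here.

```python
import numpy as np

def quadratic_map(x, y, a, b):
    """12-parameter quadratic mapping: a and b each hold 6 coefficients for
    the monomials [1, x, y, x^2, x*y, y^2] of x' and y', respectively."""
    basis = np.stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    x_new = np.tensordot(a, basis, axes=1)
    y_new = np.tensordot(b, basis, axes=1)
    return x_new, y_new

# usage sketch: resample image z2 onto the grid of z1 (coefficients a, b given)
# yy, xx = np.mgrid[0:z1.shape[0], 0:z1.shape[1]].astype(float)
# xs, ys = quadratic_map(xx, yy, a, b)
# z2_reg = scipy.ndimage.map_coordinates(z2, [ys, xs], order=1)
```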

To confirm the registration outcome, the pair of images before and after registration is shown in Fig. 3 in checkerboard representation, where the two images are merged in a chesslike pattern in which each square alternates information from one image to the other. Note how, after registration, the images have been correctly aligned, especially the blood vessel distribution.
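A checkerboard mosaic of this kind can be produced with a few lines of code; the square size below is an arbitrary choice for illustration.

```python
import numpy as np

def checkerboard_mosaic(img_a, img_b, square=64):
    """Merge two equally sized images in a chesslike pattern, alternating
    square blocks from each image (as in Fig. 3)."""
    rows, cols = np.indices(img_a.shape[:2])
    take_a = ((rows // square) + (cols // square)) % 2 == 0
    out = img_b.copy()
    out[take_a] = img_a[take_a]
    return out
```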

Fig. 3

Registration of images from Fig. 2 in checkerboard representation. (a) Before and (b) after registration.


3.2. Compensation of Uneven Illumination

Despite controlled conditions in retinal image acquisition, such as optical stops to prevent glare and provide diffuse illumination, there are many patient-dependent aspects that are difficult to control and that mainly affect the illumination component with gradual nonuniform spatial variations. Some of the contributing factors are (i) the curved surface of the retina (as a consequence, all regions cannot be illuminated uniformly); (ii) the requirement of either a naturally or an artificially dilated pupil (the degree of dilation is highly variable across patients); (iii) unexpected movements of the patient's eye; and (iv) the presence of diseases. This nonuniform illumination across the image results in shading artifacts and vignetting, which hinder both quantitative image analysis and the reliable operation of subsequent global operators.

In our model, described by Eq. 1, the relative changes in intensity between the two fundus images cannot be described exclusively by convolution with different PSFs and must be compensated by $k$. A number of general-purpose techniques have been investigated to attenuate the variation of illumination. However, most techniques are oriented toward single-image compensation,2 for instance, using the red channel to estimate the background illumination.29 Therefore, no consistency between two images is guaranteed. In our case, the uneven illumination can be compensated by properly adjusting the intensity values of one image to approximately match those of the other while satisfying a predetermined illumination model. This can be carried out if the blurring is not too large and the illumination changes smoothly, which is usually the case for fundus images. This assumption can be expressed mathematically as

\[
(k^{-1} \cdot u) \ast h \approx k^{-1}\, (u \ast h).
\]
The illumination of the fundus is formed by a slowly varying light field over a smooth surface; thus, it can be modeled by a low-order parametric surface. In Ref. 30, a fourth-order polynomial was used to effectively model the light pattern formed by an illumination source passing through the attenuating ocular media. Here, we use a similar approach, but fit the surface with respect to both images. The parametric surface fitting can then be formulated as

Eq. 2

\begin{equation}
\arg\min_k \, \Vert z_1(x,y) - k(x,y)\, z_2(x,y) \Vert,
\end{equation}
where $k(x,y) = \alpha_{15}\, y^4 + \alpha_{14}\, y^3 x + \cdots + \alpha_2\, y + \alpha_1$, and $z_1$, $z_2$ are the registered fundus images. We minimize Eq. 2 in the least-squares sense to estimate the 15 parameters. This procedure can be carried out using either the luminance channel or the green channel, as is usual in retinal image processing.31 Here, we have used the green channel. Because the illumination is compensated globally by the polynomial function $k$, the structural changes remain unaffected. The interpretation of $k$ from Eq. 2 is straightforward. If the registered images $z_1$ and $z_2$ had neither illumination changes nor structural changes, then $k \approx 1$ throughout the common object field. In Fig. 4, we show the resulting $k(x,y)$ for the images in Fig. 2. The different shades of gray indicate the average contrast and intensity difference between the two images. From the image, it can be seen that most areas have similar intensity values except for the upper left part (dark region).
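A minimal least-squares fit of the fourth-order polynomial surface k(x, y) can be sketched as follows; it assumes the two registered green channels are given as floating-point arrays and simply solves Eq. 2 with numpy.

```python
import numpy as np

def poly4_basis(x, y):
    """All 15 monomials x^i y^j with i + j <= 4, evaluated per pixel."""
    terms = [x**i * y**j for i in range(5) for j in range(5) if i + j <= 4]
    return np.stack(terms, axis=-1)                       # shape (..., 15)

def fit_illumination(z1_green, z2_green):
    """Least-squares estimate of k(x, y) such that z1 ~ k * z2 (Eq. 2)."""
    ny, nx = z1_green.shape
    yy, xx = np.mgrid[0:ny, 0:nx] / float(max(ny, nx))    # normalized coordinates
    B = poly4_basis(xx, yy).reshape(-1, 15)
    A = B * z2_green.reshape(-1, 1)                       # columns scaled by z2
    # for large images, a random subset of pixels keeps the linear system small
    alpha, *_ = np.linalg.lstsq(A, z1_green.ravel(), rcond=None)
    return (B @ alpha).reshape(ny, nx), alpha
```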

Fig. 4

Illumination compensation function k(x, y).


3.3. Segmentation of Areas with Structural Changes

The pathological region is actually a structural change and cannot be taken as a variation of illumination. Image change analysis is of interest in various fields, and many algorithms have been developed for change detection.32, 33 A survey of change detection methods can be found in Ref. 34. An initial step toward identifying these changes is to compute the difference between the two registered images, including the illumination compensation, as

Eq. 3

\begin{equation}
\Delta z(x,y) = z_1(x,y) - k(x,y)\, z_2(x,y).
\end{equation}
The difference image is shown in absolute value in Fig. 5a. To better understand this result, in Fig. 5b we show one of the retinal images in gray scale, where the pixels related to structural changes are highlighted in pseudocolor. This image constitutes an important output of our algorithm. The structural changes can be visualized and detected from the difference image $\Delta z(x,y)$ by a statistical significance test, in the same fashion as in Ref. 30. First, structural changes are often associated with a group of pixels; thus, the change decision at a given pixel $j$ should be based on a small block of pixels in the neighborhood of $j$, denoted as $w_j$. Second, in the absence of any change, the difference can be assumed to be due to noise alone. Therefore, the decision as to whether or not a change has occurred corresponds to choosing one of two competing hypotheses: the null hypothesis $\mathcal{H}_0$ or the alternative hypothesis $\mathcal{H}_1$, corresponding to the no-change and change decisions, respectively. Assuming a Gaussian distribution for the difference values, the changes can be identified by comparing the normalized sum of squared differences within the neighborhood $w_j$ to a predetermined threshold $\tau$, as described by Aach and Kaup.32 The test is carried out as follows:

Eq. 4

\begin{equation}
\Omega_j = \frac{1}{\sigma_n^2} \sum_{(x,y)\,\in\, w_j} \Delta z(x,y)^2 \;\mathop{\gtrless}_{\mathcal{H}_0}^{\mathcal{H}_1}\; \tau,
\end{equation}
where $\sigma_n$ is the noise standard deviation of the difference in the no-change regions. The threshold $\tau$ is derived from the fact that $\Omega_j$ follows a $\chi^2$ distribution with $N$ degrees of freedom, where $N$ is the number of pixels in the window $w_j$. It can be obtained for a particular false-positive rate $\alpha$ from the $\chi^2$ tables. The choice of an appropriate $\alpha$ is guided both by mathematical considerations (a 5% level for statistical significance is commonplace35) and by the consequences that false alarms and misses might have. In this case, the effect of false alarms is unimportant because there would still be a large number of remaining pixels from which to compute the PSFs. On the other hand, misses do have a considerable impact in view of the fact that these pixels do not fulfill the convolutional model. As a result, $\alpha$ values of <0.05 might yield a more accurate change detection at the expense of possible undesirable misses. For all experiments, we use a 3 × 3 window ($N$ = 9) and set $\alpha$ = 0.05. The parameter $\sigma_n$ was estimated by manually picking out no-change regions from a training set of images, computing Eq. 3, and calculating the standard deviation inside these regions. Using Eq. 4 at each pixel, we can determine a change mask between the images or, conversely, a no-change mask. Given that, for the MBD procedure, we are interested in estimating the PSF from the no-change regions, the masking function $m$ is obtained directly from the no-change mask of the significance test. The mask is shown in Fig. 5c. Note that the pathological region is the main cause of structural changes.
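The significance test of Eq. 4 reduces to a local sum of squared differences compared against a χ² quantile. A minimal sketch with scipy is given below, with the window size and α as in the text and σ_n assumed to be known in advance.

```python
import numpy as np
from scipy.ndimage import uniform_filter
from scipy.stats import chi2

def change_mask(z1, k, z2, sigma_n, alpha=0.05, win=3):
    """Per-pixel significance test of Eq. (4); returns True where a
    structural change is detected (hypothesis H1)."""
    dz = z1 - k * z2                                     # Eq. (3)
    local_ss = uniform_filter(dz**2, size=win) * win**2  # sum of squares in w_j
    omega = local_ss / sigma_n**2
    tau = chi2.ppf(1.0 - alpha, df=win**2)               # threshold from chi^2 tables
    return omega > tau

# the masking function m of Secs. 3.3-3.4 is the complement (no-change mask):
# m = ~change_mask(z1, k, z2, sigma_n)
```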

Fig. 5

Intermediate outputs from the algorithm: (a) image difference Δz(x, y) in absolute value, (b) image difference in pseudocolor on top of gray-scale fundus image, and (c) mask m for avoiding areas with structural changes.


3.4. Point-Spread Function Estimation

In this section, we describe the basic principles of the blind deconvolution method used for the estimation of the PSFs. For this purpose, we have chosen one of the best-performing MBD methods.21 A MATLAB implementation of this method is available on the authors' website.36 The algorithm can be viewed as a Bayesian MAP estimation of the most probable sharp image and blur kernels. For our purposes, we used a modification of the original method that ignores regions affected by structural changes, which improves the stability and precision of the computation. Without this modification, represented by the mask $m$ in Eq. 5, the algorithm does not work reliably. The algorithm can be described as the minimization of the functional

Eq. 5

\begin{eqnarray}
&&\arg\min_{u,h_1,h_2} \left( \frac{1}{2}\,\Vert u \ast h_1 - z_1 \Vert^2 + \frac{1}{2}\,\Vert m(u \ast h_2 - k z_2)\Vert^2 \right. \nonumber\\
&&\quad \left. +\; \lambda_u \int |\nabla u| \, \mathrm{d}x\, \mathrm{d}y + \lambda_h \Vert m(z_1 \ast h_2 - k z_2 \ast h_1)\Vert^2 \right), \nonumber\\
&&\quad h_1, h_2 \ge 0,
\end{eqnarray}
with respect to the latent image $u$ and blur kernels $h_1$ and $h_2$. The first and second terms measure the difference between the input blurred images and the searched image $u$ blurred by kernels $h_1$ and $h_2$. The size of this difference is measured by the $L_2$ norm $\Vert\cdot\Vert$ and should be small for the correct solution; ideally, it should correspond to the noise variance in the given image. The function $k$ compensates for uneven illumination, as described in Sec. 3.2. The value of the masking function $m$ is 1 in the valid points [white in Fig. 5c] and 0 in the pixels where the eye fundus has structurally changed. Either of the first two terms could be masked, but not both at the same time, because the latent image $u$ cannot have pixels with no value at all; hence, these pixels must take values from at least one of the two images. In this case, $z_2$ is masked; as a result, these pixels take values from the first term. The two remaining terms are regularization terms with positive weighting constants $\lambda_u$ and $\lambda_h$. The third term is simply the total variation of image $u$. It improves the stability of the minimization and, from the statistical viewpoint, incorporates prior knowledge about the solution. The last term is a condition linking the PSFs $h_1$ and $h_2$ of both images, which also improves the numerical stability of the minimization.

The functional is alternately minimized in the subspaces corresponding to the image and the PSFs. The advantage of this scheme lies in its simplicity; this alternating minimization approach is actually a variation of the steepest-descent algorithm. The minimization in the PSF subspace is equivalent to the solution of a system of linear equations in the least-squares sense with the non-negativity constraint, solved in our implementation by the MATLAB fmincon function. The nonblind deconvolution, realized by the minimization in the image subspace, is solved by a half-quadratic iterative scheme,37 replacing the total variation by $\int \sqrt{|\nabla u|^2 + \epsilon^2}$, where $\epsilon$ is an auxiliary variable in the range $0 < \epsilon \ll 1$. It is a small relaxation parameter that makes the total variation differentiable around zero. A typical value for $\epsilon$ is $10^{-1}$.

The main difference with respect to the original method21 is the introduction of the masking function $m$, which is computed at the beginning of the algorithm, as described in Sec. 3.3. During the minimization, the multiplication by $m$ is included in the operator corresponding to the convolution with $u$ (in the PSF minimization step) and in the operator corresponding to the convolution with $h_2$ (in the image minimization step). Because of the simplicity of this masking operation, the speed is practically the same as that of the original algorithm. In addition, even though we work with a complicated set of pixels, we can use the standard operation of convolution, which can eventually be sped up using the fast Fourier transform (FFT).
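To convey the alternating structure only, the following toy sketch iterates between a closed-form Fourier-domain PSF update and a Wiener-like image update. It replaces total variation and the multichannel blur prior of Eq. 5 with simple quadratic penalties and omits the mask m, the illumination factor k, and the constrained least-squares PSF step of the actual method; all parameter values are placeholders, and this is not the algorithm of Ref. 21.

```python
import numpy as np

def psf2otf(h, shape):
    """Zero-pad a small kernel to image size and center it at the origin."""
    H = np.zeros(shape)
    H[:h.shape[0], :h.shape[1]] = h
    H = np.roll(H, (-(h.shape[0] // 2), -(h.shape[1] // 2)), axis=(0, 1))
    return np.fft.fft2(H)

def toy_alternating_mbd(z1, z2, psf_size=15, iters=30, lam_u=1e-2, lam_h=1e-2):
    """Toy alternating minimization (PSF step / image step) in the Fourier
    domain with quadratic regularizers only."""
    assert psf_size % 2 == 1
    Z = [np.fft.fft2(z1), np.fft.fft2(z2)]
    u = 0.5 * (z1 + z2)                        # initial sharp-image estimate
    h = [np.full((psf_size, psf_size), 1.0 / psf_size**2) for _ in range(2)]
    c0, c1, r = z1.shape[0] // 2, z1.shape[1] // 2, psf_size // 2
    for _ in range(iters):
        U = np.fft.fft2(u)
        # --- PSF step: per-channel quadratic problem, closed form per frequency
        for i in range(2):
            Hi = np.conj(U) * Z[i] / (np.abs(U)**2 + lam_h)
            hi = np.fft.fftshift(np.real(np.fft.ifft2(Hi)))
            hi = hi[c0 - r:c0 + r + 1, c1 - r:c1 + r + 1]   # crop to kernel support
            hi = np.clip(hi, 0.0, None)                     # crude h >= 0 surrogate
            h[i] = hi / (hi.sum() + 1e-12)
        # --- image step: Wiener-like combination of both channels
        H = [psf2otf(h[i], z1.shape) for i in range(2)]
        num = sum(np.conj(H[i]) * Z[i] for i in range(2))
        den = sum(np.abs(H[i])**2 for i in range(2)) + lam_u
        u = np.real(np.fft.ifft2(num / den))
    return u, h
```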

3.5. Image Restoration

The aim of our algorithm is to restore both images as much as possible. Note that from Eq. 5 the restored version of $z_1$ (i.e., $\hat{u}_1$) is obtained because $z_2$ is masked; $\hat{u}_2$ could be obtained by minimizing Eq. 5 again with fixed PSFs and masking $z_1$. This procedure has the disadvantage that both images are restored only within the common object field. Therefore, an appropriate solution is to restore each image $z_i$ via single-channel deconvolution with its corresponding PSF $h_i$ (estimated in the previous step) by minimizing the functional

Eq. 6

\begin{equation}
\arg\min_{u_i} \left( \Vert u_i \ast h_i - z_i \Vert^2 + \lambda_u \int |\nabla u_i| \, \mathrm{d}x\, \mathrm{d}y \right).
\end{equation}
This approach provides a further advantage in that the PSF estimation can be computed from a relatively small area of the common object field, provided that there are retinal structures within, thus greatly reducing the computational cost of the combined PSF estimation plus image deconvolution.
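A minimal gradient-descent sketch of the single-channel deconvolution of Eq. 6 is shown below, using the relaxed total variation sqrt(|∇u|² + ε²) mentioned in Sec. 3.4 instead of the half-quadratic scheme of the paper. Images are assumed to be scaled to [0, 1], and the step size and weights are illustrative only; as discussed next, each RGB channel can be deconvolved with the green-channel PSF.

```python
import numpy as np
from scipy.signal import fftconvolve

def tv_deconvolve(z, h, lam=2e-3, eps=1e-1, step=0.5, iters=200):
    """Plain gradient descent on Eq. (6) with relaxed total variation."""
    u = z.copy()
    h_flip = h[::-1, ::-1]                         # adjoint of the blur operator
    for _ in range(iters):
        resid = fftconvolve(u, h, mode="same") - z
        data_grad = 2.0 * fftconvolve(resid, h_flip, mode="same")
        # forward differences of u
        ux = np.diff(u, axis=1, append=u[:, -1:])
        uy = np.diff(u, axis=0, append=u[-1:, :])
        mag = np.sqrt(ux**2 + uy**2 + eps**2)
        # divergence of (grad u / mag) via backward differences
        px, py = ux / mag, uy / mag
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u - step * (data_grad - lam * div)
    return np.clip(u, 0.0, 1.0)

# usage sketch: deconvolve each RGB channel with the green-channel PSF h_g
# restored = np.dstack([tv_deconvolve(img[..., c], h_g) for c in range(3)])
```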

Finally, it should also be noted that the whole process of PSF estimation plus deconvolution could be computed for every channel of the RGB fundus image. However, tests showed no real advantage in estimating the PSF for each channel, despite the increase in computational burden. Moreover, the most suitable channel for PSF estimation is the green one because it provides the best contrast, whereas the blue channel encompasses the wavelengths most scattered and absorbed by the optical media of the eye and hence has very low energy and a relatively high level of noise. As a result, the RGB deconvolved fundus image was computed by deconvolving every R, G, and B channel with the green-channel PSF.

4. Experiments and Results

4.1. Synthetic Images

In this section, we use synthetically degraded retinal images to test the performance of the proposed method. We use blurred signal-to-noise ratio (BSNR) to measure the noise contained in the degraded image, and improvement in signal-to-noise ratio (ISNR) to measure the quality of restored images.38 They are defined as follows:

\begin{eqnarray*}
\mathrm{BSNR} &=& 20 \log_{10} \left( \frac{\Vert z \Vert}{\Vert n \Vert} \right),\\
\mathrm{ISNR} &=& 20 \log_{10} \left( \frac{\Vert u - z \Vert}{\Vert u - \hat{u} \Vert} \right),
\end{eqnarray*}
where $u$, $z$, $\hat{u}$, and $n$ are the original image, the degraded image, the restored image, and the noise vector, respectively. For ISNR, higher means better restoration, whereas for BSNR, lower means a noisier degraded image. These metrics are mainly used to provide an objective standard for comparison to other techniques, and they can only be used in simulated cases.
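Both metrics follow directly from these definitions; a small sketch using Frobenius norms over the image arrays is:

```python
import numpy as np

def bsnr(z, n):
    """Blurred signal-to-noise ratio of the degraded image z given the noise n."""
    return 20.0 * np.log10(np.linalg.norm(z) / np.linalg.norm(n))

def isnr(u, z, u_hat):
    """Improvement in SNR of the restoration u_hat over the degraded image z,
    relative to the original image u."""
    return 20.0 * np.log10(np.linalg.norm(u - z) / np.linalg.norm(u - u_hat))
```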

The first example is shown in Fig. 6, where the degraded images are synthesized from a sharp real image and the kernels shown in Figs. 6c and 6d, plus Gaussian noise with zero mean and variance $\sigma^2 = 10^{-6}$ (BSNR = 40 dB). The recovered image and PSFs are shown in Fig. 7. The restoration provides an ISNR = 4.45 dB. In this case of synthetically degraded images, the masking operation of Sec. 3.3 was not applied. Visual inspection of the details shown in Fig. 8 clearly reveals the accuracy of the method. Under these circumstances, the algorithm is able to produce a significant restoration of fine details, such as the small blood vessels around the optic disc.

Fig. 6

(a, b) Degraded images (BSNR = 40 dB) and (c, d) PSFs.


Fig. 7

(a) Restored image (ISNR = 4.45 dB) and (b) Estimated PSFs.


Fig. 8

Details from (a) degraded image, (b) restored image, and (c) original image.


To further test our approach under a more realistic degradation, we produced an initial geometrical distortion via a quadratic model26, 28 like the one used for registration (Fig. 9). After the geometric distortion, the degradation (blur plus noise) is applied to both images (BSNR = 40 dB). They are then registered, and the restored image is recovered via MBD. The restored image and the estimated PSFs are shown in Fig. 10. The ISNR is slightly lower (4.11 dB) than in the previous case, but still sufficient to produce a significant restoration. To corroborate our assumption that MBD methods are better suited for this type of image, we tried to restore the image with a recent SBD method proposed in Ref. 20. The result is shown in Fig. 10e; visual inspection reveals that it does not follow the true nature of the blurring, producing artifacts around the blood vessels, and yields a poor restoration, as evidenced by an ISNR of −0.72 dB.

Fig. 9

(a) Original image and (b) geometrically distorted image.


Fig. 10

Image restoration from degraded and geometrically distorted images: (a) restored image by the proposed method (ISNR = 4.11 dB); (b) estimated PSFs; (c) image detail; (d) restored image by the method in Ref. 20 (ISNR = −0.72 dB); and (e) image detail.


Concerning parameter setting, in Fig. 11 we show the sensitivity of the ISNR of the restored images to the two parameters $\lambda_u$ and $\lambda_h$ in the minimization of Eq. 5. In Fig. 11a, we fix the value of $\lambda_h$ to 10 and check the ISNR of the restored images for different initial values of $\lambda_u = \{10^0, 10^1, 10^2, 10^3, 10^4, 10^5\}$. The best restoration is obtained with $\lambda_u = 10^3$; thus, in Fig. 11b we carry out the same procedure by fixing the value of $\lambda_u$ to $10^3$ and checking the ISNR of the restored image for different values of $\lambda_h = \{1, 10, 20, 30, 40, 50\}$. The best restoration is obtained with an initial value of $\lambda_h = 30$. For this type of image, when scaled to the interval 〈0, 1〉, we find $20 < \lambda_h < 40$ to be a suitable range to produce an optimal restoration.
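The sweep just described is a simple coordinate-wise search. A sketch is given below, where `restore` stands for the blind deconvolution of Eq. 5 with given regularization weights and `isnr` is the metric of Sec. 4.1; both names are placeholders, not functions defined in the paper.

```python
import numpy as np

def sweep_parameters(z1, z2, u_true, restore, isnr):
    """Coordinate-wise sweep of (lambda_u, lambda_h) as in Fig. 11:
    first vary lambda_u with lambda_h fixed, then vary lambda_h."""
    lam_u_grid = [10.0**p for p in range(6)]        # 10^0 ... 10^5
    lam_h_grid = [1, 10, 20, 30, 40, 50]
    score_u = [isnr(u_true, z1, restore(z1, z2, lam_u=lu, lam_h=10))
               for lu in lam_u_grid]
    best_lu = lam_u_grid[int(np.argmax(score_u))]
    score_h = [isnr(u_true, z1, restore(z1, z2, lam_u=best_lu, lam_h=lh))
               for lh in lam_h_grid]
    best_lh = lam_h_grid[int(np.argmax(score_h))]
    return best_lu, best_lh
```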

Fig. 11

Test on parameter setting (BSNR = 40 dB). Average ISNR with respect to different initial values of (a) λu and (b) λh.


4.2. Real Images

The experiments shown in this section aim to demonstrate the applicability of the proposed method to retinal image deblurring in real scenarios. Three different cases are shown in Fig. 12, including the retinal images that were used to illustrate the method (Fig. 2). The estimated PSFs are shown at the bottom of the restored images. All images contain some pathological damage and were acquired within considerable lapses of time (several months). In all examples, the resolution improvement can be visually assessed by the clearer distinction of details, such as small blood vessels, and by the increased sharpness of edges, especially in the pathological areas. We emphasize that these images correspond to real routine patient follow-up and were not intentionally degraded. From a clinical viewpoint, the enhancement can be used for a more precise assessment of a patient's state. Likewise, the images are more suitable for subsequent processing, such as the detection of retinal pathology.29, 39

Fig. 12

Original and restored color retinal images; (a–c) indicate three separate cases arranged from left to right following our notation for degraded ($z_i$) and restored ($\hat{u}_i$) images. The images are cropped to represent the region of interest given by the pathological area. The estimated PSF is shown at the bottom of the restored image. Video files are also included for change detection in cases (a) and (b). (Video 1, QuickTime, 0.5 MB); (Video 2, QuickTime, 0.4 MB)


In Fig. 13, the same images are shown but in gray scale to highlight the areas of structural change in pseudocolor. As mentioned earlier, this is an important result for its potential impact in medical practice. Subtle changes can be identified by this approach, such as the ones in Fig. 13b and the hemorrhage in the region of the optic disk in Fig. 13c. Another technique to rapidly identify changes from the two images is by alternating both restored images in a video sequence. Videos 1 and 2 (Fig. 12) correspond to the first two real cases.

Fig. 13

Visualization of structural changes in pseudocolor for the images of Fig. 12.


5. Conclusion

The main purpose of this paper has been to investigate a new approach for retinal image restoration based on multichannel blind deconvolution. In addition, we developed a strategy for identifying and highlighting areas of structural change with a possible relation to pathological damage. We have verified that fundus images of the same retina acquired over time contain enough common information to be restored with the proposed method. The method consists of a series of preprocessing steps to adjust the images so they comply with the convolutional model, followed by the final stages of PSF estimation and deconvolution. The synthetically degraded images enabled us to test the performance of the proposed approach and also to compare it with a state-of-the-art single-channel blind deconvolution method. Results showed a remarkable enhancement, evidenced by the increased visibility of details such as small blood vessels or pathological areas. The proposed method provides a novel practical approach for retinal image enhancement and, equally important, for the analysis of retinal changes over time. Central to the task of determining disease progression is the distinction of true change from variability.

The results of this study open several new avenues for research and applications. A possible application is the restoration of stereo retinal images for depth estimation. Most stereo pairs do not satisfy the brightness constancy assumption and, in addition, exhibit blurring of some parts of the images because photographers find it difficult to focus two images simultaneously. Finally, research can also be conducted to compare our approach with deconvolution-from-wavefront-sensing fundus imagers, to determine whether our method could be a suitable and inexpensive alternative.

Acknowledgments

This research has been partly funded by the Spanish Ministerio de Ciencia e Innovación y Fondos FEDER (Project No. DPI2009-08879). Financial support was also provided by the Czech Ministry of Education under the Project No. 1M0572 (Research Center DAR). The authors are also grateful to the ophthalmologist Jordi Monés, M. D., from the Institut de la Màcula i la Retina, Barcelona, for providing the images. The first author also thanks the Spanish Ministerio de Educación for an FPU doctoral scholarship.

References

1. J. Kanski, Diseases of the Ocular Fundus, Elsevier/Mosby, New York (2005).

2. R. Winder, P. Morrow, I. McRitchie, J. Bailie, and P. Hart, "Algorithms for digital image processing in diabetic retinopathy," Comput. Med. Imag. Graphics, 33 (8), 608–622 (2009). https://doi.org/10.1016/j.compmedimag.2009.06.003

3. H. Bartling, P. Wanger, and L. Martin, "Automated quality evaluation of digital fundus photographs," Acta Ophthalmol., 87 (6), 643–647 (2009). https://doi.org/10.1111/j.1755-3768.2008.01321.x

4. A. Agrawal and M. A. McKibbin, "Technical failure in photographic screening for diabetic retinopathy," Diabet. Med., 20 (9), 777–780 (2003). https://doi.org/10.1046/j.1464-5491.2003.01032.x

5. M. C. Boucher, J. A. Gresset, K. Angioi, and S. Olivier, "Effectiveness and safety of screening for diabetic retinopathy with two nonmydriatic digital images compared with the seven standard stereoscopic photographic fields," Can. J. Ophthalmol., 38 (7), 557–568 (2003).

6. H. M. Herbert, K. Jordan, and D. W. Flanagan, "Is screening with digital imaging using one retinal view adequate?," Eye, 17 (4), 497–500 (2003). https://doi.org/10.1038/sj.eye.6700409

7. M. D. Abràmoff, M. Niemeijer, M. S. A. Suttorp-Schulten, M. A. Viergever, S. R. Russell, and B. van Ginneken, "Evaluation of a system for automatic detection of diabetic retinopathy from color fundus photographs in a large population of patients with diabetes," Diabetes Care, 31 (2), 193–198 (2008). https://doi.org/10.2337/dc07-1312

8. A. V. Larichev, N. G. Irochnikov, P. Ivanov, and A. V. Kudryashov, "Deconvolution of color retinal images with wavefront sensing," Proc. SPIE, 4251, 102–116 (2001). https://doi.org/10.1117/12.427879

9. J. C. Christou, A. Roorda, and D. R. Williams, "Deconvolution of adaptive optics retinal images," J. Opt. Soc. Am. A, 21, 1393–1401 (2004). https://doi.org/10.1364/JOSAA.21.001393

10. J. Primot, G. Rousset, and J. C. Fontanella, "Deconvolution from wave-front sensing: a new technique for compensating turbulence-degraded images," J. Opt. Soc. Am. A, 7, 1598–1608 (1990). https://doi.org/10.1364/JOSAA.7.001598

11. D. Catlin and C. Dainty, "High-resolution imaging of the human retina with a Fourier deconvolution technique," J. Opt. Soc. Am. A, 19, 1515–1523 (2002). https://doi.org/10.1364/JOSAA.19.001515

12. W. Wang, M. C. W. Campbell, M. L. Kisilak, and S. R. Boyd, "Optical design considerations when imaging the fundus with an adaptive optics correction," Proc. SPIE, 7099, 70990C (2008). https://doi.org/10.1117/12.807121

13. S. A. Burns, R. Tumbar, A. E. Elsner, D. Ferguson, and D. X. Hammer, "Large-field-of-view, modular, stabilized, adaptive-optics-based scanning laser ophthalmoscope," J. Opt. Soc. Am. A, 24, 1313–1326 (2007). https://doi.org/10.1364/JOSAA.24.001313

14. P. Bedggood, M. Daaboul, R. Ashman, G. Smith, and A. Metha, "Characteristics of the human isoplanatic patch and implications for adaptive optics retinal imaging," J. Biomed. Opt., 13, 024008 (2008). https://doi.org/10.1117/1.2907211

15. M. S. Millán, E. Pérez-Cabré, and B. Javidi, "Multifactor authentication reinforces optical security," Opt. Lett., 31 (6), 721–723 (2006). https://doi.org/10.1364/OL.31.000721

16. D. Kundur and D. Hatzinakos, "Blind image deconvolution," IEEE Signal Process. Mag., 13 (3), 43–64 (1996). https://doi.org/10.1109/79.489268

17. P. Campisi and K. Egiazarian, Blind Image Deconvolution: Theory and Applications, CRC Press, Boca Raton (2007).

18. C. Sheppard, "Fundamentals of superresolution," Micron, 38 (2), 165–169 (2007). https://doi.org/10.1016/j.micron.2006.07.012

19. A. Levin, Y. Weiss, F. Durand, and W. Freeman, "Understanding and evaluating blind deconvolution algorithms," Proc. IEEE Conf. Computer Vision and Pattern Recognition (CVPR), 1964–1971 (2009).

20. L. Xu and J. Jia, "Two-phase kernel estimation for robust motion deblurring," Lect. Notes Comp. Sci., 6311, 157–170 (2010).

21. F. Šroubek and J. Flusser, "Multichannel blind deconvolution of spatially misaligned images," IEEE Trans. Image Process., 14 (7), 874–883 (2005). https://doi.org/10.1109/TIP.2005.849322

22. L. Rudin, S. Osher, and E. Fatemi, "Nonlinear total variation based noise removal algorithms," Physica D, 60, 259–268 (1992). https://doi.org/10.1016/0167-2789(92)90242-F

23. W. H. Richardson, "Bayesian-based iterative method of image restoration," J. Opt. Soc. Am., 62 (1), 55–59 (1972). https://doi.org/10.1364/JOSA.62.000055

24. F. Šroubek, J. Flusser, and M. Šorel, "Superresolution and blind deconvolution of video," 1–4 (2008).

25. B. Zitová and J. Flusser, "Image registration methods: a survey," Image Vis. Comput., 21 (11), 977–1000 (2003). https://doi.org/10.1016/S0262-8856(03)00137-9

26. S. Lee, J. Reinhardt, P. Cattin, and M. Abramoff, "Objective and expert-independent validation of retinal image registration algorithms by a projective imaging distortion model," Med. Image Anal., 14 (4), 539–549 (2010). https://doi.org/10.1016/j.media.2010.04.001

27. C. Stewart, C.-L. Tsai, and B. Roysam, "The dual-bootstrap iterative closest point algorithm with application to retinal image registration," IEEE Trans. Med. Imaging, 22 (11), 1379–1394 (2003). https://doi.org/10.1109/TMI.2003.819276

28. A. Can, C. Stewart, B. Roysam, and H. Tanenbaum, "A feature-based, robust, hierarchical algorithm for registering pairs of images of the curved human retina," IEEE Trans. Pattern Anal., 24 (3), 347–364 (2002). https://doi.org/10.1109/34.990136

29. C. Muramatsu, Y. Hayashi, A. Sawada, Y. Hatanaka, T. Hara, T. Yamamoto, and H. Fujita, "Detection of retinal nerve fiber layer defects on retinal fundus images for early diagnosis of glaucoma," J. Biomed. Opt., 15, 016021 (2010). https://doi.org/10.1117/1.3322388

30. H. Narasimha-Iyer, A. Can, B. Roysam, C. Stewart, H. Tanenbaum, A. Majerovics, and H. Singh, "Robust detection and classification of longitudinal changes in color retinal fundus images for monitoring diabetic retinopathy," IEEE Trans. Biomed. Eng., 53 (6), 1084–1098 (2006). https://doi.org/10.1109/TBME.2005.863971

31. M. Foracchia, E. Grisan, and A. Ruggeri, "Luminosity and contrast normalization in retinal images," Med. Image Anal., 9 (3), 179–190 (2005). https://doi.org/10.1016/j.media.2004.07.001

32. T. Aach and A. Kaup, "Bayesian algorithms for change detection in image sequences using Markov random fields," Signal Process. Image Commun., 7, 147–160 (1995). https://doi.org/10.1016/0923-5965(95)00003-F

33. C.-C. Chang, T.-L. Chia, and C.-K. Yang, "Modified temporal difference method for change detection," Opt. Eng., 44, 027001 (2005). https://doi.org/10.1117/1.1839893

34. R. Radke, S. Andra, O. Al-Kofahi, and B. Roysam, "Image change detection algorithms: a systematic survey," IEEE Trans. Image Process., 14 (3), 294–307 (2005). https://doi.org/10.1109/TIP.2004.838698

35. S. Stigler, "Fisher and the 5% level," Chance, 21, 12 (2008). https://doi.org/10.1007/s00144-008-0033-3

36. MATLAB application for multichannel blind deconvolution, (2008), http://zoi.utia.cas.cz/download

37. A. Chambolle and P. Lions, "Image recovery via total variation minimization and related problems," Numer. Math., 76 (2), 167–188 (1997). https://doi.org/10.1007/s002110050258

38. V. K. Ananthashayana, "Super resolution blind reconstruction of low resolution images using wavelets based fusion," World Acad. Sci. Eng. Technol., 40, 177–181 (2008).

39. L. Xu and S. Luo, "Optimal algorithm for automatic detection of microaneurysms based on receiver operating characteristic curve," J. Biomed. Opt., 15, 065004 (2010). https://doi.org/10.1117/1.3523367
© 2011 Society of Photo-Optical Instrumentation Engineers (SPIE) 1083-3668/2011/16(11)/116016/11/$25.00
Andres G. Marrugo, Maria S. Millan, Michal Sorel, and Filip Sroubek "Retinal image restoration by means of blind deconvolution," Journal of Biomedical Optics 16(11), 116016 (1 November 2011). https://doi.org/10.1117/1.3652709
Published: 1 November 2011