In images of the corneal endothelium (CE) acquired by specular microscopy, endothelial cells are commonly visible in only part of the image due to varying contrast, mainly caused by challenging imaging conditions resulting from a strongly curved endothelium. In order to estimate the morphometric parameters of the corneal endothelium, the analysis needs to be restricted to trustworthy regions – the region of interest (ROI) – where individual cells are discernible. We developed an automatic method to find the ROI using a Dense U-net, a densely connected network of convolutional layers. We tested the method on a heterogeneous dataset of 140 images, which contains a large number of blurred, noisy, and/or out-of-focus images, where the selection of the ROI for automatic biomarker extraction is vital. By using edge images as input, which can be estimated after retraining the same network, the Dense U-net detected the trustworthy areas with an accuracy of 98.94% and an area under the ROC curve (AUC) of 0.998, without being affected by the class imbalance (9:1 in our dataset). After applying the estimated ROI to the edge images, the mean absolute percentage error (MAPE) in the estimated endothelial parameters was 0.80% for ECD, 3.60% for CV, and 2.55% for HEX.
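The MAPE reported above can be computed as follows; the example values are hypothetical, not taken from the dataset:

```python
def mape(estimated, reference):
    """Mean absolute percentage error between estimated and reference parameter values."""
    return 100.0 * sum(abs(e - r) / abs(r)
                       for e, r in zip(estimated, reference)) / len(reference)

# Hypothetical per-image ECD values [cells/mm^2]: ROI-restricted estimate vs. reference
print(mape([2520.0, 2680.0], [2500.0, 2700.0]))
```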
The attenuation coefficient (AC) is a property related to the microstructure of tissue on a wavelength scale that can be
estimated from optical coherence tomography (OCT) data. Since the OCT signal sensitivity is affected by the finite
spectrometer/detector resolution (known as roll-off) and by the shape of the focused beam in the sample arm, ignoring these
effects leads to severely biased estimates of AC. Previously, the dependence of the signal intensity on these factors has been
modeled. In this paper, we experimentally study the dependence of the estimated AC on the beam shape and focus depth.
A method is presented to estimate the axial point spread function model parameters by fitting the OCT signal model for
single scattered light to the averaged A-lines of multiple B-scans obtained from a homogeneous single-layer phantom.
The estimated model parameters were used to compensate the signal for the axial point spread function and roll-off in
order to obtain an accurate estimate of AC. The results show a significant improvement in the accuracy of the estimated
AC after correcting for the shape of the OCT beam.
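As a sketch of why this compensation matters, the following simulates an averaged A-line from a homogeneous phantom under a simplified signal model (a Lorentzian axial PSF and a Gaussian roll-off; all parameter values are hypothetical, and the paper fits these parameters rather than assuming them) and compares a naive log-slope AC estimate with the estimate obtained after dividing out the PSF and roll-off:

```python
import numpy as np

def axial_psf(z, z0, zr):
    """Lorentzian-shaped axial PSF of a focused beam (focus depth z0, apparent Rayleigh length zr)."""
    return 1.0 / (((z - z0) / zr) ** 2 + 1.0)

def rolloff(z, w):
    """Simplified Gaussian model of the spectrometer sensitivity roll-off."""
    return np.exp(-(z / w) ** 2)

z = np.linspace(0.05, 1.0, 200)                      # depth [mm]
mu = 2.5                                             # true attenuation coefficient [1/mm]
signal = axial_psf(z, 0.4, 0.3) * rolloff(z, 2.0) * np.exp(-2 * mu * z)

# Naive estimate: log-linear fit that ignores the PSF and roll-off -> biased
naive = -np.polyfit(z, np.log(signal), 1)[0] / 2
# Compensated estimate: divide out the PSF and roll-off first
corrected_sig = signal / (axial_psf(z, 0.4, 0.3) * rolloff(z, 2.0))
corrected = -np.polyfit(z, np.log(corrected_sig), 1)[0] / 2
print(naive, corrected)
```

On this noiseless example the compensated fit recovers the true coefficient exactly, while the naive fit is visibly biased.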
Automated detection and quantification of spatio-temporal retinal changes is an important step in objectively assessing disease progression and treatment effects for dynamic retinal diseases such as diabetic retinopathy (DR). However, detecting retinal changes caused by early DR lesions, such as microaneurysms and dot hemorrhages, from longitudinal pairs of fundus images is challenging due to intra- and inter-image illumination variation. This paper explores a method for automated detection of retinal changes from illumination-normalized fundus images using a deep convolutional neural network (CNN), and compares its performance with two other CNNs trained separately on color and green-channel fundus images. Illumination variation was addressed by correcting for the variability in luminosity and contrast estimated over large-scale retinal regions. The CNN models were trained and evaluated on image patches extracted from a registered fundus image set collected from 51 diabetic eyes that were screened at two different time points. The results show that using normalized images yields better performance than color and green-channel images, suggesting that illumination normalization greatly helps CNNs to quickly and correctly learn distinctive local image features of DR-related retinal changes.
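A luminosity/contrast normalization of this kind can be sketched as follows; the window size and the mean/standard-deviation estimators are illustrative assumptions, not the paper's exact large-scale estimation procedure:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def normalize_illumination(img, size=65, eps=1e-6):
    """Remove slowly varying luminosity and contrast estimated over large windows:
    subtract the local mean (luminosity) and divide by the local standard
    deviation (contrast)."""
    img = img.astype(np.float64)
    luminosity = uniform_filter(img, size)
    contrast = np.sqrt(np.maximum(uniform_filter(img ** 2, size) - luminosity ** 2, 0.0))
    return (img - luminosity) / (contrast + eps)
```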
Optical coherence tomography (OCT) yields high-resolution, three-dimensional images of the retina. A better
understanding of retinal nerve fiber bundle (RNFB) trajectories in combination with visual field data may be used for
future diagnosis and monitoring of glaucoma. However, manual tracing of these bundles is a tedious task. In this work, we
present an automatic technique to estimate the orientation of RNFBs from volumetric OCT scans. Our method consists of
several steps, starting from automatic segmentation of the RNFL. Then, a stack of en face images around the posterior
nerve fiber layer interface was extracted. The image showing the best visibility of RNFB trajectories was selected for
further processing. After denoising the selected en face image, a semblance structure-oriented filter was applied to probe
the strength of local linear structure in a discrete set of orientations, creating an orientation space. Gaussian filtering along
the orientation axis in this space was used to find the dominant orientation. Next, a confidence map was created to supplement
the estimated orientation. This confidence map was used as the pixel weight in a normalized convolution to regularize the
semblance filter response, after which a new orientation estimate was obtained. Finally, after several iterations, an
orientation field corresponding to the strongest local orientation was obtained. The RNFB orientations of six macular scans
from three subjects were estimated. For all scans, visual inspection shows a good agreement between the estimated
orientation fields and the RNFB trajectories in the en face images. Additionally, a good correlation between the orientation
fields of two scans of the same subject was observed. Our method was also applied to a larger field of view around the
macula. Manual tracing of the RNFB trajectories shows a good agreement with the streamlines obtained automatically
by fiber tracking.
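The confidence-weighted regularization step can be sketched as a normalized convolution; the Gaussian applicability function and its width are assumptions, and the doubled-angle trick (standard for orientation data defined modulo π) stands in for whatever angular representation the paper uses:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def regularize_orientation(theta, confidence, sigma=3.0, eps=1e-12):
    """Normalized convolution of an orientation field (defined modulo pi) using a
    confidence map as pixel weights. Angles are doubled and turned into vectors so
    that orientations near 0 and pi average correctly."""
    vx = gaussian_filter(confidence * np.cos(2.0 * theta), sigma)
    vy = gaussian_filter(confidence * np.sin(2.0 * theta), sigma)
    norm = gaussian_filter(confidence, sigma) + eps
    return 0.5 * np.arctan2(vy / norm, vx / norm)
```

Low-confidence pixels are effectively filled in from their high-confidence neighbours, which is what makes the iterative re-estimation described above converge to a smooth orientation field.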
Scanning laser ophthalmoscopy is a confocal imaging technique that allows high-contrast imaging of retinal structures. Rapid, involuntary eye movements during image acquisition are known to cause artefacts, and high-speed imaging of the retina is crucial to avoid them. To reach higher imaging speeds, we propose to illuminate the retina with multiple parallel lines simultaneously within the whole field of view (FOV) instead of raster-scanning a single focused line. These multiple-line patterns were generated with a digital micro-mirror device (DMD), and by shifting the line pattern, the whole FOV is scanned. The back-scattered light from the retinal layers is collected via a beam splitter and imaged onto an area camera. After every pattern from the sequence has been projected, the final image is generated by combining the back-reflected illumination patterns. Image processing is used to remove the background and out-of-focus light: the acquired pattern images are stacked, the pixels are sorted according to intensity, and finally the bottom layer of the stack is subtracted from the top layer to produce the confocal image. The obtained confocal images are rich in structure, showing the small blood vessels around the macular avascular zone and the bow tie of Henle's fiber layer in the fovea. In the optic nerve head images, the large arteries/veins, the optic cup rim, and the cup itself are visualized. The images have good contrast and lateral resolution over a 10°×10° FOV. These initial results are promising for the development of high-speed retinal imaging using spatial light modulators such as the DMD.
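The stack-sort-subtract step can be sketched as follows (taking the mean of the top and bottom `k` layers is an illustrative choice; the abstract only specifies top-minus-bottom):

```python
import numpy as np

def confocal_from_patterns(stack, k=1):
    """Combine a stack of shifted line-illumination frames into one confocal image.
    Per pixel, intensities are sorted along the stack axis: the brightest frame(s)
    contain the in-focus, directly illuminated signal, while the darkest frame(s)
    contain only background and out-of-focus light, which is subtracted."""
    s = np.sort(stack, axis=0)
    return s[-k:].mean(axis=0) - s[:k].mean(axis=0)
```

For a synthetic stack in which each frame illuminates one line on a constant background, the result is exactly the illumination signal with the background removed.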
Optical coherence tomography (OCT) is used to produce high-resolution three-dimensional images of the retina, which permit the investigation of retinal irregularities. In dry age-related macular degeneration (AMD), a chronic eye disease that causes central vision loss, disruptions such as drusen and changes in retinal layer thickness occur, which could be used as biomarkers for disease monitoring and diagnosis. Due to the topology-disrupting pathology, existing segmentation methods often fail. Here, we present a solution for the segmentation of retinal layers in dry AMD subjects by extending our previously presented loosely coupled level sets framework, which operates on attenuation coefficients. In eyes affected by AMD, Bruch's membrane becomes visible only below the drusen, and our segmentation framework is adapted to delineate such a partially discernible interface. Furthermore, the initialization stage, which tentatively segments five interfaces, is modified to accommodate the appearance of drusen. This stage is based on Dijkstra's algorithm and combines prior knowledge of the interface shape with gradient and attenuation coefficient terms in the newly proposed cost function. The shape prior is incorporated by varying the weights for horizontal, diagonal and vertical edges. Finally, quantitative evaluation shows a good agreement between manual and automated segmentation.
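The idea of direction-dependent edge weights as a shape prior can be illustrated with a minimal left-to-right shortest path through a per-pixel cost image. This sketch uses dynamic programming restricted to horizontal and diagonal moves rather than the full Dijkstra formulation, and the weight values and cost image are placeholders:

```python
import numpy as np

def trace_interface(cost, w_h=1.0, w_d=1.4):
    """Left-to-right minimum-cost path through a per-pixel cost image (e.g. combining
    gradient and attenuation-coefficient terms). Direction-dependent weights make
    horizontal steps cheaper than diagonal ones, favouring smooth, mostly
    horizontal interfaces."""
    rows, cols = cost.shape
    acc = np.full((rows, cols), np.inf)
    acc[:, 0] = cost[:, 0]
    back = np.zeros((rows, cols), dtype=int)
    for c in range(1, cols):
        for r in range(rows):
            cands = [(acc[r, c - 1] + w_h * cost[r, c], r)]
            if r > 0:
                cands.append((acc[r - 1, c - 1] + w_d * cost[r, c], r - 1))
            if r < rows - 1:
                cands.append((acc[r + 1, c - 1] + w_d * cost[r, c], r + 1))
            acc[r, c], back[r, c] = min(cands)
    # Backtrack the interface row for every column
    path = [int(np.argmin(acc[:, -1]))]
    for c in range(cols - 1, 0, -1):
        path.append(int(back[path[-1], c]))
    return path[::-1]
```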
Recently, a method to determine the retinal nerve fiber layer (RNFL) attenuation coefficient, based on normalization on the retinal pigment epithelium, was introduced. In contrast to conventional RNFL thickness measures, this novel measure represents a scattering property of the RNFL tissue. In this paper, we compare the RNFL thickness and the RNFL attenuation coefficient in 10 normal and 8 glaucomatous eyes by analyzing the correlation coefficients and the receiver operating characteristic (ROC) curves. The thickness and attenuation coefficient showed moderate correlation (r=0.82). Smaller correlation coefficients were found within the normal (r=0.55) and glaucomatous (r=0.48) eyes. The full separation between normal and glaucomatous eyes based on the RNFL attenuation coefficient yielded an area under the ROC curve (AROC) of 1.0; the AROC for the RNFL thickness was 0.9875. No statistically significant difference between the two measures was found by comparing the AROCs. RNFL attenuation coefficients may thus replace current RNFL thickness measurements or be combined with them to improve glaucoma diagnosis.
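The AROC for a two-group comparison like this equals the Mann-Whitney statistic and can be computed directly; the example values below are hypothetical, not the study's measurements:

```python
import numpy as np

def aroc(diseased, normal):
    """Area under the ROC curve via the Mann-Whitney statistic: the probability that a
    randomly chosen glaucomatous eye scores lower than a randomly chosen normal eye
    (lower thickness/attenuation indicating disease); ties count half."""
    d = np.asarray(diseased, dtype=float)[:, None]
    n = np.asarray(normal, dtype=float)[None, :]
    return float(np.mean((d < n) + 0.5 * (d == n)))

# Hypothetical, fully separated attenuation coefficients [1/mm]
print(aroc([2.1, 2.6, 2.9, 3.0], [4.2, 4.5, 4.9, 5.1]))   # full separation -> AROC = 1.0
```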
We demonstrate significantly different scattering coefficients of the retinal nerve fiber layer (RNFL) between normal and
glaucoma subjects. In clinical care, SD-OCT is routinely used to assess the RNFL thickness for glaucoma management.
In this way, the full OCT data set is conveniently reduced to an easy to interpret output, matching results from older (non-
OCT) instruments. However, OCT provides more data, such as the signal strength itself, which is due to backscattering in
the retinal layers. For quantitative analysis, this signal should be normalized to adjust for local differences in the intensity
of the beam that reaches the retina. In this paper, we introduce a model that relates the OCT signal to the attenuation
coefficient of the tissue. The average RNFL signal (within an A-line) was then normalized based on the observed RPE
signal, resulting in normalized RNFL attenuation coefficient maps. These maps showed local defects matching those found
in thickness data. The average (normalized) RNFL attenuation coefficient of a fixed band around the optic nerve head was
significantly lower in glaucomatous eyes than in normal eyes (3.0 mm⁻¹ vs. 4.9 mm⁻¹, P<0.01, Mann-Whitney test).
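The paper normalizes the RNFL signal on the observed RPE signal; as a related illustration of the same single-scattering model, the following sketches a common depth-resolved attenuation estimate (a textbook variant, not the paper's exact normalization; all values are synthetic):

```python
import numpy as np

def attenuation_coefficients(a_line, dz):
    """Depth-resolved attenuation estimate under the single-scattering assumption
    that (nearly) all light is attenuated within the imaging depth:
        mu[i] ~ I[i] / (2 * dz * sum(I[j] for j > i))."""
    tail = np.cumsum(a_line[::-1])[::-1] - a_line        # signal summed below each pixel
    with np.errstate(divide="ignore", invalid="ignore"):
        return a_line / (2.0 * dz * tail)

# Synthetic A-line from a homogeneous layer: I(z) proportional to mu * exp(-2 mu z)
dz, mu_true = 0.005, 3.0                                 # pixel size [mm], attenuation [1/mm]
z = np.arange(0.0, 3.0, dz)
a_line = mu_true * np.exp(-2.0 * mu_true * z) * dz
mu = attenuation_coefficients(a_line, dz)
print(float(np.median(mu[:100])))
```

Because the estimate depends only on the ratio of the local signal to the integrated signal below it, any depth-independent scaling of the beam intensity cancels out, which is the point of normalizing rather than using raw intensities.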
Volumetric scans of current SD-OCT devices can contain on the order of 50 million pixels. Due to this size and because
quantitative measurements in these scans are often needed, automatic segmentation of these scans is required. In this
paper, a fully automatic retinal layer segmentation algorithm is presented, based on pixel-classification. First, each pixel
is augmented by intensity and gradient data from a local neighborhood, thereby producing a feature vector. These feature
vectors are used as inputs for a support vector machine, which classifies each pixel as above or below each interface. Finally,
a level set method regularizes the result, producing a smooth surface within the three-dimensional space. Volumetric
scans of 10 healthy and 8 glaucomatous subjects were acquired with a Spectralis OCT. Each scan consisted of 193 B-scans,
512 A-lines per B-scan (5 times averaging) and 496 pixels per A-line. Two B-scans of each healthy subject were
manually segmented and used to train the support vector machine. One B-scan of each glaucomatous subject was manually
segmented and used only for performance assessment of the algorithm. The root-mean-square errors for the normal eyes
were 3.7, 15.4, 15.0 and 5.5 μm for the vitreous/retinal nerve fiber layer (RNFL), RNFL/ganglion cell layer, inner plexiform
layer/inner nuclear layer and retinal pigment epithelium/choroid interfaces, respectively, and 5.5, 11.5, 9.5 and 6.2 μm for
the glaucomatous eyes. Based on the segmentation, retinal and RNFL thickness maps and blood vessel masks were
produced.
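The first step, building per-pixel feature vectors from intensity and gradient values in a local neighbourhood, can be sketched as follows. The neighbourhood shape and channel layout are illustrative assumptions, and the classifier itself (e.g. scikit-learn's `SVC` applied to the flattened features) is omitted:

```python
import numpy as np

def pixel_features(img, half=2):
    """Per-pixel feature vectors built from intensity and axial gradient values in a
    (2*half+1)-pixel vertical neighbourhood, suitable as classifier input."""
    imgf = img.astype(np.float64)
    grad = np.gradient(imgf, axis=0)                     # axial (A-line) gradient
    pads = [np.pad(x, ((half, half), (0, 0)), mode="edge") for x in (imgf, grad)]
    feats = [p[off:off + img.shape[0], :]
             for off in range(2 * half + 1) for p in pads]
    return np.stack(feats, axis=-1)                      # (rows, cols, 2*(2*half+1))
```

Each pixel of a B-scan then carries a fixed-length feature vector that an above/below-interface classifier can be trained on, one classifier per interface as described above.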