Purpose: This work aims to automatically identify the fovea on 2-dimensional fundus autofluorescence (FAF) images in patients with age-related macular degeneration (AMD) using definitions derived from 3-dimensional spectral-domain optical coherence tomography (SD-OCT) imaging. Segmenting the fovea, a highly specialized area of the retina, in the vicinity of hypo-autofluorescence in FAF images will aid in the objective evaluation of AMD-related structural disease features with respect to distance from the fovea. Methods: Semi-automated software was used to create fovea annotations in volumetric SD-OCT images. FAF images acquired at the same visits as the SD-OCT scans were registered to the en-face SD-OCT projections to create a pixel-to-pixel correspondence between the registered FAF and SD-OCT images. A U-Net based segmentation network, trained on OCT-registered FAF images and the corresponding foveal annotations from SD-OCT, was used to automatically segment the fovea in the registered 2D FAF images. Results: The dataset consisted of multimodal images from AMD patients, with 900 (80%) images used for training and 222 (20%) images in the test set. The mean Euclidean distance error on the test set with respect to the OCT-determined ground truth was 103.5±81.4 µm, which improved to 83.4±57.9 µm with data-augmentation-based training. Fovea identification on the test subset with advanced AMD consisting of geographic atrophy (GA) was compared against the OCT-determined ground truth for three sources: (1) the U-Net algorithm (111.7±46.7 μm), (2) readers at the Wisconsin reading center (165±77.5 μm), and (3) a retina physician (169.9±109.4 μm). Conclusion: Our work demonstrates the potential of using 2D FAF images to predict foveal locations, especially in visually challenging scenarios where the hypo-autofluorescent fovea is surrounded by advanced disease that alters normal autofluorescence patterns. The results demonstrate that the developed algorithm has clinically useful performance in segmenting the fovea in FAF images, which will enable critical correlation with visual acuity and provide the basis for standardized measurement of features relative to the fovea, such as monitoring and tracking the growth of GA and other retinal-disease-related changes.
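As an illustration of how a Euclidean distance error of this kind can be computed, the following minimal Python sketch converts a U-Net foveal probability map into a foveal location and compares it to an OCT-derived ground-truth mask. The function and variable names and the micron-per-pixel scale are illustrative assumptions, not the exact pipeline used in the study.

    import numpy as np

    def fovea_distance_error(pred_prob, gt_mask, um_per_pixel=10.0):
        """Euclidean distance (in microns) between the centroid of the predicted
        fovea mask and the centroid of the OCT-derived ground-truth mask.
        `um_per_pixel` is a placeholder scale, not the study's actual value."""
        pred_mask = pred_prob > 0.5                          # binarize the network output
        pred_yx = np.mean(np.argwhere(pred_mask), axis=0)    # predicted centroid (row, col)
        gt_yx = np.mean(np.argwhere(gt_mask > 0), axis=0)    # ground-truth centroid (row, col)
        return float(np.linalg.norm(pred_yx - gt_yx) * um_per_pixel)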
Reticular pseudodrusen (RPD) are subretinal drusenoid deposits that represent an important disease feature in age-related macular degeneration (AMD). RPD are of particular interest because their presence is a strong predictor of progression to advanced AMD. RPD features can be characterized using volumetric spectral-domain optical coherence tomography (SD-OCT). In this work, we curated a dataset from the Age-Related Eye Disease Study 2 (AREDS2) ancillary OCT study. The dataset included 826 SD-OCT scans, with RPD present in 222 of them. Binary RPD labels were transferred from fundus autofluorescence (FAF) images taken at the same visits as the SD-OCT scans. The dataset was split at the participant level into training (70%), validation (10%), and test (20%) sets. We proposed a 3D classification network to detect RPD from SD-OCT scans and compared it to a baseline 2D network with average bagging and to a 3D network with multi-tasking. The proposed network achieved the highest accuracy of 0.7784, area under the receiver operating characteristic curve of 0.8689, and mean average precision of 0.7706 for detecting RPD from SD-OCT scans.
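A hedged sketch of what a volume-level 3D classification backbone of this kind could look like is given below, assuming SD-OCT volumes are fed as (batch, 1, depth, height, width) tensors; the exact architecture reported in the paper is not reproduced here.

    import torch.nn as nn

    class RPD3DClassifier(nn.Module):
        """Illustrative 3D CNN for scan-level RPD present/absent classification."""
        def __init__(self, num_classes=2):
            super().__init__()
            def block(cin, cout):
                return nn.Sequential(
                    nn.Conv3d(cin, cout, kernel_size=3, padding=1),
                    nn.BatchNorm3d(cout),
                    nn.ReLU(inplace=True),
                    nn.MaxPool3d(2),
                )
            self.features = nn.Sequential(block(1, 16), block(16, 32), block(32, 64))
            self.pool = nn.AdaptiveAvgPool3d(1)      # collapse remaining spatial dimensions
            self.fc = nn.Linear(64, num_classes)     # scan-level prediction head

        def forward(self, x):                        # x: (B, 1, D, H, W)
            x = self.pool(self.features(x)).flatten(1)
            return self.fc(x)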
Purpose: This work aims to identify areas with sub-retinal pigment epithelium (sub-RPE) accumulations on 2-dimensional (2D) color fundus photographs (CFPs) in patients with age-related macular degeneration (AMD) using the definitions from spectral-domain optical coherence tomography (SD-OCT) imaging. Detecting and quantifying areas of RPE elevation (most notably drusen) in CFPs will aid in the objective evaluation of AMD severity scores as well as patient selection and monitoring in clinical trials. Methods: A retinal layer segmentation algorithm for SD-OCT was used to automatically identify areas with RPE elevation and build the ground-truth 2D binary maps for training. Each CFP was registered to the en-face projection image of the SD-OCT volume to overlay OCT-defined drusen areas on the CFP. A 2D U-Net segmentation network was trained using bilateral stereo CFP pairs in a Siamese architecture that share OCT-defined drusen areas as ground truth. Results: The dataset consisted of AMD patients with 127 training eyes and 23 test eyes. The Dice similarity coefficient for the predictions on CFPs was 0.70±0.13 (mean±std), and the overall accuracy was 0.73. 89% of test eyes exhibited a drusen area prediction error <1 mm2 compared to reading center measures. Conclusion: Our work demonstrates the potential of using 2D CFP images to predict areas of sub-RPE elevation as defined in 3D SD-OCT imaging. Qualitative evaluation of the mismatch between the two imaging modalities shows regions with complementary features in a subset of the cases, making it challenging to achieve optimal segmentation. However, the results show clinically useful performance in CFPs that can be used to quantify accumulations in the sub-RPE space, which are key pathologic biomarkers of AMD relevant to patient selection and trial outcome measure design.
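A minimal sketch of the shared-weight (Siamese) training step described above follows: a single 2D U-Net (any module producing per-pixel logits) is applied to both images of a stereo CFP pair, and both outputs are supervised by the same OCT-defined drusen mask. The names `unet`, `dice_loss`, and `siamese_step` are illustrative placeholders, not the authors' implementation.

    import torch

    def dice_loss(prob, target, eps=1e-6):
        """Soft Dice loss between a probability map and a binary target mask."""
        inter = (prob * target).sum()
        return 1.0 - (2.0 * inter + eps) / (prob.sum() + target.sum() + eps)

    def siamese_step(unet, left_cfp, right_cfp, oct_drusen_mask):
        """Both stereo views pass through the same network (shared weights)
        and are supervised by the same OCT-derived drusen ground truth."""
        prob_left = torch.sigmoid(unet(left_cfp))
        prob_right = torch.sigmoid(unet(right_cfp))
        return dice_loss(prob_left, oct_drusen_mask) + dice_loss(prob_right, oct_drusen_mask)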
Geographic atrophy (GA) is the defining lesion of advanced atrophic age-related macular degeneration (AMD). GA can be detected and characterized most accurately using spectral-domain optical coherence tomography (SD-OCT), which provides detailed 3D information about changes in multiple retinal layers. Existing methods are limited to 2D convolutional neural networks (CNNs); they therefore do not capture the 3D context between adjacent 2D slices of the OCT scan and also require long inference times. We propose 3D CNNs with 3D attention mechanisms for the automated detection of GA on SD-OCT scans using scan-level labels. The best network achieved an accuracy of 88%, and its visualizations suggest the interpretability of its predictions.
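By way of illustration, the sketch below shows one common form of 3D attention, a squeeze-and-excitation style channel attention block, that could be inserted into a 3D CNN for scan-level GA classification; the paper's exact attention design is not reproduced here.

    import torch.nn as nn

    class ChannelAttention3D(nn.Module):
        """Illustrative squeeze-and-excitation block for 3D feature maps."""
        def __init__(self, channels, reduction=8):
            super().__init__()
            self.pool = nn.AdaptiveAvgPool3d(1)                # global 3D context per channel
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x):                                  # x: (B, C, D, H, W)
            w = self.fc(self.pool(x).flatten(1))               # per-channel attention weights
            return x * w.view(x.size(0), -1, 1, 1, 1)          # reweight the feature maps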
Purpose: This work investigates a semi-supervised approach for the automatic detection of hyperreflective foci (HRF) in spectral-domain optical coherence tomography (SD-OCT) imaging. Starting with a limited annotated data set containing HRFs, we aim to build a larger data set and then a more robust detection model. Methods: A Faster R-CNN object detection model was trained in a semi-supervised manner whereby high-confidence detections from the current iteration are added to the training set in subsequent iterations after manual verification. With each iteration, the training set grows through the inclusion of additional model-detected cases, and we expect the model to become more accurate and robust as the number of training iterations increases. We performed experiments on a data set consisting of over 170,000 SD-OCT B-scans. The models were tested on a data set of 30 patients (3,630 B-scans). Results: Across iterations, model performance improved, with the final model yielding precision=0.56, recall=0.99, and F1-score=0.71. As the number of training examples increases, the model detects cases with greater confidence. The high false positive rate is associated with additional detections that capture instances of elevated reflectivity which, upon review, were found to represent questionable cases rather than definitive HRFs due to confounding factors. Conclusion: We demonstrate that by starting with a small data set of HRFs we are able to search for other occurrences of HRFs in the data set in a semi-supervised fashion. This method provides an objective, time-efficient, and cost-effective alternative to laborious manual inspection of B-scans for HRF occurrences.
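The self-training loop described above can be sketched as follows. The callables passed in (train_detector, detect, manually_verify) stand in for the study's Faster R-CNN training, inference, and human review steps; their names and the 0.9 confidence threshold are illustrative assumptions, not part of any library API.

    def self_training(labeled, unlabeled, train_detector, detect, manually_verify,
                      num_iterations=3, confidence_threshold=0.9):
        """Iteratively grow the training set with verified high-confidence detections."""
        training_set = list(labeled)
        model = None
        for _ in range(num_iterations):
            model = train_detector(training_set)                  # retrain on the grown set
            candidates = []
            for scan in unlabeled:
                for box, score in detect(model, scan):
                    if score >= confidence_threshold:             # keep confident HRF detections
                        candidates.append((scan, box))
            training_set.extend(manually_verify(candidates))      # human check before adding
        return model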
KEYWORDS: Image segmentation, 3D modeling, Retina, Image processing algorithms and systems, Detection and tracking algorithms, Signal to noise ratio, 3D image processing, Image contrast enhancement, Medical image reconstruction, Medical image processing
Purpose: Spectral-domain optical coherence tomography (SD-OCT) is a widely utilized imaging modality in retina clinics to inspect the integrity of retinal layers in patients with age-related macular degeneration. Spectralis and Cirrus are two of the most widely used SD-OCT vendors. Due to the stark differences in intensities and signal-to-noise ratios between the images captured by the two instruments, a model trained on images from one instrument performs poorly on images from the other. Methods: In this work, we explore the performance of an algorithm trained on images obtained from the Heidelberg Spectralis device when applied to Cirrus images. Using a dataset containing Heidelberg images and Cirrus images, we address the problem of accurately segmenting images from one domain with an algorithm developed on another domain. In our approach, we use an unpaired CycleGAN-based domain adaptation network to transform the Cirrus volumes to the Spectralis domain before applying our trained segmentation network. Results: We show that the intensity distribution shifts towards the Spectralis domain when we domain-adapt Cirrus images to Spectralis images. Our results show that the segmentation model performs significantly better on the domain-translated volumes (total retinal volume error: 0.17±0.27 mm3, RPEDC volume error: 0.047±0.05 mm3) than on the raw volumes (total retinal volume error: 0.26±0.36 mm3, RPEDC volume error: 0.13±0.15 mm3) from the Cirrus domain, and that such domain adaptation approaches are feasible solutions. Conclusions: Both our qualitative and quantitative results show that the CycleGAN domain adaptation network can be used as an efficient technique for unpaired domain adaptation between SD-OCT images generated by different devices. We show that a 3D segmentation model trained on Spectralis volumes performs better on domain-adapted Cirrus volumes than on raw Cirrus volumes.
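A minimal sketch of this inference pipeline follows: a trained Cirrus-to-Spectralis generator (one half of a CycleGAN) translates each Cirrus B-scan before a Spectralis-trained 3D segmentation model is applied. Both models are assumed to be trained elsewhere, and the tensor shape conventions are illustrative assumptions rather than the study's exact implementation.

    import torch

    @torch.no_grad()
    def segment_cirrus_volume(cirrus_volume, generator_c2s, spectralis_seg_model):
        """cirrus_volume: tensor of shape (num_bscans, 1, H, W), intensities scaled to [-1, 1].
        Translate each B-scan to the Spectralis domain, then segment the adapted volume."""
        translated = torch.stack([generator_c2s(b.unsqueeze(0)).squeeze()
                                  for b in cirrus_volume])        # (num_bscans, H, W)
        volume = translated.unsqueeze(0).unsqueeze(0)             # (1, 1, D, H, W) for a 3D model
        return spectralis_seg_model(volume)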
Purpose: Spectral-domain optical coherence tomography (SD-OCT) images are a series of B-scans that capture the volume of the retina and reveal structural information. Diseases of the outer retina cause changes to the retinal layers that are evident on SD-OCT images, revealing disease etiology and risk factors for disease progression. Quantitative thickness information for the retinal layers provides disease-relevant data that reveal important aspects of disease pathogenesis. Manually labeling these layers is extremely laborious, time consuming, and costly. Recently, deep learning algorithms have been used to automate the segmentation process. While retinal volumes are inherently 3-dimensional, state-of-the-art segmentation approaches have been limited in their utilization of the 3-dimensional nature of the structural information. Methods: In this work, we train a 3D-UNet using 150 retinal volumes and test on 191 retinal volumes from a held-out test set (with AMD severity grades ranging from no disease through the intermediate stages to advanced disease, including the presence of geographic atrophy). The 3D deep features learned by the model capture spatial information simultaneously from all three volumetric dimensions. Since, unlike the ground truth, the output of the 3D-UNet is not a single pixel wide, we perform a column-wise probabilistic maximum operation to obtain single-pixel-wide layers for quantitative evaluation. Results: We compare our results to the publicly available OCT Explorer and the deep-learning-based 2D-UNet algorithms and observe a low error, within 3.11 pixels of the ground truth locations (for some of the most challenging, advanced-stage AMD eyes with AMD severity scores of 9 and 10). Conclusion: Our results show, both qualitatively and quantitatively, that there is a significant advantage to extracting and utilizing 3D features over the traditionally used OCT Explorer or 2D-UNet.
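A hedged sketch of the column-wise probabilistic maximum step is shown below: for each layer class and each image column, the A-scan depth with the highest predicted probability is taken as the single-pixel-wide layer location. The axis ordering assumed here (layer, depth, column, B-scan) is an illustrative assumption, not the study's exact data layout.

    import numpy as np

    def layers_from_probabilities(prob_volume):
        """prob_volume: (num_layers, depth_rows, width_cols, num_bscans) softmax output.
        Returns integer depth indices of shape (num_layers, width_cols, num_bscans)."""
        return np.argmax(prob_volume, axis=1)    # per-column maximum along the A-scan axis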