Paper
11 August 2023
Practical volumetric speckle reduction in OCT using deep learning
Abstract
Speckle reduction has been an active topic of interest in the optical coherence tomography (OCT) community, and several techniques have been developed, ranging from hardware-based approaches to conventional image-processing and deep-learning-based methods. The main goal of speckle reduction is to improve the diagnostic utility of OCT images by enhancing image quality and thereby the visual interpretation of anatomical structures. We previously introduced a probabilistic despeckling method for OCT based on non-local means, Tomographic Non-local-means despeckling (TNode), and demonstrated that it efficiently suppresses speckle contrast while preserving tissue structures with dimensions approaching the system resolution. Despite its merits, this method is computationally very expensive: processing a typical retinal OCT volume takes a few hours. A much faster version of TNode with close to real-time performance, while preserving the open-source nature of TNode, could find much greater use in the OCT community. Deep-learning despeckling methods have been proposed in OCT, including variants of conditional generative adversarial networks (cGANs) and convolutional neural networks (CNNs). However, most of these methods have used B-scan compounding as ground truth, which significantly limits the degree of speckle reduction achievable while preserving resolution. In addition, all of these methods have focused on speckle suppression of individual B-scans, and their performance on volumetric tomograms is unclear: three-dimensional manipulations of the processed tomograms (e.g., en face projections) are expected to contain artifacts due to the B-scan-wise processing, disrupting the continuity of tissue structures along the slow-scan axis. Furthermore, speckle suppression based on individual B-scans cannot provide the neural network with information on volumetric structures in the training data and is therefore expected to perform poorly on small structures. Indeed, most deep-learning despeckling works have focused on image-quality metrics that demonstrate strong speckle suppression, rather than on the preservation of contrast and small tissue structures. To overcome these problems, we propose a complete workflow to enable the widespread use of deep-learning speckle suppression in OCT: the ground truth is generated using volumetric TNode despeckling, and the neural network is a new cGAN that receives OCT partial volumes as inputs to exploit three-dimensional structural information for speckle reduction. Because it relies on TNode to generate ground-truth data, this hybrid deep-learning TNode (DL-TNode) framework will be made available to the OCT community to enable easy training and implementation on a multitude of OCT systems without relying on specially acquired training data.
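To make the described workflow concrete, the following PyTorch sketch illustrates the core idea only: raw OCT partial volumes (blocks of adjacent B-scans) are paired with a TNode-despeckled version of the same volume as ground truth, and fed to a small 3D generator. This is not the authors' DL-TNode implementation; the chunk depth, stride, network architecture, volume shapes, and the use of a precomputed TNode volume are placeholder assumptions, and the cGAN discriminator and adversarial losses are omitted.

```python
# Minimal sketch (not the authors' released code) of training-data pairing for
# volumetric despeckling: TNode-despeckled volumes serve as ground truth, and
# the network sees partial volumes so it can use slow-axis context.
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import Dataset


class PartialVolumePairs(Dataset):
    """Yields (speckled, despeckled) partial volumes cut along the slow-scan axis."""

    def __init__(self, raw_volume, tnode_volume, depth=16, stride=8):
        # raw_volume, tnode_volume: (n_slow, n_z, n_x) log-intensity arrays;
        # tnode_volume is assumed to be produced offline by TNode.
        self.raw = raw_volume
        self.gt = tnode_volume
        self.depth = depth      # number of B-scans per partial volume (assumed)
        self.starts = range(0, raw_volume.shape[0] - depth + 1, stride)

    def __len__(self):
        return len(self.starts)

    def __getitem__(self, i):
        s = self.starts[i]
        x = self.raw[s:s + self.depth]
        y = self.gt[s:s + self.depth]
        # add a channel dimension -> (1, depth, n_z, n_x)
        return (torch.from_numpy(x).float().unsqueeze(0),
                torch.from_numpy(y).float().unsqueeze(0))


class Generator3D(nn.Module):
    """Toy 3D residual generator standing in for the cGAN generator."""

    def __init__(self, ch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(ch, 1, 3, padding=1),
        )

    def forward(self, x):
        # predict a residual so the network only has to learn the speckle component
        return x + self.net(x)


if __name__ == "__main__":
    raw = np.random.rand(64, 128, 128).astype(np.float32)  # placeholder volume
    gt = raw.copy()                                         # stand-in for TNode output
    ds = PartialVolumePairs(raw, gt)
    x, y = ds[0]
    g = Generator3D()
    print(g(x.unsqueeze(0)).shape)  # torch.Size([1, 1, 16, 128, 128])
```

The design point this sketch tries to reflect is the one argued in the abstract: because the network processes blocks of adjacent B-scans rather than single B-scans, tissue continuity along the slow-scan axis is available during training, which is what is expected to avoid B-scan-wise artifacts in en face projections.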
© 2023 Society of Photo-Optical Instrumentation Engineers (SPIE).
Bhaskara Rao Chintada, Sebastián Ruiz-Lopera, René Restrepo, Martin Villiger, Brett E. Bouma, and Néstor Uribe-Patarroyo "Practical volumetric speckle reduction in OCT using deep learning", Proc. SPIE 12632, Optical Coherence Imaging Techniques and Imaging in Scattering Media V, 126321F (11 August 2023); https://doi.org/10.1117/12.2670781
KEYWORDS: Optical coherence tomography, Speckle, Deep learning, Tomography, Image enhancement, Image quality, Neural networks