High myopia has become a worldwide focus of eye-disease research because of its increasing prevalence. Linear lesions are an important clinical sign of the pathological changes in high myopia. Indocyanine green angiography (ICGA) is considered the “ground truth” for diagnosing linear lesions, but it is invasive and may cause adverse reactions such as allergy, dizziness, and even shock in some patients. Therefore, there is an urgent need for a non-invasive imaging modality that can replace ICGA in the diagnosis of linear lesions. Multi-color scanning laser (MCSL) imaging is a non-invasive technique that reveals linear lesions more richly than other non-invasive techniques, such as color fundus imaging and red-free fundus imaging, and than some invasive ones, such as fundus fluorescein angiography (FFA). To the best of our knowledge, no prior studies have addressed linear lesion segmentation in MCSL images. In this paper, we propose a new U-shape segmentation network, named SGCNet, with a multi-scale and global context fusion (SGCF) block to segment linear lesions in MCSL images. The multi-scale features and global context information extracted by the SGCF block are fused with learnable parameters to obtain richer high-level features. Four-fold cross-validation was adopted to evaluate the proposed method on 86 MCSL images from 57 high myopia patients. The IoU, Dice, Sensitivity, and Specificity coefficients are 0.494±0.109, 0.654±0.104, 0.676±0.131, and 0.998±0.002, respectively. The experimental results indicate the effectiveness of the proposed network.
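The four reported metrics follow from the confusion-matrix counts of a binary segmentation. As a generic illustration (not the authors' evaluation code), they can be computed from a predicted mask and a ground-truth mask as follows:

```python
import numpy as np

def segmentation_metrics(pred, gt):
    """Compute IoU, Dice, Sensitivity, and Specificity for binary masks."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()       # lesion pixels correctly found
    fp = np.logical_and(pred, ~gt).sum()      # background marked as lesion
    fn = np.logical_and(~pred, gt).sum()      # lesion pixels missed
    tn = np.logical_and(~pred, ~gt).sum()     # background correctly rejected
    iou = tp / (tp + fp + fn)
    dice = 2 * tp / (2 * tp + fp + fn)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return iou, dice, sensitivity, specificity
```

The very high Specificity (0.998) relative to IoU and Dice is typical when the lesion occupies only a small fraction of the image, since the true-negative count dominates.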
Pathologic myopia (PM) is a major cause of legal blindness worldwide. Linear lesions are closely related to PM and comprise two types of lesions visible in the posterior fundus of pathologic eyes in optical coherence tomography (OCT) images: retinal pigment epithelium–Bruch's membrane–choriocapillaris complex (RBCC) disruption and myopic stretch lines (MSL). In this paper, a fully automated method based on a U-shape network is proposed to segment RBCC disruption and MSL in retinal OCT images. Compared with the original U-Net, the proposed network contains two main improvements: (1) a new downsampling module, the feature aggregation pooling module (FAPM), which aggregates context information and local information; and (2) a deep supervision module (DSM), adopted to help the network converge faster and to improve segmentation performance. The proposed method was evaluated with a 3-fold cross-validation strategy on a dataset of 667 2D OCT B-scan images. The mean Dice similarity coefficient, Sensitivity, and Jaccard index for RBCC disruption and MSL are 0.626, 0.665, 0.491 and 0.739, 0.814, 0.626, respectively. These preliminary experimental results show the effectiveness of the proposed method.
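The core idea behind FAPM (aggregating local and context information during downsampling) can be sketched with plain array operations: downsample each channel with both max pooling, which keeps local detail, and average pooling, which summarizes context, then stack the two results along the channel axis. This is a hypothetical sketch of the idea only; the module's actual design in the paper is not specified here.

```python
import numpy as np

def pool2x2(x, mode):
    """2x2 non-overlapping pooling on a 2-D feature map (H and W even)."""
    h, w = x.shape
    blocks = x.reshape(h // 2, 2, w // 2, 2)
    if mode == "max":
        return blocks.max(axis=(1, 3))
    return blocks.mean(axis=(1, 3))

def feature_aggregation_pool(feats):
    """Hypothetical FAPM-style downsampling for a (C, H, W) feature stack:
    max pooling preserves local detail, average pooling preserves context;
    concatenating both doubles the channel count at half the resolution."""
    local = np.stack([pool2x2(c, "max") for c in feats])
    context = np.stack([pool2x2(c, "mean") for c in feats])
    return np.concatenate([local, context], axis=0)
```

For a 1×4×4 input, the output is 2×2×2: one max-pooled channel and one average-pooled channel at half the spatial resolution.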