High myopia has become a global concern in ophthalmology because of its increasing prevalence. Linear lesions are an important clinical sign of the pathological changes of high myopia. Indocyanine green angiography (ICGA) is considered the "ground truth" for the diagnosis of linear lesions, but it is invasive and may cause adverse reactions such as allergy, dizziness, and even shock in some patients. It is therefore urgent to find a non-invasive imaging modality to replace ICGA for the diagnosis of linear lesions. Multi-color scanning laser (MCSL) imaging is a non-invasive technique that can reveal linear lesions more richly than other non-invasive techniques, such as color fundus imaging and red-free fundus imaging, and even than some invasive ones, such as fundus fluorescein angiography (FFA). To the best of our knowledge, there are no existing studies on linear lesion segmentation in MCSL images. In this paper, we propose a new U-shaped segmentation network with a multi-scale and global context fusion (SGCF) block, named SGCNet, to segment linear lesions in MCSL images. The multi-scale features and global context information extracted by the SGCF block are fused via learnable parameters to obtain richer high-level features. Four-fold cross-validation was adopted to evaluate the performance of the proposed method on 86 MCSL images from 57 high myopia patients. The IoU, Dice, sensitivity, and specificity are 0.494±0.109, 0.654±0.104, 0.676±0.131, and 0.998±0.002, respectively. Experimental results indicate the effectiveness of the proposed network.
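The abstract describes the SGCF block only at a high level. A minimal PyTorch sketch of one plausible realization is shown below; the dilated-convolution branches, dilation rates (1, 2, 4), and softmax-normalized scalar fusion weights are our assumptions for illustration, not the authors' exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SGCFBlock(nn.Module):
    """Hypothetical multi-scale and global context fusion block.

    Multi-scale branches and a global-context branch are combined with
    learnable fusion weights, mirroring the abstract's description of
    fusing multi-scale and global features via learnable parameters.
    """
    def __init__(self, channels: int):
        super().__init__()
        # Multi-scale branches: dilated 3x3 convolutions (padding keeps size).
        self.branches = nn.ModuleList([
            nn.Conv2d(channels, channels, 3, padding=d, dilation=d)
            for d in (1, 2, 4)
        ])
        # Global context branch: global average pooling + 1x1 convolution.
        self.global_conv = nn.Conv2d(channels, channels, 1)
        # One learnable fusion weight per branch (3 scales + 1 global).
        self.fusion = nn.Parameter(torch.ones(4))

    def forward(self, x):
        feats = [branch(x) for branch in self.branches]
        g = self.global_conv(F.adaptive_avg_pool2d(x, 1))  # (B, C, 1, 1)
        feats.append(g.expand_as(x))  # broadcast global context spatially
        w = torch.softmax(self.fusion, dim=0)
        return sum(wi * f for wi, f in zip(w, feats))
```

Normalizing the fusion weights with a softmax keeps the weighted sum on the same scale as the input features regardless of how the weights drift during training.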
Retinal detachment (RD) refers to the separation of the retinal neuroepithelium layer (RNE) from the retinal pigment epithelium (RPE), while retinoschisis (RS) is characterized by splitting of the RNE into multiple layers. RD and RS are the main complications leading to vision loss in high myopia, and optical coherence tomography (OCT) is the main imaging method for observing them. This paper proposes a U-shaped convolutional neural network with a cross-fusion global feature module (CFCNN) to achieve automatic segmentation of retinal detachment and retinoschisis. The main contributions are: (1) a new cross-fusion global feature module (CFGF); (2) residual blocks integrated into the encoder of the U-Net to enhance the extraction of semantic information. The method was tested on a dataset of 540 OCT B-scans. With the proposed CFCNN, the mean Dice similarity coefficients for retinal detachment and retinoschisis segmentation reached 94.33% and 90.29%, respectively, outperforming several existing advanced segmentation networks.
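The second contribution, replacing plain U-Net encoder stages with residual blocks, can be sketched as below. This is an illustrative standard residual block; the abstract does not specify the authors' exact configuration (normalization, activation order, or channel widths).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualEncoderBlock(nn.Module):
    """Two 3x3 convolutions with a skip connection, as could replace a
    plain U-Net encoder stage to strengthen semantic feature extraction."""
    def __init__(self, in_ch: int, out_ch: int):
        super().__init__()
        self.conv1 = nn.Conv2d(in_ch, out_ch, 3, padding=1)
        self.bn1 = nn.BatchNorm2d(out_ch)
        self.conv2 = nn.Conv2d(out_ch, out_ch, 3, padding=1)
        self.bn2 = nn.BatchNorm2d(out_ch)
        # 1x1 projection so the skip path matches the output channel count.
        self.skip = (nn.Conv2d(in_ch, out_ch, 1) if in_ch != out_ch
                     else nn.Identity())

    def forward(self, x):
        h = F.relu(self.bn1(self.conv1(x)))
        h = self.bn2(self.conv2(h))
        return F.relu(h + self.skip(x))
```

The identity (or 1x1-projected) skip path lets gradients bypass the convolutional stack, which typically eases optimization in deeper encoders.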
Pathologic myopia (PM) is a major cause of legal blindness worldwide. Linear lesions are closely related to PM and comprise two types of lesions in the posterior fundus of pathologic eyes visible in optical coherence tomography (OCT) images: retinal pigment epithelium–Bruch's membrane–choriocapillaris complex (RBCC) disruption and myopic stretch line (MSL). In this paper, a fully automated method based on a U-shaped network is proposed to segment RBCC disruption and MSL in retinal OCT images. Compared with the original U-Net, the proposed network has two main improvements: (1) a new downsampling module, the feature aggregation pooling module (FAPM), which aggregates context information and local information; (2) a deep supervision module (DSM), which helps the network converge faster and improves segmentation performance. The proposed method was evaluated via a 3-fold cross-validation strategy on a dataset of 667 2D OCT B-scan images. The mean Dice similarity coefficient, sensitivity, and Jaccard index are 0.626, 0.665, and 0.491 for RBCC disruption and 0.739, 0.814, and 0.626 for MSL, respectively. These preliminary experimental results show the effectiveness of the proposed method.
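Deep supervision, as named in improvement (2), typically attaches auxiliary losses to intermediate decoder outputs. A minimal sketch for binary segmentation follows; the binary cross-entropy criterion and the 0.4 auxiliary weight are assumptions for illustration, not the paper's reported settings.

```python
import torch
import torch.nn.functional as F

def deep_supervision_loss(main_logits, aux_logits_list, target, aux_weight=0.4):
    """Main loss plus down-weighted losses on auxiliary decoder outputs.

    Auxiliary predictions from intermediate decoder stages are bilinearly
    upsampled to the target resolution before their losses are computed,
    so gradients reach deep layers directly and convergence is faster.
    """
    loss = F.binary_cross_entropy_with_logits(main_logits, target)
    for aux in aux_logits_list:
        aux_up = F.interpolate(aux, size=target.shape[-2:],
                               mode="bilinear", align_corners=False)
        loss = loss + aux_weight * F.binary_cross_entropy_with_logits(aux_up, target)
    return loss
```

The auxiliary heads are usually discarded at inference time; only the main output is kept.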
KEYWORDS: Optical coherence tomography, Image segmentation, Retina, Eye, Image fusion, Visualization, Convolution, Ophthalmology, Network architectures
The choroid is an important structure of the eye, and the choroid thickness distribution estimated from optical coherence tomography (OCT) images plays a vital role in the analysis of many retinal diseases. This paper proposes a novel group-wise attention fusion network (GAF-Net) to segment the choroid layer, which works effectively for both normal and pathologic myopia retinas. Currently, most networks process all feature maps in the same layer uniformly, which leads to unsatisfactory choroid segmentation results. To address this, GAF-Net introduces a group-wise channel module (GCM) and a group-wise spatial module (GSM) to fuse group-wise information: the GCM uses channel information to guide the fusion of group-wise context information, while the GSM uses spatial information to guide that fusion. Furthermore, we adopt a joint loss to address data imbalance and the uneven choroid target area. Experimental evaluation on a dataset of 1650 clinically obtained B-scans shows that the proposed GAF-Net achieves a Dice similarity coefficient of 95.21±0.73%.
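A joint loss for imbalanced segmentation commonly combines an overlap term with a pixel-wise term. The NumPy sketch below shows one such combination; the Dice-plus-BCE pairing and the equal weighting (alpha=0.5) are assumptions for illustration, since the abstract does not specify the components of GAF-Net's joint loss.

```python
import numpy as np

def dice_loss(prob, target, eps=1e-6):
    """Soft Dice loss: near zero when prediction and target overlap well.
    Insensitive to class imbalance, since it only scores the foreground."""
    inter = float(np.sum(prob * target))
    return 1.0 - (2.0 * inter + eps) / (float(prob.sum() + target.sum()) + eps)

def bce_loss(prob, target, eps=1e-7):
    """Pixel-wise binary cross-entropy on predicted probabilities."""
    p = np.clip(prob, eps, 1.0 - eps)
    return float(-np.mean(target * np.log(p) + (1 - target) * np.log(1 - p)))

def joint_loss(prob, target, alpha=0.5):
    """Weighted sum: Dice counters class imbalance, BCE gives smooth
    per-pixel gradients. The 0.5 weighting is an assumed default."""
    return alpha * dice_loss(prob, target) + (1 - alpha) * bce_loss(prob, target)
```

In practice the weighting between the two terms is tuned per dataset; heavier Dice weighting helps when the target region (here, the choroid) occupies a small, uneven fraction of the B-scan.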