Research Papers: General

Automated detection of the choroid boundary within OCT image data using quadratic measure filters

Author Affiliations
Marcus Wagner

Max Planck Institute for Mathematics in the Sciences, Leipzig, Germany

Patrick Scheibe

Leipzig University, Saxonian Incubator for Clinical Translation, Leipzig, Germany

Mike Francke, Beatrice Zimmerling

Leipzig University, Saxonian Incubator for Clinical Translation, Leipzig, Germany

Leipzig University, Paul-Flechsig-Institute of Brain Research, Leipzig, Germany

Katharina Frey

Ernst-Abbe-Hochschule Jena—University of Applied Sciences, Department SciTec, Jena, Germany

Mandy Vogel

Leipzig University, LIFE Leipzig Research Center for Civilization Diseases, Leipzig, Germany

Stephan Luckhaus

Leipzig University, Saxonian Incubator for Clinical Translation, Leipzig, Germany

Leipzig University, Institute of Mathematics, Leipzig, Germany

Peter Wiedemann, Franziska G. Rauscher

Leipzig University Hospital, Department of Ophthalmology, Leipzig, Germany

Wieland Kiess

Leipzig University, LIFE Leipzig Research Center for Civilization Diseases, Leipzig, Germany

University Hospital for Children and Adolescents and Center for Pediatric Research, Department of Women and Child Health, Leipzig University, Leipzig, Germany

Leipzig University, Integrated Research and Treatment Center Adiposity Diseases, Leipzig, Germany

J. Biomed. Opt. 22(2), 025004 (Feb 11, 2017). doi:10.1117/1.JBO.22.2.025004
History: Received September 1, 2016; Accepted January 3, 2017

Open Access

Abstract. A novel method for the automated detection of the outer choroid boundary within spectral-domain optical coherence tomography image data, based on an image model within the space of functions of bounded variation and the application of quadratic measure filters, is presented. The same method is used for the segmentation of retinal layer boundaries and proves to be suitable even for data generated without special imaging modes and with moderate line averaging. Based on the segmentations, an automated determination of the central fovea region and choroidal thickness measurements for this and two adjacent 1-mm regions are provided. The quality of the method is assessed by comparison with manual delineations performed by five trained graders. The study is based on data from 50 children aged 8 to 13 years, obtained in the framework of the LIFE Child study at Leipzig University.


The choroid, a vascular structure between the retina and sclera, is primarily responsible for oxygenation and metabolic activity of the retinal pigment epithelium layer (RPE) and the outer retina. Its main substructures are (i) Bruch's membrane; (ii) the choriocapillaris, a capillary network adjacent to Bruch's membrane; (iii) the choroidal stroma, layers with intermediate and large vessels that drain the choriocapillaris; and (iv) the suprachoroid, a transitional zone between the choroid and sclera (30 μm thick) composed of numerous compactly arranged lamellae of melanocytes, fibroblasts, collagen bundles, and nerve fibers (for more details, see Refs. 1–3). Long-term variations of choroidal thickness occur in the process of infantile ocular development and healthy aging of adults4,5 as well as in connection with retinal diseases, which can affect the choroid.6,7 In the literature, short-term (even diurnal) thickness variations have been reported as well, cf. Ref. 2, pp. 154 ff.; Ref. 8. Differences in choroidal thickness have also been observed in relation to different levels of myopia.9 Consequently, there is considerable interest in accurate measurements of choroidal thickness.

Historically, the standard procedure for choroid imaging was indocyanine green angiography, which allows for two-dimensional (2-D) imaging of the choroid pattern and detection of leakage and vessel wall abnormalities.10 Although optical coherence tomography (OCT) has become a standard tool for noninvasive imaging of the retinal layers since its introduction in 1991, its application to the investigation of the choroid remained largely unsuccessful for more than a decade. In OCT images, the (visual as well as automated) identification of the highly reflective RPE/choroid boundary is easy, but the detection of the outer choroid boundary (OCB) is a much more challenging task for several reasons. First, the position of the choroid posterior to the highly scattering layers of the cellular/subcellular structures of the outer retina causes a relatively weak signal from the region as a whole. Second, the observed reflectivity jumps are weak and strongly affected by noise. Only recent advances in OCT imaging, namely, real-time eye-tracking (allowing for averaging of large numbers of A-scans), enhanced depth imaging (EDI SD-OCT) (as introduced in Ref. 11), swept-source OCT (SS-OCT),12 or the additional use of polarization information13 led to high-quality in vivo visualizations of the choroidal vasculature, providing reliable depth information for choroidal thickness measurements. A third, principal difficulty results from the heterogeneous structure of the suprachoroidal layer, which consists of tightly interlaced components of both low and high reflectivity (melanocytes and collagen bands with fibroblasts, respectively).3 As a consequence, it may be questioned whether the border between the suprachoroid and the collagen bundles of the sclera, which is histologically well defined, is always clearly resolved by OCT imaging. Instead, the positive contrast jump visible in the data, which seems to be related to regions with sufficiently high concentrations of melanocytes, is used for the operational definition of the OCB in the present study (as well as in the vast majority of the literature), thus providing a reliable inner approximation of the histological choroid/sclera border.

The built-in segmentation routines in OCT devices are not suitable for scientific measurements as they are designed mainly for diagnostic purposes and, in general, are poorly documented (if at all). For the automated segmentation of retinal layers in OCT images, in particular for the detection of the RPE/choroid boundary, a large variety of scientifically documented methods is now available. These include denoising by diffusion and subsequent edge detection,14 denoising and simultaneous edge detection by variational methods,15 active contour methods,16 graph-theoretical approaches,17–20 and classification by support vector machines.21 Furthermore, in Ref. 22, a statistical segmentation method that is essentially based on the application of a neural network was proposed. Note that almost all of the aforementioned methods include a preceding denoising step, even when it is not explicitly mentioned.

In contrast to this situation, the majority of studies concerned with the determination of choroidal thickness rely on manual or semi-manual detection of the OCB.4–7,9,12,23–35 Unfortunately, manual delineation procedures are quite time-consuming and show considerable intra- and interobserver variability even when based on EDI SD-OCT or SS-OCT data.28,36,37 Therefore, an automated detection of the OCB is highly desirable but has been realized only in a small number of studies until now, applying successful approaches for retinal layer segmentation to OCB detection. The studies in Refs. 36 and 38–42 pursue graph-theoretical approaches, while Ref. 43 adapted the statistical segmentation technique from its authors' previous work.22

In the present study, we establish a novel method for the automated detection of the OCB within SD-OCT image data. In contrast to the studies mentioned above, our approach is based on an image model within the space of functions of bounded variation (BV) and combines the application of quadratic measure filters44–49 with elementary feature recognition. To the best of the authors' knowledge, the method has not yet been used in the context of medical image processing. It applies to the detection of retinal layer boundaries as well and is particularly suitable for OCT data generated without special imaging modes (such as EDI or SS-OCT) and with moderate line averaging. To demonstrate the capability of the new approach, only such data (acquired without the EDI module, 10 times averaged) were used in the present study. Note that a considerable number of OCT studies rely solely on data of this lower image quality (the presented method applies to data of higher quality just as well, as will be shown in a subsequent publication). Within the B-scans, three layer boundaries were automatically recognized: the inner limiting membrane (ILM) boundary, the RPE/choroid boundary, and the OCB. Based on these segmentations, we provide an automated determination of the central fovea region.50,51 Automated measurements of mean choroidal thickness are given for this and two adjacent 1-mm regions, as defined in more detail in Sec. 2.6. The quality of the automated procedures is assessed by comparison with manual segmentations performed by five trained graders and subsequent thickness derivations. Moreover, we briefly discuss whether the obtained OCB segmentations correspond to the histological structure of the transitional zone between the choroid and sclera, cf. Ref. 2, p. 147; Ref. 3. Our study is based on data from 50 children aged 8 to 13 years, collected in the framework of the LIFE Child study at Leipzig University.

The outline of the paper is as follows. In Sec. 2, we describe material and methods, particularly providing a detailed description of the automated segmentation method. Section 3 is devoted to the presentation and discussion of the results, and Sec. 4 presents our conclusions. For the convenience of the reader, the mathematical background of the presented method is briefly summarized in the Appendix. Detailed information about the OCT data is contained in Table 1.

Table 1. OCT data. Right eye imaged; size of B-scans is 496×512 px; pixel depth is 3.87 μm.
Selection of Probands within the Framework of the LIFE Child Study

The LIFE Child study is a population-based longitudinal cohort study conducted in Leipzig, a city in central Germany with more than 500,000 inhabitants. With ages at recruitment ranging from the 24th week of gestation to 16 years and annual follow-ups, the study combines a cross-sectional and a longitudinal design. Participants come from the city of Leipzig and its immediate surroundings. Recruitment started in 2011 and is planned to continue until 2021. By the end of 2015, more than 3000 children had already participated. Study participants are recruited via advertisement at different institutions such as university hospitals, local clinics, public health centers, kindergartens, schools, and partner study centers. The study program covers the collection of biological samples, examination of body functions, and assessment of skills, traits, and habits (for more details, see Ref. 52). For the present work, OCT data from 50 children (25 male, 25 female) aged 8 to 13 years were randomly selected. The use of data was approved by the institutional review board of the Leipzig Research Center for Civilization Diseases (LIFE).

The LIFE Child study follows the tenets of the Declaration of Helsinki. Its outline was approved by the Ethics Committee of Leipzig University (Reg. No. 264-10-19042010). The study has been registered with the trial number NCT02550236. The parents of the participating children were informed about the study program, the long-term use of data, the potential risks of participation, and the right to withdraw from the study. Written informed consent was obtained from the parents of all individual participants included in the study.

Acquisition of SD-OCT Image Data

Chorio-retinal images were obtained with a commercially available spectral-domain (SD) OCT device (Spectralis HRA + OCT, equipped with camera head serial number 04514 and the software modules Heidelberg Eye Explorer 1.7.0.0, Acquisition Module 5.4.7.0, and Viewing Module 5.4.6.0; Heidelberg Engineering, Heidelberg, Germany). The device uses a superluminescent diode with a central wavelength of 870 nm and acquires 40,000 A-scans/s. The scan depth in tissue is 1.9 mm with 496 pixels per A-scan, resulting in a pixel depth of 3.87 μm, while the axial optical resolution amounts to 7 μm, cf. Ref. 53, p. 273. For each child, volume scans of the macular area with a field size of 20 deg (temporal–nasal) × 20 deg (superior–inferior) were acquired. Each volume scan contains 97 equally spaced averaged B-scans consisting of 496×512 pixels. Additionally, the device utilizes a confocal scanning laser ophthalmoscope at a wavelength of 815 nm to capture an overview image of the retina; points within this overview image serve as a reference for tracking eye movements. This built-in function for real-time eye-tracking was active in order to obtain an average of 10 B-scans per line. The EDI module was not used for the current analysis. For all children, both eyes were measured, but the subsequent analysis was carried out on right eye data only. Corneal radii were measured with the Lenstar instrument (LS900 Biometer, Haag-Streit, Wedel, Germany). The median of three to six corneal biometry measurements was used for calculating the mean anterior corneal radius (included in Table 1) by averaging the steep and flat anterior meridians. Axial length was obtained as the mean of three to six measurements per eye, performed with the above-mentioned Lenstar instrument. The measurement precision for corneal radii and axial length is about 0.01 mm, cf. Ref. 54, p. 13.

For the purpose of segmentation, only the raw OCT data were exported and used.55 Additionally, for convenience, a horizontal strip of 16×512 pixels with zero values was appended to each original B-scan, yielding arrays of 512×512 pixels. Using the results of the ILM and RPE/choroid boundary detection, from every volume scan a B-scan defining the foveal center was singled out (see Sec. 2.5). These scans were further used for choroidal thickness measurements (see Table 1). A typical B-scan (Table 1, line 45) is shown in Fig. 1(a).

Fig. 1

Typical example of automated layer detection. (a) Original SD-OCT data (see Table 1, subject No. 45, B-scan No. 46 with foveal center) visualized as $I^{1/4}$. (b) Automatically detected ILM, RPE, and outer choroid boundaries superimposed on the B-scan as red curves. (c) Normalized gradient field of $(I*\phi_{\varepsilon_{10}})$ visualized as a color orientation plot, cf. Ref. 56, p. 1197. The gradient direction is coded by the color of a pixel; the correspondence between color and orientation can be read from the colored border, which serves as a legend. (d) Boundaries from (b) superimposed on the orientation plot (c) as black curves.

Description and Implementation of the Edge Detection Method

Basic approach. In order to detect a layer boundary within the OCT data $I$, the following five steps were performed: (1) smoothing of $I$ by calculation of $(I*\phi_\varepsilon)$, (2) calculation of discretized edge detectors $d_{k,r}^{(i,j)}$, (3) evaluation of the edge detectors by columnwise local maxima search [testing of the alternative expressed in Eq. (34); see the Appendix], (4) preliminary estimation of the layer position according to its definition, and (5) final designation of the position by calculation of barycenters and cubic spline interpolation. In the following, the individual steps are described in full detail.

Step 1. Convolution. The convolution $(I*\phi_\varepsilon)$ with the scaled Gauss kernel $\phi_\varepsilon(s)=(4\pi\varepsilon^2)^{-1}\cdot\exp\bigl[-|s|^2/(4\varepsilon^2)\bigr]$ is calculated as the numerical solution of the heat equation on the domain $Q\times[0,\varepsilon^2]$ with initial values $I$, cf. Ref. 57, p. 47, Theorem 1. Specifying $Q=[-1,1]^2$, the edge length of a pixel is $1/256\approx 0.003906$. Note that, at the same time, $\phi_\varepsilon$ is the density of the 2-D normal distribution with mean value $o_2$ and standard deviation $\sigma=\sqrt{2}\,\varepsilon$. Consequently, the $3\sigma$ region of $\phi_{\varepsilon_r}$ coincides with the ball centered at the origin $o_2$ with radius $r/256$ for $\varepsilon_r=r/(256\sqrt{18})$, where $r\in\mathbb{N}$.
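Since the heat-equation solution at time $\varepsilon_r^2$ coincides with a Gaussian convolution, the smoothing step can, for illustration, be approximated by a direct Gaussian filter. The following minimal MATLAB sketch makes this assumption (it is not the authors' ADI solver); the file name and the choice $r=10$ are placeholders, and in pixel units the standard deviation follows from $256\cdot\sqrt{2}\,\varepsilon_r=r/3$.

    % Minimal sketch (not the authors' ADI solver): the smoothing (I * phi_{eps_r})
    % is approximated here by a direct Gaussian filter; file name and r are assumptions.
    I = double(imread('bscan_raw.png'));           % assumed 496x512 raw B-scan
    r = 10;                                        % scale index, r in {8,...,12}
    sigma_px = r / 3;                              % sqrt(2)*eps_r converted to pixel units
    hsize = 2*ceil(3*sigma_px) + 1;                % kernel covering the 3-sigma region
    G = fspecial('gaussian', hsize, sigma_px);
    Ismooth = imfilter(I, G, 'replicate', 'conv'); % approximates (I * phi_{eps_r})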

Step 2. Discretized edge detectors. Assuming that $Q=[-1,1]^2$ is subdivided into pixels $Q_{i,j}$, $1\le i\le 512$, $1\le j\le 512$, the test functions $\psi_5^{(i,j)},\,\psi_7^{(i,j)}:Q\to\mathbb{R}$ are specified through

$$\psi_5^{(i,j)}(s)=\begin{cases} s_2-\frac{256-i-2}{256}, & \frac{j-1-256}{256}\le s_1\le\frac{j-256}{256},\ \ \frac{256-i-2}{256}\le s_2\le\frac{256-i+0.5}{256};\\ \frac{256-i+3}{256}-s_2, & \frac{j-1-256}{256}\le s_1\le\frac{j-256}{256},\ \ \frac{256-i+0.5}{256}\le s_2\le\frac{256-i+3}{256};\\ 0, & \text{else}; \end{cases}\qquad 3\le i\le 510,\ 1\le j\le 512, \tag{1}$$

$$\psi_7^{(i,j)}(s)=\begin{cases} s_2-\frac{256-i-3}{256}, & \frac{j-1-256}{256}\le s_1\le\frac{j-256}{256},\ \ \frac{256-i-3}{256}\le s_2\le\frac{256-i+0.5}{256};\\ \frac{256-i+4}{256}-s_2, & \frac{j-1-256}{256}\le s_1\le\frac{j-256}{256},\ \ \frac{256-i+0.5}{256}\le s_2\le\frac{256-i+4}{256};\\ 0, & \text{else}; \end{cases}\qquad 4\le i\le 509,\ 1\le j\le 512. \tag{2}$$
$\psi_5^{(i,j)}$ and $\psi_7^{(i,j)}$ are supported on features $\Omega_5^{(i,j)}$ and $\Omega_7^{(i,j)}$ consisting of five or seven vertically adjacent pixels, respectively. Following the strategy described in the works on quadratic measure filters cited above, six different edge detectors $d_{5,8}$, $d_{5,9}$, $d_{5,10}$, $d_{7,10}$, $d_{7,11}$, and $d_{7,12}$ ("quadratic measure filters") are defined as discretizations of the integrals

$$d_{k,r}^{(i,j)} \approx \int_{\Omega_k^{(i,j)}} \psi_k^{(i,j)}(s)\cdot\varepsilon_r\,\bigl|\nabla(I*\phi_{\varepsilon_r})(s)\bigr|^2\,\mathrm{d}s, \tag{3}$$

with $k\in\{5,7\}$, $r\in\{8,9,10,11,12\}$, and, correspondingly, $\varepsilon_r\in\{0.007366,\,0.008286,\,0.009207,\,0.010128,\,0.011049\}$.
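As an illustration only, one of the detectors of Eq. (3) can be sketched in MATLAB by combining the gradient of the smoothed image with a vertical weighting kernel obtained by sampling $\psi_5$ at the pixel centers; the variable names and the reuse of Ismooth from the previous sketch are assumptions.

    % Minimal sketch of one quadratic measure filter d_{5,r} from Eq. (3), assuming
    % Ismooth approximates (I * phi_{eps_r}) as in the previous sketch.
    r = 10;  eps_r = r / (256*sqrt(18));
    [gx, gy] = gradient(Ismooth, 1/256, 1/256);   % gradient with respect to Q = [-1,1]^2
    ds2 = -gy;                                    % derivative along s_2 (sign depends on storage orientation)
    e  = eps_r * (gx.^2 + gy.^2);                 % eps_r * |grad(I * phi_{eps_r})|^2
    psi5 = [0.5; 1.5; 2.5; 1.5; 0.5] / 256;       % vertical tent of Eq. (1) sampled at the five pixel centres
    d5r = imfilter(e, psi5 * (1/256)^2, 'replicate', 'conv');   % columnwise weighted sums, cf. Eq. (3)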

Steps 3–5. Recognition of the ILM boundary. The vertical column of pixels (A-scan) at position $j$ is denoted by $A(j)=\bigcup_{1\le i\le 512}Q_{i,j}$. For each of the three detectors $d_{5,r}$, all local maxima within the sets $A(j)\cap\{\,s\in Q \mid \partial(I*\phi_{\varepsilon_r})(s)/\partial s_2\ge 0\,\}$ are determined. Their positions are stored within sets $S^+_{5,r}$. Next, every marked pixel $Q_{i,j}$ within $S^+_{5,r}$ is replaced by the vertical three-pixel feature $Q_{i-1,j}\cup Q_{i,j}\cup Q_{i+1,j}$. Then the ILM boundary is preliminarily defined as the topmost connected feature of sufficiently large size within $S^+_{5,8}\cup S^+_{5,9}\cup S^+_{5,10}$, marking a positive contrast jump. Denoting this feature by $F^+\subset Q$, the barycenters over $A(j)\cap F^+$ are calculated for the three edge detectors, and the mean of the three quantities is formed. The final position $\mathrm{ILM}_a(j)$ is obtained by cubic spline interpolation of the refined data.
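A minimal sketch of the columnwise maxima search (again assuming d5r and ds2 from the previous sketch) might look as follows; the selection of the topmost sufficiently large connected feature is only indicated in a comment.

    % Minimal sketch of the columnwise maxima search for the ILM (steps 3-5);
    % findpeaks requires the Signal Processing Toolbox, as in Sec. 2.3.
    S = false(size(d5r));
    for j = 1:size(d5r, 2)
        col = d5r(:, j);
        col(ds2(:, j) < 0) = 0;              % admit only a nonnegative derivative along s_2
        [~, locs] = findpeaks(col);          % local maxima within the A-scan
        S(locs, j) = true;
    end
    S = imdilate(S, ones(3, 1));             % grow marked pixels to vertical 3-px features
    L = bwlabel(S);                          % connected features; the topmost sufficiently
                                             % large one gives the preliminary ILM position F^+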

Steps 3–5. Recognition of the RPE/choroid boundary. For each of the six detectors $d_{k,r}$, the local maxima over the sets $A(j)\cap\{\,s\in Q \mid \partial(I*\phi_{\varepsilon_r})(s)/\partial s_2<0\,\}$ are determined. Their positions are stored within sets $S^-_{k,r}$. Every marked pixel $Q_{i,j}$ within $S^-_{k,r}$ is replaced by a cross-shaped feature $Q_{i-1,j}\cup Q_{i,j-1}\cup Q_{i,j}\cup Q_{i,j+1}\cup Q_{i+1,j}$. Then the RPE/choroid boundary is preliminarily defined as the lowermost connected feature of sufficiently large size within $S^-_{7,12}\cup S^-_{7,11}\cup S^-_{7,10}\cup S^-_{5,10}\cup S^-_{5,9}\cup S^-_{5,8}$, marking a negative contrast jump at the same time. Denoting this feature by $F^-\subset Q$, the barycenters over $A(j)\cap F^-$ are calculated for every edge detector. Forming the mean of the six quantities, a refined boundary position is obtained. Again, the final position $\mathrm{RPE}_a(j)$ is found by cubic spline interpolation of these data.

Steps 3–5. Detection of the OCB. Within the usual visualization of the B-scans, this boundary can be visually recognized as a fairly weak positive contrast jump below the RPE/choroid boundary [see Fig. 1(a)]. One may observe that this jump corresponds to a (possibly not connected) feature within the normalized gradient field of $(I*\phi_\varepsilon)$ where the second component of the gradient vector is nonnegative [see Figs. 1(c) and 1(d)]. Consequently, the maximization of the edge detector is restricted to a set $Q^c$, which is defined as the union of the topmost connected features below the RPE/choroid boundary with

$$\frac{\partial(I*\phi_{\varepsilon_r})(s)}{\partial s_2}\cdot\bigl|\nabla(I*\phi_{\varepsilon_r})(s)\bigr|^{-1}\ \ge\ \eta, \tag{4}$$

where $r=10$ and $\eta=1/10$. For the detector $d_{5,10}$, the local maxima over the sets $A(j)\cap Q^c$ are determined (these are situated below the RPE/choroid boundary by definition), and their positions are stored within a set $S^c_{5,10}$. Again, every marked pixel $Q_{i,j}$ within $S^c_{5,10}$ is replaced by a vertical three-pixel feature $Q_{i-1,j}\cup Q_{i,j}\cup Q_{i+1,j}$. Subsequently, all connected features smaller than 16 px are removed from $S^c_{5,10}$. Thus, a possibly unconnected feature $F^c$, which represents the coarse position of the lower choroid boundary, is obtained [see Fig. 2(a)]. This position is refined in two steps: after calculating the barycenters of $d_{5,10}$ over $A(j)\cap F^c$, the final position $\mathrm{OCB}_a(j)$ is obtained again by cubic spline interpolation.
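Under the same assumptions as before (gx, ds2, and d5r available, and a hypothetical vector rpe_row with the previously detected RPE/choroid boundary row per A-scan), the construction of the search region of inequality (4) and of the coarse feature $F^c$ could be sketched as follows; the restriction to the topmost connected features of the mask is omitted for brevity.

    % Minimal sketch of the OCB search region Q^c and the coarse feature F^c.
    eta = 1/10;
    gnorm = sqrt(gx.^2 + ds2.^2) + eps;          % |grad(I * phi_{eps_r})|; eps avoids division by zero
    mask  = (ds2 ./ gnorm) >= eta;               % inequality (4)
    Sc = false(size(mask));
    for j = 1:size(mask, 2)
        mask(1:round(rpe_row(j)), j) = false;    % keep only pixels below the RPE/choroid boundary
        col = d5r(:, j);                         % (the restriction to the topmost connected
        col(~mask(:, j)) = 0;                    %  features of the mask is omitted for brevity)
        [~, locs] = findpeaks(col);
        Sc(locs, j) = true;
    end
    Sc = imdilate(Sc, ones(3, 1));               % vertical 3-px features
    Fc = bwareaopen(Sc, 16);                     % remove connected features smaller than 16 px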

Remarks about the implementation. The complete edge detection procedure was implemented as a series of MATLAB® tools. It was performed with MATLAB® 8.4.0.150421 (R2014b) and required the Image Processing and Signal Processing toolboxes, as documented in Refs. 58–60. For the numerical solution of the heat equation, after the introduction of a spatial mesh whose nodes are the centers of the pixels $Q_{i,j}$, the ADI method proposed by Peaceman and Rachford was employed, cf. Ref. 61, p. 412 f. Note that the lack of smoothness of the initial values $I$ does not affect the calculations. For the calculation of $d_{k,r}$ and the feature recognition, the imfilter and bwlabel procedures were used, while the maxima search was realized with the aid of the findpeaks procedure. The typical CPU time for the segmentation of a single B-scan amounts to 7.5 s (steps 2 to 5). No particular attempts at tuning were made.
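For illustration, one possible realization of the Peaceman–Rachford ADI scheme for the smoothing step is sketched below; the zero-padded 512×512 input, the homogeneous Dirichlet boundary values, and the number of time steps are simplifying assumptions and need not coincide with the authors' discretization.

    % Minimal sketch of the Peaceman-Rachford ADI scheme for the heat equation,
    % realizing (I * phi_eps) as the solution at time eps^2 (assumptions as stated above).
    n = 512;  h = 1/256;
    r = 10;  eps_r = r / (256*sqrt(18));
    nsteps = 10;  tau = eps_r^2 / nsteps;
    e  = ones(n, 1);
    D2 = spdiags([e, -2*e, e], -1:1, n, n) / h^2;   % 1-D second-difference operator
    A  = speye(n) - (tau/2) * D2;
    B  = speye(n) + (tau/2) * D2;
    u  = I0;                                        % assumed initial values: zero-padded 512x512 B-scan
    for m = 1:nsteps
        ustar = A \ (u * B);                        % half-step: implicit vertically, explicit horizontally
        u     = (B * ustar) / A;                    % half-step: implicit horizontally, explicit vertically
    end
    Ismooth = u;                                    % approximates (I * phi_{eps_r})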

Fig. 2

Typical example of automated layer detection, continued. (a) Three steps of the OCB detection visualized. The pixels satisfying inequality (4) are shown in light gray, the subset $Q^c$ of topmost connected features below the RPE/choroid boundary is shown in dark gray, and the preliminary position $F^c$ of the OCB is shown in black. (b) Plot of the relative positions of ILM and OCB with respect to the straightened RPE boundary line; results of the automated detection. (c) Superposition of the five manual segmentations (black) and the automated segmentation (red) of the ILM, RPE, and OCB boundaries. (d) Superposition of the original SD-OCT data, the five manual segmentations (white), and the automated segmentation (red) of the OCB. Same data as in Fig. 1(a) but visualized as $I^{1/8}$, thus revealing a significant level of speckle noise within the data.

Validation of the Computer-Generated Results

The accuracy of the automated detection of the layer boundary positions was validated by comparison with manual segmentation. To this end, within each of the 50 B-scans from Table 1, the ILM boundary, the RPE/choroid boundary, and the OCB were manually delineated by five trained graders (Anna Xenia Bestehorn, Mike Francke, Anja Schirmer, Sophia Scheibe, and Beatrice Zimmerling) independently of one another, employing the usual visualization $I^{1/4}$ of the OCT data $I$, cf. Ref. 55, p. 11. The graders were allowed to enhance the contrast within the images if necessary. The manually obtained boundary positions are denoted by $\mathrm{ILM}_1(j),\ldots,\mathrm{ILM}_5(j)$, $\mathrm{RPE}_1(j),\ldots,\mathrm{RPE}_5(j)$, and $\mathrm{OCB}_1(j),\ldots,\mathrm{OCB}_5(j)$, respectively. In Sec. 3.1, the comparison between the manually and automatically obtained data is described in detail.

Definition of the Foveal Region within the B-Scans

Within each volume, the segmentation procedure was applied to the B-scans #42 to #57. Subsequently, from each volume the minimal distance $|\mathrm{ILM}_a(j)-\mathrm{RPE}_a(j)|$ between the ILM and RPE/choroid boundaries, compared over the A-scans with numbers $200\le j\le 312$, was extracted together with its position, thus defining the foveal center. The respective scan was singled out for further analysis (see Table 1).
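A minimal sketch of this foveal-center search, assuming that the automated boundary positions of one volume are stored as 97×512 arrays ILMa and RPEa, is given below.

    % Minimal sketch of the foveal-centre search (array layout is an assumption).
    scans = 42:57;  cols = 200:312;
    thick = abs(ILMa(scans, cols) - RPEa(scans, cols));   % retinal thickness in the search window
    [minPerScan, colIdx] = min(thick, [], 2);
    [~, s]  = min(minPerScan);                            % B-scan containing the foveal centre
    fovScan = scans(s);
    fovCol  = cols(colIdx(s));                            % A-scan position of the foveal centre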

In order to define within each of these scans the central fovea region, we rely on a fovea model introduced in Ref. 50. Following the procedure described there, a model function $M(r)$ for the fovea shape was generated from the automatically extracted ILM and RPE boundary data and the position of the foveal center. The extent of the fovea bowl left and right of the center is given by two radii $r_\mathrm{left}$ and $r_\mathrm{right}$ that can be determined from the model $M(r)$ (see Fig. 3). As pointed out in Ref. 51, p. 5, instead of using the highest point of the foveal rim for this purpose, it is more advantageous to solve the equation

$$r_\mathrm{left}\cdot M(r_\mathrm{left}) - \int_0^{r_\mathrm{left}} M(r)\,\mathrm{d}r = p\cdot A_\mathrm{bowl,left}, \tag{5}$$

with a percentage of $p=0.95$, where the (one-sided) foveal bowl area

$$A_\mathrm{bowl,left} = \int_0^{r_\mathrm{rim,left}} M(r)\,\mathrm{d}r \tag{6}$$

is calculated by integrating the model function up to the highest point $r_\mathrm{rim,left}$ of the left foveal rim. Equation (5) was solved by bisection; $r_\mathrm{right}$ was found analogously. The radii $r_\mathrm{left}$ and $r_\mathrm{right}$, which have been derived from the automatically generated boundary positions $\mathrm{ILM}_a$ and $\mathrm{RPE}_a$, were used throughout the whole subsequent analysis.
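A minimal sketch of this bisection, assuming a vectorized function handle M for the fitted fovea model and a known rim position r_rim_left, could read as follows.

    % Minimal sketch of the bisection for r_left in Eqs. (5) and (6); M and
    % r_rim_left are assumed inputs.
    p = 0.95;
    Abowl = integral(M, 0, r_rim_left);                    % Eq. (6)
    g  = @(r) r .* M(r) - integral(M, 0, r) - p * Abowl;   % residual of Eq. (5)
    lo = 0;  hi = r_rim_left;                              % a sign change of g on [lo, hi] is assumed
    for it = 1:60
        mid = (lo + hi) / 2;
        if g(mid) < 0, lo = mid; else, hi = mid; end
    end
    r_left = (lo + hi) / 2;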

Fig. 3

Choroid thickness quantification procedure. The inscribed RPE and OCB layers were automatically extracted. The ILM layer is replaced by the fovea model $M(r)$ left and right of the foveal center, which has been computed as in Ref. 50. Based on the fovea model, the left and right bowl radii were calculated as described in the text. The region between the radii defines an individual central foveal zone for each subject. Additionally, two outer 1-mm regions were defined, thus obtaining three zones in which the mean choroidal thickness was assessed.

Three choroid measurement regions were defined: (1) the central foveal zone, which is the region between $r_\mathrm{left}$ and $r_\mathrm{right}$; (2) the 1-mm region left of the central foveal zone (corresponding to the temporal retina for the right eyes examined); and (3) the 1-mm region right of the foveal zone (referring to the nasal retina). As discussed in detail in Ref. 51, one may observe a strong variability of foveal geometry between individuals, requiring the determination of a measurement region of variable width [thus the use of the standard 1-mm circle, which has been introduced in the context of fundus photography and is widely used in retinal thickness measurements (see Ref. 62, Chapter 18; Ref. 63), would be completely inappropriate]. Outside of the foveal rim, the individual shape variation is far less prominent, allowing for the definition of measurement regions of fixed dimensions. In order to obtain the magnification factor $q$ that is directly used to calculate the pixel width in mm, we employed the formula published in Ref. 64, p. 649,

$$q = 17.455\cdot\bigl[\,300.3\cdot r_\mathrm{corn}^{-1} + 21.76 + f_\mathrm{scan}\,\bigr]^{-1}, \tag{7}$$

instead of the built-in but undocumented routine of the OCT device. Within Eq. (7), $r_\mathrm{corn}$ denotes the mean corneal radius in mm and $f_\mathrm{scan}$ the scan focus in dpt.
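As a hypothetical numerical example, assuming a mean corneal radius of 7.8 mm, a scan focus of 0 dpt, an interpretation of $q$ as mm per degree of scan angle, and the 20-deg/512-A-scan geometry of Sec. 2.2, Eq. (7) yields a lateral pixel width of roughly 11 μm:

    % Hypothetical numerical example for Eq. (7); the input values and the
    % conversion to the lateral pixel width are assumptions.
    r_corn = 7.8;  f_scan = 0;
    q = 17.455 / (300.3 / r_corn + 21.76 + f_scan);   % approx. 0.29 mm per degree
    px_width = q * 20 / 512;                          % approx. 0.0113 mm lateral pixel width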

Definition and Measurement of Choroidal Thickness

The (pointwise) choroidal thickness is understood as the vertical distance from the hyperreflective line of the RPE/Bruch's membrane boundary to the positive contrast jump below this curve, which operationally defines the OCB as mentioned above. Within the three foveal regions defined in Sec. 2.5, the mean choroidal thicknesses are defined as

$$\mathrm{CHL}_a = (1\ \mathrm{mm})^{-1}\int_{r_\mathrm{left}-1\ \mathrm{mm}}^{r_\mathrm{left}} \bigl|\mathrm{RPE}_a(s_1)-\mathrm{OCB}_a(s_1)\bigr|\,\mathrm{d}s_1 \tag{8}$$

(left 1-mm region),

$$\mathrm{CHC}_a = (r_\mathrm{right}-r_\mathrm{left})^{-1}\int_{r_\mathrm{left}}^{r_\mathrm{right}} \bigl|\mathrm{RPE}_a(s_1)-\mathrm{OCB}_a(s_1)\bigr|\,\mathrm{d}s_1 \tag{9}$$

(central region), and

$$\mathrm{CHR}_a = (1\ \mathrm{mm})^{-1}\int_{r_\mathrm{right}}^{r_\mathrm{right}+1\ \mathrm{mm}} \bigl|\mathrm{RPE}_a(s_1)-\mathrm{OCB}_a(s_1)\bigr|\,\mathrm{d}s_1 \tag{10}$$

(right 1-mm region). Numerical values for the above integrals were obtained by interpolating the data smoothly and applying a numerical integration procedure.
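One possible realization of this interpolation and integration for the central region [Eq. (9)] is sketched below; the lateral positions x, the boundary vectors, and the conversion via the axial pixel depth are assumptions.

    % Minimal sketch of the numerical evaluation of Eq. (9); RPEa and OCBa are
    % assumed vectors of boundary positions (in px) per A-scan, x the lateral
    % A-scan positions in mm, and pxDepth the axial pixel depth from Table 1.
    pxDepth = 3.87e-3;                                         % mm per pixel
    xq = linspace(r_left, r_right, 1000);                      % dense grid over the central foveal zone
    thick = abs(spline(x, RPEa, xq) - spline(x, OCBa, xq));    % |RPE_a - OCB_a| in px
    CHCa  = pxDepth * trapz(xq, thick) / (r_right - r_left);   % mean thickness in mm, cf. Eq. (9)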

Histological Examination of the Transitional Zone between Choroid and Sclera

After enucleation, human donor eyes were fixed in 4% paraformaldehyde phosphate-buffered solution (PBS) for more than 24 h. Choroidal and scleral tissues were stored in PBS and prepared for light microscopy. Layers of the choroidal stroma (Sattler's and Haller's layer) were partly removed to obtain a plane view of the inner surface of the outer choroid. Plane-view and cross-sectional images were obtained using a digital reflected-light microscope (VHX-600, Keyence Deutschland GmbH, Neu-Isenburg, Germany). The use of human tissue was approved by the Ethics Committee of Leipzig University.

Results of the Automated ILM, RPE, and OCB Boundary Detection

In Table 2, we display the results of the automated ILM and RPE detection. For every B-scan examined, the pointwise mean and the pointwise standard deviation of the five manual delineations are calculated as $\mathrm{ILM}_m(j)=\sum_{k=1}^{5}\mathrm{ILM}_k(j)/5$, $\mathrm{RPE}_m(j)=\sum_{k=1}^{5}\mathrm{RPE}_k(j)/5$, $\mathrm{ILM}_\sigma(j)=\bigl[\sum_{k=1}^{5}\mathrm{ILM}_k(j)^2/5-\mathrm{ILM}_m(j)^2\bigr]^{1/2}$, and $\mathrm{RPE}_\sigma(j)=\bigl[\sum_{k=1}^{5}\mathrm{RPE}_k(j)^2/5-\mathrm{RPE}_m(j)^2\bigr]^{1/2}$, respectively. With the aid of these quantities, for every B-scan, the averaged standard deviations

$$\overline{\mathrm{ILM}}_\sigma = \sum_{j=17}^{496}\mathrm{ILM}_\sigma(j)/480 \tag{11}$$

and

$$\overline{\mathrm{RPE}}_\sigma = \sum_{j=17}^{496}\mathrm{RPE}_\sigma(j)/480 \tag{12}$$

of the five manual delineations are defined (columns 1 and 4). They will be compared with the mean absolute errors

$$\overline{\mathrm{ILM}}\text{-}|\Delta|_{am} = \sum_{j=17}^{496}\bigl|\mathrm{ILM}_m(j)-\mathrm{ILM}_a(j)\bigr|/480 \tag{13}$$

and

$$\overline{\mathrm{RPE}}\text{-}|\Delta|_{am} = \sum_{j=17}^{496}\bigl|\mathrm{RPE}_m(j)-\mathrm{RPE}_a(j)\bigr|/480 \tag{14}$$

(columns 2 and 5) as well as with the mean errors

$$\overline{\mathrm{ILM}}\text{-}\Delta_{am} = \sum_{j=17}^{496}\bigl[\mathrm{ILM}_m(j)-\mathrm{ILM}_a(j)\bigr]/480 \tag{15}$$

and

$$\overline{\mathrm{RPE}}\text{-}\Delta_{am} = \sum_{j=17}^{496}\bigl[\mathrm{RPE}_m(j)-\mathrm{RPE}_a(j)\bigr]/480 \tag{16}$$

(columns 3 and 6), thus examining within the OCT data in all cases the A-scans with numbers $17\le j\le 496$.
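For one B-scan, the quantities of Eqs. (11), (13), and (15) could be computed along the following lines; the array layout of the manual delineations is an assumption.

    % Minimal sketch of the Table 2 entries for one B-scan; ILMk is assumed to be
    % a 5x512 array of the manual delineations and ILMa a 1x512 automated result.
    j = 17:496;
    ILMm   = mean(ILMk(:, j), 1);                          % pointwise mean of the five graders
    ILMsig = sqrt(mean(ILMk(:, j).^2, 1) - ILMm.^2);       % pointwise standard deviation
    ILM_sigma_bar = mean(ILMsig);                          % Eq. (11)
    ILM_abs_err   = mean(abs(ILMm - ILMa(j)));             % Eq. (13)
    ILM_err       = mean(ILMm - ILMa(j));                  % Eq. (15)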

Table 2. Results of automated ILM and RPE detection. Entries defined in Eqs. (11)–(16).

The results for the OCB are presented in Table 3. Again, for every B-scan examined, the pointwise mean and the pointwise standard deviation of the five manual delineations are calculated as $\mathrm{OCB}_m(j)=\sum_{k=1}^{5}\mathrm{OCB}_k(j)/5$ and $\mathrm{OCB}_\sigma(j)=\bigl[\sum_{k=1}^{5}\mathrm{OCB}_k(j)^2/5-\mathrm{OCB}_m(j)^2\bigr]^{1/2}$. Then for every B-scan the averaged standard deviation

$$\overline{\mathrm{OCB}}_\sigma = \sum_{j=17}^{496}\mathrm{OCB}_\sigma(j)/480 \tag{17}$$

of the five manual segmentations is defined (column 1), which is compared with the mean absolute error

$$\overline{\mathrm{OCB}}\text{-}|\Delta|_{am} = \sum_{j=17}^{496}\bigl|\mathrm{OCB}_m(j)-\mathrm{OCB}_a(j)\bigr|/480 \tag{18}$$

and the mean error

$$\overline{\mathrm{OCB}}\text{-}\Delta_{am} = \sum_{j=17}^{496}\bigl[\mathrm{OCB}_m(j)-\mathrm{OCB}_a(j)\bigr]/480 \tag{19}$$

(columns 2 and 3). The percentage of the A-scans with numbers $17\le j\le 496$ satisfying the inclusion $\min_{1\le k\le 5}\mathrm{OCB}_k(j)\le\mathrm{OCB}_a(j)\le\max_{1\le k\le 5}\mathrm{OCB}_k(j)$ is given in column 4. Further, in columns 5 to 9, the mean absolute errors

$$\overline{\mathrm{OCB}}\text{-}|\Delta|_{ak} = \sum_{j=17}^{496}\bigl|\mathrm{OCB}_k(j)-\mathrm{OCB}_a(j)\bigr|/480,\qquad 1\le k\le 5, \tag{20}$$

for each single manual delineation are tabulated.

Table 3. Results of automated OCB detection. Entries defined in Eqs. (17)–(20).
Results of the Automated Choroidal Thickness Measurements

In Table 4, the choroidal thickness measurements for the central foveal region are presented. For comparison, we determined for every B-scan from the five manual delineations $\mathrm{RPE}_k$ and $\mathrm{OCB}_k$ the thicknesses

$$\mathrm{CHC}_k = (r_\mathrm{right}-r_\mathrm{left})^{-1}\int_{r_\mathrm{left}}^{r_\mathrm{right}} \bigl|\mathrm{RPE}_k(s_1)-\mathrm{OCB}_k(s_1)\bigr|\,\mathrm{d}s_1,\qquad 1\le k\le 5, \tag{21}$$

using the radii $r_\mathrm{left}$ and $r_\mathrm{right}$ obtained from the automated segmentations and the same procedures as mentioned in Sec. 2.5. For every B-scan, the five values $\mathrm{CHC}_k$ obtained from the manual delineations are tabulated in columns 1 to 5. Columns 6 and 7 contain the mean

$$\mathrm{CHC}_m = \sum_{k=1}^{5}\mathrm{CHC}_k/5 \tag{22}$$

and the standard deviation

$$\mathrm{CHC}_\sigma = \Bigl[\sum_{k=1}^{5}\mathrm{CHC}_k^2/5-\mathrm{CHC}_m^2\Bigr]^{1/2} \tag{23}$$

of these five values. The result $\mathrm{CHC}_a$ of the automated measurement and the error

$$\mathrm{CHC}\text{-}\Delta_{am} = \mathrm{CHC}_m-\mathrm{CHC}_a \tag{24}$$

are given in columns 8 and 9. Table 5 contains the choroidal thickness measurements for the left and right 1-mm regions. Analogously to the computations before, we calculated for every B-scan the thicknesses $\mathrm{CHL}_k = (1\ \mathrm{mm})^{-1}\int_{r_\mathrm{left}-1\ \mathrm{mm}}^{r_\mathrm{left}} |\mathrm{RPE}_k(s_1)-\mathrm{OCB}_k(s_1)|\,\mathrm{d}s_1$ (left 1-mm region) and $\mathrm{CHR}_k = (1\ \mathrm{mm})^{-1}\int_{r_\mathrm{right}}^{r_\mathrm{right}+1\ \mathrm{mm}} |\mathrm{RPE}_k(s_1)-\mathrm{OCB}_k(s_1)|\,\mathrm{d}s_1$ (right 1-mm region), $1\le k\le 5$, and tabulate for every B-scan the mean

$$\mathrm{CHL}_m = \sum_{k=1}^{5}\mathrm{CHL}_k/5 \tag{25}$$

and the standard deviation

$$\mathrm{CHL}_\sigma = \Bigl[\sum_{k=1}^{5}\mathrm{CHL}_k^2/5-\mathrm{CHL}_m^2\Bigr]^{1/2} \tag{26}$$

of the manually generated data, the automated measurement $\mathrm{CHL}_a$, and the error

$$\mathrm{CHL}\text{-}\Delta_{am} = \mathrm{CHL}_m-\mathrm{CHL}_a \tag{27}$$

for the left 1-mm region (columns 1 to 4) as well as the respective values

$$\mathrm{CHR}_m = \sum_{k=1}^{5}\mathrm{CHR}_k/5$$