Research Papers: Imaging

Combined image-processing algorithms for improved optical coherence tomography of prostate nerves

Author Affiliations
Shahab Chitchian

University of North Carolina at Charlotte, Department of Physics and Optical Science, Charlotte, North Carolina 28223

Thomas P. Weldon

University of North Carolina at Charlotte, Department of Electrical and Computer Engineering, Charlotte, North Carolina 28223

Michael A. Fiddy

University of North Carolina at Charlotte, Department of Physics and Optical Science, Charlotte, North Carolina 28223

Nathaniel M. Fried

University of North Carolina at Charlotte, Department of Physics and Optical Science, Charlotte, North Carolina 28223 and Johns Hopkins Medical Institutions, Department of Urology, Baltimore, Maryland 21287

J. Biomed. Opt. 15(4), 046014 (August 20, 2010). doi:10.1117/1.3481144
History: Received December 02, 2009; Revised May 21, 2010; Accepted June 21, 2010; Published August 20, 2010; Online August 20, 2010

Open Access

* Address all correspondence to: Shahab Chitchian, University of North Carolina at Charlotte, Department of Physics and Optical Science, Charlotte, NC 28223. Tel:704-687-8152; Fax:704-687-8197; E-mail: schitchi@uncc.edu

Cavernous nerves course along the surface of the prostate gland and are responsible for erectile function. These nerves are at risk of injury during surgical removal of a cancerous prostate gland. In this work, a combination of segmentation, denoising, and edge detection algorithms are applied to time-domain optical coherence tomography (OCT) images of rat prostate to improve identification of cavernous nerves. First, OCT images of the prostate are segmented to differentiate the cavernous nerves from the prostate gland. Then, a locally adaptive denoising algorithm using a dual-tree complex wavelet transform is applied to reduce speckle noise. Finally, edge detection is used to provide deeper imaging of the prostate gland. Combined application of these three algorithms results in improved signal-to-noise ratio, imaging depth, and automatic identification of the cavernous nerves, which may be of direct benefit for use in laparoscopic and robotic nerve-sparing prostate cancer surgery.


Preservation of cavernous nerves during radical prostatectomy for prostate cancer is critical for preserving sexual function after surgery. These nerves are at risk of injury during dissection and removal of a cancerous prostate gland because of the close proximity of the nerves to the prostate surface (Fig. 1). Their microscopic nature also makes it difficult to predict the true course and location of these nerves from one patient to another. These observations may explain in part the wide variability in reported potency rates (9 to 86%) following prostate cancer surgery.1 Therefore, any technology capable of providing improved identification, imaging, and visualization of the cavernous nerves during prostate cancer surgery would aid the preservation of the nerves and improve postoperative sexual potency.

Fig. 1:

(a) Cross sectional diagram of the human prostate showing the location of the neurovascular bundles and their close proximity to the prostate surface. The dotted line indicates the route of dissection between the prostatic capsule and the neurovascular bundle. (b) Image of human prostate during surgery. Arrows indicate the surgical dissection plane, and the dashed line indicates the position of the periprostatic neurovascular bundle under a superficial layer of fascia.

OCT is a noninvasive optical imaging technique that can be used to perform high-resolution, cross sectional in vivo and in situ imaging of microstructures in biological tissues.2 OCT imaging of cavernous nerves in rat and human prostate has recently been demonstrated.3–6 However, improvements in the quality of the OCT images for identification of the cavernous nerves are necessary before clinical use.

For the present work, OCT images were acquired in vivo using a clinical endoscopic OCT system (Imalux, Cleveland, Ohio) based on an all single-mode fiber common-path interferometer-based scanning system (Optiphase, Van Nuys, California). An 8-Fr (2.6-mm-OD) probe was used with the OCT system. The system is capable of acquiring real-time images of 200×200 pixels with 11-μm axial and 25-μm lateral resolutions in tissue.

The following study describes a step-by-step approach that employs three complementary image processing algorithms (Fig. 2) for improving identification and imaging of the cavernous nerves during OCT of the prostate gland. In previous work, a segmentation approach was successfully used to identify the cavernous nerves.7 However, it has proven challenging to image deeper prostate tissues with OCT. Therefore, the segmentation system in the left branch of Fig. 2 is augmented by the denoising and edge detection systems in the right branch of Fig. 2. This edge detection system is later shown to improve OCT imaging of deeper prostate tissue structures.

Fig. 2:

Flow chart describing a step-by-step application of complementary image processing algorithms for OCT of the prostate nerves.

In the left branch of Fig. 2, 2-D OCT images of rat prostate are segmented to differentiate the cavernous nerves from the prostate gland. It should be noted that ultrasound image segmentation of the prostate, which allows clinicians to design an accurate brachytherapy treatment plan for prostate cancer, has been previously reported.8 Various alternative segmentation approaches have also recently been applied in retinal OCT imaging.9–16 However, the large irregular voids in prostate OCT images require a segmentation approach different from that used for the more regular structure of retinal layers. Therefore, to detect the cavernous nerves, three image features are employed: a Gabor filter, a Daubechies wavelet, and a Laws filter. The Gabor feature is applied with different standard deviations in the x and y directions. For the Daubechies wavelet feature, an eight-tap Daubechies orthonormal wavelet is implemented, and the low-pass subband is chosen as the filtered image. Finally, Laws feature extraction is applied to the images. The features are segmented using a nearest-neighbor classifier, and N-ary morphological postprocessing is used to remove small voids.

As a next step to improve OCT imaging of the prostate gland, wavelet denoising is applied. Recently, wavelet techniques have been employed successfully in speckle noise reduction for MRI, ultrasound, and OCT images.17–19 A locally adaptive denoising algorithm is applied before edge detection to reduce speckle noise in OCT images of the prostate.20 The denoising algorithm is illustrated using the dual-tree complex wavelet transform. After wavelet denoising, an edge detection algorithm based on thresholding and spatial first-order differentiation is implemented to provide deeper imaging of the prostate gland. This algorithm addresses one of the main limitations in OCT imaging of the prostate: the inability to image deep into the tissue. Currently, OCT is limited to an imaging depth of approximately 1 mm in most opaque soft tissues. In the following sections, the segmentation approach is described first, followed by details of the denoising and edge detection approaches.

The input image is first processed to form three feature images. The prostate image is then segmented into nerve, prostate, and background classes using a k-nearest neighbors classifier and the three feature images. Finally, N-ary morphology is used for postprocessing. The generation of the feature images is described first, followed by descriptions of the classifier and postprocessing.

Gabor Filter

The first feature image is generated by a Gabor filter with impulse response h(x,y),21

$$h(x,y) = g(x,y)\exp[j2\pi(Ux+Vy)], \tag{1}$$

where

$$g(x,y) = \frac{1}{2\pi\sigma_x\sigma_y}\exp\left[-\frac{1}{2}\left(\frac{x^2}{\sigma_x^2}+\frac{y^2}{\sigma_y^2}\right)\right]. \tag{2}$$

The Gabor function is essentially a bandpass filter centered about frequency (U,V) with bandwidth determined by σx and σy. The Gabor filter is applied with a center frequency of (0.2, 0.2) cycles/pixel and standard deviations of 3 and 6 in the x and y directions, respectively, based on experimental observation of minimum segmentation error.
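As an illustration, the Gabor feature image of Eqs. (1) and (2) can be computed by convolving the image with a sampled complex kernel and taking the magnitude. This is a minimal NumPy/SciPy sketch; the kernel half-size and the use of the magnitude as the feature image are assumptions, not details given in the paper.

```python
import numpy as np
from scipy.signal import fftconvolve

def gabor_kernel(U, V, sigma_x, sigma_y, half_size=15):
    # Complex Gabor impulse response h(x,y) = g(x,y) exp[j 2*pi (Ux + Vy)], Eqs. (1)-(2)
    y, x = np.mgrid[-half_size:half_size + 1, -half_size:half_size + 1]
    g = np.exp(-0.5 * (x**2 / sigma_x**2 + y**2 / sigma_y**2)) / (2 * np.pi * sigma_x * sigma_y)
    return g * np.exp(1j * 2 * np.pi * (U * x + V * y))

def gabor_feature(image, U=0.2, V=0.2, sigma_x=3.0, sigma_y=6.0):
    # Magnitude of the Gabor-filtered image, used here as the feature image
    h = gabor_kernel(U, V, sigma_x, sigma_y)
    return np.abs(fftconvolve(image, h, mode="same"))
```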

Daubechies Wavelet Transform

The second feature is generated by the eight-tap Daubechies orthonormal wavelet transform. The discrete wavelet transform (DWT) converts a signal to its wavelet representation. In a one-level DWT, the image $c_0$ is split into an approximation part $c_1$ and detail parts $d_1^1$, $d_1^2$, and $d_1^3$ for horizontal, vertical, and diagonal orientations, respectively. In a multilevel DWT, each subsequent approximation $c_i$ is split into an approximation $c_{i+1}$ and details $d_{i+1}^1$, $d_{i+1}^2$, and $d_{i+1}^3$. In the present work, the approximation part $c_1$ is chosen as the filtered image for the second feature.
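The one-level subband split described above can be sketched with separable filtering and downsampling. To keep the sketch dependency-free, a two-tap Haar filter is substituted for the paper's eight-tap Daubechies filter; the approximation/detail splitting structure is the same.

```python
import numpy as np

# Haar analysis filters (the paper uses an eight-tap Daubechies filter instead;
# Haar is substituted here only to keep the sketch self-contained).
LO = np.array([1.0, 1.0]) / np.sqrt(2.0)
HI = np.array([1.0, -1.0]) / np.sqrt(2.0)

def _filter_rows(a, f):
    # Filter each row, then downsample by two.
    return np.apply_along_axis(lambda r: np.convolve(r, f, mode="full")[1::2], 1, a)

def dwt2_level(c0):
    # One DWT level: split c0 into approximation c1 and detail subbands d1..d3.
    lo, hi = _filter_rows(c0, LO), _filter_rows(c0, HI)
    c1 = _filter_rows(lo.T, LO).T   # low/low  -> approximation (the second feature)
    d1 = _filter_rows(lo.T, HI).T   # horizontal detail
    d2 = _filter_rows(hi.T, LO).T   # vertical detail
    d3 = _filter_rows(hi.T, HI).T   # diagonal detail
    return c1, (d1, d2, d3)
```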

Laws Filter

The third feature is generated by the Laws feature extraction method. The Laws 2 mask h(x,y)22 is convolved with the image to accentuate its microstructure. The microstructure image m(x,y) is defined as

$$m(x,y) = f(x,y) \ast h(x,y), \tag{3}$$

where

$$h = \frac{1}{12}\begin{pmatrix} 1 & 0 & -1 \\ 2 & 0 & -2 \\ 1 & 0 & -1 \end{pmatrix}. \tag{4}$$

Then, standard deviation computation is performed after the Laws mask filtering to complete the Laws feature extraction.
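Equations (3) and (4) followed by the local standard-deviation step can be sketched as below; the 7×7 standard-deviation window is an assumption, as the paper does not give the window size.

```python
import numpy as np
from scipy.ndimage import convolve, uniform_filter

# Laws 2 mask of Eq. (4)
H = np.array([[1, 0, -1],
              [2, 0, -2],
              [1, 0, -1]]) / 12.0

def laws_feature(f, win=7):
    # Microstructure image m = f * h (Eq. 3), then a local standard deviation
    m = convolve(f, H, mode="nearest")
    mu = uniform_filter(m, size=win)          # local mean E[m]
    mu2 = uniform_filter(m * m, size=win)     # local second moment E[m^2]
    return np.sqrt(np.maximum(mu2 - mu * mu, 0.0))
```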

K-Nearest Neighbor Classifier and Postprocessing

The k-nearest neighbors (k-NN) algorithm classifies an object according to the k closest training samples in the feature space. It is implemented in training, parameter selection, and classification steps, followed by N-ary morphological postprocessing to eliminate small misclassified regions.7
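A minimal sketch of the pixelwise k-NN vote over the three feature values (k = 5 and the Euclidean metric are assumptions; the N-ary morphological postprocessing step is not shown):

```python
import numpy as np

def knn_classify(train_feats, train_labels, pixel_feats, k=5):
    # Label each pixel's 3-element feature vector by a majority vote
    # of its k nearest training samples in feature space.
    out = np.empty(len(pixel_feats), dtype=train_labels.dtype)
    for i, v in enumerate(pixel_feats):
        d = np.sum((train_feats - v) ** 2, axis=1)      # squared Euclidean distances
        nearest = train_labels[np.argsort(d)[:k]]
        vals, counts = np.unique(nearest, return_counts=True)
        out[i] = vals[np.argmax(counts)]                # majority vote
    return out
```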

Wavelet shrinkage is denoising by shrinking (nonlinear soft thresholding) in the wavelet transform domain. The observed image X is modeled as an uncorrupted image S corrupted by multiplicative speckle noise N. On a logarithmic scale, the speckle is converted to additive noise, X = S + N. The wavelet shrinkage denoising algorithm requires the following four-step procedure,20

$$Y = W(X), \quad \lambda = d(Y), \quad Z = D(Y,\lambda), \quad \hat{S} = W^{-1}(Z), \tag{5}$$

where the operator W(·) denotes the wavelet transform, d(·) selects a data-adaptive threshold, D(·,λ) denotes the denoising operator with threshold λ, and W⁻¹(·) denotes the inverse wavelet transform.
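The four-step procedure of Eq. (5) can be sketched generically. The soft-thresholding operator follows the text; the universal threshold and median-based noise estimate used for d(·) below are illustrative stand-ins for the paper's locally adaptive rule.

```python
import numpy as np

def soft(y, lam):
    # Nonlinear soft thresholding: the denoising operator D(y, lambda)
    return np.sign(y) * np.maximum(np.abs(y) - lam, 0.0)

def shrinkage_denoise(X, wavelet, inv_wavelet):
    # Four steps of Eq. (5): transform, choose threshold, shrink, invert.
    Y = wavelet(X)                                   # Y = W(X)
    sigma = np.median(np.abs(Y)) / 0.6745            # robust noise estimate (illustrative)
    lam = sigma * np.sqrt(2.0 * np.log(Y.size))      # lambda = d(Y), universal threshold
    Z = soft(Y, lam)                                 # Z = D(Y, lambda)
    return inv_wavelet(Z)                            # S_hat = W^{-1}(Z)
```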

Two-Dimensional Dual-Tree Complex Wavelet Transform

In the proposed method, the dual-tree complex wavelet transform (CDWT) calculates the complex transform of a signal using two separate DWT decompositions. If the filters in one tree are specifically designed to complement those in the other, one DWT can produce the real coefficients and the other the imaginary coefficients. This redundancy of two provides extra information for analysis at the expense of extra computation.

In the proposed CDWT, wavelet coefficients are calculated from the Farras nearly symmetric wavelet.23

Shrinkage Denoising

Bivariate shrinkage with a local variance estimation algorithm24 is applied for shrinkage denoising. After estimating the signal components of the noisy coefficients in the wavelet domain, the inverse wavelet transform is taken to reconstruct the noise-free image.
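The bivariate shrinkage rule of Sendur and Selesnick24 shrinks each wavelet coefficient jointly with its parent at the next coarser scale. A sketch of the pointwise rule, where sigma_n is the noise standard deviation and sigma a local signal standard deviation, both estimated elsewhere:

```python
import numpy as np

def bivariate_shrink(y1, y2, sigma_n, sigma):
    # Shrink child coefficient y1 using its parent y2 (Sendur-Selesnick rule):
    # w1 = y1 * max(sqrt(y1^2 + y2^2) - sqrt(3) sigma_n^2 / sigma, 0) / sqrt(y1^2 + y2^2)
    r = np.sqrt(y1 ** 2 + y2 ** 2)
    gain = np.maximum(r - np.sqrt(3.0) * sigma_n ** 2 / sigma, 0.0) / np.maximum(r, 1e-12)
    return gain * y1
```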

A block diagram of the edge detection system is shown in Fig. 3. After luminance thresholding of the input image f(x,y), first-order spatial differentiation along two orthogonal directions is performed to produce the differential image g(x,y) with accentuated spatial amplitude changes. Morphological postprocessing is then used to accentuate edges.

Fig. 3:

Edge detection system block diagram.

Luminance Thresholding

In this section, the glandular structure of the prostate is judged present wherever the luminance exceeds the threshold level of the background. The dark centers of the glandular structures, below their boundaries, in the denoised prostate image provide the background threshold level, because the boundaries of these glandular structures can be located at a superficial level.

Orthogonal Gradient Generation

After applying the threshold level to the denoised image f(x,y), spatial first-order differentiation is performed in two orthogonal directions. In the discrete domain, the gradient in each direction is generated by22

$$g_{r,c}(x,y) = f(x,y) \ast h_{r,c}(x,y), \tag{6}$$

where

$$h_r = \frac{1}{4}\begin{pmatrix} 1 & 0 & -1 \\ 2 & 0 & -2 \\ 1 & 0 & -1 \end{pmatrix}, \qquad h_c = \frac{1}{4}\begin{pmatrix} 1 & 2 & 1 \\ 0 & 0 & 0 \\ -1 & -2 & -1 \end{pmatrix} \tag{7}$$

are the row and column impulse response arrays for the 3×3 Sobel orthogonal gradient operator.

The gradient amplitude is approximated by the magnitude combination

$$g(x,y) = |g_r(x,y)| + |g_c(x,y)|. \tag{8}$$
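Equations (6), (7), and (8) amount to two 3×3 convolutions and a magnitude sum; a minimal sketch with SciPy:

```python
import numpy as np
from scipy.ndimage import convolve

# Sobel row/column impulse response arrays of Eq. (7)
HR = np.array([[1, 0, -1], [2, 0, -2], [1, 0, -1]]) / 4.0
HC = np.array([[1, 2, 1], [0, 0, 0], [-1, -2, -1]]) / 4.0

def gradient_magnitude(f):
    # g = |g_r| + |g_c|  (Eq. 8)
    gr = convolve(f, HR, mode="nearest")   # row gradient, Eq. (6)
    gc = convolve(f, HC, mode="nearest")   # column gradient, Eq. (6)
    return np.abs(gr) + np.abs(gc)
```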

Morphological Postprocessing

Morphological postprocessing for accentuating edges proceeds by a closing operation, implemented as dilation followed by erosion.
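A sketch of the closing operation on a binary edge map; the 3×3 structuring element is an assumption.

```python
import numpy as np
from scipy.ndimage import binary_dilation, binary_erosion

def close_edges(edge_map, structure=None):
    # Closing = dilation followed by erosion; bridges small gaps in edge contours.
    s = np.ones((3, 3), bool) if structure is None else structure
    return binary_erosion(binary_dilation(edge_map, structure=s), structure=s)
```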

Figure 2 shows the order of the combined algorithms. The segmentation algorithm was applied to differentiate the cavernous nerves from the prostate gland; this algorithm is independent of the denoising process. However, the edge detection algorithm, which provides deeper imaging of the prostate gland based on thresholding and spatial first-order differentiation, depends on the denoising process; in other words, edges are sensitive to noise. The input image was therefore denoised first, and edge detection was then implemented. With a noisy image, threshold selection becomes a tradeoff between missing valid edges and creating noise-induced false edges.

The algorithms were executed on a Core 2 Duo, 1.86-GHz desktop personal computer. The two branches of Fig. 2 ran as parallel processes: denoising plus edge detection took 8 s, and segmentation took 10 s, so the total time for the combined processing algorithms was 10 s.

The unprocessed time-domain (TD)-OCT images of the cavernous nerves at different orientations (longitudinal, oblique, and cross sectional) along the surface of the rat prostate are shown in Fig. 4. Histologic sections of the cavernous nerves were previously processed for comparison.20

Fig. 4:

OCT images of the rat cavernous nerve: (a) and (b) longitudinal section; (c) and (d) cross section; (e) and (f) oblique section. (a), (c), and (e) show before; and (b), (d), and (f) show after denoising.

Figure 4 shows the images after denoising using the CDWT. The global signal-to-noise ratio (SNR) is calculated as

$$\mathrm{SNR} = 10\log\left[\frac{\max(X_{\mathrm{lin}})^2}{\sigma_{\mathrm{lin}}^2}\right], \tag{9}$$

where X_lin is the 2-D matrix of pixel values in the OCT image and σ²_lin is the noise variance, both on linear intensity scales.25 The mean SNR for nine sample images was 26.65 dB before denoising and 40.87 dB after. Therefore, an SNR increase of approximately 14 dB was attained.
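Equation (9) evaluates directly; a one-line sketch, assuming the image and the noise variance are given on linear intensity scales:

```python
import numpy as np

def snr_db(X_lin, sigma2_lin):
    # Global SNR of Eq. (9): peak intensity squared over noise variance, in dB
    return 10.0 * np.log10(np.max(X_lin) ** 2 / sigma2_lin)
```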

Figure 5 shows the same OCT images as Fig. 4 after segmentation. The cavernous nerves could be differentiated from the prostate gland using the segmentation algorithm. The error rate was calculated as error = (number of error pixels)/(number of total pixels), where the number of error pixels is the number of false positives plus the number of false negatives. The overall error rate for segmentation of the nerves was 0.058 with a standard deviation of 0.019, indicating the robustness of our technique. The error rate was measured as the mean of error measurements for three sample images at different orientations (longitudinal, cross sectional, and oblique); a different image was used for training. The error rate was determined by comparing manually segmented images to the automatically segmented images of the nerves. These manually segmented images of the cavernous nerves were previously created according to histologic correlation with OCT images.20 Figure 5 also combines edge detection of the denoised images with the segmentation results. Manual segmentation of the prostate gland was implemented to calculate the performance of the edge detection algorithm. The overall error rate for segmentation of the prostate gland was 0.076 with a standard deviation of 0.022.
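The error-rate definition above can be computed directly from binary masks; a minimal sketch:

```python
import numpy as np

def segmentation_error(auto_mask, manual_mask):
    # error = (false positives + false negatives) / total pixels
    fp = np.logical_and(auto_mask, ~manual_mask).sum()
    fn = np.logical_and(~auto_mask, manual_mask).sum()
    return (fp + fn) / auto_mask.size
```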

Fig. 5:

OCT images of the rat cavernous nerve: (a) and (b) longitudinal section; (c) and (d) cross section; (e) and (f) oblique section. (a), (c), and (e) show segmented and (b), (d), and (f) show edge detected images.

The proposed edge detection approach was successful in accentuating prostate structures deeper in the tissue, and the cavernous nerves could be differentiated from the prostate gland using the segmentation algorithm. The glandular structure of the prostate could be observed to a depth of approximately 1.6 mm in Fig. 5, in comparison with an approximately 1-mm depth in the unprocessed OCT images in Fig. 4. Overall, the edge detection technique enhanced structures deeper in the prostate gland, and the proposed image segmentation algorithm performed well for identification of the cavernous nerves in the prostate.

It should also be noted that the rat model used in this study represents an idealized version of the prostate anatomy, because the cavernous nerve lies on the surface of the prostate, and is therefore directly visible. However, in human anatomy, there can be an intervening layer of fascia (Fig. 1) between the OCT probe and the nerves, making identification more difficult. Since one major limitation of OCT is its superficial imaging depth in opaque tissues, an important advantage of these image processing algorithms is that the final OCT image should be able to provide deeper imaging in the tissue and locate the cavernous nerve when it lies at various depths beneath periprostatic tissues.

The segmentation technique is applied to differentiate cavernous nerves from the prostate gland in rat prostate. The wavelet shrinkage denoising technique using a dual-tree complex wavelet transform is used for speckle noise reduction, and by using edge detection, deeper imaging of the prostate gland is accomplished. These algorithms for image segmentation, denoising, and edge detection of the prostate may be of direct benefit for implementation in clinical endoscopic OCT systems currently being studied for use in laparoscopic and robotic nerve-sparing prostate cancer surgery.

This research was supported in part by the Department of Defense Prostate Cancer Research Program, grant number PC073709, and the Department of Energy, grant number DE-FG02-06CH11460. The authors thank Paul Amazeen and Nancy Tresser of Imalux Corporation (Cleveland, Ohio) for providing the OCT system used in these studies.

References

1. Burnett A., Aus G., Canby-Hagino E., Cookson M., D'Amico A., Dmochowski R., Eton D., Forman J., Goldenberg S., Hernandez J., Higano C., Kraus S., Liebert M., Moul J., Tangen C., Thrasher J., and Thompson I., "Function outcome reporting after clinically localized prostate cancer treatment," J. Urol. 178, 597–601 (2007).
2. Huang D., Swanson E., Lin C., Schuman J., Stinson W., Chang W., Hee M., Flotte T., Gregory K., Puliafito C., and Fujimoto J., "Optical coherence tomography," Science 254, 1178–1181 (1991).
3. Aron M., Kaouk J., Hegarty N., Colombo J., Haber G., Chung B., Zhou M., and Gill I., "Preliminary experience with the Niris optical coherence tomography system during laparoscopic and robotic prostatectomy," J. Endourol. 21, 814–818 (2007).
4. Fried N., Rais-Bahrami S., Lagoda G., Chuang A., Burnett A., and Su L., "Imaging the cavernous nerves in rat prostate using optical coherence tomography," Lasers Surg. Med. 39, 36–41 (2007).
5. Fried N. M., Rais-Bahrami S., Lagoda G. A., Chuang A.-Y., Su L.-M., and Burnett A. L., "Identification and imaging of the nerves responsible for erectile function in rat prostate, in vivo, using optical nerve stimulation and optical coherence tomography," IEEE J. Sel. Top. Quantum Electron. 13, 1641–1645 (2007).
6. Rais-Bahrami S., Levinson A. W., Fried N. M., Lagoda G. A., Hristov A., Chuang Y., Burnett A. L., and Su L.-M., "Optical coherence tomography of cavernous nerves: a step toward real-time intraoperative imaging during nerve-sparing radical prostatectomy," Urology 72, 198–204 (2008).
7. Chitchian S., Weldon T., and Fried N., "Segmentation of optical coherence tomography images for differentiation of the cavernous nerves from the prostate gland," J. Biomed. Opt. 14(4), 044033 (2009).
8. Noble J. and Boukerroui D., "Ultrasound image segmentation: a survey," IEEE Trans. Med. Imaging 25, 987–1010 (2006).
9. Cabrera Fernández D., Salinas H. M., and Puliafito C. A., "Automated detection of retinal layer structures on optical coherence tomography images," Opt. Express 13, 10200–10216 (2005).
10. Szkulmowski M., Wojtkowski M., Sikorski B., Bajraszewski T., Srinivasan V., Szkulmowska A., Kaluzny J., Fujimoto J., and Kowalczyk A., "Analysis of posterior retinal layers in spectral optical coherence tomography images of the normal retina and retinal pathologies," J. Biomed. Opt. 12(4), 041207 (2007).
11. Haeker M., Sonka M., Kardon R., Shah V., Wu X., and Abramoff M., "Automated segmentation of intraretinal layers from macular optical coherence tomography images," Proc. SPIE 6512, 651214 (2007).
12. Garvin M., Abramoff M., Kardon R., Russell S., Wu X., and Sonka M., "Intraretinal layer segmentation of macular optical coherence tomography images using optimal 3-D graph search," IEEE Trans. Med. Imaging 27, 1495–1505 (2008).
13. Gotzinger E., Pircher M., Geitzenauer W., Ahlers C., Baumann B., Michels S., Schmidt-Erfurth U., and Hitzenberger C., "Retinal pigment epithelium segmentation by polarization sensitive optical coherence tomography," Opt. Express 16, 16410–16422 (2008).
14. Ahlers C., Simader C., Geitzenauer W., Stock G., Stetson P., Dastmalchi S., and Schmidt-Erfurth U., "Automatic segmentation in three-dimensional analysis of fibrovascular pigmentepithelial detachment using high-definition optical coherence tomography," Br. J. Ophthalmol. 92, 197–203 (2008).
15. Fabritius T., Makita S., Miura M., Myllyla R., and Yasuno Y., "Automated segmentation of the macula by optical coherence tomography," Opt. Express 17, 15659–15669 (2009).
16. Mishra A., Wong A., Bizheva K., and Clausi D., "Intra-retinal layer segmentation in optical coherence tomography images," Opt. Express 17, 23719–23728 (2009).
17. Adler D., Ko T., and Fujimoto J., "Speckle reduction in optical coherence tomography images by use of a spatially adaptive wavelet filter," Opt. Lett. 29, 2878–2880 (2004).
18. Pizurica A., Wink A., Vansteenkiste E., Philips W., and Roerdink J., "A review of wavelet denoising in MRI and ultrasound brain imaging," Curr. Med. Imag. Rev. 2, 247–260 (2006).
19. Pizurica A., Jovanov L., Huysmans B., Zlokolica V., Keyser P., Dhaenens F., and Philips W., "Multiresolution denoising for optical coherence tomography: a review and evaluation," Curr. Med. Imag. Rev. 4, 270–284 (2008).
20. Chitchian S., Fiddy M., and Fried N., "Denoising during optical coherence tomography of the prostate nerves via wavelet shrinkage using dual-tree complex wavelet transform," J. Biomed. Opt. 14(1), 014031 (2009).
21. Weldon T., Higgins W., and Dunn D., "Efficient Gabor filter design for texture segmentation," Pattern Recogn. 29, 2005–2015 (1996).
22. Pratt W., Digital Image Processing, Wiley, New York (2007).
23. Abdelnour A. and Selesnick I., "Design of 2-band orthogonal near-symmetric CQF," in Proc. IEEE Int. Conf. Acoust., Speech, Signal Process. (ICASSP), p. 3693 (2001).
24. Sendur L. and Selesnick I., "Bivariate shrinkage with local variance estimation," IEEE Signal Process. Lett. 9, 438–441 (2002).
25. Xiang S., Zhou L., and Schmitt J., "Speckle noise reduction for optical coherence tomography," Proc. SPIE 3196, 79–88 (1998).
© 2010 Society of Photo-Optical Instrumentation Engineers
