Diffuse midline glioma (DMG) is a rare but fatal pediatric brain tumor. Tumor MRI features, extracted from segmented DMG, have shown promise for predicting DMG progression and overall survival. The data and knowledge accumulated from the more common adult brain tumors cannot be directly applied to DMG because of differences in tumor location and appearance. The purpose of this work is to develop a transfer learning-based approach to automatically preprocess and segment sub-regions of DMG from multisequence MRIs. We retrospectively collected T1, contrast-enhanced T1, T2, and T2 FLAIR images of 45 children diagnosed with DMG. MR images at two timepoints were considered: at diagnosis and after completion of radiation therapy (RT). This generated a DMG dataset of 82 cases. Manual segmentations of two labels were created: the enhancing region (ER) and the whole tumor (WT). We modified the SegResNet model developed by NVIDIA and pre-trained it on the BraTS 2021 challenge dataset, which contains 1,251 subjects with adult glioblastoma multiforme. DMG data were automatically preprocessed to the same resolution and format as the input data in the BraTS challenge. A 5-fold cross-validation was performed using the preprocessed DMG data to fine-tune and validate the model. The proposed method resulted in mean Dice scores of 0.831 and 0.840 for the ER and WT segmentations, respectively. The method produced good segmentation results despite the small size of the dataset. We demonstrated that transfer learning from adult brain tumors to rare pediatric brain tumors is feasible and improves segmentation results.
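The fine-tuning strategy can be illustrated with a minimal sketch, assuming MONAI's SegResNet implementation, a hypothetical checkpoint path for BraTS 2021 pre-trained weights, and illustrative hyperparameters rather than the authors' exact configuration:

```python
# Minimal fine-tuning sketch (assumptions: MONAI's SegResNet, a hypothetical
# checkpoint path, and illustrative hyperparameters; not the authors' exact setup).
import torch
from monai.networks.nets import SegResNet
from monai.losses import DiceLoss

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Four input MRI sequences (T1, contrast-enhanced T1, T2, T2 FLAIR),
# two output channels (enhancing region and whole tumor).
model = SegResNet(spatial_dims=3, in_channels=4, out_channels=2, init_filters=16).to(device)

# Load weights pre-trained on the BraTS 2021 adult glioma dataset (hypothetical file).
model.load_state_dict(torch.load("segresnet_brats2021.pt", map_location=device), strict=False)

loss_fn = DiceLoss(sigmoid=True)  # multi-label Dice for ER and WT
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

def finetune_step(images, labels):
    """One fine-tuning step on preprocessed DMG volumes (B, 4, D, H, W) and labels (B, 2, D, H, W)."""
    model.train()
    optimizer.zero_grad()
    loss = loss_fn(model(images.to(device)), labels.to(device))
    loss.backward()
    optimizer.step()
    return loss.item()
```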
Purpose: The purpose of this work was to develop a new method of tracking a laparoscopic ultrasound (LUS) transducer in laparoscopic video by combining the hardware [e.g., electromagnetic (EM)] and the computer vision-based (e.g., ArUco) tracking methods.
Approach: We developed a special tracking mount for the imaging tip of the LUS transducer. The mount incorporated an EM sensor and an ArUco pattern registered to it. The hybrid method used ArUco tracking for ArUco-success frames (i.e., frames where ArUco succeeds in detecting the pattern) and used corrected EM tracking for the ArUco-failure frames. The corrected EM tracking result was obtained by applying correction matrices to the original EM tracking result. The correction matrices were calculated in previous ArUco-success frames by comparing the ArUco result and the original EM tracking result.
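The correction step can be sketched as follows, assuming 4x4 homogeneous transforms that express the transducer pose in camera coordinates; the variable names and per-frame logic are illustrative rather than the exact implementation:

```python
# Illustrative sketch of the correction idea (assumed 4x4 homogeneous transforms
# expressing the transducer pose in camera coordinates; names are hypothetical).
import numpy as np

def update_correction(T_aruco: np.ndarray, T_em: np.ndarray) -> np.ndarray:
    """In an ArUco-success frame, compute the matrix that maps the EM-derived
    pose onto the ArUco-derived pose: T_aruco = C @ T_em."""
    return T_aruco @ np.linalg.inv(T_em)

def corrected_em_pose(C: np.ndarray, T_em: np.ndarray) -> np.ndarray:
    """In an ArUco-failure frame, apply the most recent correction to the EM pose."""
    return C @ T_em

def track(frame_aruco_pose, frame_em_pose, C_prev):
    """Per-frame usage: use ArUco when it succeeds, otherwise the corrected EM pose."""
    if frame_aruco_pose is not None:                  # ArUco-success frame
        C = update_correction(frame_aruco_pose, frame_em_pose)
        return frame_aruco_pose, C
    return corrected_em_pose(C_prev, frame_em_pose), C_prev   # ArUco-failure frame
```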
Results: We performed phantom and animal studies to evaluate the performance of our hybrid tracking method. The corrected EM tracking results showed significant improvements over the original EM tracking results. In the animal study, 59.2% of frames were ArUco-success frames. For the ArUco-failure frames, the mean reprojection errors for the original and corrected EM tracking methods were 30.8 pixels and 10.3 pixels, respectively.
Conclusions: The new hybrid method is more reliable than using ArUco tracking alone and more accurate and practical than using EM tracking alone for tracking the LUS transducer in the laparoscope camera image. The proposed method has the potential to significantly improve tracking performance for LUS-based augmented reality applications.
Surgical instrument segmentation in laparoscopic image sequences can be utilized for a variety of applications during surgical procedures. Recent studies have shown that deep learning-based methods produce competitive results in surgical instrument segmentation. The difficulty, however, lies in the limited number of training datasets containing surgical instruments in laparoscopic image frames. Even though pixelwise training datasets and trained models from the Robotic Instrument Segmentation challenge are publicly available, they cannot be applied directly to laparoscopic image frames from different surgical scenarios without pre- or post-processing, because those scenarios contain different instrument shapes, image backgrounds, and specular reflections; generating new training datasets would otherwise require laborious manual segmentation. In this work, we propose a novel framework for semi-automated training dataset generation for the purpose of robust segmentation using deep learning. To generate training datasets in various surgical scenarios faster and more accurately, we utilize the publicly available trained model from the Robotic Instrument Segmentation challenge and then apply a watershed segmentation-based method. For robust segmentation, we use a two-step approach: first, we obtain a coarse segmentation from a deep convolutional neural network, and then we refine the result via the GrabCut algorithm. Through experiments on four different laparoscopic image sequences, we demonstrate that our proposed framework provides robust segmentation quality.
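A minimal sketch of the GrabCut refinement step, assuming OpenCV's implementation initialized from the coarse CNN mask (the erosion kernel size and mask thresholds are illustrative):

```python
# Sketch of the refinement step (assumption: OpenCV's GrabCut initialized from a
# coarse binary instrument mask produced by the CNN; parameters are illustrative).
import cv2
import numpy as np

def refine_with_grabcut(image_bgr: np.ndarray, coarse_mask: np.ndarray, iters: int = 5) -> np.ndarray:
    """Refine a coarse instrument mask (0/255, uint8) on a BGR laparoscopic frame."""
    gc_mask = np.full(coarse_mask.shape, cv2.GC_PR_BGD, dtype=np.uint8)
    gc_mask[coarse_mask > 0] = cv2.GC_PR_FGD              # probable foreground from CNN
    # Erode so that only confident instrument pixels become definite foreground.
    sure_fg = cv2.erode(coarse_mask, np.ones((9, 9), np.uint8))
    gc_mask[sure_fg > 0] = cv2.GC_FGD

    bgd_model = np.zeros((1, 65), np.float64)
    fgd_model = np.zeros((1, 65), np.float64)
    cv2.grabCut(image_bgr, gc_mask, None, bgd_model, fgd_model, iters, cv2.GC_INIT_WITH_MASK)

    refined = np.where((gc_mask == cv2.GC_FGD) | (gc_mask == cv2.GC_PR_FGD), 255, 0).astype(np.uint8)
    return refined
```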
The purpose of this work was to develop a clinically viable laparoscopic augmented reality (AR) system employing stereoscopic (3-D) vision, laparoscopic ultrasound (LUS), and electromagnetic (EM) tracking to achieve image registration. We investigated clinically feasible solutions for mounting the EM sensors on the 3-D laparoscope and the LUS probe. This led to a solution of integrating an externally attached EM sensor near the imaging tip of the LUS probe, only slightly increasing the overall diameter of the probe. Likewise, a solution for mounting an EM sensor on the handle of the 3-D laparoscope was proposed. The spatial image-to-video registration accuracy of the AR system was measured to be 2.59±0.58 mm and 2.43±0.48 mm for the left- and right-eye channels, respectively. The AR system added 58 ms of latency to stereoscopic visualization. We further performed an animal experiment to demonstrate the use of the system as a visualization approach for laparoscopic procedures. In conclusion, we have developed an integrated, compact, EM tracking-based stereoscopic AR visualization system with potential for clinical use. The system has been demonstrated to achieve clinically acceptable accuracy and latency. This work is a critical step toward clinical translation of AR visualization for laparoscopic procedures.
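Conceptually, EM tracking-based image-to-video registration chains the calibration and tracking transforms before projecting with the camera intrinsics; the sketch below assumes 4x4 homogeneous matrices and hypothetical names, not the system's exact implementation:

```python
# Conceptual sketch of the image-to-video registration chain (all transforms are
# assumed 4x4 homogeneous matrices from offline calibrations; names are hypothetical).
import numpy as np

def project_us_point(p_us, T_us_to_sensor, T_sensor_to_tracker, T_tracker_to_cam, K):
    """Map a point from LUS image coordinates into laparoscope pixel coordinates.

    p_us               : (x, y, 0) point in the ultrasound image plane (mm)
    T_us_to_sensor     : ultrasound calibration (image plane -> EM sensor on the probe)
    T_sensor_to_tracker: EM tracker reading for the probe sensor
    T_tracker_to_cam   : tracker-to-camera transform (from the laparoscope's EM sensor
                         and hand-eye calibration)
    K                  : 3x3 camera intrinsic matrix from camera calibration
    """
    p = np.append(np.asarray(p_us, dtype=float), 1.0)           # homogeneous point
    p_cam = T_tracker_to_cam @ T_sensor_to_tracker @ T_us_to_sensor @ p
    uv = K @ p_cam[:3]
    return uv[:2] / uv[2]                                        # pixel coordinates
```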
Accurate calibration of laparoscopic cameras is essential for enabling many surgical visualization and navigation technologies, such as the ultrasound-augmented visualization system that we have developed for laparoscopic surgery. In addition to accuracy and robustness, there is a practical need for a fast and easy camera calibration method that can be performed on demand in the operating room (OR). Conventional camera calibration methods are not suitable for OR use because they are lengthy and tedious: they require acquisition of multiple images of a target pattern in its entirety to produce satisfactory results. In this work, we evaluated the performance of a single-image camera calibration tool (rdCalib; Percieve3D, Coimbra, Portugal) featuring automatic detection of corner points in the image, whether partial or complete, of a custom target pattern. Intrinsic camera parameters of 5-mm and 10-mm standard Stryker® laparoscopes obtained using rdCalib and using the well-accepted OpenCV camera calibration method were compared. Target registration error (TRE), as a measure of camera calibration accuracy for our optical tracking-based AR system, was also compared between the two calibration methods. Based on our experiments, the single-image camera calibration yields consistent and accurate results (mean TRE = 1.18 ± 0.35 mm for the 5-mm scope and mean TRE = 1.13 ± 0.32 mm for the 10-mm scope), which are comparable to the results obtained using the OpenCV method with 30 images. The new single-image camera calibration method is a promising fit for our augmented reality visualization system for laparoscopic surgery.
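For reference, the OpenCV baseline follows the standard multi-image checkerboard workflow; the sketch below assumes a checkerboard target with illustrative board dimensions and file paths, not the exact pattern used in the study:

```python
# Sketch of the OpenCV baseline calibration (assumption: a standard checkerboard
# target; board dimensions, square size, and image paths are illustrative).
import glob
import cv2
import numpy as np

pattern_size = (9, 6)          # inner corners of the checkerboard (assumed)
square_size_mm = 5.0           # physical square size (assumed)

objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size_mm

obj_points, img_points = [], []
for path in glob.glob("calib_images/*.png"):   # e.g., the 30 laparoscope images
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        corners = cv2.cornerSubPix(gray, corners, (11, 11), (-1, -1),
                                   (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 0.001))
        obj_points.append(objp)
        img_points.append(corners)

rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_points, img_points, gray.shape[::-1], None, None)
print("RMS reprojection error:", rms)
print("Intrinsic matrix:\n", K)
```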
KEYWORDS: Magnetic resonance imaging, 3D image processing, Kidney, Image segmentation, Algorithm development, 3D imaging standards, Hough transforms, Detection and tracking algorithms, Tumors, Binary data
Probe or needle artifact detection in 3D scans gives an approximate location of the inserted tools and is thus crucial for assisting many image-guided procedures. Conventional needle localization algorithms often start with cropped images, where unwanted parts of raw scans are removed either manually or by applying pre-defined masks. In cryoablation, however, the number of probes used, the placement and direction of probe insertion, and the portion of the abdomen scanned differ significantly from case to case, and probes are often adjusted continually during the Probe Placement Phase. These characteristics greatly reduce the practicality of approaches based on image cropping. In this work, we present a fully Automatic Probe Artifact Detection method, APAD, that works directly on uncropped raw MRI images acquired during the Probe Placement Phase of 3-Tesla MRI-guided cryoablation. The key idea of our method is to first locate an initial 2D line strip within a slice of the MR image that approximates the position and direction of the 3D probe bundle, noting that cryoprobes and biopsy needles create a signal-void (black) artifact with a bright cylindrical border in MRI. With the initial 2D line, standard approaches to detecting line structures, such as the 3D Hough transform, can be applied to quickly detect each probe's axis. By comparing against manually labeled probes in 5 patient treatment cases of kidney cryoablation with varying probe placements, we demonstrate that our algorithm combined with standard 3D line detection is an accurate and robust method for detecting probe artifacts.
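A simplified 2D illustration of the line-detection idea, assuming scikit-image's straight-line Hough transform on a single slice (the intensity threshold is illustrative, and the full method extends this to 3D to recover each probe axis):

```python
# Simplified 2D illustration (assumption: scikit-image's straight-line Hough
# transform on one MRI slice; the signal-void threshold is illustrative).
import numpy as np
from skimage.transform import hough_line, hough_line_peaks

def detect_probe_line(slice_2d: np.ndarray, void_threshold: float):
    """Find the dominant line through the dark (signal-void) probe artifact in a slice."""
    # Cryoprobes appear as signal voids, so threshold for dark voxels.
    dark = slice_2d < void_threshold
    h, angles, dists = hough_line(dark)
    _, best_angles, best_dists = hough_line_peaks(h, angles, dists, num_peaks=1)
    return best_angles[0], best_dists[0]   # line in (angle, distance) normal form
```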