Fluorescence-guided surgery systems employed during oral cancer resection help detect the lateral margin yet fail to quantify the deep margins of the tumor prior to resection. Without comprehensive quantification of three-dimensional tumor margins, complete resection remains challenging. While intraoperative techniques to assess the deep margin exist, they are limited in precision, leaving an unmet need for a system that can quantify depth. Our group is developing a deep learning (DL)-enabled fluorescence spatial frequency domain imaging (SFDI) system to address this limitation. The SFDI system captures fluorescence (F) and reflectance (R) images that contain information on tissue optical properties (OP) and depth sensitivity across spatial frequencies. Coupling DL with SFDI imaging allows for the near-real-time construction of depth and concentration maps. Here, we compare three DL architectures that use SFDI images as inputs: i) F+OP, where OP (absorption and scattering) are obtained analytically from reflectance images; ii) F+R; iii) F/R. Training the three models required 10,000 tumor samples; synthetic tumors derived from composite spherical harmonics circumvented the need for patient data. The synthetic tumors were passed to a diffusion-theory light propagation model to generate a dataset of artificial SFDI images for DL training. Two oral cancer models derived from MRI of patient tongue tumors are used to evaluate DL performance in: i) in silico SFDI images and ii) optical phantoms. These studies evaluate how system performance is affected by the SFDI input data and DL architectures. Future studies are required to assess system performance in vivo.
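As a concrete illustration of the three input configurations, the following minimal PyTorch sketch assembles hypothetical F, R, and OP image stacks and passes each through a toy encoder that emits depth and concentration maps. The network, tensor shapes, and variable names are illustrative assumptions, not the architectures compared in this work.

```python
import torch
import torch.nn as nn

class DepthConcentrationNet(nn.Module):
    """Toy encoder standing in for the DL architectures under comparison."""
    def __init__(self, in_channels):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),  # ch 0: depth, ch 1: concentration
        )

    def forward(self, x):
        return self.net(x)

# Hypothetical image stacks at n_freq spatial frequencies: fluorescence (F),
# reflectance (R), and analytically derived optical properties (OP).
n_freq, H, W = 5, 64, 64
F = torch.rand(1, n_freq, H, W)
R = torch.rand(1, n_freq, H, W) + 1e-3          # avoid division by zero in F/R
OP = torch.rand(1, 2, H, W)                     # [mu_a, mu_s']

inputs = {
    "F+OP": torch.cat([F, OP], dim=1),  # i) fluorescence + optical properties
    "F+R": torch.cat([F, R], dim=1),    # ii) fluorescence + raw reflectance
    "F/R": F / R,                       # iii) fluorescence-to-reflectance ratio
}
for name, x in inputs.items():
    depth_map, conc_map = DepthConcentrationNet(x.shape[1])(x).unbind(dim=1)
    print(name, depth_map.shape, conc_map.shape)
```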
Current intraoperative methods to assess tumor invasion depth in mucosal oral cancer provide limited real-time information. The advent of targeted fluorescence contrast agents for head and neck cancer is a promising innovation, but surgical imaging systems typically provide only two-dimensional views. Here, we investigate the use of an image-guided fluorescence tomography (igFT) system to estimate the depth of tumor invasion in tissue-simulating oral cancer phantoms. Implementation of non-contact diffuse optical tomography using finite-element software (NIRFAST) is enabled with geometric data from intraoperative cone-beam CT (CBCT) imaging and surgical navigation. The tissue phantoms used gelatin for the background (5% for fat, 10% for muscle) and 2% agar for palpable, tumor-like inclusions. Standard agents were used for absorption (hemoglobin), scattering (Intralipid), fluorescence (indocyanine green), and CT contrast (iohexol). The agar inclusions were formed using 3D printed molds, and positioned at the surface of the gelatin background to mimic mucosal tumor invasion (an “iceberg” model). Simulations and phantom experiments characterize fluorescence tomography performance across a range of tumor invasion depths. To aid surgical visualization, the fluorescence volume is converted to a colored surface indicating tumor depth, and overlaid on the navigated endoscopic video. Clinical studies are necessary to assess in vivo performance and intraoperative workflow.
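As a rough illustration of the depth-coloring step, the sketch below reduces a reconstructed fluorescence volume to a per-pixel invasion-depth map suitable for colormapping onto the mucosal surface. The threshold, voxel spacing, and "iceberg" geometry (tumor flush with the surface at z = 0) are assumptions for illustration only.

```python
import numpy as np

def invasion_depth_map(fluor_vol, voxel_dz_mm=0.5, threshold=0.5):
    """fluor_vol: (nz, ny, nx) reconstructed fluorescence yield with z = 0 at
    the mucosal surface. Returns an (ny, nx) map of the deepest voxel (mm)
    exceeding `threshold` of the maximum; NaN where no fluorescence is found."""
    mask = fluor_vol >= threshold * fluor_vol.max()
    z_idx = np.arange(fluor_vol.shape[0])[:, None, None]
    deepest = np.where(mask, z_idx, -1).max(axis=0)   # deepest slice per (y, x)
    depth_mm = deepest.astype(float) * voxel_dz_mm
    depth_mm[deepest < 0] = np.nan
    return depth_mm

# Synthetic half-buried inclusion standing in for the agar "iceberg".
zz, yy, xx = np.mgrid[0:20, 0:40, 0:40]
vol = np.exp(-(zz**2 + (yy - 20)**2 + (xx - 20)**2) / 50.0)
depth = invasion_depth_map(vol)
print("max invasion depth: %.1f mm" % np.nanmax(depth))
```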
Intraoperative visualization of molecular processes delineated by fluorescence contrast agents offers the potential for increased surgical precision and better patient outcomes. To fully exploit the clinical potential of targeted fluorescence guidance, there is a growing need to develop high-resolution, quantitative imaging systems suitable for surgical use. Diffuse optical fluorescence tomography (DOFT) systems in pre-clinical and diagnostic imaging applications have demonstrated improvements in fluorescence quantification with the addition of a priori data from structural imaging modalities (e.g., MR, CT). Here, we investigate the use of a cone-beam CT (CBCT) surgical guidance system to generate spatial priors for intraoperative DOFT. Imaging and localization data are incorporated directly into a finite element method DOFT implementation (NIRFAST) at multiple stages. First, CBCT data from an intraoperative flat-panel C-arm is used to generate tetrahedral meshes. Second, optical tracking of laser and camera devices enables an adaptable non-contact DOFT approach to accommodate various anatomical sites and acquisition geometries. Finally, anatomical segmentations from CBCT are included in the optical reconstruction process using Laplacian-type regularization (“soft spatial priors”). Calibration results showed that light rays between the tissue surface and navigated optical devices were mapped with sub-millimeter accuracy. Liquid phantom experiments determined the improvements in quantification of fluorescence yield, with errors of 85% and <20% for no priors and spatial priors, respectively. CBCT-DOFT fusion in a VX2-tumor rabbit model delineated contrast enhancement using a dual CT/optical liposomal nanoparticle. These developments motivate future translation and evaluation in an ongoing CBCT-guided head and neck surgery patient study.
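For readers unfamiliar with soft spatial priors, the sketch below builds one common form of the Laplacian regularization matrix from an anatomical segmentation: nodes sharing a CBCT-derived label are penalized toward their region mean. This is a generic centering formulation for illustration; the exact matrix used by NIRFAST may differ.

```python
import numpy as np

def laplacian_soft_priors(labels):
    """labels: (n_nodes,) integer anatomical label per mesh node. Returns L
    with L[i, j] = delta_ij - 1/n_r for nodes i, j sharing region r, so that
    L @ x subtracts each region's mean (zero on region-wise constant images)."""
    L = np.eye(len(labels))
    for r in np.unique(labels):
        idx = np.flatnonzero(labels == r)
        L[np.ix_(idx, idx)] -= 1.0 / len(idx)
    return L

labels = np.array([0, 0, 0, 1, 1, 2])          # e.g., muscle, tumor, node
x = np.array([1.0, 1.1, 0.9, 5.0, 5.2, 2.0])   # near-constant within regions
print(np.round(laplacian_soft_priors(labels) @ x, 3))
```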
A freehand, non-contact diffuse optical tomography (DOT) system has been developed for multimodal imaging with
intraoperative cone-beam CT (CBCT) during minimally-invasive cancer surgery. The DOT system is configured for
near-infrared fluorescence imaging with indocyanine green (ICG) using a collimated 780 nm laser diode and a near-infrared
CCD camera (PCO Pixelfly USB). Depending on the intended surgical application, the camera is coupled to
either a rigid 10 mm diameter endoscope (Karl Storz) or a 25 mm focal length lens (Edmund Optics). A prototype flat-panel
CBCT C-Arm (Siemens Healthcare) acquires low-dose 3D images with sub-mm spatial resolution. A 3D mesh is
extracted from CBCT for finite-element DOT implementation in NIRFAST (Dartmouth College), with the capability for
soft/hard imaging priors (e.g., segmented lymph nodes). A stereoscopic optical camera (NDI Polaris) provides real-time
6D localization of reflective spheres mounted to the laser and camera. Camera calibration combined with tracking data is
used to estimate intrinsic (focal length, principal point, non-linear distortion) and extrinsic (translation, rotation) lens
parameters. Source/detector boundary data is computed from the tracked laser/camera positions using radiometry
models. Target registration errors (TRE) between real and projected boundary points are ~1-2 mm for typical acquisition
geometries. Pre-clinical studies using tissue phantoms are presented to characterize 3D imaging performance. This
translational research system is under investigation for clinical applications in head-and-neck surgery including oral
cavity tumour resection, lymph node mapping, and free-flap perforator assessment.
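A minimal sketch of the TRE check described above, using OpenCV's pinhole projection: known 3D boundary points are projected through calibrated intrinsic/extrinsic parameters and compared with their observed image positions. All numerical values (intrinsics, poses, points, noise) are fabricated for illustration.

```python
import cv2
import numpy as np

# Hypothetical intrinsics: focal lengths and principal point, in pixels.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                     # assume distortion already corrected

# Hypothetical extrinsics from the tracker (world -> camera).
rvec = np.array([0.01, -0.02, 0.005])  # Rodrigues rotation vector
tvec = np.array([0.0, 0.0, 300.0])     # mm

# Known 3D boundary points on the tissue surface (mm, world frame).
pts3d = np.array([[10.0, 5.0, 0.0],
                  [-15.0, 8.0, 2.0],
                  [0.0, -12.0, 1.0]])

proj, _ = cv2.projectPoints(pts3d, rvec, tvec, K, dist)
proj = proj.reshape(-1, 2)

# Observed 2D positions; here the projections plus synthetic pixel noise.
observed = proj + np.random.normal(scale=1.0, size=proj.shape)

tre_px = np.linalg.norm(observed - proj, axis=1)
tre_mm = tre_px * tvec[2] / K[0, 0]    # crude pixel-to-mm scaling at target depth
print("per-point TRE (mm):", np.round(tre_mm, 2))
```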
A prototype mobile C-arm for cone-beam CT (CBCT) has been translated to a prospective clinical trial in head and neck
surgery. The flat-panel CBCT C-arm was developed in collaboration with Siemens Healthcare, and demonstrates both
sub-mm spatial resolution and soft-tissue visibility at low radiation dose (e.g., <1/5th of a typical diagnostic head CT).
CBCT images are available ~15 seconds after scan completion (~1 min acquisition) and reviewed at bedside using
custom 3D visualization software based on the open-source Image-Guided Surgery Toolkit (IGSTK). The CBCT C-arm
has been successfully deployed in 15 head and neck cases and streamlined into the surgical environment using human
factors engineering methods and expert feedback from surgeons, nurses, and anesthetists. Intraoperative imaging is
implemented in a manner that maintains operating field sterility, reduces image artifacts (e.g., carbon fiber OR table) and
minimizes radiation exposure. Image reviews conducted with surgical staff indicate bony detail and soft-tissue
visualization sufficient for intraoperative guidance, with additional artifact management (e.g., metal, scatter) promising
further improvements. Clinical trial deployment suggests a role for intraoperative CBCT in guiding complex head and
neck surgical tasks, including planning mandible and maxilla resection margins, guiding subcranial and endonasal
approaches to skull base tumours, and verifying maxillofacial reconstruction alignment. Ongoing translational research
into complementary image-guidance subsystems includes novel methods for real-time tool tracking, fusion of endoscopic
video and CBCT, and deformable registration of preoperative volumes and planning contours with intraoperative CBCT.
Esthetic appearance is one of the most important factors in reconstructive surgery. Current practice in maxillary reconstruction uses radial forearm, fibula, or iliac crest osteocutaneous free flaps to recreate the complex three-dimensional structures of the palate and maxilla. However, these bone flaps lack shape similarity to the palate and yield less satisfactory esthetic results. Considering shape similarity and vasculature advantages, reconstructive surgeons have recently explored the use of scapular tip myo-osseous free flaps to restore the excised site. We have developed a new method that quantitatively evaluates the morphological similarity of the scapular tip bone and the palate based on diagnostic volumetric computed tomography (CT) images. This quantitative result is further interpreted as a color map rendered on the surface of a three-dimensional computer model. For surgical planning, this color interpretation could potentially assist the surgeon in orienting the bone flap for the best fit to the reconstruction site. With approval from the Research Ethics Board (REB) of the University Health Network, we conducted a retrospective analysis of CT images obtained from 10 patients. Each patient had CT scans covering the maxilla and chest acquired on the same day. Based on this image set, we simulated total, subtotal, and hemi-palate reconstructions. The simulation procedure included volume segmentation, conversion of the segmented volume to a stereolithography (STL) model, manual registration, and computation of minimum geometric distances and curvatures between STL models. Across the 10 patients, we found an overall root-mean-square (RMS) conformance of 3.71 ± 0.16 mm.
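The conformance metric lends itself to a compact nearest-neighbor formulation. The sketch below computes the RMS of minimum point-to-point distances between two registered, point-sampled surfaces; synthetic point clouds stand in for the scapular tip and palate STL models, whose loading and sampling are omitted.

```python
import numpy as np
from scipy.spatial import cKDTree

def rms_conformance(moving_pts, fixed_pts):
    """RMS of nearest-neighbor distances from `moving_pts` (e.g., vertices
    of the registered scapular tip STL) to `fixed_pts` (palate STL)."""
    d, _ = cKDTree(fixed_pts).query(moving_pts)
    return np.sqrt(np.mean(d ** 2))

rng = np.random.default_rng(0)
palate = rng.normal(size=(2000, 3)) * [30.0, 20.0, 5.0]       # toy surface, mm
scapula = palate + rng.normal(scale=2.0, size=palate.shape)   # imperfect fit
print("RMS conformance: %.2f mm" % rms_conformance(scapula, palate))
```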
Methods for accurate registration and fusion of intraoperative cone-beam CT (CBCT) with endoscopic video have been
developed and integrated into a system for surgical guidance that accounts for intraoperative anatomical deformation and
tissue excision. The system is based on a prototype mobile C-Arm for intraoperative CBCT that provides low-dose 3D
image updates on demand with sub-mm spatial resolution and soft-tissue visibility, and also incorporates subsystems for
real-time tracking and navigation, video endoscopy, deformable image registration of preoperative images and surgical
plans, and 3D visualization software. The position and pose of the endoscope are geometrically registered to 3D CBCT
images by way of real-time optical tracking (NDI Polaris) for rigid endoscopes (e.g., head and neck surgery), and
electromagnetic tracking (NDI Aurora) for flexible endoscopes (e.g., bronchoscopes, colonoscopes). The intrinsic (focal
length, principal point, non-linear distortion) and extrinsic (translation, rotation) parameters of the endoscopic camera
are calibrated from images of a planar calibration checkerboard (2.5 × 2.5 mm² squares) obtained at different
perspectives. Video-CBCT registration enables a variety of 3D visualization options (e.g., oblique CBCT slices at the
endoscope tip, augmentation of video with CBCT images and planning data, virtual reality representations of CBCT
[surface renderings]), which can reveal anatomical structures not directly visible in the endoscopic view - e.g., critical
structures obscured by blood or behind the visible anatomical surface. Video-CBCT fusion is evaluated in pre-clinical
sinus and skull base surgical experiments, and is currently being incorporated into an ongoing prospective clinical trial in
CBCT-guided head and neck surgery.
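A hedged sketch of the checkerboard calibration step, following OpenCV's standard Zhang-style planar pipeline: corners are detected and refined to sub-pixel accuracy in each view, then intrinsic and extrinsic parameters are estimated jointly. The 2.5 mm square size is taken from the text; the file paths and pattern dimensions are assumptions.

```python
import glob

import cv2
import numpy as np

SQUARE_MM = 2.5                       # square size from the text
PATTERN = (9, 6)                      # inner corners per row/column (assumed)

# Planar 3D corner coordinates on the z = 0 checkerboard plane.
obj = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
obj[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

obj_pts, img_pts, size = [], [], None
for path in glob.glob("calib_frames/*.png"):  # hypothetical endoscope stills
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, PATTERN)
    if found:
        # Refine corner locations to sub-pixel accuracy.
        corners = cv2.cornerSubPix(
            gray, corners, (5, 5), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_pts.append(obj)
        img_pts.append(corners)
        size = gray.shape[::-1]

if not obj_pts:
    raise SystemExit("no usable calibration frames found")

# Intrinsics (focal length, principal point, distortion) and one extrinsic
# pose (rotation, translation) per calibration view.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size, None, None)
print("reprojection RMS (px): %.3f" % rms)
```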
High-quality intraoperative 3D imaging systems such as cone-beam CT (CBCT) hold considerable promise for image-guided
surgical procedures in the head and neck. With a large amount of preoperative imaging and planning information
available in addition to the intraoperative images, it becomes desirable to be able to integrate all sources of imaging
information within the same anatomical frame of reference using deformable image registration. Fast intensity-based
algorithms are available which can perform deformable image registration within a period of time short enough for
intraoperative use. However, CBCT images often contain voxel intensity inaccuracies that can hinder registration
accuracy, for example due to x-ray scatter, truncation, and/or erroneous scaling normalization within the 3D
reconstruction algorithm. In this work, we present a method of integrating an iterative intensity matching step within the
operation of a multi-scale Demons registration algorithm. Registration accuracy was evaluated in a cadaver model and
showed that a conventional Demons implementation (with either no intensity match or a single histogram match)
introduced anatomical distortion and degradation in target registration error (TRE). The iterative intensity matching
procedure, on the other hand, provided robust registration across a broad range of intensity inaccuracies.
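The sketch below illustrates the interleaving idea with stock SimpleITK filters: a histogram-matching step re-estimates the intensity mapping before each short Demons run, so residual intensity inaccuracy is not absorbed as spurious deformation. This is a simplified stand-in, not the paper's multi-scale implementation, and a production version would compose the incremental fields rather than repeatedly resampling.

```python
import SimpleITK as sitk

def match_intensities(moving, fixed):
    """Map the moving image's intensities onto the fixed image's histogram."""
    f = sitk.HistogramMatchingImageFilter()
    f.SetNumberOfHistogramLevels(128)
    f.SetNumberOfMatchPoints(16)
    f.ThresholdAtMeanIntensityOn()
    return f.Execute(moving, fixed)

def demons_with_iterative_matching(fixed, moving, outer_iters=5):
    demons = sitk.DemonsRegistrationFilter()
    demons.SetNumberOfIterations(20)       # short inner run per outer pass
    demons.SetStandardDeviations(1.5)      # displacement-field smoothing
    warped = moving
    for _ in range(outer_iters):
        matched = match_intensities(warped, fixed)   # re-match every pass
        field = demons.Execute(fixed, matched)
        tx = sitk.DisplacementFieldTransform(field)  # consumes `field`
        # NOTE: repeated resampling accumulates interpolation error; a real
        # implementation would compose the incremental fields instead.
        warped = sitk.Resample(matched, fixed, tx, sitk.sitkLinear, 0.0)
    return warped

# Toy volumes standing in for intraoperative CBCT (fixed) and a shifted
# preoperative CT (moving).
fixed = sitk.GaussianSource(sitk.sitkFloat32, size=[64] * 3,
                            sigma=[12.0] * 3, mean=[32.0] * 3)
moving = sitk.GaussianSource(sitk.sitkFloat32, size=[64] * 3,
                             sigma=[12.0] * 3, mean=[36.0, 32.0, 32.0])
aligned = demons_with_iterative_matching(fixed, moving)
print(aligned.GetSize())
```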
A system for intraoperative cone-beam CT (CBCT) surgical guidance is under development and translation to trials in
head and neck surgery. The system provides 3D image updates on demand with sub-millimeter spatial resolution and
soft-tissue visibility at low radiation dose, thus overcoming conventional limitations associated with preoperative
imaging alone. A prototype mobile C-arm provides the imaging platform, which has been integrated with several novel
subsystems for streamlined implementation in the OR, including: real-time tracking of surgical instruments and
endoscopy (with automatic registration of image and world reference frames); fast 3D deformable image registration (a
newly developed multi-scale Demons algorithm); 3D planning and definition of target and normal structures; and
registration / visualization of intraoperative CBCT with the surgical plan, preoperative images, and endoscopic video.
Quantitative evaluation of surgical performance demonstrates a significant advantage in achieving complete tumor
excision in challenging sinus and skull base ablation tasks. The ability to visualize the surgical plan in the context of
intraoperative image data delineating residual tumor and neighboring critical structures presents a significant advantage
to surgical performance and evaluation of the surgical product. The system has been translated to a prospective trial
involving 12 patients undergoing head and neck surgery - the first implementation of the research prototype in the
clinical setting. The trial demonstrates the value of high-performance intraoperative 3D imaging and provides a valuable
basis for human factors analysis and workflow studies that will greatly augment streamlined implementation of such
systems in complex OR environments.
Surgical simulation has become a critical component of surgical practice and training in the era of high-precision image-guided
surgery. While the ability to simulate surgery of the paranasal sinuses and skull base has been conventionally
limited to 3D digital simulation or cadaveric dissection, we have developed novel methods employing rapid prototyping
technology and 3D printing to create high-fidelity models from real patient images (CT or MR). Such advances allow
creation of patient-specific models for preparation, simulation, and training before embarking on the actual surgery. A
major challenge included the development of novel material formulations compatible with the rapid prototyping process
while presenting anatomically realistic flexibility, cut-ability, drilling purchase, and density (CT number). Initial studies
have yielded realistic models of the paranasal sinuses and skull base for simulation and training in image-guided surgery.
The process of model development and material selection is reviewed along with the application of the phantoms in
studies of high-precision surgery guided by C-arm cone-beam CT (CBCT). Surgical performance is quantitatively
evaluated under CBCT guidance, with the high-fidelity phantoms providing an excellent test-bed for reproducible
studies across a broad spectrum of challenging surgical tasks. Future work will broaden the atlas of models to include
normal anatomical variations as well as a broad spectrum of benign and malignant disease. The role of high-fidelity
models produced by rapid prototyping is discussed in the context of patient-specific case simulation, novel technology
development (specifically CBCT guidance), and training of future generations of sinus and skull base surgeons.
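The images-to-model step can be summarized as threshold segmentation followed by isosurface extraction. The sketch below uses scikit-image and numpy-stl as stand-ins for the (unspecified) tools in the actual workflow; the bone threshold, voxel spacing, and file name are illustrative.

```python
import numpy as np
from skimage import measure
from stl import mesh  # numpy-stl

def ct_to_stl(volume_hu, threshold_hu=300.0, spacing=(0.5, 0.5, 0.5)):
    """Extract an isosurface at `threshold_hu` (roughly cortical bone) and
    pack it into an STL mesh; `spacing` is the voxel size in mm."""
    verts, faces, _, _ = measure.marching_cubes(
        volume_hu, level=threshold_hu, spacing=spacing)
    m = mesh.Mesh(np.zeros(len(faces), dtype=mesh.Mesh.dtype))
    m.vectors[:] = verts[faces]        # (n_faces, 3 vertices, 3 coords)
    return m

# Toy CT volume: a dense spherical "bone" inclusion in soft tissue.
zz, yy, xx = np.mgrid[-32:32, -32:32, -32:32]
vol = np.where(zz**2 + yy**2 + xx**2 < 20**2, 1000.0, -50.0)
ct_to_stl(vol).save("sinus_model.stl")
```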
Sheet metal strain analysis is an important tool to ensure products are manufactured within necessary tolerances. A common technique involves electrochemically etching a dark grid pattern of known size onto the flat sheet metal surface and then deforming the sheet. The change in the grid pattern after deformation can be used to calculate surface strain. The computer vision problem is to accurately detect the intersections of the grid pattern. To investigate this problem, a stereo camera system was designed and attached to a bridge-style coordinate measuring machine. The stereo head consists of two high-resolution monochrome CCD cameras mounted on a Renishaw PH10 motorized probe head that can be articulated into numerous, repeatable, preset positions. Stereo head calibration was achieved using Zhang’s technique with a planar target. Each probe position was calibrated using a global point set registration method to link coordinate systems. A novel approach to segmenting the grid pattern into squares involving region merging and watersheds is described. Grid intersections are determined to sub-pixel accuracy and matched between images using a correlation-based scheme. The accuracy of the system and experimental results are provided.
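The following sketch illustrates a marker-based watershed segmentation of the etched grid into squares, in the spirit of the approach described above; the paper's region-merging step and the subsequent sub-pixel intersection matching are omitted, and all threshold values are illustrative.

```python
import cv2
import numpy as np

def segment_grid_squares(gray):
    """Label the bright square cells between dark etched grid lines."""
    _, bw = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    dist = cv2.distanceTransform(bw, cv2.DIST_L2, 5)
    # Seeds: confident square interiors, well away from the grid lines.
    _, seeds = cv2.threshold(dist, 0.5 * dist.max(), 255, cv2.THRESH_BINARY)
    n, markers = cv2.connectedComponents(seeds.astype(np.uint8))
    color = cv2.cvtColor(gray, cv2.COLOR_GRAY2BGR)
    return cv2.watershed(color, markers)   # -1 marks boundaries between cells

# Synthetic test image: dark grid lines every 40 px on a bright sheet.
img = np.full((121, 121), 200, np.uint8)
img[::40, :] = 30
img[:, ::40] = 30
labels = segment_grid_squares(img)
print("squares found:", labels.max())
```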