Hyperspectral imaging in neurosurgery: a review of systems, computational methods, and clinical applications

Alankar Kotwal, Vishwanath Saragadam, Joshua D. Bernstock, Alfredo Sandoval, Ashok Veeraraghavan, Pablo A. Valdés

Open Access | 13 November 2024
Abstract

Significance

Accurate discrimination between pathologic (e.g., tumors) and healthy brain tissue is a critical need in neurosurgery. However, conventional surgical adjuncts have significant limitations toward achieving this goal (e.g., image guidance based on pre-operative imaging becomes inaccurate by up to 3 cm as surgery proceeds). Hyperspectral imaging (HSI) has emerged as a potentially powerful surgical adjunct to enable surgeons to accurately distinguish pathologic from normal tissues.

Aim

We review HSI techniques in neurosurgery; categorize, explain, and summarize their technical and clinical details; and present some promising directions for future work.

Approach

We performed a literature search on HSI methods in neurosurgery focusing on their hardware and implementation details; classification, estimation, and band selection methods; publicly available labeled and unlabeled data; image processing and augmented reality visualization systems; and clinical study conclusions.

Results

We present a detailed review of HSI results in neurosurgery with a discussion of over 25 imaging systems, 45 clinical studies, and 60 computational methods. We first provide a short overview of HSI and the main branches of neurosurgery. Then, we describe in detail the imaging systems, computational methods, and clinical results for HSI using reflectance or fluorescence. Clinical implementations of HSI yield promising results in estimating perfusion and mapping brain function, classifying tumors and healthy tissues (e.g., in fluorescence-guided tumor surgery, detecting infiltrating margins not visible with conventional systems), and detecting epileptogenic regions. Finally, we discuss the advantages and disadvantages of HSI approaches and interesting research directions as a means to encourage future development.

Conclusions

We describe a number of HSI applications across every major branch of neurosurgery. We believe these results demonstrate the potential of HSI as a powerful neurosurgical adjunct as more work continues to enable rapid acquisition with smaller footprints, greater spectral and spatial resolutions, and improved detection.

1. Introduction

Optical imaging approaches have transformed surgery via improved intraoperative detection of both normal and diseased tissues.1–5 Technologies that jointly leverage optics, computational methods, and visualization tools have facilitated this unparalleled transformation, with several successful commercial technologies in areas such as surgical robotics6–8 and image-2,3,5 and fluorescence-guided9,10 surgery. Image-guided surgery allows for the clinical deployment of optical imaging systems that are non-invasive and non-ionizing, which in turn can be used for intraoperative computer vision,11 tactile sensing,12 and manipulation and tracking algorithms13 that have a relatively compact footprint and allow for rapid acquisition.

As an example, images acquired via a surgical endoscope, processed through computer vision pipelines,14 have been used for post-surgical analysis of the surgical workflow,15,16 including recognizing surgical goals, predicting the current task being performed, segmenting and recognizing relevant landmarks during surgery, and evaluating the difficulty of the surgical plan and surgeon skill.11 In addition, visual instrument detection and tracking methods for minimally invasive surgeries have been developed and validated on surgical videos.13 Autonomous, high-precision, and dexterous surgical instrument manipulation for surgery and remote telesurgery has been made possible6,17–19 through deep learning methods at precisions previously thought impractical.8 Recently developed image-guided surface sensing systems, such as the GelSight sensor,20 can provide joint micron-scale topography (2.5-dimensional depth data) and tactile feedback more sensitive than human skin.21 The demonstrated effectiveness of these approaches suggests exciting potential prospects for intraoperative applications.

A promising approach in image-guided surgery is hyperspectral imaging (HSI),22–25 which captures wide-field, spectrally resolved images of the surgical field. HSI systems have been deployed successfully for applications in remote sensing, astronomy, agriculture, and surveillance.26–28 Hyperspectral data can be interpreted as an “optical fingerprint” of the material being analyzed (e.g., diffuse reflectance properties) and can be used for material recognition and classification.29–32 Therefore, HSI can enhance visualization of tissue structure and composition in image-guided surgery, aiding diagnosis and treatment.

In this paper, we review the applications of HSI in neurosurgery, focusing on specific HSI techniques and their medical implementations and benefits in clinical practice. Specifically, we provide the reader with an up-to-date review of how HSI has been implemented clinically and, thus, focus on HSI systems and techniques used in clinical studies only. We begin with preliminaries (Sec. 2), which include an overview of the major subspecialties in neurosurgery (Sec. 2.1), followed by a short review of current HSI techniques (Sec. 2.2). We then discuss the benefits and challenges of HSI in neurosurgery (Secs. 2.3 and 2.4). Next, we proceed with an in-depth review of HSI technologies and their clinical applications for imaging under white light in reflectance mode (Sec. 3) and for imaging fluorescence in fluorescence-guided surgery (Sec. 4). We have broken up Secs. 3 and 4 into technological subsections—imaging hardware and software (Secs. 3.1 and 4.1), datasets (Sec. 3.2), and visualization tools (Sec. 3.3)—and followed them up with clinical implementations of and results from these HSI technologies (Secs. 3.4 and 4.2). By separating each section into technological and clinical subsections, readers will be able to refer to more detailed technological aspects of HSI (e.g., imaging systems, computational methods, datasets, and visualization techniques) or the clinical results and implementations of these technologies in the various subspecialties of neurosurgery. We also provide in-depth tables that summarize the technological and clinical subsections for ease of reference. Finally, we discuss future perspectives on HSI as a novel tool with the potential to become a standard adjunct in image-guided neurosurgery (Sec. 5).

2. Preliminaries

2.1. Neurosurgery

Neurosurgery is the branch of medicine that treats disorders of the central nervous system (CNS) or peripheral nervous system (PNS) by physical manipulation, modification, or modulation of anatomical (e.g., the subthalamic nucleus for deep brain stimulation) and pathological (e.g., aneurysm clipping and resection of brain tumors) structures.33–35 In terms of research and clinical techniques, neurosurgery is among the most rapidly developing subspecialties of medicine,36 propelled by the interdisciplinary integration of tools from imaging, molecular biology, cancer neuroscience, electrophysiology, brain mapping, neuroengineering, computational biology, bioinformatics, and robotics. Clinically, neurosurgery is composed of the following subspecialties:

“Neurosurgical oncology” is the surgical branch of neuro-oncology focused on the diagnosis, treatment, and long-term management of tumors of the CNS and PNS. Surgical resection is the primary course of treatment for a large set of tumors. The success of tumor resection is one of the most important initial predictors of overall survival and quality of life.37,38 Therefore, the goal of tumor surgery is to maximize the extent of tumor resection (EOR) while preserving the functional brain to ensure high post-operative functional outcomes (i.e., achieving an oncofunctional balance39–44). However, with conventional surgical techniques, rates of EOR can be as low as 30%, as reported by post-operative, standard-of-care magnetic resonance imaging (MRI).45

Conventional resections are performed under white light illumination with or without magnification (e.g., using microscopes or surgical loupes). In these procedures, the surgeon uses cues from visual white light and tactile feedback to determine which tissue to resect and which to preserve.41 However, because brain tumors often appear visually similar to normal brain tissue, residual tumors often remain unresected, leading to low rates of maximal EOR. This is especially problematic in infiltrative areas of the most aggressive malignant tumors, such as glioblastomas (GBMs).41 Surgical adjuncts such as intraoperative MRI (iMRI), intraoperative ultrasound (US), and neuronavigation can improve visualization and intraoperative surgical decision-making. Despite their benefits, these tools have limitations including disruption of the surgical workflow, inaccurate spatial information due to brain shift, low contrast (normal tissue versus pathology), and high costs.46 Therefore, there is an acute need for real-time, high-resolution technologies that accurately delineate tumors from normal brain tissue in neurosurgical oncology.47–52

“Vascular neurosurgery” is the branch of neurosurgery focused on the diagnosis and surgical treatment of blood vessel pathologies of the nervous system.33 This encompasses a variety of conditions including aneurysms, arteriovenous malformations (AVMs), stroke, and hemorrhage. The primary aims of surgical treatments include restoring normal blood flow to the brain, preventing blood clot formation and stroke, repairing vascular pathologies (e.g., aneurysms and fistulas), and resecting vascular lesions (e.g., AVMs and cavernomas). Given that the spatial scale of vascular structures in the nervous system is of the order of millimeters, submillimeter precision and real-time intraoperative feedback are critical to safely treat pathologies while preserving normal vasculature. Although intraoperative three-dimensional (3D) digital subtraction angiography provides visualization of the neurovasculature in 3D as well as differentiates its venous and arterial components,53 it does not provide direct intraoperative visualization of vasculature and pathology at the tissue level. Intraoperative Doppler US can detect blood flow,54 but it is constrained in resolution (i.e., millimeters) and field of view (i.e., single point detection) and is sensitive to patient motion. Intraoperative indocyanine green (ICG) fluorescence angiography provides real-time intraoperative feedback with surface visualization of vasculature using ICG fluorescence, which accumulates in the blood vessels.55 However, visualization of vasculature and pathologies is transient (i.e., ICG signal washes out shortly after administration), is useful only for surface imaging, and is not specific to pathologies as it accumulates in all normal and abnormal vasculature.56 Therefore, there is an acute need for real-time, non-transient, and highly specific intraoperative imaging technologies that can distinguish between normal and pathological neurovasculature for visual feedback in vascular neurosurgery.

“Functional neurosurgery” is the surgical branch of neurosurgery that treats various chronic neurologic disorders of the brain through functional modification. These disorders include epilepsy, movement disorders, pain, spasticity, and psychiatric illnesses.33 One example of functional neurosurgery is the treatment of intractable epilepsy via surgically resecting the epileptogenic area, which is the area of the brain where seizures are believed to originate. The goal of this surgery is to eliminate or decrease the frequency and severity of seizures.57 In epilepsy surgery, it is important to map out the affected area of the brain, typically with intraoperative electrocorticography (ECoG).58 During this procedure, a grid of electrodes is placed on the cortex to measure electrical activity and identify regions with abnormal signals that might indicate seizure origin. However, intraoperative ECoG interrupts the surgical workflow by requiring electrode placement, signal measurement, signal interpretation, electrode removal, and co-registration of electrode locations with signal origins on the brain. In addition, recordings can take a few minutes to complete and interpret. The resolution of ECoG is limited by the intrinsic spacing within the electrode array, which is about a centimeter in conventional grids. There is also a risk of infection associated with the use of such an electrode array with long-term monitoring. As such, imaging techniques that provide visualization of the epileptogenic regions would enable real-time feedback and ideally more accurate identification of the seizure-causing regions. Overall, there is a need for imaging technologies that provide functional neurosurgeons with real-time and highly specific identification of normal and abnormal functions in the nervous system.

“Spine surgery” is the surgical branch of neurosurgery that treats disorders affecting the spinal cord.33 Spine surgery can address issues such as spinal deformity, nerve compression, pain, and neurological deficits due to disorders of the spinal cord and nerves. Surgical navigation has become critical in spine surgery to perform accurate manipulation of bony structures while preventing damage to the spinal cord and its surrounding neural elements. Such navigation is typically done with fiducial markers placed on the skin and spine, but these can get obscured, deformed, or displaced during surgery,59 compromising accurate real-time guidance. It is therefore clear that to enhance the accuracy and safety of spine surgery, there is a pressing need for non-invasive real-time tracking systems and algorithms. These advanced technologies will provide better guidance during surgical procedures, ensuring more effective treatment of spinal disorders and improved patient outcomes.

“Other subspecialties” of neurosurgery include trauma and peripheral nerve surgery. However, there has been no clinical work with HSI in these subspecialties, so we will not discuss them here.

2.2. Hyperspectral Imaging

HSI is the acquisition of high-resolution spectra over a wide field of view. HSI allows for capturing a 3D hyperspectral cube of size H×W×N, where H and W are the height and width of images in the cube, respectively, and N is the number of wavelength channels [Fig. 1(a)]. The value of N roughly distinguishes HSI from multispectral imaging, a spectrally resolved imaging paradigm that uses fewer, broader spectral bins. Here, we define a multispectral system to have fewer than 10 wavelength channels (N<10) and a hyperspectral system to have more than 10 (N>10). Each H×W channel in the cube is equivalent to a two-dimensional (2D) image that would be captured by placing an appropriate bandpass spectral filter in front of the camera. Capturing spectral data in addition to spatial information can be used to determine the composition of the contents of the imaged scene.31,32,60 An in-depth review of the construction and properties of such systems can be found in the literature,31,32 and we discuss only the essentials here. HSI technologies relevant to neurosurgery and their general specifications are illustrated in Fig. 1. Acquisition of a 3D hyperspectral image cube with a 2D camera sensor, however, is not straightforward. Thus, several techniques for the capture of hyperspectral image cubes have been developed, each with its own unique advantages and pitfalls.61,62

Fig. 1

Hyperspectral imaging technologies used in neurosurgery. (a) Hyperspectral image cube is an array of size W×H×N, where W and H are the width and height, respectively, of images in the cube along the x and y spatial dimensions, and N is the number of wavelength channels along the λ dimension. Each W×H channel in the cube is equivalent to an image that would be captured by placing an appropriate bandpass spectral filter in front of the camera. (b) Point scanning methods acquire a complete spectrum at a single (x,y) pixel coordinate (i.e., “point”), scanning along the x and y spatial dimensions to reconstruct the full 3D hyperspectral cube. (c) Line scanning methods acquire 2D data of size W×N along one x spatial dimension, scanning along the y spatial dimension (i.e., “line”) to reconstruct the full 3D hyperspectral cube. (d) Spectral scanning methods acquire 2D data images of size W×H at one λ wavelength channel, scanning along the λ wavelength dimension (i.e., “spectral”) to reconstruct the full 3D hyperspectral cube. (e) Snapshot methods acquire the full 3D hyperspectral image cube of size W×H×N with each single acquisition (i.e., “snapshot”).

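To make the cube layout concrete, the following minimal Python sketch (all sizes and values are hypothetical stand-ins for measured data) extracts one spectral channel and one pixel spectrum from an H×W×N cube:

```python
import numpy as np

# Hypothetical hyperspectral cube: 512 x 512 pixels, 100 wavelength channels.
H, W, N = 512, 512, 100
cube = np.random.rand(H, W, N)            # stand-in for measured data
wavelengths = np.linspace(400, 1000, N)   # nm, matching the N channels

# One H x W channel: the image a bandpass filter near 550 nm would capture.
band_index = np.argmin(np.abs(wavelengths - 550))
band_image = cube[:, :, band_index]       # shape (H, W)

# One pixel's spectrum: the "optical fingerprint" at spatial location (y, x).
spectrum = cube[256, 256, :]              # shape (N,)

print(band_image.shape, spectrum.shape)
```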

“Point scanning methods” (also referred to as whiskbroom scanners) operate using a single detector or a small array of detectors to sequentially scan the scene, capturing spectral data pixel by pixel. Although this method provides high spectral resolution, the point-scanning approach needs M=HW acquisitions, which for megapixel-sized images is time-consuming and limits its use to imaging static scenes and/or small fields of view [Fig. 1(b)].

“Line scanning methods” (also referred to as pushbroom scanners) encode spectral data in one spatial dimension, allowing parallel measurement of the other spatial dimension. Typically, these methods use a linear array of detectors aligned perpendicular to the scanning direction (say, along the H dimension), capturing spectral data row by row. This approach reduces the number of acquisitions to M=W, which significantly reduces acquisition time compared with point scanners. However, the acquisition of thousands of line scans still comes at a high time cost. These are the most widely available systems63–66 used abundantly in HSI applications [Fig. 1(c)].

“Spectral scanning methods” image one spectral channel (i.e., one waveband) in the hyperspectral cube at a time and employ a tunable bandpass spectral filter to sequentially capture 2D images at each spectral channel. Spectral scanners offer the flexibility to acquire cubes over a programmable set of wavelengths with selectable spectral resolution. High-spectral-resolution cubes come at a high time cost, especially when considering their use in the dynamic, fast-paced surgical setting. Typical tunable filters used are liquid crystal tunable filters (LCTFs)67 and acousto-optic tunable filters68 [Fig. 1(d)].

“Snapshot methods”6971 capture a hyperspectral cube with complete spatial and spectral information in a single exposure. Snapshot acquisition is achieved by space division multiplexing of the sensor over the spatial and spectral dimensions, similar to a plenoptic camera.72 In this approach, the sensor area is distributed over a number of parts equal to the number of spectral channels. Each of these parts images a wide-field image corresponding to one spectral channel, and these parts are stacked together to form the hyperspectral cube. This technology is facilitated by new optical designs incorporating lenslet arrays70,71,73,74 and varying filtering and dispersion strategies. This rapid acquisition enables the use of snapshot systems in applications requiring real-time hyperspectral feedback, such as in intraoperative image guidance, where long scan times or bulky scanning hardware can interfere with the surgical workflow. However, space division multiplexing requires a trade-off between spatial and spectral resolutions for equivalent acquisition times—as we increase the number of parts, the sensor is segmented into fewer pixels available for each part [Fig. 1(e)].
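As an illustration of this space division multiplexing, the sketch below demultiplexes a hypothetical raw frame from a mosaic-filter snapshot sensor (a repeating 5×5 filter pattern, as in the IMEC mosaic cameras discussed in Sec. 3) into a 25-band cube; the sensor size is chosen to match the 409×217-pixel, 25-band cameras cited later:

```python
import numpy as np

# Hypothetical snapshot mosaic sensor: a repeating 5x5 grid of spectral
# filters (25 bands) tiled across the sensor, read out in one exposure.
sensor_h, sensor_w, k = 1085, 2045, 5     # k x k filter mosaic
raw = np.random.rand(sensor_h, sensor_w)  # single raw exposure (stand-in)

# Crop so the mosaic tiles evenly, then demultiplex by subsampling:
# each (i, j) offset within the 5x5 tile selects one spectral band,
# at 1/5 of the sensor resolution in each spatial dimension.
raw = raw[: sensor_h // k * k, : sensor_w // k * k]
bands = [raw[i::k, j::k] for i in range(k) for j in range(k)]
cube = np.stack(bands, axis=-1)           # (217, 409, 25) hyperspectral cube

print(cube.shape)  # spatial resolution traded for 25 spectral channels
```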

“Snapscan systems” combine the benefits of snapshot and line scanning hyperspectral systems. Such systems are built with mosaic filter arrays as in snapshot systems but employ internal scanning of the mosaic and computational reconstructions to yield fast, high-resolution hyperspectral cubes.75

“Compressed sensing methods” exploit the regularity in natural signals to obtain an approximation to the hyperspectral cube.76 An example of such regularity is the sparsity of individual spectral channels in the spatial frequency domain, which is exploited by the classic signal processing technique of compressed sensing. Such systems have the capability to provide video-rate hyperspectral acquisition with high spatial resolution for scenes that follow their assumptions.77 Such methods can also implement programmable spectral filters78 in addition to bandpass filters, which allow for matched filtering of spectral signals for classification and segmentation applications.
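As a toy illustration of compressed-sensing recovery (not any specific system above; the sizes and sensing matrix are hypothetical), the sketch below reconstructs a sparse signal from fewer measurements than unknowns using iterative soft thresholding (ISTA):

```python
import numpy as np

rng = np.random.default_rng(0)

# Recover a k-sparse signal of length n from m < n random measurements.
n, m, k_sparse = 256, 80, 8
x_true = np.zeros(n)
x_true[rng.choice(n, k_sparse, replace=False)] = rng.normal(size=k_sparse)

A = rng.normal(size=(m, n)) / np.sqrt(m)  # random sensing matrix
y = A @ x_true                            # compressed measurements

# ISTA: gradient step on ||y - Ax||^2 followed by soft thresholding,
# which promotes sparsity in the reconstruction.
step = 1.0 / np.linalg.norm(A, 2) ** 2
lam = 0.01
x = np.zeros(n)
for _ in range(500):
    x = x + step * A.T @ (y - A @ x)
    x = np.sign(x) * np.maximum(np.abs(x) - step * lam, 0.0)

print("relative recovery error:",
      np.linalg.norm(x - x_true) / np.linalg.norm(x_true))
```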

2.3. Benefits of HSI in Neurosurgery

As mentioned before, the spectrum in one pixel of the hyperspectral cube contains the optical signature or “optical fingerprint” of the imaged scene point at that spatial coordinate (Fig. 2). This fingerprint can include fluorophores [e.g., protoporphyrin IX (PpIX)] and/or chromophores (e.g., oxy- and deoxyhemoglobin) that differentially accumulate in tissues. This fingerprint is representative of the tissue composition of the imaged scene point—typically, bulk brain tissue, arterial blood vessels, venous blood vessels, various types of tumors, and background. HSI is particularly useful when classifying these kinds of tissue because reflectance and fluorescence spectra obtained with the hyperspectral cubes have high discriminative power that has been widely characterized.79–83

Fig. 2

Spectra of fluorophores, chromophores, and reflectance in the visible to near-infrared (NIR) used in HSI for neurosurgery. HSI in neurosurgery has used exogenous agents as key fluorescence biomarkers in fluorescence-guided surgery (FGS), such as 5-aminolevulinic acid, which leads to the production of protoporphyrin IX (PpIX), and ICG; their fluorescence spectra are shown in black. Other endogenous fluorophores (e.g., FAD, NADH) are shown in blue, and PpIX photoproducts as well as tissue reflectance and chromophores (e.g., oxy- and deoxyhemoglobin) are shown in red. The y-axis shows the intensity of fluorescence emission, reflectance, or absorption in arbitrary units, and the x-axis shows the wavelength λ, in nanometers.


As an example of this high discriminative power in the context of vascular neurosurgery, consider a pixel imaging a blood vessel. The main chromophores contributing to the reflectance spectrum of this pixel are oxyhemoglobin and deoxyhemoglobin. The reflectance spectra of deoxyhemoglobin and oxyhemoglobin, which are equal at 545 nm, change rapidly in opposite directions between 545 and 560 nm. Therefore, spectrally resolved imaging in the visible range of the spectrum allows for highly accurate estimates of the relative concentrations of deoxyhemoglobin and oxyhemoglobin, enabling optical measurements of oxygen saturation.
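A minimal sketch of this kind of estimate, assuming a Beer–Lambert mixing model; the extinction coefficients below are illustrative placeholders, whereas a real implementation uses tabulated molar extinction spectra for oxy- and deoxyhemoglobin at the camera's band centers:

```python
import numpy as np

# Two-chromophore unmixing for oxygen saturation under a Beer-Lambert model.
wavelengths = np.array([545.0, 552.0, 560.0])          # nm
eps_hbo2 = np.array([1.00, 0.80, 0.55])                # placeholder values
eps_hb = np.array([1.00, 1.15, 1.25])                  # placeholder values
E = np.stack([eps_hbo2, eps_hb], axis=1)               # (n_bands, 2)

def oxygen_saturation(reflectance):
    """Estimate SO2 at one pixel from its reflectance spectrum."""
    absorbance = -np.log(np.clip(reflectance, 1e-6, None))
    c, *_ = np.linalg.lstsq(E, absorbance, rcond=None)  # [c_HbO2, c_Hb]
    c = np.maximum(c, 0.0)                              # physical constraint
    return c[0] / max(c.sum(), 1e-9)                    # HbO2 / total Hb

print(oxygen_saturation(np.array([0.40, 0.45, 0.52])))
```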

In addition to pixel-wise classification of tissue constituents, hyperspectral data enable other kinds of optical characterization across the surgical field of view. The rich data encoded in each hyperspectral cube offer the potential to extract optical features that would otherwise be impossible to detect visually with the naked eye or with a conventional color image.67,84 For example, spectrally resolved wide-field data have been shown to correct for the distorting effects of tissue optical properties on emitted fluorescence signals,85 which opens the possibility for using HSI to evaluate the surgical field of view and provide quantitative, objective measures of fluorescence and therefore absolute fluorophore molar concentrations.67

Putting all these capabilities together with modern acquisition techniques from optics and computational imaging, advances in computational methods and hardware, and segmentation and classification with artificial intelligence,86 HSI has the potential to be a powerful tool for real-time intraoperative guidance.

2.4. Challenges in Current Neurosurgical HSI Approaches

Translating an optical system for clinical use into the neurosurgical operating room presents unique challenges not encountered in traditional benchtop imaging settings for pre-clinical studies87 (Fig. 3). The fundamental principle for translation of a novel HSI system into the operating room is that any system and imaging process must not significantly interfere with or interrupt the neurosurgical workflow; it should enable ease of integration, safety, and efficiency for dynamic intraoperative use. A major practical consideration is the size of the imaging system. The spatial footprint of the optical setup must be as small as possible to seamlessly integrate and “fit” into the already instrument-dense neurosurgical operating room (consisting of, for example, the surgical microscope, US imager, ultrasonic aspirator, neuronavigation, drill, and suction control).

Fig. 3

HSI systems in neurosurgery. (a) HELICoiD system uses an exoscope with two line-scan hyperspectral cameras mounted in a confocal configuration. The HELICoiD system fits within a 60×60×90 cm bounding box and requires removing the surgical microscope for acquisition, thus interrupting the surgical workflow. (b) Small footprint handheld HSI snapshot system does not require removing the surgical microscope but does not provide the same field of view as seen from the surgeon’s oculars. (c) and (d) HSI systems [spectral scanning in panel (c) and snapscan in panel (d)] mounted on one of the side ports of the surgical microscope enable the acquisition of 3D hyperspectral image cubes co-registered with the surgeon’s field of view with a small physical footprint to seamlessly integrate into the already space-constrained neurosurgical operating room. (a) Adapted from Leon et al.,88 under CC-BY 4.0. (b) Adapted from MacCormac et al.,89 under CC-BY 4.0. (c) Reproduced from Valdés et al.,67 under CC-NC-SA 3.0. (d) Adapted from Kifle et al.,90 under CC-BY 4.0.


Next, the hyperspectral image captured by the system should be as high-quality as possible, while being acquired as close to real time as possible (approximately 10 Hz), consistent with other intraoperative imaging modalities such as US imaging, neuronavigation feedback, microscope visualization, and 3D exoscope imaging. For the hyperspectral data to be useful for surgical guidance, they must fulfill certain basic constraints in addition to real-time acquisition. First, structures in the brain visualized intraoperatively are of the order of millimeters. Therefore, submillimeter resolution over a surgical field of view of the order of centimeters is critical. Second, the spectral bandwidth of the fluorescence peaks of commonly used fluorophores may be as narrow as a few nanometers, requiring spectral resolutions of a few nanometers. Lastly, as light is split into spectral channels in the already light-starved conditions of fluorescence imaging, the hyperspectral system sensor should have high quantum efficiency, high bit depth, and low dark noise to enable short exposure times.

The speed of hyperspectral acquisition is constrained by the space–spectrum–sensitivity trade-off; therefore, these conditions are difficult to satisfy simultaneously. Line-scan hyperspectral imagers, the most common systems, provide high spectral resolution and high spatial resolution in one spatial dimension [Fig. 3(a)]. However, providing equivalent resolution in the second spatial dimension at surgically relevant scales is time-consuming (typically tens to hundreds of seconds). To be more sensitive to low-intensity fluorescence signals, existing spectral scanning methods [Fig. 3(c)] typically increase exposure times, decreasing hyperspectral cube acquisition rates. Snapshot and snapscan HSI systems70,71,75,91 [Figs. 3(b) and 3(d)] can potentially provide fast frame rates for hyperspectral acquisition.67,87,92–94 However, they sacrifice spatial resolution to do so and must also increase exposure times if greater sensitivity is needed. Managing this balance among the imaging parameters to construct clinically practical and effective systems is one of the most important open problems in neurosurgical HSI.

3. Neurosurgical HSI in Reflectance Mode

Traditionally, neurosurgery has been performed under white-light illumination provided by xenon or halogen lamps.95 The spectral distributions of such illumination extend across the visible-near-infrared (VIS-NIR) range of the optical spectrum, where the optical properties and reflectance spectra of various types of brain tissue, intracranial structures (e.g., arteries, veins, and nerves), pathologies (e.g., tumors, aneurysms, hemorrhages, and abscesses), and their molecular constituents (e.g., oxyhemoglobin and deoxyhemoglobin) have been well-characterized.79–83 Therefore, HSI systems can be used across subspecialties in neurosurgery to serve a common purpose—to determine the composition of what the surgeon sees in the surgical field of view.

For example, in neurosurgical oncology, the aim is to determine the presence or absence of tumor in the field of view, to classify tumor type, and to identify background tissue (Fig. 4). In vascular neurosurgery, the aim is to image blood perfusion and oxygen saturation. In functional neurosurgery, the aim is to identify the epileptiform regions by measuring neurovascular coupling. In spine surgery, the aim is to track surgical field skin features for intraoperative navigation without the use of fiducial markers. Here, we provide a detailed presentation of the optical designs of HSI systems that have been implemented in the neurosurgical operating room. These systems along with their parameters are discussed in Sec. 3.1 and summarized in Table 1.

Fig. 4

Reflectance spectra of normal brain and brain tumors. (a) and (b) Reflectance spectra of normal tissue (NT) and tumor tissue (TT) and blood vessels (BVs) are significantly different in the visible-NIR regime. (c)–(f) Significant differences are observed in the reflectance spectra from different grades of primary tumors (low grade, high grade, grade 1, grade 2, grade 3, and grade 4) as well as in metastases (i.e., secondary). These differences in reflectance spectra enable the classification of the field of view into the brain parenchyma, blood vessels, and tumor tissue, along with subclassification into arteries, veins, and various tumor types and grades. The y-axis shows the reflectance of tissue in arbitrary units, and the x-axis shows the wavelength λ in nanometers. Adapted from Leon et al.,88 under CC-BY 4.0.


Table 1

Technical specifications of current hyperspectral imaging systems in neurosurgery (as applied in individual work).


To process, interpret, and visualize the hyperspectral data captured with these HSI systems, accompanying computational methods have been developed. For example, in neurosurgical oncology, a number of classification and segmentation algorithms label every pixel in the surgical field as normal tissue, tumor (primary or secondary),96 necrosis,97 blood vessel (artery or vein),98 dura mater,98 hypervascularized tissue,99 skull,100 or background. Similarly, spectral fitting methods process HSI data captured during vascular and functional neurosurgery to yield perfusion and oxygenation maps.92,101105 Along with details on optical hardware, we also present a brief review of these computational methods in Sec. 3.1 and summarize their pipelines, validation methods, and best results in Table 2. For a more detailed review of such computational methods, please refer to Massalimova et al.106

Table 2

Computational methods developed for hyperspectral imaging in neurosurgery.


3.1. Imaging Hardware and Software

3.1.1. Neurosurgical oncology

HSI for use in neurosurgical oncology was introduced by Gebhart et al.107 in 2007 with the use of a Varispec VIS-20 LCTF from Cambridge Research & Instrumentation, Inc.108 coupled with a 512×512 PhotonMax electron multiplying charge-coupled device (EMCCD) camera109 mounted on a surgical microscope to measure intraoperative autofluorescence and diffuse reflectance spectra with acquisition times of 5 min. Here, the authors did not use reflectance alone but rather combined reflectance and autofluorescence measurements to determine a reflectance/autofluorescence ratio for optimal identification of tumor tissue. Similar to the previous approach, Valdés et al.67 used a Varispec LCTF coupled with a pco.pixelfly charge-coupled device (CCD) camera110 on a surgical microscope (Zeiss OPMI Pentero) [Fig. 3(c)] to measure the reflectance and fluorescence spectra in a fluorescence correction algorithm to enable more accurate measurement of tissue fluorophores during brain tumor resection. Thus, both approaches coupled reflectance measurements with fluorescence to enable tumor tissue identification, which will be discussed in more detail later (see Sec. 4). It was not until 2016, with the kickoff of the European Hyperspectral Imaging Cancer Detection (HELICoiD) project111 and the development of the HELICoiD demonstrator by Salvador et al.112 and Fabelo et al.,113 that HSI of reflectance alone was used for tumor tissue identification.

The HELICoiD demonstrator consists of a pair of line sensor hyperspectral cameras mounted on a custom optical breadboard in the operating room [Fig. 3(a)]. These cameras, bought off-the-shelf from Headwall Photonics,64 are the CCD-based Hyperspec® VNIR A-series operating in the VIS-NIR wavelength range (400 to 1000 nm, 826 spectral bands, 2- to 3-nm resolution, 90 frames/s) and the InGaAs-based Hyperspec® NIR 100/U operating in the NIR short-wave infrared (SWIR) wavelength range (900 to 1700 nm, 172 spectral bands, 5-nm resolution, 100 frames/s). The cameras are set up in a confocal stereo configuration with matched fields of view, at an imaging distance of 40 cm and surgical field clearance of 29 cm. The entire imaging assembly is mounted on a translation stage to implement pushbroom scanning [Fig. 1(c)]. The demonstrator system uses a 150-W quartz–tungsten–halogen (QTH) bulb with a spectral range of 400 to 2200 nm, passed through an optical fiber to a cold light emitter. This ensures that heat from the QTH bulb is not transmitted to the tissue, avoiding tissue damage. Follow-up work in the HELICoiD project used other hyperspectral line cameras, such as the Specim ImSpector® VNIR V10-E spectrograph66 (400 to 1000 nm, 2.8-nm resolution) by Madroñal et al.97 and the Headwall Hyperspec® NIR X-Series63 (900 to 1700 nm, 166 spectral bands, 100 frames/s) by Ravi et al.114 in linear scanning configurations to capture hyperspectral datasets.

In the initial HELICoiD pilot study, several pixel-wise classification algorithms were used on the data collected with the HELICoiD demonstrator to test the potential of reflectance spectra in tumor resection. These include support vector machines (SVMs), multilayer perceptrons (MLPs), and random forests (RFs) implemented on parallel processing platforms such as the Headwall Hyperspec® Data Processing Unit112,113 (31 images from 22 procedures on primary glioblastomas and 135k labeled spectra from the HELICoiD demonstrator) and the Kalray MPPA-256-N HPC device96 (13 images from 13 procedures on glioblastomas and metastases and 25k labeled spectra from the HELICoiD demonstrator). The training data consisted of mixed-patient pixel-wise spectra from intraoperative hyperspectral cubes with pathologist-labeled ground truth classification labels. These were tested on data from both HELICoiD cameras separately, and the VIS-NIR data were shown to be most effective with the RF classifier, providing cross-validated accuracy, sensitivity, and specificity greater than 99% for mixed-patient pixel-wise three-class classification.96,113 Subsequently, this classification scheme with a larger dataset (36 cubes from 22 patients, >375k labeled spectra from the HELICoiD demonstrator) has been integrated into a mixed supervised–unsupervised framework to provide fast intraoperative visualization115 with a total per-frame acquisition and processing time of 1 min at an overall accuracy greater than 98% for five-class classification (including blood vessels). Further work has extended and improved these results with techniques such as blind linear unmixing116,117 and empirical mode decomposition,118 shown SVMs effective for identifying malignant tumor phenotypes,119 and demonstrated estimation of the molecular composition of brain tissues in real time.120
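For readers who want a concrete starting point, the following sketch reproduces the shape of such a pixel-wise classification experiment with scikit-learn on synthetic stand-in data (the spectra, labels, and sizes here are hypothetical, not the HELICoiD data):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Each sample is one pixel's reflectance spectrum with a tissue-class label
# (e.g., normal, tumor, blood vessel). Synthetic stand-ins below.
n_pixels, n_bands = 5000, 128
X = rng.normal(size=(n_pixels, n_bands))   # pixel spectra
y = rng.integers(0, 3, size=n_pixels)      # 3-class labels

clf = RandomForestClassifier(n_estimators=100, n_jobs=-1, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # pixel-wise cross-validation
print("accuracy: %.3f +/- %.3f" % (scores.mean(), scores.std()))
```

Note that cross-validating on mixed-patient pixels, as in these early studies, can yield optimistic accuracies compared with leave-one-patient-out evaluation, as the inter-patient results discussed next illustrate.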

Further, to ease the time and computational complexity of working with high-dimensional hyperspectral data (hundreds of wavelength channels across millions of pixels) and improve the semantic consistency of segmentation, dimensionality reduction with manifold embedding has been employed.114 This method uses a deep learning–based modified version of the t-distributed stochastic neighbor embedding (t-SNE) manifold learning algorithm,121 called fixed-reference t-SNE (FR-t-SNE). This non-linear embedding method attempts to preserve local spatial regularity (nearby pixels represent the same class with high probability) while still capturing high-level global features (pixel classes). The generalization of this method was evaluated by testing the model on data from a different set of patients, with around 72% overall accuracy and 53% tumor sensitivity for four-class classification (33 images from 18 patients, captured with the HELICoiD demonstrator). A combination of the above pixel-wise and dimensionality-reduced classifiers to create a joint spatio-spectral classifier has been shown by Fabelo et al.122 to have an overall accuracy greater than 99%, with a speed-up of >4.5 to 8.5× achieved with hardware acceleration (five cubes from five patients and 45k labeled spectra from the HELICoiD demonstrator).
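A sketch of the dimensionality-reduction step using standard t-SNE from scikit-learn (FR-t-SNE itself is the authors' modified variant and is not reproduced here; the data are synthetic stand-ins):

```python
import numpy as np
from sklearn.manifold import TSNE

rng = np.random.default_rng(0)

# X stands in for a hyperspectral cube flattened to per-pixel spectra.
X = rng.normal(size=(2000, 128))           # (n_pixels, n_bands)

# Embed the high-dimensional spectra into 2D for clustering/visualization.
embedding = TSNE(n_components=2, perplexity=30,
                 random_state=0).fit_transform(X)
print(embedding.shape)                     # (n_pixels, 2)
```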

Various hardware acceleration platforms have been explored to speed up the classification computation by individually optimizing the components of these classifiers. The linear kernel SVM113 has been sped up 3 to 5× on massively parallel processor arrays97 and system-on-chip architectures123,124 and 90× on graphics processing units (GPUs);125 dimensionality reduction with principal component analysis (PCA) for data preprocessing115 has been sped up 36× using multiple central processing unit (CPU) compute cores;126 k-nearest neighbor classification115,122,127 has been sped up 30 to 66× on GPUs; and k-means clustering115,122 has been sped up 150× on GPUs.128 Jointly implementing the entire pipeline with PCA on a multi-GPU platform has resulted in a total speed-up of 180× over the serial platforms, reducing processing times from several hundreds of seconds to tens of seconds.129 The effect of optimizing the data-type representation of the hyperspectral images and their storage in memory has also been explored for lower-throughput processing.130

Recently, deep learning has been applied to tumor identification in both deep fully connected per-pixel and convolutional spatio-spectral configurations.131,132 This approach generalizes the hyperspectral data embedding and classification features for the embedded data while allowing for fast computation on the GPU. In combination with unsupervised clustering techniques and minimal user guidance, reported accuracies reach 77% to 78% for one-dimensional (1D) spectral deep neural networks (DNNs),131,132 72% to 77% for 2D convolutional neural networks (CNNs),131,132 80% for a combination of a 1D DNN and a 2D CNN,131 and 80% for 3D spatio-spectral CNNs132 (with datasets consisting of eight cubes from six patients and 82k labeled spectra131 and 12 cubes from 12 patients and 116k spectra,132 both from the HELICoiD demonstrator). Other deep learning architectures133–143 have also produced comparable results with the potential for fast hyperspectral brain structure classification. Figure 5 shows examples of the HELICoiD demonstrator during brain tumor surgery for tissue classification using unmixing methods and deep neural networks.

Fig. 5

Classifying brain tissue types based on reflectance spectra. Left to right: intraoperative hyperspectral reflectance imaging on four patients with glioma grades 2 and 4 using the HELICoiD system (patient 1 in row 1, patient 2 in rows 2 and 4, patient 3 in row 3, and patient 4 in row 5), white-light synthetic RGB image reconstructed from the hyperspectral cube with tumor regions marked in yellow and biopsy sites with black circles, ground truth–labeled pixels, and pixel classifications using linear unmixing methods [extended blind end-member and abundance extraction (EBEAE)]117,144 and a two-layer pixel-wise DNN.131 The four classes are NT, TT, BV, and background (BG). EBEAE yields around 60% overall accuracy, 30% tumor sensitivity, and 85% tumor specificity, whereas the DNN yields 85% overall accuracy, 65% tumor sensitivity, and 95% tumor specificity with fivefold cross-validation on mixed-patient pixel-wise data. GBM, glioblastoma; OD, oligodendroglioma; A, astrocytoma. Adapted from Leon et al.,88 under CC-BY 4.0.

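A minimal per-pixel 1D spectral classifier of the kind benchmarked above can be sketched in PyTorch as follows (layer sizes and class count are illustrative, not the cited architectures):

```python
import torch
import torch.nn as nn

# Small fully connected network mapping one pixel spectrum to class logits.
n_bands, n_classes = 128, 4                # e.g., NT/TT/BV/BG

model = nn.Sequential(
    nn.Linear(n_bands, 64), nn.ReLU(),
    nn.Linear(64, 32), nn.ReLU(),
    nn.Linear(32, n_classes),              # logits over tissue classes
)

spectra = torch.randn(1024, n_bands)       # a batch of pixel spectra
targets = torch.randint(0, n_classes, (1024,))

logits = model(spectra)
loss = nn.CrossEntropyLoss()(logits, targets)
loss.backward()                            # gradients for one training step
print(logits.shape)
```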

Manual initial feature engineering has also been attempted to provide better pre-processed data as input for classification algorithms, for example, by selecting the most relevant spectral bands using iterative combinatorial optimization algorithms,99 correlation-based ranking,145 and deep learning.141 In addition, registered pairs of VIS-NIR and NIR images from the HELICoiD demonstrator have been analyzed for spectral similarities between classes to ignore non-distinctive samples.146

The two data streams from the visible and near-infrared (VNIR) and NIR cameras in the original HELICoiD setup112,113 need to be fused to create a single hyperspectral cube146 to add more useful data to the computational methods described above. Therefore, a new version of the demonstrator has been proposed by Leon et al.,147 where the confocal stereo configuration is changed to make the camera axes parallel. This changes the transformation between the two camera viewpoints from a projection to a translation, allowing for less spatial and radiometric distortion of the captured spectra. Combined with spatial and spectral upsampling, hyperspectral cubes are generated at the original spatial resolution and two wavelength ranges (641 spectral bands between 435 and 901 nm and 144 spectral bands between 956 and 1638 nm), resulting in a 21% accuracy increase as compared with using just the VNIR camera on a synthetic material database.
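Because parallel camera axes reduce the inter-camera transformation to (approximately) a translation, a standard phase-correlation estimator illustrates the registration step; the sketch below uses synthetic data and omits the radiometric correction and spectral upsampling of the actual pipeline:

```python
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

rng = np.random.default_rng(0)

# Simulate matching bands from two parallel-axis cameras that differ
# only by a translation of (4, -7) pixels.
reference = rng.random((256, 256))                   # band from camera 1
moving = nd_shift(reference, (4.0, -7.0), order=1)   # band from camera 2

# Estimate the translation with subpixel phase correlation, then align.
estimated_shift, error, _ = phase_cross_correlation(
    reference, moving, upsample_factor=10
)
aligned = nd_shift(moving, estimated_shift, order=1)
print(estimated_shift)                               # approximately [-4., 7.]
```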

Because the HELICoiD system is mounted on a platform separate from the surgical microscope, it interrupts the surgical workflow due to the need for physical translation of the HELICoiD system prior to data acquisition [Fig. 3(a)]. To prevent such movement, Mühle et al.87 designed a workflow with a TIVITA® VIS-NIR tissue hyperspectral camera (500 to 1000 nm, 100 spectral bands, 5-nm spectral resolution, 640×480 output pixels, 100 frames/s, 6 s/cube)65 mounted onto the surgical microscope oculars. However, as the cameras used in the above projects can capture only one-dimensional spatial slices, physical scanning of the cameras in one dimension across the surgical field of view is required to capture the entire hyperspectral cube. Thus, this system can capture nanometer-resolution megapixel intraoperative surgical datasets (comparable with previous systems88,122,147,148) at the cost of 5 s per capture. Data captured from this system yield 99% accuracy and greater than 98% sensitivity for tumor detection (one patient, 29k labeled spectra). However, given the time requirement for data acquisition of a single hyperspectral cube, it has had limited utility for routine clinical use as it significantly interrupts the surgical workflow, which precludes performing the resection under continuous feedback from the HSI system.

Therefore, snapshot HSI systems such as the Ximea Corporation MQ022HG-IM-SM5X5-NIR (665 to 975 nm, 25 spectral bands, 409×217 pixels, 170 frames/s)149 based on the IMEC SM5x5 NIR sensor, the Cubert Ultris X50 (350 to 1000 nm, 155 spectral bands, 570×570 pixels, 1.5 frames/s),91 the Senop HSC-2 (freely selectable bandwidths and resolutions),73 and the BaySpec OCI-2000 Series snapshot hyperspectral imagers (475 to 875 nm, 35 to 40 spectral bands, 50 frames/s)74 have been explored as potential alternatives89,90,98,150–154 [Fig. 1(e)]. These can be mounted either by themselves89,98,150–152 or coupled to a surgical microscope90,93,153,154 to minimize disturbance to the surgical workflow [Fig. 3(d)]. In addition, systems that fuse the advantages of snapshot and line scanning hyperspectral acquisition, called snapscan systems (such as the IMEC Snapscan VNIR,75,93 470 to 900 nm, 150 spectral bands, 3600×2048 pixels, 2- to 20-s acquisition), coupled with surgical microscopes have been used for intraoperative imaging.141 These systems have been used to develop machine learning–based classification (e.g., SVM, decision tree, and RF classifiers90,93,98,151) and convolutional neural networks,153 with similar results—for instance, a system with the Senop HSC-2 camera reported accuracies around 98%.153

3.1.2. Vascular neurosurgery

A major goal in vascular neurosurgery is to restore healthy blood flow to structures in the brain and prevent ischemia (i.e., oxygen starvation), clots, and bleeding. Healthy blood flow leads to an adequate supply of oxyhemoglobin to tissue. Therefore, oxygen saturation (i.e., the ratio of oxyhemoglobin to total hemoglobin) in bulk tissue is used as a measure of tissue health and adequate oxygen delivery to tissues.

Hyperspectral oxygen saturation estimation was first used for intraoperative imaging of the cerebral cortex in the superficial temporal artery (STA)–middle cerebral artery (MCA) bypass by Mori et al.101 Hyperspectral cubes were acquired with a standalone HSC1700 line scanning camera originally developed for the TAIKI Hyperspectral EO Mission (400 to 800 nm, 81 spectral bands, 640×480 pixels, 5- to 16-s acquisition).155 A mixed spectrum consisting of oxyhemoglobin, deoxyhemoglobin, and bulk tissue scattering was fit,102 and oxygen saturation was estimated from the fitted proportions. This study found that the STA-MCA anastomosis increased the oxygen saturation distal to the anastomosis, corresponding to MCA territory brain regions, in two patients with moyamoya disease and two with occlusion of the internal carotid artery. Further, Iwaki et al.156 found that HSI could detect cerebral hyperperfusion following this anastomosis in five patients with moyamoya disease. These results showcased the potential of hyperspectral data in vascular neurosurgery for hemodynamic imaging (i.e., imaging of blood flow and tissue perfusion).

Fu et al.103 developed an LCTF-based HSI system coupled with a Zeiss surgical microscope and tested it for predicting cerebral ischemia in rats. Unlike the prior work, which fit spectra to estimate oxygen saturation, the authors used an empirical measure to estimate oxygen saturation and tissue perfusion. This work showed that the ratio of tissue reflectance around 545 nm to reflectance around 560 nm could identify early brain ischemia in a rat stroke model. Their method exploits the reflectance spectra of deoxyhemoglobin and oxyhemoglobin, which are equal at 545 nm but change rapidly in opposite directions between 545 and 560 nm, yielding high predictive power for detecting low oxygen saturation.
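The empirical index itself reduces to a per-pixel band ratio, as in this short sketch (the cube and wavelength grid are hypothetical stand-ins):

```python
import numpy as np

# Per-pixel ratio of reflectance near 545 nm to reflectance near 560 nm.
H, W, N = 256, 256, 81
cube = np.random.rand(H, W, N)             # stand-in for a measured cube
wavelengths = np.linspace(400, 800, N)     # nm

i545 = np.argmin(np.abs(wavelengths - 545))
i560 = np.argmin(np.abs(wavelengths - 560))
ratio_map = cube[:, :, i545] / np.clip(cube[:, :, i560], 1e-6, None)
# Low oxygen saturation shifts this ratio, flagging possibly ischemic pixels.
print(ratio_map.shape)
```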

Further, a snapshot hyperspectral system from IMEC with filters mosaiced on a CCD sensor (480 to 630 nm, 16 spectral bands, 256×512 pixels, 20 frames/s) was used by Laurence et al.157 to distinguish between blood vessels and bleeding in the cortex in three patients. Diffuse reflectance spectra measured by the camera are fit to a model consisting of a combination of oxyhemoglobin, deoxyhemoglobin, and tissue absorption.102 The estimated oxyhemoglobin proportion is Fourier-transformed to calculate its temporal frequency distribution. Healthy regions whose oxygen saturation is driven by the respiratory rate (cortex and blood vessels) had a first harmonic temporal frequency of around 0.23 Hz, with a significant second harmonic at 0.46 Hz. Meanwhile, bleeding varied at the cardiac frequency of around 1.3 Hz, which allowed for accurate identification of the vessels. Noordmans et al.158 used intraoperative HSI and found that these slow, sinusoidal hemodynamic oscillations displayed a stable and reproducible frequency in four epilepsy patients, whose pathologies included non-lesional epilepsy, focal cortical dysplasia, and dysembryoplastic neuroepithelial tumor, suggesting that this method may generalize.
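The frequency analysis reduces to locating the dominant peak of the per-pixel oxyhemoglobin time course, sketched here on a synthetic respiratory-like signal (the frame rate and amplitudes are hypothetical):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic HbO2 time course: a ~0.23 Hz respiratory oscillation plus noise.
fps, duration = 20.0, 60.0                       # frame rate (Hz), seconds
t = np.arange(0, duration, 1.0 / fps)
hbo2 = (0.6 + 0.05 * np.sin(2 * np.pi * 0.23 * t)
        + 0.01 * rng.normal(size=t.size))

# Fourier-transform the mean-subtracted signal and find its dominant peak.
spectrum = np.abs(np.fft.rfft(hbo2 - hbo2.mean()))
freqs = np.fft.rfftfreq(t.size, d=1.0 / fps)
dominant = freqs[np.argmax(spectrum)]
print("dominant frequency: %.2f Hz" % dominant)  # ~0.23 Hz -> vessel-like
# A dominant peak near the cardiac rate (~1.3 Hz) would instead suggest
# bleeding, per the classification described above.
```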

3.1.3. Functional neurosurgery

Epilepsy surgery requires the mapping of metabolically active brain regions, including epileptogenic regions, that demand more oxygen and blood. This link between neuronal activity and changes in blood flow and oxygenation is commonly referred to as neurovascular coupling.159 As seizures result from intense, uncontrolled neuronal activity, regions of the brain exhibiting seizure activity are highly metabolically active and as such display differences in their neurovascular coupling compared with regions not exhibiting seizure activity.

The first use of HSI for evaluating neurovascular coupling dynamics in epilepsy intraoperatively was in 2013 by Noordmans et al.,104 where one patient with intractable sensorimotor seizures of the left hand was imaged using an LCTF-based system (Varispec VIS108 filter with a pco.pixelfly camera,110 1392×1024 pixels) coupled to a Zeiss Pentero surgical microscope (Fig. 6). In this work, the entire cerebral cortex was imaged over the span of 7 min, and the area of increased oxyhemoglobin at the start of seizure activity matched the epileptogenic zone. Subsequently, Laurence et al.105 further validated this finding in 12 epilepsy patients, whose pathologies included non-lesional epilepsy, focal cortical dysplasia, and heterotopia. The authors found that regions of seizure activity could be isolated with an intraoperative HSI system.

Fig. 6

HSI to map seizures intraoperatively. (a) Local increase in oxygenation during seizure: oxygenation changes estimated from oxyhemoglobin concentration during a seizure. (b) Area matched to a photo of the cortex: overlay of oxygenation changes on an RGB image of the brain cortex, which correlates with electrical recordings of seizure activity measured via electrocorticography. Position 20 corresponds to the sensory cortex of the hand where positive seizure activity was recorded and HSI measured higher oxygenation. Reproduced from Noordmans et al.,104 with permission from John Wiley & Sons, Inc. (c) Relative concentration as a function of time.


Further, a snapshot hyperspectral system from IMEC with filters mosaiced on a CCD sensor (480 to 630 nm, 16 spectral bands, 256×512 pixels, 10 to 20 frames/s) coupled with a Zeiss Pentero microscope was used by Pichette et al.92 for video-rate intraoperative hemodynamic imaging in one patient undergoing epilepsy surgery resection. Laurence et al.160 tested this system to measure interictal discharges in eight patients with non-lesional epilepsy or subcortical heterotopias undergoing epilepsy surgery, where unsupervised clustering of oxygenation correlated well with direct electrical measurements of the imaged cortex.

Lastly, HSI has been used for intraoperative optical functional brain mapping with a three-chromophore [oxyhemoglobin, deoxyhemoglobin, and oxygenated cytochrome-c-oxidase (oxCCO)161,162] model by Caredda et al.163 Incorporating oxCCO into the model introduces a direct measure of cellular metabolism. This work used a Ximea Corporation MQ022HG-IM-SM5X5-NIR hyperspectral camera (665 to 960 nm, 25 spectral bands, 409×217 pixels, 14 frames/s)149 to measure tissue reflectance spectra while the patient repetitively clenched his fist. These reflectance spectra were fit to the model, and the resulting concentration maps were thresholded to identify areas of high oxygenation and metabolism, which were found to strongly correlate with those identified with gold standard direct electrical brain stimulation. In addition to incorporating oxCCO, Caredda et al.164 demonstrated blind unmixing using non-negative matrix factorization to recover two metabolic biomarkers that strongly correlated with direct electrical brain stimulation in 12 patients undergoing resection of a brain tumor near the motor cortex.

HSI techniques in vascular and functional neurosurgery have both relied on oxygen saturation and hemodynamics. Therefore, optimal schemes for measuring the two simultaneously have been studied by Caredda et al.165 with Monte Carlo simulations of hemodynamic signals following neuronal firing. These schemes select specific combinations of NIR spectral bands from the hyperspectral image to minimize errors in estimating the proportions of oxyhemoglobin, deoxyhemoglobin, and oxCCO, thereby seeking accurate metabolic and hemodynamic inferences. Simulations for the specific system designed and implemented in previous work,166 augmented with a Ximea MQ022HG-IM-SM5X5-NIR hyperspectral camera,149 were performed considering the effect of realistic factors such as spectral cross-talk and Gaussian noise on the estimation error. This study found that 21 to 22 spectral bands were enough to compute tissue chromophore proportions accurately (0.5% error for oxyhemoglobin, 4.4% error for deoxyhemoglobin, and 15% error for oxCCO), whereas 10 to 12 spectral bands provided similar performance. The general approach implemented with this Monte Carlo simulation can potentially be used beyond hemodynamic imaging, in neurosurgical oncology and spine surgery, to determine the optimal spectral signatures for tissue identification tasks using HSI.
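The underlying estimation problem can be sketched as least-squares unmixing over a chosen band subset (the extinction matrix below is random for illustration; real implementations use tabulated extinction spectra for HbO2, Hb, and oxCCO at the camera's band centers):

```python
import numpy as np

rng = np.random.default_rng(0)

# Three-chromophore unmixing (HbO2, Hb, oxCCO) from a subset of NIR bands.
n_bands = 25
E_full = np.abs(rng.normal(size=(n_bands, 3)))   # columns: HbO2, Hb, oxCCO
c_true = np.array([0.6, 0.3, 0.1])               # true proportions
absorbance = E_full @ c_true + 0.01 * rng.normal(size=n_bands)  # noisy data

# Restrict the fit to a chosen subset of bands (here: 12 of 25).
selected = rng.choice(n_bands, size=12, replace=False)
c_est, *_ = np.linalg.lstsq(E_full[selected], absorbance[selected],
                            rcond=None)
print("relative errors:", np.abs(c_est - c_true) / c_true)
```

Band selection then amounts to choosing the subset that minimizes these errors under realistic noise, which is what the cited Monte Carlo study quantifies.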

3.1.4. Spine surgery

HSI has been hypothesized to be useful in spine surgery as another form of surgical navigation to enable surgeons to operate without causing injury to surrounding neural elements. To demonstrate the utility of HSI in non-invasive patient positioning and navigation, a Hyperea snapshot hyperspectral camera from Quest Medical Imaging BV (450 to 950 nm, 41 spectral bands, 500×250 pixels, 16 frames/s) has been used by Manni et al.59 to track skin features pre-operatively. Based on hyperspectral data collected from 17 healthy volunteers undergoing breathing-based motion, submillimeter feature tracking was demonstrated using both handcrafted features and deep learning.

The first demonstration of intraoperative HSI in spine surgery was on a single patient undergoing spinal fusion by Ebner et al.167 This work showed the utility of both a stand-alone snapscan system from IMEC (470 to 900 nm, 150 spectral bands, 3650×2048 pixels, 2- to 40-s acquisition)75 and a stand-alone Photonfocus MV0-D2048x1088-C01-HS02-160-G2 NIR snapshot camera (665 to 975 nm, 25 spectral bands, 409×217 pixels, 50 frames/s),168 each used separately. These systems were used to capture video-rate hyperspectral reflectance data for tissue types and implant materials encountered in spinal surgery (skin, fat, muscle, bone, connective tissue, dura, and screws) in a bovine calf cadaver. The surgical team reported that the system integrated smoothly into the surgical workflow when used intraoperatively.

3.2. Datasets

The HSI systems described in Sec. 3.1 have produced rich datasets of intraoperative hyperspectral data. Some of these data are available in the public domain for use by researchers who do not have access to, or the resources for, constructing and deploying their own HSI systems. We describe publicly available datasets, including those captured for individual projects.

3.2.1. Neurosurgical oncology

The HELICoiD project has produced the following datasets, available by contacting the authors.169

  • HELICoiD Sample In-Vivo HS Human Brain Database: This dataset from Fabelo et al.122 contains five VIS-NIR hyperspectral cubes of grade IV glioblastoma multiforme (GBM) taken during procedures on five different adult patients with a Hyperspec® VIS-NIR A-series camera. These acquisitions took place at the University Hospital Doctor Negrin of Las Palmas de Gran Canaria (Spain) and the University Hospital of Southampton (United Kingdom). These cubes are 1004×1010 in spatial dimension and contain 826 spectral bands between 400 and 1000 nm. A subset of 44,555 pixels, whose types were identified with high confidence by the operating neurosurgeon and supported by a biopsy smear of the corresponding tissue, has been labeled into one of four categories: normal tissue, tumor tissue, blood vessel, and background. To reduce human error, this gold standard labeling process was done in a computer-assisted manner with a custom-built graphical user interface and a programmable angle threshold from known tissue-type spectra using the spectral angle mapper algorithm170 (see the sketch after this list). These data can be downloaded from the authors’ webpage.169

  • HELICoiD Full In-Vivo HS Human Brain Database: This extended version of the previous dataset from Fabelo et al.148 contains 36 hyperspectral cubes from 22 patients with the same VIS-NIR camera, cropped to the region of interest (ROI). It contains data not only on GBMs but also on grade II and III oligodendrogliomas, meningiomas, and metastases from renal, lung, and breast carcinomas. The gold standard labeling was done in the same semi-automatic way as in the previous database. The password for this repository can be obtained by contacting the authors.169

  • HELICoiD Enhanced In-Vivo HS Human Brain Database (Benchmark): These data from Leon et al.88 were captured, processed, and labeled with the previously described method in the process of validating a mixed supervised-unsupervised classification technique. It contains a total of 61 cubes captured from 34 adult patients for the same kinds of tumors as above. The password for this repository can be obtained by contacting the authors.169
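The spectral angle mapper used for this gold standard labeling admits a compact implementation. The sketch below, with synthetic reference spectra and an arbitrary threshold, assigns a pixel to a class only when the angle between its spectrum and a reference spectrum falls below the threshold, mirroring the semi-automatic labeling step described in the first item above; the references and threshold are placeholders.

```python
# A minimal sketch of spectral-angle-mapper (SAM) labeling: each pixel is
# assigned a class only if the angle between its spectrum and a
# neurosurgeon-selected reference spectrum falls below a programmable
# threshold. Reference spectra and the threshold here are synthetic.
import numpy as np

def spectral_angle(pixels, reference):
    """Angle (radians) between each pixel spectrum and a reference.

    pixels    : (n_pixels, n_bands)
    reference : (n_bands,)
    """
    dot = pixels @ reference
    norms = np.linalg.norm(pixels, axis=1) * np.linalg.norm(reference)
    return np.arccos(np.clip(dot / norms, -1.0, 1.0))

def sam_label(pixels, references, angle_threshold):
    """Label each pixel with the closest reference class, or -1
    (unlabeled) if no reference lies within the angle threshold."""
    angles = np.stack([spectral_angle(pixels, r) for r in references], axis=1)
    best = np.argmin(angles, axis=1)
    return np.where(angles[np.arange(len(pixels)), best] < angle_threshold,
                    best, -1)

# Toy data: 100 pixels, 826 bands, two reference classes
rng = np.random.default_rng(0)
refs = [rng.random(826), rng.random(826)]
pixels = np.vstack([refs[0] + 0.05 * rng.standard_normal(826)
                    for _ in range(100)])
print(sam_label(pixels, refs, angle_threshold=0.1))
```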

Later work by Puustinen et al.154 attempted to establish a systematic design for a microsurgical hyperspectral database. The database architecture was modeled to account for multiple characteristics of the captured cubes, such as patient information, raw data, red–green–blue (RGB) reconstructions, imaging parameters, manual annotations, pre-operative MRI, regions of interest, calibration standards, and labeled classes. This database is currently restricted to their collaborators but is projected to become publicly available in 2024.154

Lastly, the Southwest University Longitudinal Imaging Multimodal (SLIM) Brain Database of hyperspectral data was recently introduced by Martín-Pérez et al.100 This dataset contains multimodal data from one line-scan hyperspectral camera (Headwall Hyperspec® VIS-NIR E-series, 400 to 1000 nm, 369 effective spectral bands), one snapshot hyperspectral camera (Ximea Corporation MQ022HG-IM-SM5X5-NIR, 665 to 960 nm, 25 spectral bands, 409×217 pixels, 170 frames/s), and an RGB-depth light detection and ranging (LiDAR) camera (Azure Kinect DK, 3840×2160 pixels, 8-bit depth). The data, captured for 193 patients (and counting) at the Hospital Universitario 12 de Octubre in Madrid, Spain, encompass over one million pixel spectra labeled semi-automatically by neurosurgeons into five classes: normal (2 subclasses), tumor (10 subclasses), blood (4 subclasses), meninges (2 subclasses), and skull. In addition to raw images, the database contains pre-processed data with the effects of depth and noise removed, hyperspectral cubes cropped to the region of interest, generated pseudo-RGB images, and pixel-wise labels. The dataset is available on the database webpage after seeking permission from the authors.171 Data from this setup coupled and fused with MRI reconstructions are also available.172,173

3.2.2.

Vascular and functional neurosurgery

The data used for hemodynamic imaging in vascular and functional neurosurgery consist of hyperspectral video captured during surgery. One such dataset, captured for imaging interictal epileptiform discharges at the Centre Hospitalier de l’Université de Montréal by Laurence et al.,160 consists of 8- to 15-min recordings of eight patients aged 24 to 35 treated for epilepsy. Each hyperspectral cube in the video is 256×512 pixels, with 16 spectral channels between 480 and 630 nm. In addition, the data contain intraoperative ECoG recordings from an electrode grid, manually time-synced with the hyperspectral video, which can serve as the gold standard. These data are available upon request from the authors.160

3.2.3.

Spinal surgery

Hyperspectral data captured by Ebner et al.167 from a bovine calf cadaver in the spinal fusion study described above are available. This dataset was acquired at the Balgrist University Hospital, Zurich, and consists of aligned hyperspectral snapscan (470 to 900 nm, 150 spectral bands, 3650×2048  pixels) and snapshot (665 to 975 nm, 25 spectral bands, 409×217  pixels) cubes. The relevant parts of the hyperspectral cubes were labeled manually by a neurosurgeon. The labels include the various tissue types and implant materials encountered in spinal surgery (skin, fat, muscle, bone, connective tissue, dura, and screws) and are available from the authors upon request.

3.3.

Visualization Techniques

3.3.1.

Neurosurgical oncology

The standard technique for visualizing pixel-wise tissue classification from hyperspectral data is to superimpose a segmentation map (e.g., a map of tumor versus normal tissue) over a synthetic RGB (i.e., anatomic) image created from the hyperspectral cube.113,115,131 However, because pixel-wise classification algorithms do not enforce that neighboring pixels share the same class with high probability (i.e., that the classification map is piecewise constant), generating a realistic map requires integrating spatial information. Therefore, several methods from the HELICoiD project115,122,128,129,131,174 [Figs. 3(a), 4, and 5] use a mixed pixel-wise wide-field approach that makes use of both spatial and spectral information. This approach uses a k-nearest-neighbor algorithm that matches and averages non-local neighborhoods175 to combine pixel-wise supervised classification outputs (e.g., from an SVM or RF) with locality information from a single-channel representation of the hyperspectral data (generated with spectral dimensionality reduction). This yields a spatio-spectrally inferred pixel-wise classification map. Further, spectral similarity information is incorporated through majority voting176 between this spatio-spectral map and a segmentation map generated with k-means clustering. The result of this pipeline is then overlaid on a synthetic RGB (anatomic) image to yield a visualization that is faithful to both the spectral and spatial properties of the measured hyperspectral data.
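A simplified sketch of this fusion follows. It substitutes plain local averaging for the published non-local k-nearest-neighbor filtering and uses scikit-learn's k-means, so it illustrates the structure of the pipeline rather than reproducing the HELICoiD implementation.

```python
# A simplified stand-in for the HELICoiD spatio-spectral fusion: pixel-wise
# class probabilities are spatially smoothed, then fused by majority vote
# with an unsupervised k-means segmentation of the cube.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.cluster import KMeans

def fuse_maps(prob_maps, cube, n_clusters=4, smooth=5):
    """prob_maps : (H, W, n_classes) pixel-wise classifier probabilities
       cube      : (H, W, n_bands) hyperspectral cube
       Returns an (H, W) fused class map."""
    H, W, n_classes = prob_maps.shape
    # 1. Spatial regularization of the supervised probabilities
    smoothed = np.stack(
        [uniform_filter(prob_maps[..., c], size=smooth)
         for c in range(n_classes)], axis=-1)
    spatio_spectral = smoothed.argmax(axis=-1)
    # 2. Unsupervised segmentation of the spectra
    clusters = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(
        cube.reshape(-1, cube.shape[-1])).reshape(H, W)
    # 3. Majority vote: every pixel in a cluster takes the cluster's
    #    most frequent supervised label
    fused = np.empty((H, W), dtype=int)
    for k in range(n_clusters):
        mask = clusters == k
        votes = np.bincount(spatio_spectral[mask], minlength=n_classes)
        fused[mask] = votes.argmax()
    return fused

# Toy usage: 32x32 cube with 20 bands and 3 classes
rng = np.random.default_rng(1)
cube = rng.random((32, 32, 20))
probs = rng.random((32, 32, 3))
probs /= probs.sum(-1, keepdims=True)
print(fuse_maps(probs, cube).shape)
```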

Recently, sophisticated methods to visualize, reconstruct, refocus, and project hyperspectral data and segmentation maps have been developed. Augmented reality-based co-projection of HSI-generated RGB data and neural network-based segmentation labels was implemented on a HoloLens AR headset by Huang et al.177 and successfully tested in phantom resection procedures. Although the projection quality was excellent, the frame rate was restricted due to an unoptimized software implementation. Other approaches have explored low-level image processing and imaging operations such as hyperspectral image demosaicing to generate synthetic RGB images consistent with the response of the human eye,178 hyperspectral image refocusing to tackle depth variation in the surgical field,179 and synthetic white balancing to correct for illumination spectrum variability.180
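As an illustration of the synthetic RGB generation that these overlays and demosaicing methods rely on, the sketch below projects a hyperspectral cube onto crude Gaussian stand-ins for the CIE 1931 color matching functions and applies a standard XYZ-to-sRGB transform. A faithful implementation would use tabulated matching functions, as in the demosaicing work cited above;178 everything here is a self-contained approximation.

```python
# A minimal sketch of generating a synthetic RGB (anatomic) image from a
# hyperspectral cube so that colors roughly match the human eye's response.
# The CIE 1931 color matching functions are approximated with Gaussians
# for self-containment; real pipelines use tabulated CMFs.
import numpy as np

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def synthetic_rgb(cube, wavelengths):
    """cube: (H, W, n_bands) reflectance; wavelengths: (n_bands,) in nm."""
    # Crude Gaussian stand-ins for the x-bar, y-bar, z-bar CMFs
    x_bar = gaussian(wavelengths, 600, 40) + 0.35 * gaussian(wavelengths, 445, 20)
    y_bar = gaussian(wavelengths, 555, 45)
    z_bar = 1.8 * gaussian(wavelengths, 450, 25)
    XYZ = np.stack([cube @ x_bar, cube @ y_bar, cube @ z_bar], axis=-1)
    # Linear XYZ -> sRGB matrix (D65 white point)
    M = np.array([[ 3.2406, -1.5372, -0.4986],
                  [-0.9689,  1.8758,  0.0415],
                  [ 0.0557, -0.2040,  1.0570]])
    rgb = np.clip((XYZ @ M.T) / (XYZ @ M.T).max(), 0, 1)
    return rgb ** (1 / 2.2)        # simple display gamma

cube = np.random.rand(16, 16, 100)
wl = np.linspace(400, 1000, 100)
print(synthetic_rgb(cube, wl).shape)   # (16, 16, 3)
```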

Vascular, functional, and spine neurosurgery applications all use digital overlays of their analysis results on an RGB reconstruction of the surgical field.92,101,104,156,157,160,163,181

3.4.

Clinical Results

Clinical studies using the optical systems and computational methods described above have shown the potential surgical utility of HSI in reflectance mode for neurosurgery. Here, we review the results of the clinical studies performed to date and summarize their statistics and findings in Table 3.

Table 3

Clinical validation of hyperspectral imaging systems for neurosurgical applications.


3.4.1.

Neurosurgical oncology

Clinical studies using HSI in reflectance mode for neurosurgical oncology have focused on brain tissue classification during brain tumor resection (16 studies from 2016 to 2024). These studies have implemented classification algorithms ranging from classical machine learning (RFs, SVMs, and MLPs)88,90,96,98–100,112,115,116,119,122,150,171 to modern deep learning architectures (CNNs and recurrent NNs)136,141,150 (Figs. 4 and 5) with the imaging systems described in Table 1. These algorithms have been shown to be highly accurate, sensitive, and specific for identifying tumors, and some have been optimized to provide results within 1 min97,119,152 (three studies from 2016 to 2023). Accurate segmentation of a large range of primary tumors, from high-grade and low-grade gliomas to metastases, as well as of healthy tissue types, has been shown using reflectance hyperspectral data. Further, work on dimensionality reduction and spectral band selection (two studies from 2017 to 2021) has sought to reduce data processing and acquisition times to enable real-time feedback for the surgeon.99,114 In addition, clinical studies have calculated objective measures of tissue separability based on reflectance spectral similarity between tissue components (2021),146 tested the ease of integration of these methods into the surgical workflow (three studies from 2020 to 2023),87,89,93 and tested the possibility of augmented reality visualization of the hyperspectral outputs (2023).152 To facilitate further development with HSI (e.g., novel applications of machine learning algorithms), several of these studies have made their data either publicly available88,89,100,122,147,148 or available upon request.150

3.4.2.

Vascular and functional neurosurgery

Clinical studies have explored the application of HSI to imaging brain hemodynamics, neurovascular coupling, and vascular or functional pathologies using the hyperspectral systems detailed in Table 1. Vascular neurosurgery clinical studies (three studies from 2014 to 2020) have shown that HSI can provide accurate estimates of cerebral oxygenation,101 diagnose brain bleeding,157 and estimate hyperperfusion156 from hyperspectral data. Using these oxygenation mapping techniques, four studies between 2013 and 2022 demonstrated how intraoperative HSI can detect seizure activity and map functional areas of the brain using principles of neurovascular coupling,160 with validation by electrocorticography (Fig. 6). One study160 has made its data available upon request to facilitate further algorithmic research.182

3.4.3.

Spine surgery

As a first translational experience using HSI intraoperatively in spine surgery, Ebner et al.167 measured full-field spectra of various components in the scene of a patient undergoing spinal fusion (data available upon request). In addition, there has been clinical evidence of HSI-based skin feature tracking as a useful tool for intraoperative navigation in spine surgery.59

4.

Neurosurgical HSI in Fluorescence Mode

Reflectance-based hyperspectral systems provide excellent pixel-wise tissue classification capabilities. However, as observed in previous studies, the reflectance spectra of normal and tumor tissues can be very similar.146 Although these similarities can be tolerated in regions of predominantly healthy tissue or bulk tumor, they are problematic in areas of diffusely infiltrative tumor, especially at the margins of gliomas,41 where residual tumor is likely to lead to recurrence.

In addition, inter-patient and inter-system variability in the reflectance spectra has limited the generalization of trained models. For instance, mixed-patient pixel-wise data give high classification metrics (99% accuracy and sensitivity).112,113 However, these metrics drop to as low as 80% accuracy and 40% sensitivity131 when the data are divided patient-wise for classification. Such a significant drop highlights the current limitations in generalizing reflectance-based HSI techniques across patients for guiding brain tumor resections.

Fluorescence-guided surgery (FGS) was introduced as a standard of care technique for high-grade gliomas almost 20 years ago and has been shown to be a safe and effective surgical adjunct to delineate tumor tissue intraoperatively.9,183 FGS “extends” the surgeon’s vision by increasing the contrast between healthy and tumor tissues.5,45,184–186 Clinically approved fluorophores for FGS include 5-aminolevulinic acid (ALA)-induced PpIX,187–189 fluorescein sodium (FS),190,191 and ICG192,193 (Fig. 2). These fluorophores selectively accumulate in tumor tissue through various cellular mechanisms194 and fluoresce when illuminated with excitation light of an appropriate wavelength. PpIX and FS are typically excited with violet and blue light at 405 and 494 nm, respectively, and fluoresce in the VIS spectrum with emission maxima at 635 nm188 and 520 nm,190 respectively.195 ICG is excited at 780 nm and fluoresces with its peak in the NIR at 815 nm.193 However, it has been shown to produce significant fluorescence contrast beyond 1000 nm, allowing for imaging in the SWIR range.196

5-ALA-induced PpIX fluorescence has been extensively studied,197,198 validated,45 characterized,199–202 and established as a standard in surgery.195,203 PpIX is an intermediate in the heme synthesis pathway. The mechanisms of PpIX accumulation in tumor tissue are multifactorial (e.g., increased tumor metabolism, tumor proliferation, enzymatic or cellular transporter modifications, and blood–brain barrier breakdown204). Studies have clearly demonstrated its utility in guiding resections, with excellent diagnostic metrics for tumor tissue identification. PpIX accumulates in tumors to produce significant fluorescence after an oral dose of its precursor, 5-ALA (20 mg/kg),205 2 to 3 h before surgery. Further, PpIX has its largest excitation maximum at 405 nm,188 with a broad (>200 nm) Stokes shift between this excitation maximum and its emission maximum at 635 nm.188 This large Stokes shift allows for effective filtering of excitation light without loss of fluorescence emissions. Further, most of its fluorescence spectrum lies in a spectral window with relatively low tissue scattering, hemoglobin absorption, and autofluorescence.200 Thus, HSI has been used to isolate PpIX fluorescence from autofluorescence, other fluorescent markers, and noise via spectral fitting, with correction for attenuation due to tissue optical properties. The spectral processing enabled by HSI has allowed the detection of “invisible tumors” by measuring levels of PpIX below the visible threshold of conventional clinical systems67,199 (Fig. 7). This increase in sensitivity and preservation of specificity for PpIX fluorescence has been quantified systematically.206 We next discuss HSI systems that leverage these advantages along with their associated computational methods.

Fig. 7

In vivo hyperspectral fluorescence imaging of PpIX in a glioblastoma patient. Intraoperative images using a spectral scanning system [Fig. 4(b)] were captured during the resection of a glioblastoma at the beginning (a)–(c), near the end (e)–(g), and at the end (i)–(k) of surgery. The first three columns show (from left to right) RGB images reconstructed from the hyperspectral cube (white light), co-registered fluorescence images using the conventional fluorescence surgical microscope (conventional fluorescence), and PpIX concentration maps estimated from hyperspectral cubes (hyperspectral quantitative fluorescence). (d) In vivo fluorescence spectra acquired from three locations marked by colored crosses (+) in panel (a): a high-intensity PpIX spectrum and peak (red +) matches the visible pink fluorescence in the center of the tumor in panel (b); an intermediate-intensity PpIX spectrum and peak (blue +), with no visible pink fluorescence, lies close to the tumor in panel (b); and the absence of a PpIX spectrum and peak (green +) matches the absence of visible pink fluorescence far from the tumor in panel (b). (h) In vivo fluorescence spectra acquired from one location marked by a blue cross (+) in panel (e) show an intermediate-intensity PpIX spectrum and peak (blue +), no visible pink fluorescence in panel (f), high estimated PpIX concentrations in panel (g), and validation by pathology as tumor-infiltrated tissue in panel (l). In panels (d) and (h), the y-axis shows the intensity of fluorescence emission in arbitrary units, and the x-axis shows the wavelength λ in nanometers. vFI, visible fluorescence with the conventional microscope; qFI, quantitative fluorescence imaging estimates of PpIX. Reproduced from Valdés et al.,67 under CC-NC-SA 3.0.


4.1.

Imaging Hardware and Software

The first demonstration of multispectral fluorescence imaging in neurosurgical oncology was in 2003, using a wide-field five-band multispectral system (bandpass spectral filters from Omega Optical207 at 495-, 543-, 600-, 640-, and 720-nm center wavelengths with 20-nm bandwidths; 755×484 DVC CCD detector208). The authors imaged a fluorescent tumor after exogenous administration of the fluorescent agent Photofrin,209 with a total acquisition time of 15 s. This study concluded that multispectral imaging can separate Photofrin fluorescence from background with a 10:1 signal-to-background ratio. Further, it hypothesized that multispectral data could estimate Photofrin concentrations, with a detection limit of 50 to 100 ng/ml at 0.5-mm depth inside tissue-mimicking phantoms. However, this work assumed that tissue is homogeneous, so these estimates were accurate only when tissue optical properties matched those of the validation phantoms.

As noted before, the first hyperspectral fluorescence imaging was demonstrated in 2007, when Gebhart et al.107 developed an HSI system consisting of a Varispec VIS-20 LCTF from Cambridge Research Instruments, Inc.108 coupled with a 512×512 PhotonMax EMCCD camera109 mounted on a surgical microscope to measure intraoperative autofluorescence and diffuse reflectance spectra in one patient. The authors found that a value less than 1.25 for the ratio of autofluorescence at 460 nm to diffuse reflectance at 700 nm was highly diagnostic for tumor tissue.

Valdés et al.67 developed a similar hyperspectral system and implemented the first intraoperative approach to correct fluorescence signals for the distorting and attenuating effects of tissue optical properties in 12 patients with brain tumors [Fig. 3(c)]. They imaged the diffuse reflectance at the excitation and emission wavelengths together with the fluorescence and then applied a correction algorithm67,210,211 (i.e., a spectrally constrained dual-band normalization algorithm) for use in 5-ALA-PpIX FGS. Similar to the work by Gebhart et al.,107 this approach used a Varispec LCTF coupled to a pco.pixelfly camera via a custom optical adapter110 on a surgical microscope modified for fluorescence imaging (Zeiss OPMI Pentero). The surgical field was imaged under white-light and 405-nm illumination to measure reflectance and fluorescence spectra, respectively,67,84,211 with a total maximum acquisition time of <16 s. The measured fluorescence spectrum Fraw(λ) was corrected by an empirical factor inversely proportional to the excitation-band reflectance Rexc and to a power of the emission-band reflectance Rem:

$$F_{\mathrm{corr}}(\lambda) = \Omega\,\frac{F_{\mathrm{raw}}(\lambda)}{R_{\mathrm{exc}}\,R_{\mathrm{em}}^{0.7}}.$$

The corrected fluorescence spectrum was fit to a weighted sum of basis spectra for the fluorophores of interest (e.g., PpIX, fluorescein sodium, and tissue autofluorescence) to isolate only the PpIX or FS fluorescence. The estimated corrected PpIX values were found to be directly proportional to absolute PpIX concentrations. This correction allowed the detection of PpIX concentrations as low as 20 ng/ml, significantly lower than the lowest concentrations of 600 to 1000 ng/ml found in visually fluorescent (i.e., red-pink visual fluorescence through the surgical oculars) high-grade glioma tissues. These results were encouraging as they indicated the ability to detect low yet diagnostically significant PpIX concentrations to identify low-grade glioma and infiltrative margins that are usually “invisible” with conventional techniques49,51,67,189,212–214 (Fig. 7). This work concluded that a threshold of 100 ng/ml had a positive predictive value of >90% for tumor tissues. The HSI approach by Valdés et al.67 was further validated in additional studies demonstrating improved detection capabilities in clinical ALA-PpIX FGS.84 In further work by Valdés et al.,211 a more sensitive pco.edge camera215 allowed lower acquisition times of 1 to 2 s with the same detection limit. An even more sensitive EMCCD camera216 from Nüvü cooled to −85°C further decreased the limit of detection to 1 ng/ml, comparable to point spectroscopy methods,217 at a maximum total acquisition time of 5 s. This correction method was further applied to pediatric brain tumors, where the limit of visual detection was determined to be 200 ng/ml218 and the lower limit of detection for PpIX was 20 ng/ml. These methods were all validated with tissue-mimicking phantoms consisting of a solution of PpIX mixed with an absorber (e.g., hemoglobin and yellow food dye) and a scatterer (e.g., intralipid emulsion).67 Known fluorophore concentrations in these phantoms can be used to map the corrected fluorescence to absolute PpIX concentrations and to evaluate accuracy metrics such as linearity (i.e., R2 value) and mean percentage error.
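A minimal sketch of these two steps (dual-band normalization using the equation above, followed by non-negative spectral unmixing) is given below on synthetic spectra. The basis spectra, Ω, and the phantom-derived calibration slope are placeholders rather than clinical values.

```python
# Sketch of the correction-then-unmixing pipeline, under the stated model:
# (1) F_corr(lambda) = omega * F_raw(lambda) / (R_exc * R_em**0.7),
# (2) fit F_corr to a non-negative sum of basis spectra to isolate PpIX.
import numpy as np
from scipy.optimize import nnls

def correct_fluorescence(F_raw, R_exc, R_em, omega=1.0):
    """Spectrally constrained dual-band normalization."""
    return omega * F_raw / (R_exc * R_em ** 0.7)

def unmix(F_corr, basis):
    """Non-negative least-squares fit of F_corr to columns of `basis`
    (e.g., PpIX and autofluorescence); returns fluorophore weights."""
    weights, _ = nnls(basis, F_corr)
    return weights

# Toy spectra: PpIX-like peak at 635 nm plus broad autofluorescence
wl = np.linspace(600, 720, 60)
ppix = np.exp(-0.5 * ((wl - 635) / 10) ** 2)
autofl = np.exp(-0.5 * ((wl - 660) / 60) ** 2)
basis = np.column_stack([ppix, autofl])

F_raw = (0.8 * ppix + 0.3 * autofl) * 0.5   # attenuated mixture
F_corr = correct_fluorescence(F_raw, R_exc=0.5, R_em=np.full_like(wl, 0.6))
w = unmix(F_corr, basis)
c_ppix = 25.0 * w[0]   # phantom-derived slope (ng/ml per unit), assumed
print("PpIX weight:", w[0], "-> estimated concentration:", c_ppix, "ng/ml")
```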

Spectrally constrained dual-band normalization has been systematically evaluated for its accuracy in correcting the raw fluorescence signal for tissue optical properties; its highly sensitive estimates of fluorophore (i.e., PpIX) concentrations;67,210,211,219,220 its reproducibility across clinical and research teams and HSI systems;94,221–223 and its diagnostic utility, with greater sensitivity, negative predictive value, and overall accuracy for tumor detection compared with expert visual evaluation. Specifically, Lehtonen et al.206 found that visual assessment yielded 63% accuracy, 48% sensitivity, 92% specificity, and a 340 ng/ml minimum limit of detection for PpIX, whereas an HSI system based on a standalone Senop HSC-2 camera (500 to 900 nm, up to 1000 spectral bands, 1024×1024 pixels, 150 frames/s)224 yielded 96% accuracy, 100% sensitivity, 86% specificity, and a 16 ng/ml minimum limit of detection (16 samples with PpIX and eight control samples; number of patients not specified).

Bravo et al.219 have shown in three patients that corrected concentration estimates (with spectral fitting to isolate PpIX) correlate strongly with point spectroscopy estimates220 (linear fit r=0.98) when compared with uncorrected estimates (linear fit r=0.91 accounting for other fluorophores, linear fit r=0.82 not accounting for other fluorophores) (Fig. 8).

Fig. 8

Comparison of HSI to point spectroscopy. Point spectroscopy provides gold standard spectrally resolved measurements and PpIX concentration estimates that can be used to validate the estimates from hyperspectral processing. HSI extends the applicability of fluorescence guidance to WHO grade III anaplastic oligoastrocytomas (AOA) (a)–(e) and meningiomas (MEN) (f)–(j), where the PpIX concentration is significantly less than the limit for visual fluorescence. (k) Fluorescence spectra fit and estimated PpIX concentrations from HSI (top) and point spectroscopy measurements (bottom). MR texture map = matching MRI 2D image; Zeiss—white = white-light image from a conventional Zeiss microscope; Zeiss—blue = fluorescence image from a conventional Zeiss microscope; integrated fluorescence = map of fluorescence calculated from the area under the fluorescence spectrum from 620 to 650 nm; quantitative PpIX = map of PpIX concentration estimates. Reproduced from Bravo et al.,219 under CC-BY 4.0.


Xie et al.221 developed a Bayesian reconstruction method based on spatial regularization and tested it on one tissue specimen from a glioblastoma patient. This approach poses the reconstruction as a total variation-regularized minimization problem:

$$\hat{C}(x,y) = \operatorname*{arg\,min}_{C(x,y)} \left[ \sum_{x,y,\lambda} \Big( F_{\mathrm{raw}}(x,y,\lambda) - \Omega\,\big(1 - R_{\mathrm{exc}}(x,y,\lambda)\big)\,R_{\mathrm{em}}^{2.6}(x,y,\lambda)\,C(x,y) \Big)^{2} + \Gamma\,\big\lVert \nabla C(x,y) \big\rVert_{1} \right].$$

The first term, based on previous point spectroscopy analysis,85 makes the reconstruction of C(x,y) faithful to the measurement Fraw(x,y,λ). Here, Ω is a factor that maps corrected fluorescence intensity to concentration, and Γ is a regularization weight that controls the smoothness of the reconstruction. This reconstruction lowers the detection limit to 10 ng/ml using an uncooled ORCA-Flash4.0 sCMOS sensor from Hamamatsu Photonics with 26 s of total acquisition and processing time. Such low detection levels would be particularly useful for detecting low but diagnostically significant PpIX levels in low-grade gliomas.220 Further computational work by Black et al.199 used an unspecified sCMOS camera225 with the Sony IMX252 sensor, and other work used a pco.edge camera (14 ng/ml minimum detection limit).94,219,226
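The sketch below illustrates this kind of spatially regularized estimation on a synthetic phantom, minimizing a per-pixel least-squares data term plus a smoothed total variation penalty by plain gradient descent. The published method uses a more careful Bayesian solver, so this is only a structural illustration with synthetic operators and data.

```python
# Spatially regularized concentration estimation: minimize
# sum((F - A*C)**2) + gamma * TV(C), with TV smoothed by eps,
# where A collects the per-pixel forward weights Omega*(1-R_exc)*R_em**2.6.
import numpy as np

def tv_reconstruct(A, F, gamma=0.1, n_iter=500, lr=0.1, eps=1e-6):
    """A, F : (H, W) forward weights and measured fluorescence.
       Returns the (H, W) non-negative concentration map C."""
    C = np.zeros_like(F)
    for _ in range(n_iter):
        # Gradient of the data term
        g = -2 * A * (F - A * C)
        # Gradient of smoothed isotropic total variation
        dx = np.diff(C, axis=1, append=C[:, -1:])
        dy = np.diff(C, axis=0, append=C[-1:, :])
        mag = np.sqrt(dx ** 2 + dy ** 2 + eps)
        div = (dx / mag - np.roll(dx / mag, 1, axis=1)
               + dy / mag - np.roll(dy / mag, 1, axis=0))
        g -= gamma * div
        C = np.maximum(C - lr * g, 0)   # concentrations are non-negative
    return C

# Toy phantom: bright square of fluorophore on a dark background
H = W = 32
A = np.full((H, W), 0.8)
C_true = np.zeros((H, W))
C_true[10:22, 10:22] = 1.0
F = A * C_true + 0.05 * np.random.randn(H, W)
print("mean abs error:", np.abs(tv_reconstruct(A, F) - C_true).mean())
```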

Finally, Black et al.222 used machine learning–based approaches on the unmixed fluorophore contributions to predict the following tumor properties in 891 hyperspectral measurements from 184 patients with multiple brain tumor histology types: tumor type (12 categories, 85% test accuracy), tumor margin location (tumor bulk, infiltrative margin, and healthy tissue altered by tumor; 96% test accuracy), isocitrate dehydrogenase (IDH) gene mutation status (mutated or normal, 86% test accuracy), and tumor grade (II–IV, 93% test accuracy). In addition, PCA variation analysis revealed that the five fluorophores used in the linear fitting models described below were the most likely components explaining the dataset spectra under the assumption of Gaussian noise.222 Incorporating the more physically accurate Poisson unmixing model, with a dataset containing 555,666 spectra, allowed Black et al.223 to unmix fluorophores that were previously impossible to isolate due to their small proportions, thus building a “spectral library” containing PpIX620 (see the next paragraph), PpIX634, reduced nicotinamide-adenine dinucleotide (NADH), flavin adenine dinucleotide (FAD), flavins, lipofuscin, melanin, elastin, and collagen. Finally, deep learning–based architectures have incorporated the non-linear wavelength-dependent effects not accounted for by the previous algorithms to correct and unmix fluorescence spectra with a semi-supervised architecture.226 This approach yielded more realistic and smoother estimates of PpIX concentration maps while removing imaging artifacts such as specularities.

As mentioned above, correction methods such as spectrally constrained dual-band normalization commonly undergo validation using fluorescent tissue-mimicking liquid phantoms. However, a recent study by Suero Molina et al.227 proposed a photostate of PpIX that contributes a fluorescence spectrum shifted to a peak at 620 nm and that likely occurs naturally in tissue, but not in such phantoms. The presence of this photostate (called PpIX620, as opposed to the usual PpIX634) impacts the accuracy of conventional linear fitting models, which use the basis spectra of PpIX634, PpIX photoproducts, and autofluorescence from NADH, lipofuscin, and flavins. Therefore, incorporating the PpIX620 spectrum into linear fitting models has been proposed to improve the accuracy of the spectral fit in dimly fluorescent areas (e.g., low-grade gliomas and infiltrative regions of high-grade gliomas). This also lowers false positives by removing the spurious contribution of PpIX620, yielding the true PpIX634 spectrum and therefore accurate, lower PpIX634 estimates.199 Additional studies have noted that the proportion of the two photostates (i.e., the overall blue shift of the PpIX spectrum) correlates with tumor grade.214

The LCTF design described above provided a small footprint enabling HSI with high spatial resolution at user-defined spectral resolutions and acquisition times on the order of seconds. Although this HSI design and subsequent implementations have been translated into the operating room given their integration with commercial surgical microscopes, they suffer from one major limitation for widespread surgical use: image acquisition requires spectral scanning (i.e., an image for every wavelength of interest, with a finite camera exposure at each wavelength, to reconstruct a full hyperspectral cube). As such, these systems do not provide real-time surgical guidance, limiting their intraoperative utility. To address this limitation, a recent snapshot HSI system using a series of birefringent crystals was developed by Marois et al.228 to capture 64 spectral channels at a time. This system achieved a frame rate of 4 to 6 frames/s over a broad wavelength range (425 to 825 nm, 64 spectral bands, 600×400 pixels) and also implemented a spectrally constrained dual-band normalization technique.

4.2.

Clinical Results

Clinical studies using HSI in FGS have focused mostly on tissue classification for improving tumor detection (Table 3). The first study sought to detect residual tumors with a limited number of (multispectral) images209 coupled with visual inspection of the spectral channels. The first quantitative clinical studies, carried out by Valdés et al.,67 unmixed the fluorescent components of tissue via fluorescence spectrum fitting and corrected the PpIX fluorescence intensity for attenuation due to light–tissue interaction to estimate absolute pixel-wise tissue concentrations67,84 in 12 patients undergoing brain tumor resection (Fig. 7). Subsequent work from this group showed improvements in accuracy and sensitivity for PpIX detection.219 These corrections were further incorporated into a spatially regularized optimization for smooth and accurate estimates of PpIX concentration maps.221,226 Further, the autofluorescence properties of tissue were characterized in two studies to incorporate them into the unmixing algorithms, using an increasing number of components and known compounds (e.g., PpIX photoproducts and differing PpIX states): one analyzing 2692 in vivo spectra from 128 patients199 and one building a spectral endmember library from 555,666 fluorescence spectra measured from 891 ex vivo samples.223 The coefficients of the resulting fluorescence spectrum fit were shown to be useful for predicting tumor properties such as type, margin, grade, and IDH mutation status222 in 891 spectra from 184 patients. Further, to optimize the dose and administration time of 5-ALA, hyperspectral studies estimated the pharmacokinetics of PpIX inside tissue: one with 81 spectra from 25 patients with low-grade gliomas201 and one with 201 spectra from 68 patients with malignant gliomas.202 These studies showed an optimal post-dose time of 7 to 8 h, at which the PpIX tumor fluorescence signal peaks.

The results of these studies point toward the potential for HSI to enhance fluorescence feedback and serve as an improved surgical adjunct. One of these HSI studies has made its dataset available upon request,219 whereas another offers the spectral library constructed during its analysis223 to facilitate further research. Absolute PpIX concentrations, determined by correcting the fluorescence spectrum for the distorting effects of tissue optical properties and unmixing it from autofluorescent and other fluorescent components in tissue, can increase the accuracy of predicting tumor presence, whereas the unmixed autofluorescence components can predict tumor properties with machine learning. This, combined with the optical functional and vasculature mapping from the previous section, will allow for all-optical joint visualization of anatomy and tumor for safe and accurate tumor resection.

5.

Future Perspectives

As discussed in the previous sections, there is substantial evidence supporting the potential of HSI for intraoperative visual feedback. HSI systems, particularly those utilizing snapshot and snapscan techniques, are convenient for clinical deployment due to their small footprint and near-real-time acquisition capabilities. Co-developed computational methods have demonstrated excellent results in accurately delineating tumor pathology from normal tissue. These systems have also enabled non-invasive, ECoG-style brain mapping of metabolically active tissue to visualize functional connectivity, as well as hemodynamic inference of molecular metabolic parameters such as oxyhemoglobin and oxCCO concentrations and oxygen saturation. Prototype augmented reality systems have shown promise in accurately projecting hyperspectral results onto the surgical field in 3D. Integrating these capabilities can create a powerful, unified, non-invasive, optical 3D visualization system that fits seamlessly into existing surgical hardware and workflows. Such a system would provide the surgeon with information far richer than is possible with traditional visual methods or with an RGB camera displayed on 2D monitors.

However, several areas need improvement to enhance these guidance techniques. The most critical is the frame rate of the final hyperspectral outputs. The pipeline leading to these outputs involves acquisition, processing, and projection, each of which needs optimization. By individually or jointly refining these components, the final frame rate can be brought closer to real time, significantly improving the system’s utility in surgical settings.

Among the variety of HSI implementations discussed in Sec. 2.2, snapshot and snapscan hyperspectral systems70,71,92 coupled with a surgical microscope seem the most practical for immediate clinical translation. Even with these solutions, more work needs to be done to increase the spatial resolution of the hyperspectral cube. One possible approach is upsampling the low-spatial-resolution hyperspectral cube with bilateral upsampling229 and pansharpening230 algorithms, as sketched below. To make the more commonly used line-scan hyperspectral imaging systems practical for surgical guidance, their quantum efficiency needs to increase and their noise floor needs to decrease, both of which can be achieved using cooled EMCCD cameras,216 among other systems.
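As an illustration of the kind of guided upsampling this would involve, the sketch below applies a Brovey-like intensity-ratio pansharpening step to a synthetic low-resolution cube using a co-registered high-resolution panchromatic image. Band weighting and registration, which a real surgical system would require, are assumed away here.

```python
# Brovey-like pansharpening of a low-resolution hyperspectral cube guided
# by a co-registered high-resolution panchromatic (or grayscale) image.
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def pansharpen(cube_lr, pan_hr, scale):
    """cube_lr : (h, w, n_bands) low-res cube
       pan_hr  : (h*scale, w*scale) high-res panchromatic image
       Returns an upsampled (h*scale, w*scale, n_bands) cube."""
    # 1. Naive spatial upsampling of every band
    cube_up = zoom(cube_lr, (scale, scale, 1), order=1)
    # 2. Synthesize a low-pass pan image at the hyperspectral resolution
    pan_lp = gaussian_filter(pan_hr, sigma=scale)
    # 3. Inject high-frequency spatial detail via the intensity ratio
    ratio = pan_hr / (pan_lp + 1e-6)
    return cube_up * ratio[..., None]

cube_lr = np.random.rand(32, 32, 25)
pan_hr = np.random.rand(128, 128)
print(pansharpen(cube_lr, pan_hr, scale=4).shape)   # (128, 128, 25)
```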

Another potential direction for acquisition speedup is dimensionality reduction. Because hyperspectral channels have a certain spatial regularity (nearby pixels in nearby channels have close intensity values with high probability), not all entries in the hyperspectral cube are fully independent. Therefore, it is possible to measure subsets of the hyperspectral cube, or an approximation to it, while still extracting the required information. Examples of this approach include selecting the most important spectral channels,90,99,118,145,147 implementing pre-calculated programmable spectral filters matched to the combination of tissue components of interest,78 and measuring low-rank approximations to the hyperspectral cube76 (see the sketch below). Even with these existing methods, selecting the free parameters (the number of channels, the filter shapes, and the rank of the approximation) remains an open problem, requiring an analysis of the statistics of the hyperspectral data and the propagation and noise model of the imaging system.160 Moreover, a balance needs to be achieved among speed (e.g., real-time imaging), quality of HSI data (e.g., high spatial resolution, high spectral resolution, and high signal to noise), and cost (e.g., light-field technologies) to provide clinical value. HSI is still in its infancy as an intraoperative imaging modality, and as newer systems are translated into clinical use for specific applications (e.g., HSI for FGS of gliomas), the right balance among speed, data quality, and cost will likely determine the impact HSI has as an intraoperative imaging modality.
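The spectral redundancy that makes such reduced measurements possible is easy to demonstrate numerically. The sketch below forms the best rank-r approximation of a synthetic cube via truncated SVD and reports the fraction of spectral energy captured; this is the quantity that systems such as KRISM76 approximate optically. The cube here is synthetic, spanned by three spectral signatures.

```python
# Low-rank structure of hyperspectral cubes: a truncated SVD of the
# (pixels x bands) matrix captures most of the signal with few components.
import numpy as np

def low_rank_cube(cube, rank):
    """Best rank-`rank` approximation of an (H, W, n_bands) cube and the
    fraction of spectral energy it retains."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B)
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    X_r = (U[:, :rank] * s[:rank]) @ Vt[:rank, :]
    energy = (s[:rank] ** 2).sum() / (s ** 2).sum()
    return X_r.reshape(H, W, B), energy

# Synthetic cube spanned by 3 spectral signatures -> nearly rank 3
rng = np.random.default_rng(2)
abundances = rng.random((64, 64, 3))
spectra = rng.random((3, 100))
cube = abundances @ spectra + 0.01 * rng.standard_normal((64, 64, 100))
approx, energy = low_rank_cube(cube, rank=3)
print(f"rank-3 captures {energy:.4f} of spectral energy")
```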

Current computational algorithms and their implementations need significant work to reach the required speeds. Condensed data input from the imaging systems described above, combined with parallel implementations of optimized algorithms on platforms such as field-programmable gate arrays,125 can enable this acceleration. Improved classification algorithms, optimized for sensitivity to the pathology under consideration and modified to use such condensed data, can lessen the required computational load. The ability to process hyperspectral images quickly would also make it possible to process hyperspectral video, opening up avenues for applying previously developed computer vision techniques for instrument and feature tracking, manipulation, and guidance. To incorporate these results into a comfortable 3D display equipped for surgery or telesurgery, optimized implementations of the augmented reality projection methods prototyped in the literature177 need to be developed. Lastly, to jointly optimize all the components above, methods to simulate the entire pipeline (emission at the light source, propagation through the scene, and image formation at the camera) must be developed to ease the burden of prototyping the corresponding HSI systems.162,165

Due to the narrow focus of existing clinical studies on certain kinds of pathologies, each clinical study suffers from a low number of patients.113,115,141,221,231 Larger and ultimately randomized controlled clinical studies, broader in terms of pathologies and imaged tissue properties90,96,119,145,150,151,222,223 as well as demographics,156,160 are an essential step toward establishing hyperspectral imaging as a standard in intraoperative guidance. Further, clinical HSI studies have yet to report even non-randomized patient outcomes (e.g., overall survival, progression-free survival, and rates of seizure freedom). In addition, it is vital to standardize the protocols of such clinical studies so that results are reproducible and comparable across studies,160,231 to standardize data formats and schemas so that data can be parsed and re-utilized easily, and to set specific goals for each clinical approach.89 These studies must include analyses of inter-patient variability in the data and statistics125 and methods to counter it to ensure consistent results over time. It is also necessary for clinical studies to consider the ease and complexity of use of the studied system and to record the experience of the operating room (OR) team post-study for further refinement.89,167

As a result of the relatively few clinical studies and privacy concerns, as noted in previous work,90,99,113,115,141,152,222,231,232 there is a lack of publicly available labeled hyperspectral data to enable the development of computational techniques at venues with high expertise in artificial intelligence, where clinical studies cannot be conducted. This is especially the case with rare tumors and vascular and functional disorders. The available datasets are all semi-automatically labeled with input from a neurosurgeon or a pathologist, which introduces the possibility of human error. Therefore, there is a need to fuse HSI with other, more established imaging modalities, such as MRI, for automatic labeling of hyperspectral images.172,173 In addition, for infiltrative tumors, where it is impossible to draw a sharp boundary between tumor and healthy tissues, a method for fuzzy margins is needed to perform accurate labeling,222 which can be achieved with co-registered MRI data and MRI classification algorithms. Fusion with MRI also allows for estimation of brain shift and joint intraoperative feedback from both modalities.59

Furthermore, all the HSI systems described here image light in the visible, NIR, and SWIR ranges of the electromagnetic spectrum. Light in this range has limited penetration depth, so these HSI systems have limited ability to image deep into tissue,233 typically less than a centimeter. Meanwhile, techniques such as MRI, US, and intraoperative neuronavigation provide 3D information deeper inside brain tissue. A fusion of these techniques would allow the surgeon to interpret complementary sources of information: in vivo surface and subsurface molecular information from HSI, in vivo subcentimeter structural information from US, 3D structural information at one time point during surgery from intraoperative MRI, and correspondences with 3D pre-operative information via intraoperative neuronavigation.46

The widespread adoption of intraoperative HSI depends on the success of the future work outlined above and on the practicality of the resulting optimized methods. The success of these HSI methods in pre-clinical work and clinical studies opens up possibilities for commercial miniaturization, cost reduction, and integration into existing surgical microscopes and visualization software and hardware, and it will drive further research through large-scale funded projects such as HELICoiD.113 If effective enough, techniques developed for neurosurgical HSI can be applied to minimally invasive procedures, to procedures in other surgical subspecialties, and to data generation for education and surgical training tools. In summary, supported by modern techniques from imaging, computation, and visualization, and driven by clinical interest, hyperspectral imaging has the potential to become a clinical standard of care in neurosurgery.

Disclosures

J. D. Bernstock has positions and equity in Pockit Diagnostics Ltd. J. D. Bernstock also has an equity position in Treovir Inc. and is on the boards of Centile Bio and NeuroX1. P. A. Valdés is a consultant for NX Development Corp. All other authors have no pertinent disclosures to make.

Code and Data Availability

This review paper was based on a literature survey of hyperspectral imaging in neurosurgery performed using standard tools such as Google Scholar and PubMed. Therefore, there is no code or data accompanying this paper. All the claims, results, and data we quoted in the paper are accompanied by citations to their original research publications.

Acknowledgments

This work was supported in part by the National Institutes of Health (National Institute of Biomedical Imaging and Bioengineering) (Grant No. 5R21EB034033) (P. A. Valdés), the Cancer Prevention Research Institute of Texas (Grant No. RP220581) (P. A. Valdés), and the National Science Foundation Expeditions (Grant No. 1730574) (A. Veeraraghavan).

References

1. 

Z. Lin, C. Lei and L. Yang, “Modern image-guided surgery: a narrative review of medical image processing and visualization,” Sensors, 23 (24), 9872 https://doi.org/10.3390/s23249872 SNSRES 0746-9462 (2023). Google Scholar

2. 

L. Privitera et al., “Image-guided surgery and novel intraoperative devices for enhanced visualisation in general and paediatric surgery: a review,” Innov. Surg. Sci., 6 (4), 161 –172 https://doi.org/10.1515/iss-2021-0028 (2021). Google Scholar

3. 

B. Bortot et al., “Image-guided cancer surgery: a narrative review on imaging modalities and emerging nanotechnology strategies,” J. Nanobiotechnol., 21 (1), 155 https://doi.org/10.1186/s12951-023-01926-y (2023). Google Scholar

4. 

V. Ntziachristos, J. S. Yoo and G. M. van Dam, “Current concepts and future perspectives on surgical optical imaging in cancer,” J. Biomed. Opt., 15 (6), 066024 https://doi.org/10.1117/1.3523364 JBOPFO 1083-3668 (2010). Google Scholar

5. 

P. A. Valdés et al., “Optical technologies for intraoperative neurosurgical guidance,” Neurosurg. Focus, 40 (3), E8 https://doi.org/10.3171/2015.12.FOCUS15550 (2016). Google Scholar

6. 

Y. Rivero-Moreno et al., “Robotic surgery: a comprehensive review of the literature and current trends,” Cureus, 15 (7), e42370 https://doi.org/10.7759/cureus.42370 (2023). Google Scholar

7. 

B. S. Peters et al., “Review of emerging surgical robotic technology,” Surg. Endosc., 32 (4), 1636 –1655 https://doi.org/10.1007/s00464-018-6079-2 (2018). Google Scholar

8. 

F. Cepolina and R. Razzoli, “Review of robotic surgery platforms and end effectors,” J. Rob. Surg., 18 (1), 74 https://doi.org/10.1007/s11701-023-01781-x (2024). Google Scholar

9. 

W. Stummer et al., “Intraoperative detection of malignant gliomas by 5-aminolevulinic acid-induced porphyrin fluorescence,” Neurosurgery, 42 (3), 518 –526 https://doi.org/10.1097/00006123-199803000-00017 NEQUEB (1998). Google Scholar

10. 

W. Stummer et al., “Fluorescence-guided resection of glioblastoma multiforme by using 5-aminolevulinic acid-induced porphyrins: a prospective study in 52 consecutive patients,” J. Neurosurg., 93 (6), 1003 –1013 https://doi.org/10.3171/jns.2000.93.6.1003 JONSAC 0022-3085 (2000). Google Scholar

11. 

P. Mascagni et al., “Computer vision in surgery: from potential to clinical value,” NPJ Digit. Med., 5 (1), 163 https://doi.org/10.1038/s41746-022-00707-5 (2022). Google Scholar

12. 

W. Othman et al., “Tactile sensing for minimally invasive surgery: conventional methods and potential emerging tactile technologies,” Front. Rob. AI, 8 705662 https://doi.org/10.3389/frobt.2021.705662 (2022). Google Scholar

13. 

Y. Wang et al., “Visual detection and tracking algorithms for minimally invasive surgical instruments: a comprehensive review of the state-of-the-art,” Rob. Auton. Syst., 149 103945 https://doi.org/10.1016/j.robot.2021.103945 RASOEJ 0921-8890 (2022). Google Scholar

14. 

F. Chadebecq et al., “Computer vision in the surgical operating room,” Visc. Med., 36 (6), 456 –462 https://doi.org/10.1159/000511934 (2020). Google Scholar

15. 

C. R. Garrow et al., “Machine learning for surgical phase recognition: a systematic review,” Ann. Surg., 273 (4), 684 –693 https://doi.org/10.1097/SLA.0000000000004425 (2021). Google Scholar

16. 

E. D. Goodman et al., “A real-time spatiotemporal AI model analyzes skill in open surgical videos,” (2021). Google Scholar

17. 

A. A. Gumbs et al., “The advances in computer vision that are enabling more autonomous actions in surgery: a systematic review of the literature,” Sensors, 22 (13), 4918 https://doi.org/10.3390/s22134918 SNSRES 0746-9462 (2022). Google Scholar

18. 

P. Fiorini et al., “Concepts and trends in autonomy for robot-assisted surgery,” Proc. IEEE, 110 (7), 993 –1011 https://doi.org/10.1109/JPROC.2022.3176828 IEEPAD 0018-9219 (2022). Google Scholar

19. 

P. Barba et al., “Remote telesurgery in humans: a systematic review,” Surg. Endosc., 36 (5), 2771 –2777 https://doi.org/10.1007/s00464-022-09074-4 (2022). Google Scholar

20. 

M. K. Johnson and E. H. Adelson, “Retrographic sensing for the measurement of surface texture and shape,” in IEEE Conf. Comput. Vis. and Pattern Recognit., 1070 –1077 (2009). https://doi.org/10.1109/CVPR.2009.5206534 Google Scholar

21. 

W. Yuan, M. A. Srinivasan and E. H. Adelson, “Estimating object hardness with a GelSight touch sensor,” in IEEE/RSJ Int. Conf. Intell. Rob. and Syst. (IROS), 208 –215 (2016). https://doi.org/10.1109/IROS.2016.7759057 Google Scholar

22. 

N. T. Clancy et al., “Surgical spectral imaging,” Med. Image Anal., 63 101699 https://doi.org/10.1016/j.media.2020.101699 (2020). Google Scholar

23. 

G. Lu and B. Fei, “Medical hyperspectral imaging: a review,” J. Biomed. Opt., 19 (1), 010901 https://doi.org/10.1117/1.JBO.19.1.010901 JBOPFO 1083-3668 (2014). Google Scholar

24. 

M. Barberio et al., “Intraoperative guidance using hyperspectral imaging: a review for surgeons,” Diagnostics, 11 (11), 2066 https://doi.org/10.3390/diagnostics11112066 (2021). Google Scholar

25. 

S. Ahmed et al., “Chapter 8—hyperspectral imaging: current and potential clinical applications,” Biomedical Imaging Instrumentation, 115–130, Academic Press (2022). Google Scholar

26. 

Sneha and A. Kaul, “Hyperspectral imaging and target detection algorithms: a review,” Multimedia Tools Appl., 81 (30), 44141 –44206 https://doi.org/10.1007/s11042-022-13235-x (2022). Google Scholar

27. 

B. Lu et al., “Recent advances of hyperspectral imaging technology and applications in agriculture,” Remote Sens., 12 2659 https://doi.org/10.3390/rs12162659 (2020). Google Scholar

28. 

G. Tejasree and L. Agilandeeswari, “An extensive review of hyperspectral image classification and prediction: techniques and challenges,” Multimed. Tools Appl., 83 80941 –81038 https://doi.org/10.1007/s11042-024-18562-9 (2024). Google Scholar

29. 

C.-I. Chang, Hyperspectral Imaging: Techniques for Spectral Detection and Classification, Springer, New York, NY (2003). Google Scholar

30. 

M. J. Khan et al., “Modern trends in hyperspectral image analysis: a review,” IEEE Access, 6 14118 –14129 https://doi.org/10.1109/ACCESS.2018.2812999 (2018). Google Scholar

31. 

H. F. Grahn and P. Geladi, Techniques and Applications of Hyperspectral Image Analysis, John Wiley & Sons, Ltd. (2007). Google Scholar

32. 

V. Saragadam Raja, “Spectrally-programmable cameras for imaging and inference,” https://kilthub.cmu.edu/articles/thesis/Spectrally-Programmable_Cameras_for_Imaging_and_Inference/12001422 (2020). Google Scholar

33. 

M. Greenberg, Greenberg’s Handbook of Neurosurgery, Thieme (2023). Google Scholar

34. 

N. Agarwal, Neurosurgery Fundamentals, 1st ed., Thieme (2019). Google Scholar

35. 

A. Kaye, Essential Neurosurgery, 3rd ed., Wiley (2005). Google Scholar

36. 

H. J. Marcus et al., “Technological innovation in neurosurgery: a quantitative study,” J. Neurosurg., 123 (1), 174 –181 https://doi.org/10.3171/2014.12.JNS141422 (2015). Google Scholar

37. 

A. S. Jakola et al., “Comparison of a strategy favoring early surgical resection vs a strategy favoring watchful waiting in low-grade gliomas,” JAMA, 308 (18), 1881 –1888 https://doi.org/10.1001/jama.2012.12807 JAMAAP 0098-7484 (2012). Google Scholar

38. 

N. Sanai and M. S. Berger, “Glioma extent of resection and its impact on patient outcome,” Neurosurgery, 62 (4), 753 –766 https://doi.org/10.1227/01.neu.0000318159.21731.cf NEQUEB (2008). Google Scholar

39. 

N. Sanai et al., “An extent of resection threshold for newly diagnosed glioblastomas,” J. Neurosurg., 115 (1), 3 –8 https://doi.org/10.3171/2011.2.JNS10998 JONSAC 0022-3085 (2011). Google Scholar

40. 

M. E. Oppenlander et al., “An extent of resection threshold for recurrent glioblastoma and its risk for neurological morbidity,” J. Neurosurg., 120 (4), 846 –853 https://doi.org/10.3171/2013.12.JNS13184 JONSAC 0022-3085 (2014). Google Scholar

41. 

D. Orringer et al., “Extent of resection in patients with glioblastoma: limiting factors, perception of resectability, and effect on survival,” J. Neurosurg., 117 (5), 851 –859 https://doi.org/10.3171/2012.8.JNS12234 JONSAC 0022-3085 (2012). Google Scholar

42. 

M. Yogarajah et al., “The structural plasticity of white matter networks following anterior temporal lobe resection,” Brain, 133 (8), 2348 –2364 https://doi.org/10.1093/brain/awq175 BRAIAK 0006-8950 (2010). Google Scholar

43. 

P. A. Valdes et al., “Development of an educational method to rethink and learn oncological brain surgery in an “a la carte” connectome-based perspective,” Acta Neurochir., 165 (9), 2489 –2500 https://doi.org/10.1007/s00701-023-05626-2 (2023). Google Scholar

44. 

S. Ng et al., “Intraoperative functional remapping unveils evolving patterns of cortical plasticity,” Brain, 146 (7), 3088 –3100 https://doi.org/10.1093/brain/awad116 BRAIAK 0006-8950 (2023). Google Scholar

45. 

W. Stummer et al., “Fluorescence-guided surgery with 5-aminolevulinic acid for resection of malignant glioma: a randomised controlled multicentre phase III trial,” Lancet Oncol., 7 (5), 392 –401 https://doi.org/10.1016/S1470-2045(06)70665-9 LOANBN 1470-2045 (2006). Google Scholar

46. 

P. A. Valdés et al., “Estimation of brain deformation for volumetric image updating in protoporphyrin IX fluorescence-guided resection,” Stereotact. Funct. Neurosurg., 88 (1), 1 –10 https://doi.org/10.1159/000258143 SFUNE4 1011-6125 (2010). Google Scholar

47. 

L. Van Hese et al., “The diagnostic accuracy of intraoperative differentiation and delineation techniques in brain tumours,” Discov. Oncol., 13 (1), 123 https://doi.org/10.1007/s12672-022-00585-z (2022). Google Scholar

48. 

D. W. Roberts et al., “Coregistered fluorescence-enhanced tumor resection of malignant glioma: relationships between δ-aminolevulinic acid-induced protoporphyrin IX fluorescence, magnetic resonance imaging enhancement, and neuropathological parameters. Clinical article,” J. Neurosurg., 114 (3), 595 –603 https://doi.org/10.3171/2010.2.JNS091322 JONSAC 0022-3085 (2011). Google Scholar

49. 

P. A. Valdés et al., “Quantitative fluorescence in intracranial tumor: implications for ALA-induced PpIX as an intraoperative biomarker,” J. Neurosurg., 115 (1), 11 –17 https://doi.org/10.3171/2011.2.JNS101451 JONSAC 0022-3085 (2011). Google Scholar

50. 

K. Bekelis et al., “Roberts quantitative and qualitative 5-aminolevulinic acid-induced protoporphyrin IX fluorescence in skull base meningiomas,” Neurosurg. Focus, 30 (5), E8 https://doi.org/10.3171/2011.2.FOCUS1112 (2011). Google Scholar

51. 

P. A. Valdés et al., “δ-Aminolevulinic acid-induced protoporphyrin IX concentration correlates with histopathologic markers of malignancy in human gliomas: the need for quantitative fluorescence-guided resection to identify regions of increasing malignancy,” Neuro Oncol., 13 (8), 846 –856 https://doi.org/10.1093/neuonc/nor086 (2011). Google Scholar

52. 

D. W. Roberts et al., “Adjuncts for maximizing resection: 5-aminolevuinic acid,” Clin. Neurosurg., 59 75 –78 https://doi.org/10.1227/NEU.0b013e31826b2e8b (2012). Google Scholar

53. 

T. W. Vitaz et al., “Utility, safety, and accuracy of intraoperative angiography in the surgical treatment of aneurysms and arteriovenous malformations,” AJNR Am. J. Neuroradiol., 20 (8), 1457 –1461 (1999). Google Scholar

54. 

V. Gulino et al., “The use of intraoperative microvascular Doppler in vascular neurosurgery: rationale and results-a systematic review,” Brain Sci., 14 (1), 56 https://doi.org/10.3390/brainsci14010056 (2024). Google Scholar

55. 

S. Balamurugan et al., “Intra operative indocyanine green video-angiography in cerebrovascular surgery: an overview with review of literature,” Asian J. Neurosurg., 6 (2), 88 –93 https://doi.org/10.4103/1793-5482.92168 (2011). Google Scholar

56. 

H. Li et al., “A narrative review of intraoperative use of indocyanine green fluorescence imaging in gastrointestinal cancer: situation and future directions,” J. Gastrointest. Oncol., 14 (2), 1095 –1113 https://doi.org/10.21037/jgo-23-230 (2023). Google Scholar

57. 

J. W. Miller and S. Hakimian, “Surgical treatment of epilepsy,” Continuum, 19 (3 Epilepsy), 730 –742 https://doi.org/10.1212/01.CON.0000431398.69594.97 CMETEJ 0935-1175 (2013). Google Scholar

58. 

M. V. Simon, M. R. Nuwer and A. Szelényi, “Electroencephalography, electrocorticography, and cortical stimulation techniques,” Handb. Clin. Neurol., 186 11 –38 https://doi.org/10.1016/B978-0-12-819826-1.00001-6 HACNEU 0072-9752 (2022). Google Scholar

59. 

F. Manni et al., “Hyperspectral imaging for skin feature detection: advances in markerless tracking for spine surgery,” Appl. Sci., 10 (12), 4078 https://doi.org/10.3390/app10124078 (2020). Google Scholar

60. 

L. Gao and R. T. Smith, “Optical hyperspectral imaging in microscopy and spectroscopy—a review of data acquisition,” J. Biophotonics, 8 (6), 441 –456 https://doi.org/10.1002/jbio.201400051 (2015). Google Scholar

62. 

B. Boldrini et al., “Hyperspectral imaging: a review of best practice, performance and pitfalls for in-line and on-line applications,” J. Near Infrared Spectrosc., 20 (5), 483 –508 https://doi.org/10.1255/jnirs.1003 (2012). Google Scholar

64. 

Headwall Photonics, “Machine vision,” https://headwallphotonics.com/product-category/machine-vision/ Google Scholar

67. 

P. A. Valdés et al., “Quantitative, spectrally-resolved intraoperative fluorescence imaging,” Sci. Rep., 2 798 https://doi.org/10.1038/srep00798 SRCEC3 2045-2322 (2012). Google Scholar

68. 

Brimrose Corporation, “AOTF hyperspectral imagers,” https://www.brimrose.com/aotf-hyperspectral-imagers Google Scholar

69. 

N. Hagen and M. Kudenov, “Review of snapshot spectral imaging technologies,” Opt. Eng., 52 (9), 090901 https://doi.org/10.1117/1.OE.52.9.090901 (2013). Google Scholar

71. 

imec, “Hyperspectral imaging technology: how it works,” https://www.imechyperspectral.com/en/hyperspectral-imaging-technology Google Scholar

72. 

R. Ng et al., “Light field photography with a hand-held plenoptic camera,” (2005). Google Scholar

73. 

Senop Oy, “Senop HSC-2 hyperspectral camera 450-800nm,” https://senop.fi/product/hsc-2-hyperspectral-camera-450-800nm/ Google Scholar

74. 

BaySpec, Inc., “OCI™-2000 series snapshot hyperspectral imagers,” https://www.bayspec.com/products/vis-nir-snapshot-hyperspecral-camera/ Google Scholar

76. 

V. Saragadam and A. C. Sankaranarayanan, “KRISM—Krylov subspace-based optical computing of hyperspectral images,” ACM Trans. Graph., 38 (5), 1 –14 https://doi.org/10.1145/3345553 ATGRDF 0730-0301 (2019). Google Scholar

77. 

V. Saragadam et al., “SASSI—super-pixelated adaptive spatio-spectral imaging,” IEEE Trans. Pattern Anal. Mach. Intell., 43 (7), 2233 –2244 https://doi.org/10.1109/TPAMI.2021.3075228 ITPIDJ 0162-8828 (2021). Google Scholar

78. 

V. Saragadam and A. C. Sankaranarayanan, “Programmable spectrometry: per-pixel material classification using learned spectral filters,” in IEEE Int. Conf. Comput. Photogr. (ICCP), 1 –10 (2020). https://doi.org/10.1109/ICCP48838.2020.9105281 Google Scholar

79. 

L. Giannoni et al., “Optical characterisation and study of ex vivo glioma tissue for hyperspectral imaging during neurosurgery,” Proc. SPIE, 12628 1262829 https://doi.org/10.1117/12.2670854 PSISDG 0277-786X (2023). Google Scholar

80. 

I. Ezhov et al., “Identifying chromophore fingerprints of brain tumor tissue on hyperspectral imaging using principal component analysis,” Proc. SPIE, 12628 1262826 https://doi.org/10.1117/12.2670775 PSISDG 0277-786X (2023). Google Scholar

81. 

M. S. Patterson, B. Chance and B. C. Wilson, “Time resolved reflectance and transmittance for the noninvasive measurement of tissue optical properties,” Appl. Opt., 28 (12), 2331 –2336 https://doi.org/10.1364/AO.28.002331 APOPAI 0003-6935 (1989). Google Scholar

82. 

B. C. Wilson, “Measurement of tissue optical properties: methods and theories,” Optical-Thermal Response of Laser-Irradiated Tissue, 233 –303 Springer US( (1995). Google Scholar

83. 

B. C. Wilson and S. L. Jacques, “Optical reflectance and transmittance of tissues: principles and applications,” IEEE J. Quantum Electron., 26 (12), 2186 –2199 https://doi.org/10.1109/3.64355 IEJQA7 0018-9197 (1990). Google Scholar

84. 

P. Valdés et al., “Quantitative spectrally resolved intraoperative fluorescence imaging for neurosurgical guidance in brain tumor surgery: pre-clinical and clinical results,” Proc. SPIE, 8928 892809 https://doi.org/10.1117/12.2039090 PSISDG 0277-786X (2014). Google Scholar

85. 

A. Kim et al., “Quantification of in vivo fluorescence decoupled from the effects of tissue optical properties using fiber-optic spectroscopy measurements,” J. Biomed. Opt., 15 (6), 067006 https://doi.org/10.1117/1.3523616 JBOPFO 1083-3668 (2010). Google Scholar

86. 

N. Aburaed et al., “Chapter 10—cancer detection in hyperspectral imagery using artificial intelligence: current trends and future directions,” Artificial Intelligence for Medicine, 133 –149 Academic Press( (2024). Google Scholar

87. 

R. Mühle et al., “Workflow and hardware for intraoperative hyperspectral data acquisition in neurosurgery,” Biomed. Eng./Biomed. Tech., 66 (1), 31 –42 https://doi.org/10.1515/bmt-2019-0333 (2021). Google Scholar

88. 

R. Leon et al., “Hyperspectral imaging benchmark based on machine learning for intraoperative brain tumour detection,” NPJ Precis. Oncol., 7 (1), 119 https://doi.org/10.1038/s41698-023-00475-9 (2023). Google Scholar

89. 

O. MacCormac et al., “Lightfield hyperspectral imaging in neuro-oncology surgery: an IDEAL 0 and 1 study,” Front. Neurosci., 17 1239764 https://doi.org/10.3389/fnins.2023.1239764 1662-453X (2023). Google Scholar

90. 

N. Kifle et al., “Pediatric brain tissue segmentation using a snapshot hyperspectral imaging (sHSI) camera and machine learning classifier,” Bioengineering, 10 (10), 1190 https://doi.org/10.3390/bioengineering10101190 BENGEQ 0178-2029 (2023). Google Scholar

91. 

, “ULTRIS X20: the world’s first UV-VIS-NIR hyperspectral video camera,” https://www.cubert-hyperspectral.com/products/ultris-x20 (). Google Scholar

92. 

J. Pichette et al., “Intraoperative video-rate hemodynamic response assessment in human cortex using snapshot hyperspectral optical imaging,” Neurophotonics, 3 (4), 045003 https://doi.org/10.1117/1.NPh.3.4.045003 (2016). Google Scholar

93. 

R. Vandebriel et al., “Integrating hyperspectral imaging in an existing intra-operative environment for detection of intrinsic brain tumors,” Proc. SPIE, 12368 123680D https://doi.org/10.1117/12.2647690 PSISDG 0277-786X (2023). Google Scholar

94. 

A. Walke et al., “Challenges in, and recommendations for, hyperspectral imaging in ex vivo malignant glioma biopsy measurements,” Sci. Rep., 13 (1), 3829 https://doi.org/10.1038/s41598-023-30680-2 SRCEC3 2045-2322 (2023). Google Scholar

95. 

C. A. Schutt et al., “The illumination characteristics of operative microscopes,” Am. J. Otolaryngol., 36 (3), 356 –360 https://doi.org/10.1016/j.amjoto.2014.12.009 AJOTDP 0196-0709 (2015). Google Scholar

96. 

H. Fabelo et al., “A novel use of hyperspectral images for human brain cancer detection using in-vivo samples,” in Proc. Int. Joint Conf. on Biomed. Eng. Syst. and Technol., (2016). https://doi.org/10.5220/0005849803110320 Google Scholar

97. 

D. Madroñal et al., “Hyperspectral image classification using a parallel implementation of the linear SVM on a massively parallel processor array (MPPA) platform,” in Conf. Des. and Architect. for Signal and Image Process. (DASIP), 154 –160 (2016). https://doi.org/10.1109/DASIP.2016.7853812 Google Scholar

98. 

L. Ruiz et al., “Multiclass brain tumor classification using hyperspectral imaging and supervised machine learning,” in XXXV Conf. on Des. of Circuits and Integr. Syst. (DCIS), 1 –6 (2020). https://doi.org/10.1109/DCIS51330.2020.9268650 Google Scholar

99. 

B. Martinez et al., “Most relevant spectral bands identification for brain cancer detection using hyperspectral imaging,” Sensors, 19 (24), 5481 https://doi.org/10.3390/s19245481 SNSRES 0746-9462 (2019). Google Scholar

100. 

A. Martín-Pérez et al., “SLIM brain database: a multimodal image database of in-vivo human brains for tumour detection,” (2023). Google Scholar

101. 

M. Mori et al., “Intraoperative visualization of cerebral oxygenation using hyperspectral image data: a two-dimensional mapping method,” Int. J. Comput. Assist. Radiol. Surg., 9 (6), 1059 –1072 https://doi.org/10.1007/s11548-014-0989-9 (2014). Google Scholar

102. 

H. G. M. K. John et al., “Hyperspectral imaging system for imaging O2Hb and HHb concentration changes in tissue for various clinical applications,” Proc. SPIE, 7890 78900R https://doi.org/10.1117/12.875110 PSISDG 0277-786X (2011). Google Scholar

103. 

C. Fu et al., “Rapid, label-free detection of cerebral ischemia in rats using hyperspectral imaging,” J. Neurosci. Methods, 329 108466 https://doi.org/10.1016/j.jneumeth.2019.108466 JNMEDT 0165-0270 (2020). Google Scholar

104. 

H. J. Noordmans et al., “Imaging the seizure during surgery with a hyperspectral camera,” Epilepsia, 54 (11), e150 –e154 https://doi.org/10.1111/epi.12386 EPILAK 0013-9580 (2013). Google Scholar

105. 

A. Laurence et al., “Localization of epileptic activity based on hemodynamics using an intraoperative hyperspectral imaging system,” in Biophotonics Congr.: Biomed. Opt. 2020 (Transl., Microsc., OCT, OTS, BRAIN), TW1B.3 (2020). https://doi.org/10.1364/TRANSLATIONAL.2020.TW1B.3 Google Scholar

106. 

A. Massalimova et al., “Intraoperative tissue classification methods in orthopedic and neurological surgeries: a systematic review,” Front. Surg., 9 952539 https://doi.org/10.3389/fsurg.2022.952539 (2022). Google Scholar

107. 

S. C. Gebhart, R. C. Thompson and A. Mahadevan-Jansen, “Liquid-crystal tunable filter spectral imaging for brain tumor demarcation,” Appl. Opt., 46 (10), 1896 –910 https://doi.org/10.1364/AO.46.001896 APOPAI 0003-6935 (2007). Google Scholar

108. 

C. R. Instruments, “VariSpec liquid crystal tunable filters,” https://www.photonics.com/Products/VariSpec_Liquid_Crystal_Tunable_Filters/p5/pr27133 (). Google Scholar

109. 

, “PI MAX® 4 high speed gated imaging and spectroscopy cameras,” https://www.princetoninstruments.com/products/pi-max-family/pi-max (). Google Scholar

110. 

E. Technologies, “pco.pixelfly 1.4 USB,” https://www.excelitas.com/product/pcopixelfly-14-usb (). Google Scholar

111. 

CORDIS, “HypErspectraL imaging cancer detection | HELiCoiD Project | Fact Sheet | FP7,” (2017). Google Scholar

112. 

R. Salvador et al., “Demo: HELICoiD tool demonstrator for real-time brain cancer detection,” in Conf. Des. and Archit. for Signal and Image Process. (DASIP), 237 –238 (2016). https://doi.org/10.1109/DASIP.2016.7853831 Google Scholar

113. 

H. Fabelo et al., “HELICoiD project: a new use of hyperspectral imaging for brain cancer detection in real-time during neurosurgical operations,” Proc. SPIE, 9860 986002 https://doi.org/10.1117/12.2223075 PSISDG 0277-786X (2016). Google Scholar

114. 

D. Ravi et al., “Manifold embedding and semantic segmentation for intraoperative guidance with hyperspectral brain imaging,” IEEE Trans. Med. Imaging, 36 (9), 1845 –1857 https://doi.org/10.1109/TMI.2017.2695523 ITMID4 0278-0062 (2017). Google Scholar

115. 

H. Fabelo et al., “An intraoperative visualization system using hyperspectral imaging to aid in brain tumor delineation,” Sensors, 18 (2), 430 https://doi.org/10.3390/s18020430 SNSRES 0746-9462 (2018). Google Scholar

116. 

I. A. Cruz-Guerrero et al., “Classification of hyperspectral in vivo brain tissue based on linear unmixing,” Appl. Sci., 10 (16), 5686 https://doi.org/10.3390/app10165686 (2020). Google Scholar

117. 

D. U. Campos-Delgado et al., “Nonlinear extended blind end-member and abundance extraction for hyperspectral images,” Signal Process., 201 108718 https://doi.org/10.1016/j.sigpro.2022.108718 (2022). Google Scholar

118. 

N. Baig et al., “Empirical mode decomposition based hyperspectral data analysis for brain tumor classification,” in Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., 2274 –2277 (2021). https://doi.org/10.1109/embc46164.2021.9629676 Google Scholar

119. 

A. Martínez-González et al., “Can hyperspectral images be used to detect brain tumor pixels and their malignant phenotypes?,” in XXXV Conf. Des. of Circuits and Integr. Syst. (DCIS), 1 –5 (2020). https://doi.org/10.1109/DCIS51330.2020.9268641 Google Scholar

120. 

I. Ezhov et al., “Shallow learning enables real-time inference of molecular composition from spectroscopy of brain tissue,” (2024). Google Scholar

121. 

L. van der Maaten and G. Hinton, “Visualizing data using t-SNE,” J. Mach. Learn. Res., 9 (86), 2579 –2605 (2008). Google Scholar

122. 

H. Fabelo et al., “Spatio-spectral classification of hyperspectral images for brain cancer detection during surgical operations,” PLoS One, 13 (3), e0193721 https://doi.org/10.1371/journal.pone.0193721 POLNCL 1932-6203 (2018). Google Scholar

123. 

A. Baez et al., “High-level synthesis of multiclass SVM using code refactoring to classify brain cancer from hyperspectral images,” Electronics, 8 1494 https://doi.org/10.3390/electronics8121494 ELECAD 0013-5070 (2019). Google Scholar

124. 

P. Sutradhar et al., “Exploration of realtime brain tumor classification from hyperspectral images in heterogeneous embedded MPSoC,” in 37th Conf. Des. of Circuits and Integr. Circuits (DCIS), 1 –6 (2022). https://doi.org/10.1109/DCIS55711.2022.9970064 Google Scholar

125. 

E. Torti et al., “Acceleration of brain cancer detection algorithms during surgery procedures using GPUs,” Microprocess. Microsyst., 61 171 –178 https://doi.org/10.1016/j.micpro.2018.06.005 MIMID5 0141-9331 (2018). Google Scholar

126. 

R. Lazcano et al., “Parallel implementation of an iterative PCA algorithm for hyperspectral images on a manycore platform,” in Conf. Des. and Archit. for Signal and Image Process. (DASIP), 1 –6 (2017). https://doi.org/10.1109/DASIP.2017.8122111 Google Scholar

127. 

G. Florimbi et al., “Accelerating the k-nearest neighbors filtering algorithm to optimize the real-time classification of human brain tumor in hyperspectral images,” Sensors, 18 (7), 2314 https://doi.org/10.3390/s18072314 SNSRES 0746-9462 (2018). Google Scholar

128. 

E. Torti et al., “Parallel k-means clustering for brain cancer detection using hyperspectral images,” Electronics, 7 283 https://doi.org/10.3390/electronics7110283 ELECAD 0013-5070 (2018). Google Scholar

129. 

G. Florimbi et al., “Towards real-time computing of intraoperative hyperspectral imaging for brain cancer detection using multi-GPU platforms,” IEEE Access, 8 8485 –8501 https://doi.org/10.1109/ACCESS.2020.2963939 (2020). Google Scholar

130. 

M. Villa et al., “Data-type assessment for real-time hyperspectral classification in medical imaging,” Lect. Notes Comput. Sci., 13425 123 –135 https://doi.org/10.1007/978-3-031-12748-9_10 LNCSD9 0302-9743 (2022). Google Scholar

131. 

H. Fabelo et al., “Deep learning-based framework for in vivo identification of glioblastoma tumor using hyperspectral images of human brain,” Sensors, 19 (4), 920 https://doi.org/10.3390/s19040920 SNSRES 0746-9462 (2019). Google Scholar

132. 

F. Manni et al., “Hyperspectral imaging for glioblastoma surgery: improving tumor identification using a deep spectral-spatial approach,” Sensors, 20 (23), 6955 https://doi.org/10.3390/s20236955 SNSRES 0746-9462 (2020). Google Scholar

133. 

P. Poonkuzhali and K. Helen Prabha, “Deep convolutional neural network based hyperspectral brain tissue classification,” J. Xray Sci. Technol., 31 (4), 777 –796 https://doi.org/10.3233/XST-230045 (2023). Google Scholar

134. 

M. Wang et al., “Deep margin cosine autoencoder-based medical hyperspectral image classification for tumor diagnosis,” IEEE Trans. Instrum. Meas., 72 1 –12 https://doi.org/10.1109/TIM.2023.3293548 IEIMAO 0018-9456 (2023). Google Scholar

135. 

Q. Hao et al., “Fusing multiple deep models for in vivo human brain hyperspectral image classification to identify glioblastoma tumor,” IEEE Trans. Instrum. Meas., 70 1 –14 https://doi.org/10.1109/TIM.2021.3117634 IEIMAO 0018-9456 (2021). Google Scholar

136. 

H. Ayaz et al., “Hyperspectral brain tissue classification using a fast and compact 3D CNN approach,” in IEEE 5th Int. Conf. on Image Process. Appl. and Syst. (IPAS), 1 –4 (2022). https://doi.org/10.1109/IPAS55744.2022.10053044 Google Scholar

137. 

P. L. Cebrián et al., “Deep recurrent neural network performing spectral recurrence on hyperspectral images for brain tissue classification,” Lect. Notes Comput. Sci., 13879 15 –27 https://doi.org/10.1007/978-3-031-29970-4_2 LNCSD9 0302-9743 (2023). Google Scholar

138. 

G. Vazquez et al., “Brain blood vessel segmentation in hyperspectral images through linear operators,” Lect. Notes Comput. Sci., 13879 28 –39 https://doi.org/10.1007/978-3-031-29970-4_3 LNCSD9 0302-9743 (2023). Google Scholar

139. 

G. Vazquez et al., “Sparse to dense ground truth pre-processing in hyperspectral imaging for in-vivo brain tumour detection,” in IEEE Int. Conf. on Metrol. for eXtended Reality, Artif. Intell. and Neural Eng. (MetroXRAINE), 272 –277 (2023). https://doi.org/10.1109/MetroXRAINE58569.2023.10405811 Google Scholar

140. 

M. La Salvia et al., “AI-based segmentation of intraoperative glioblastoma hyperspectral images,” Proc. SPIE, 12338 123380E https://doi.org/10.1117/12.2646782 PSISDG 0277-786X (2023). Google Scholar

141. 

T. Giannantonio et al., “Intra-operative brain tumor detection with deep learning-optimized hyperspectral imaging,” Proc. SPIE, 12373 123730F https://doi.org/10.1117/12.2646999 PSISDG 0277-786X (2023). Google Scholar

142. 

C. Zhang et al., “Medical hyperspectral image classification based weakly supervised single-image global learning network,” Eng. Appl. Artif. Intell., 133 108042 https://doi.org/10.1016/j.engappai.2024.108042 EAAIE6 0952-1976 (2024). Google Scholar

143. 

S. Puustinen et al., “Spectrally tunable neural network-assisted segmentation of microneurosurgical anatomy,” Front. Neurosci., 14 640 https://doi.org/10.3389/fnins.2020.00640 1662-453X (2020). Google Scholar

144. 

D. U. Campos-Delgado et al., “Extended blind end-member and abundance extraction for biomedical imaging applications,” IEEE Access, 7 178539 –178552 https://doi.org/10.1109/ACCESS.2019.2958985 (2019). Google Scholar

145. 

C. Zhang et al., “Unsupervised band selection of medical hyperspectral images guided by data gravitation and weak correlation,” Comput. Methods Programs Biomed., 240 107721 https://doi.org/10.1016/j.cmpb.2023.107721 CMPBEK 0169-2607 (2023). Google Scholar

146. 

R. Leon et al., “Hyperspectral VNIR and NIR sensors for the analysis of human normal brain and tumor tissue,” in XXXVI Conf. on Des. of Circuits and Integr. Syst. (DCIS), 1 –6 (2021). https://doi.org/10.1109/DCIS53048.2021.9666168 Google Scholar

147. 

R. Leon et al., “VNIR-NIR hyperspectral imaging fusion targeting intraoperative brain cancer detection,” Sci. Rep., 11 (1), 19696 https://doi.org/10.1038/s41598-021-99220-0 SRCEC3 2045-2322 (2021). Google Scholar

148. 

H. Fabelo et al., “In-vivo hyperspectral human brain image database for brain cancer detection,” IEEE Access, 7 39098 –39116 https://doi.org/10.1109/ACCESS.2019.2904788 (2019). Google Scholar

149. 

X. Corporation, “Hyperspectral Snapshot USB3 camera 24 bands 665-960nm,” https://www.ximea.com/en/products/hyperspectral-cameras-based-on-usb3-xispec/mq022hg-im-sm5x5-nir (). Google Scholar

150. 

G. Urbanos et al., “Supervised machine learning methods and hyperspectral imaging techniques jointly applied for brain cancer classification,” Sensors, 21 (11), 3827 https://doi.org/10.3390/s21113827 SNSRES 0746-9462 (2021). Google Scholar

151. 

A. Martín-Pérez et al., “Hyperparameter optimization for brain tumor classification with hyperspectral images,” in 25th Euromicro Conf. Digit. Syst. Des. (DSD), 835 –842 (2022). https://doi.org/10.1109/DSD57027.2022.00117 Google Scholar

152. 

J. Sancho et al., “SLIMBRAIN: augmented reality real-time acquisition and processing system for hyperspectral classification mapping with depth information for in-vivo surgical procedures,” J. Syst. Archit., 140 102893 https://doi.org/10.1016/j.sysarc.2023.102893 (2023). Google Scholar

153. 

S. Puustinen et al., “Hyperspectral imaging in brain tumor surgery-evidence of machine learning-based performance,” World Neurosurg., 175 e614 –e635 https://doi.org/10.1016/j.wneu.2023.03.149 (2023). Google Scholar

154. 

S. Puustinen et al., “Towards clinical hyperspectral imaging (HSI) standards: initial design for a microneurosurgical hsi database,” in IEEE 35th Int. Symp. on Comput.-Based Med. Syst. (CBMS), 394 –399 (2022). https://doi.org/10.1109/CBMS55023.2022.00077 Google Scholar

155. 

, “TAIKI hyperspectral EO mission,” https://www.eoportal.org/satellite-missions/taiki#overview (). Google Scholar

156. 

K. Iwaki et al., “A novel hyperspectral imaging system for intraoperative prediction of cerebral hyperperfusion syndrome after superficial temporal artery-middle cerebral artery anastomosis in patients with moyamoya disease,” Cerebrovasc. Dis., 50 (2), 208 –215 https://doi.org/10.1159/000513289 (2021). Google Scholar

157. 

A. Laurence et al., “Multispectral diffuse reflectance can discriminate blood vessels and bleeding during neurosurgery based on low-frequency hemodynamics,” J. Biomed. Opt., 25 (11), 116003 https://doi.org/10.1117/1.JBO.25.11.116003 JBOPFO 1083-3668 (2020). Google Scholar

158. 

H. J. Noordmans et al., “Detailed view on slow sinusoidal, hemodynamic oscillations on the human brain cortex by Fourier transforming oxy/deoxy hyperspectral images,” Hum. Brain Mapp., 39 (9), 3558 –3573 https://doi.org/10.1002/hbm.24194 HBRME7 1065-9471 (2018). Google Scholar

159. 

A. A. Phillips et al., “Neurovascular coupling in humans: physiology, methodological advances and clinical implications,” J. Cereb. Blood Flow Metab., 36 (4), 647 –664 https://doi.org/10.1177/0271678X15617954 (2016). Google Scholar

160. 

A. Laurence et al., “Multispectral intraoperative imaging for the detection of the hemodynamic response to interictal epileptiform discharges,” Biomed. Opt. Express, 13 (12), 6245 –6257 https://doi.org/10.1364/BOE.465699 BOEICL 2156-7085 (2022). Google Scholar

161. 

L. Giannoni et al., “A hyperspectral imaging system for mapping haemoglobin and cytochrome-c-oxidase concentration changes in the exposed cerebral cortex,” IEEE J. Sel. Top. Quantum Electron., 27 (4), 7400411 https://doi.org/10.1109/JSTQE.2021.3053634 IJSQEN 1077-260X (2021). Google Scholar

162. 

L. Giannoni, F. Lange and I. Tachtsidis, “Investigation of the quantification of hemoglobin and cytochrome-c-oxidase in the exposed cortex with near-infrared hyperspectral imaging: a simulation study,” J. Biomed. Opt., 25 (4), 046001 https://doi.org/10.1117/1.JBO.25.4.046001 JBOPFO 1083-3668 (2020). Google Scholar

163. 

C. Caredda et al., “Intraoperative functional and metabolic brain mapping using hyperspectral imaging,” Proc. SPIE, 11225 112250B https://doi.org/10.1117/12.2545968 PSISDG 0277-786X (2020). Google Scholar

164. 

C. Caredda et al., “A priori free spectral unmixing with periodic absorbance changes: application for auto-calibrated intraoperative functional brain mapping,” Biomed. Opt. Express, 15 (1), 387 –412 https://doi.org/10.1364/BOE.491292 BOEICL 2156-7085 (2024). Google Scholar

165. 

C. Caredda et al., “Optimal spectral combination of a hyperspectral camera for intraoperative hemodynamic and metabolic brain mapping,” Appl. Sci., 10 (15), 5158 https://doi.org/10.3390/app10155158 (2020). Google Scholar

166. 

C. Caredda et al., “Intraoperative quantitative functional brain mapping using an RGB camera,” Neurophotonics, 6 (4), 045015 https://doi.org/10.1117/1.NPh.6.4.045015 (2019). Google Scholar

167. 

M. Ebner et al., “Intraoperative hyperspectral label-free imaging: from system design to first-in-patient translation,” J. Phys. D Appl. Phys., 54 (29), 294003 https://doi.org/10.1088/1361-6463/abfbf6 (2021). Google Scholar

169. 

Universidad de Las Palmas de Gran, C. HSI Human Brain Database, “HSI Human Brain Database,” https://hsibraindatabase.iuma.ulpgc.es/ Google Scholar

170. 

F. A. Kruse et al., “The spectral image processing system (SIPS)—interactive visualization and analysis of imaging spectrometer data,” Remote Sens. Environ., 44 (2), 145 –163 https://doi.org/10.1016/0034-4257(93)90013-N (1993). Google Scholar

171. 

U. P. D. Madrid, “SLIM Brain database,” https://slimbrain.citsem.upm.es/ (). Google Scholar

173. 

M. Villa et al., “HyperMRI: hyperspectral and magnetic resonance fusion methodology for neurosurgery applications,” Int. J. Comput. Assist. Radiol. Surg., 19 1367 –1374 https://doi.org/10.1007/s11548-024-03102-5 (2024). Google Scholar

174. 

H. Fabelo et al., “Surgical aid visualization system for glioblastoma tumor identification based on deep learning and in-vivo hyperspectral images of human patients,” Proc. SPIE, 10951 1095110 https://doi.org/10.1117/12.2512569 PSISDG 0277-786X (2019). Google Scholar

175. 

K. Huang et al., “Spectral–spatial hyperspectral image classification based on KNN,” Sens. Imaging, 17 (1), 1 https://doi.org/10.1007/s11220-015-0126-z (2015). Google Scholar

176. 

Y. Tarabalka, J. A. Benediktsson and J. Chanussot, “Spectral–spatial classification of hyperspectral imagery based on partitional clustering techniques,” IEEE Trans. Geosci. Remote Sens., 47 (8), 2973 –2987 https://doi.org/10.1109/TGRS.2009.2016214 IGRSD2 0196-2892 (2009). Google Scholar

177. 

J. Huang et al., “Augmented reality visualization of hyperspectral imaging classifications for image-guided brain tumor phantom resection,” Proc. SPIE, 11315 113150U https://doi.org/10.1117/12.2549041 PSISDG 0277-786X (2020). Google Scholar

178. 

P. Li et al., “Spatial gradient consistency for unsupervised learning of hyperspectral demosaicking: application to surgical imaging,” Int. J. Comput. Assist. Radiol. Surg., 18 (6), 981 –988 https://doi.org/10.1007/s11548-023-02865-7 (2023). Google Scholar

179. 

C. Budd et al., “Deep reinforcement learning based system for intraoperative hyperspectral video autofocusing,” Lect. Notes Comput. Sci., 14228 658 –667 https://doi.org/10.1007/978-3-031-43996-4_63 LNCSD9 0302-9743 (2023). Google Scholar

180. 

A. Bahl et al., “Synthetic white balancing for intra-operative hyperspectral imaging,” J. Med. Imaging, 10 (4), 046001 https://doi.org/10.1117/1.JMI.10.4.046001 JMEIET 0920-5497 (2023). Google Scholar

181. 

C. Caredda et al., “Free spectral unmixing with periodic absorbance changes: application for auto-calibrated intraoperative functional brain mapping,” Biomed. Opt. Express, 15 (1), 387 –412 https://doi.org/10.1364/BOE.491292 BOEICL 2156-7085 (2024). Google Scholar

182. 

R. W. Byrne et al., “Introduction: advances in intraoperative brain mapping,” Neurosurg. Focus, 45 (VideoSuppl2), Intro https://doi.org/10.3171/2018.10.FocusVid.Intro (2018). Google Scholar

183. 

A. J. Schupper and C. Hadjipanayis, “Use of intraoperative fluorophores,” Neurosurg. Clin. North Am., 32 (1), 55 –64 https://doi.org/10.1016/j.nec.2020.08.001 (2021). Google Scholar

184. 

U. Pichlmeier et al., “Resection and survival in glioblastoma multiforme: an RTOG recursive partitioning analysis of ALA study patients,” Neuro-Oncology, 10 (6), 1025 –1034 https://doi.org/10.1215/15228517-2008-052 (2008). Google Scholar

185. 

R. Stendel, “Extent of resection and survival in glioblastoma multiforme: identification of and adjustment for bias,” Neurosurgery, 64 (6), E1206 https://doi.org/10.1227/01.NEU.0000346230.80425.3A NEQUEB (2009). Google Scholar

186. 

C. G. Hadjipanayis, G. Widhalm and W. Stummer, “What is the surgical benefit of utilizing 5-aminolevulinic acid for fluorescence-guided surgery of malignant gliomas?,” Neurosurgery, 77 (5), 663 –673 https://doi.org/10.1227/NEU.0000000000000929 NEQUEB (2015). Google Scholar

187. 

E. S. Molina et al., “Double dose of 5-aminolevulinic acid and its effect on protoporphyrin IX accumulation in low-grade glioma,” J. Neurosurg., 137 (4), 943 –952 https://doi.org/10.3171/2021.12.JNS211724 JONSAC 0022-3085 (2022). Google Scholar

189. 

G. Widhalm et al., “Berger the value of visible 5-ALA fluorescence and quantitative protoporphyrin IX analysis for improved surgery of suspected low-grade gliomas,” J. Neurosurg., 133 (1), 79 –88 https://doi.org/10.3171/2019.1.JNS182614 JONSAC 0022-3085 (2019). Google Scholar

191. 

F. Acerbi et al., “Fluorescein-guided surgery for resection of high-grade gliomas: a multicentric prospective phase II study (FLUOGLIO),” Clin. Cancer Res., 24 (1), 52 –61 https://doi.org/10.1158/1078-0432.CCR-17-1184 (2018). Google Scholar

192. 

S. S. Cho, R. Salinas and J. Y. K. Lee, “Indocyanine-green for fluorescence-guided surgery of brain tumors: evidence, techniques, and practical experience,” Front. Surg., 6 11 https://doi.org/10.3389/fsurg.2019.00011 (2019). Google Scholar

194. 

E. Belykh et al., “Blood-brain barrier, blood-brain tumor barrier, and fluorescence-guided neurosurgical oncology: delivering optical labels to brain tumors,” Front. Oncol., 10 739 https://doi.org/10.3389/fonc.2020.00739 FRTOA7 0071-9676 (2020). Google Scholar

195. 

B. W. Pogue et al., “Review of neurosurgical fluorescence imaging methodologies,” IEEE J. Sel. Top. Quantum Electron., 16 (3), 493 –505 https://doi.org/10.1109/JSTQE.2009.2034541 IJSQEN 1077-260X (2010). Google Scholar

196. 

J. A. Carr et al., “Shortwave infrared fluorescence imaging with the clinically approved near-infrared dye indocyanine green,” Proc. Natl. Acad. Sci. U. S. A., 115 (17), 4465 –4470 https://doi.org/10.1073/pnas.1718917115 (2018). Google Scholar

197. 

W. Stummer et al., “Technical principles for protoporphyrin-IX-fluorescence guided microsurgical resection of malignant glioma tissue,” Acta Neurochir., 140 (10), 995 –1000 https://doi.org/10.1007/s007010050206 (1998). Google Scholar

198. 

E. Suero Molina et al., “Unraveling the blue shift in porphyrin fluorescence in glioma: the 620 nm peak and its potential significance in tumor biology,” Front. Neurosci., 17 1261679 https://doi.org/10.3389/fnins.2023.1261679 1662-453X (2023). Google Scholar

199. 

D. Black et al., “Characterization of autofluorescence and quantitative protoporphyrin IX biomarkers for optical spectroscopy-guided glioma surgery,” Sci. Rep., 11 (1), 20009 https://doi.org/10.1038/s41598-021-99228-6 SRCEC3 2045-2322 (2021). Google Scholar

200. 

M. Marois et al., “Characterization and standardization of tissue-simulating protoporphyrin IX optical phantoms,” J. Biomed. Opt., 21 (3), 035003 https://doi.org/10.1117/1.JBO.21.3.035003 JBOPFO 1083-3668 (2016). Google Scholar

201. 

S. Kaneko et al., “Fluorescence real-time kinetics of protoporphyrin IX after 5-ALA administration in low-grade glioma,” J. Neurosurg., 136 (1), 9 –15 https://doi.org/10.3171/2020.10.JNS202881 JONSAC 0022-3085 (2022). Google Scholar

202. 

S. Kaneko et al., “Fluorescence-based measurement of real-time kinetics of protoporphyrin IX after 5-aminolevulinic acid administration in human in situ malignant gliomas,” Neurosurgery, 85 (4), E739 –e746 https://doi.org/10.1093/neuros/nyz129 NEQUEB (2019). Google Scholar

203. 

W. Stummer et al., “5-Aminolevulinic acid-derived tumor fluorescence: the diagnostic accuracy of visible fluorescence qualities as corroborated by spectrometry and histology and postoperative imaging,” Neurosurgery, 74 (3), 310 –319 https://doi.org/10.1227/NEU.0000000000000267 NEQUEB (2014). Google Scholar

204. 

J. C. Kennedy, R. H. Pottier and D. C. Pross, “Photodynamic therapy with endogenous protoporphyrin IX: basic principles and present clinical experience,” J. Photochem. Photobiol. B, 6 (1–2), 143 –148 https://doi.org/10.1016/1011-1344(90)85083-9 JPPBEG 1011-1344 (1990). Google Scholar

205. 

M. Pharma, “Gliolan,” https://gleolan.com/ (). Google Scholar

206. 

S. J. R. Lehtonen et al., “Detection improvement of gliomas in hyperspectral imaging of protoporphyrin IX fluorescence—in vitro comparison of visual identification and machine thresholds,” Cancer Treat Res. Commun., 32 100615 https://doi.org/10.1016/j.ctarc.2022.100615 (2022). Google Scholar

207. 

O. Optical, “Bandpass filters,” https://www.omegafilters.com/optical-filters/bandpass (). Google Scholar

208. 

D. V. C. Company, “DVC-4000C color camera,” https://www.thorlabs.com/software/TSI/DVC-4000AC_Color_Datasheet.pdf (). Google Scholar

209. 

V. X. D. Yang et al., “A multispectral fluorescence imaging system: design and initial clinical tests in intra-operative Photofrin-photodynamic therapy of brain tumors,” Lasers Surg. Med., 32 (3), 224 –232 https://doi.org/10.1002/lsm.10131 LSMEDI 0196-8092 (2003). Google Scholar

210. 

P. A. Valdés et al., “A spectrally constrained dual-band normalization technique for protoporphyrin IX quantification in fluorescence-guided surgery,” Opt. Lett., 37 (11), 1817 –1819 https://doi.org/10.1364/OL.37.001817 OPLEDP 0146-9592 (2012). Google Scholar

211. 

P. A. Valdes et al., “System and methods for wide-field quantitative fluorescence imaging during neurosurgery,” Opt. Lett., 38 (15), 2786 –2788 https://doi.org/10.1364/OL.38.002786 OPLEDP 0146-9592 (2013). Google Scholar

212. 

A. Gautheron et al., “5-ALA induced PpIX fluorescence spectroscopy in neurosurgery: a review,” Front. Neurosci., 18 1310282 https://doi.org/10.3389/fnins.2024.1310282 1662-453X (2024). Google Scholar

213. 

P. A. Valdés et al., “Roberts quantitative fluorescence using 5-aminolevulinic acid-induced protoporphyrin IX biomarker as a surgical adjunct in low-grade glioma surgery,” J. Neurosurg., 123 (3), 771 –780 https://doi.org/10.3171/2014.12.JNS14391 JONSAC 0022-3085 (2015). Google Scholar

214. 

B. Montcel et al., “Two-peaked 5-ALA-induced PpIX fluorescence emission spectrum distinguishes glioblastomas from low grade gliomas and infiltrative component of glioblastomas,” Biomed. Opt. Express, 4 (4), 548 –558 https://doi.org/10.1364/BOE.4.000548 BOEICL 2156-7085 (2013). Google Scholar

215. 

E. Technologies, “pco.edge 4.2 bi USB sCMOS Camera,” https://www.excelitas.com/product/pcoedge-42-bi-usb-scmos-camera (). Google Scholar

216. 

N. Cameras, “HNü 512–512 x 512 EMCCD,” https://www.nuvucameras.com/products/hnu-512/# (). Google Scholar

217. 

M. Jermyn et al., “Improved sensitivity to fluorescence for cancer detection in wide-field image-guided neurosurgery,” Biomed. Opt. Express, 6 (12), 5063 –5074 https://doi.org/10.1364/BOE.6.005063 BOEICL 2156-7085 (2015). Google Scholar

218. 

M. Schwake et al., “Spectroscopic measurement of 5-ALA-induced intracellular protoporphyrin IX in pediatric brain tumors,” Acta Neurochir., 161 (10), 2099 –2105 https://doi.org/10.1007/s00701-019-04039-4 (2019). Google Scholar

219. 

J. J. Bravo et al., “Hyperspectral data processing improves PpIX contrast during fluorescence guided surgery of human brain tumors,” Sci. Rep., 7 (1), 9455 https://doi.org/10.1038/s41598-017-09727-8 SRCEC3 2045-2322 (2017). Google Scholar

220. 

P. A. Valdés et al., “Roberts combined fluorescence and reflectance spectroscopy for in vivo quantification of cancer biomarkers in low- and high-grade glioma surgery,” J. Biomed. Opt., 16 (11), 116007 https://doi.org/10.1117/1.3646916 JBOPFO 1083-3668 (2011). Google Scholar

221. 

Y. Xie et al., “Wide-field spectrally resolved quantitative fluorescence imaging system: toward neurosurgical guidance in glioma resection,” J. Biomed. Opt., 22 (11), 116006 https://doi.org/10.1117/1.JBO.22.11.116006 JBOPFO 1083-3668 (2017). Google Scholar

222. 

D. Black et al., “Towards machine learning-based quantitative hyperspectral image guidance for brain tumor resection,” (2023). Google Scholar

223. 

D. Black, “A spectral library and method for sparse unmixing of hyperspectral images in fluorescence guided resection of brain tumors,” Biomed. Opt. Express, 15 4406 –4424 (2024). Google Scholar

226. 

D. Black et al., “Deep learning-based correction and unmixing of hyperspectral images for brain tumor surgery,” (2024). Google Scholar

227. 

E. Suero-Molina et al., “Unraveling the blue shift in porphyrin fluorescence in glioma: the 620 nm peak and its potential significance in tumor biology,” Front. Neurosci., 17 1261679 https://doi.org/10.3389/fnins.2023.1261679 1662-453X (2023). Google Scholar

228. 

M. Marois et al., “A birefringent spectral demultiplexer enables fast hyper-spectral imaging of protoporphyrin IX during neurosurgery,” Commun. Biol., 6 (1), 341 https://doi.org/10.1038/s42003-023-04701-9 (2023). Google Scholar

229. 

J. Kopf et al., “Joint bilateral upsampling,” ACM Trans. Graph., 26 (3), 96 –es https://doi.org/10.1145/1276377.1276497 ATGRDF 0730-0301 (2007). Google Scholar

230. 

L. Loncan et al., “Hyperspectral pansharpening: a review,” IEEE Geosci. Remote Sens. Mag., 3 (3), 27 –46 https://doi.org/10.1109/MGRS.2015.2440094 (2015). Google Scholar

231. 

G. Anichini et al., “Hyperspectral and multispectral imaging in neurosurgery: a systematic literature review and meta-analysis,” Eur. J. Surg. Oncol., 12 108293 https://doi.org/10.1016/j.ejso.2024.108293 (2024). Google Scholar

232. 

L. Giannoni, F. Lange and I. Tachtsidis, “Hyperspectral imaging solutions for brain tissue metabolic and hemodynamic monitoring: past, current and future developments,” J. Opt., 20 (4), 044009 https://doi.org/10.1088/2040-8986/aab3a6 (2018). Google Scholar

233. 

R. H. Wilson et al., “Durkin review of short-wave infrared spectroscopy and imaging methods for biological tissue characterization,” J. Biomed. Opt., 20 (3), 030901 https://doi.org/10.1117/1.JBO.20.3.030901 JBOPFO 1083-3668 (2015). Google Scholar

234. 

M. Lai et al., “Automated classification of brain tissue: comparison between hyperspectral imaging and diffuse reflectance spectroscopy,” Proc. SPIE, 11315 113151X https://doi.org/10.1117/12.2548754 PSISDG 0277-786X (2020). Google Scholar

235. 

I. A. Cruz-Guerrero et al., “Extended blind end-member and abundance estimation with spatial total variation for hyperspectral imaging,” in Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., 1957 –1960 (2021). https://doi.org/10.1109/EMBC46164.2021.9629708 Google Scholar

236. 

J. Sun et al., “Adaptive denoising hyperspectral data for visualization enhancement of intraoperative tissue,” J. Biophotonics, 15 (8), e202200083 https://doi.org/10.1002/jbio.202200083 (2022). Google Scholar

237. 

H. Bay, T. Tuytelaars and L. Van Gool, SURF: Speeded Up Robust Features, 404 –417 Springer Berlin Heidelberg, Berlin, Heidelberg (2006). Google Scholar

238. 

H. Noh et al., “Large-scale image retrieval with attentive deep local features,” in IEEE Int. Conf. Comput. Vis. (ICCV), 3476 –3485 (2017). https://doi.org/10.1109/ICCV.2017.374 Google Scholar

239. 

J. Matas et al., “Robust wide-baseline stereo from maximally stable extremal regions,” Image Vis. Comput., 22 (10), 761 –767 https://doi.org/10.1016/j.imavis.2004.02.006 (2004). Google Scholar

Biography

Alankar Kotwal, PhD, is a research scientist in the Department of Neurosurgery at the University of Texas Medical Branch and the Department of Electrical and Computer Engineering at Rice University. His research interests span optics, photonics, biomedical imaging, computational imaging, and computer vision. His current work applies the principles of computational imaging to provide effective visual feedback in neurosurgery. His doctoral research at Carnegie Mellon University centered on micrometer-scale computational imaging with interferometry.

Vishwanath Saragadam, PhD, is an assistant professor in the Department of Electrical and Computer Engineering at the University of California Riverside. He leads a laboratory developing novel spectral and computational imaging techniques.

Joshua D. Bernstock, MD, PhD, is a neurosurgeon–scientist in the Department of Neurosurgery at Harvard Medical School/Mass General Brigham and a research scientist at the Massachusetts Institute of Technology. He focuses on developing novel therapeutics to treat a variety of neurological diseases, including pediatric and adult brain tumors and stroke.

Alfredo Sandoval, BS, is a graduate student at the University of Texas Medical Branch.

Ashok Veeraraghavan, PhD, is a professor in the Department of Electrical and Computer Engineering at Rice University. He leads the development of novel computational imaging technologies with a focus on their use in neuroengineering and biomedical applications.

Pablo A. Valdés, MD, PhD, is an assistant professor in the Department of Neurosurgery, holder of the endowed Jennie Sealy Distinguished Chair in Neuroscience, and director of the Neurosurgical Oncology and Brain Tumor Program at the University of Texas Medical Branch, as well as affiliate faculty at Rice University. He is a brain tumor neurosurgeon developing novel imaging technologies for improved diagnostics and therapeutics in brain cancer.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 International License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Alankar Kotwal, Vishwanath Saragadam, Joshua D. Bernstock, Alfredo Sandoval, Ashok Veeraraghavan, and Pablo A. Valdés "Hyperspectral imaging in neurosurgery: a review of systems, computational methods, and clinical applications," Journal of Biomedical Optics 30(2), 023512 (13 November 2024). https://doi.org/10.1117/1.JBO.30.2.023512
Received: 1 June 2024; Accepted: 3 October 2024; Published: 13 November 2024