Intraoperative fluorescence molecular imaging based on targeted fluorescence agents is an emerging approach to improve surgical and endoscopic imaging and guidance. Short exposure times per frame and implementation at video rates are necessary to provide continuous feedback to the physician and avoid motion artifacts. However, fast imaging implementations also limit the sensitivity of fluorescence detection. To improve detection sensitivity in video-rate fluorescence imaging, we consider herein an optical flow technique applied to texture-rich color images. This allows the effective accumulation of fluorescence signals over longer, virtual exposure times. The proposed correction scheme is shown to improve signal-to-noise ratios in both phantom experiments and in vivo tissue imaging.
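A minimal sketch of how such a motion-compensated accumulation could look, assuming dense optical flow is estimated on the co-registered color frames and used to warp older fluorescence frames onto the newest one before averaging. The function names, window length, and Farnebäck parameters are illustrative choices, not the authors' implementation.

```python
# Sketch: motion-compensated accumulation of fluorescence frames using dense
# optical flow computed on the co-registered color channel (assumption: color
# and fluorescence frames are spatially aligned and have the same size).
import cv2
import numpy as np

def accumulate_fluorescence(color_frames, fluo_frames, window=10):
    """Average the last `window` fluorescence frames after warping them onto
    the newest frame with Farneback optical flow estimated on the color data."""
    h, w = fluo_frames[0].shape
    ref_gray = cv2.cvtColor(color_frames[-1], cv2.COLOR_BGR2GRAY)
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    acc = fluo_frames[-1].astype(np.float32)
    n = 1
    for color, fluo in zip(color_frames[-window:-1], fluo_frames[-window:-1]):
        gray = cv2.cvtColor(color, cv2.COLOR_BGR2GRAY)
        # Flow maps pixels of the reference frame to the older frame.
        flow = cv2.calcOpticalFlowFarneback(ref_gray, gray, None,
                                            0.5, 3, 21, 3, 5, 1.2, 0)
        map_x = grid_x + flow[..., 0]
        map_y = grid_y + flow[..., 1]
        warped = cv2.remap(fluo.astype(np.float32), map_x, map_y,
                           cv2.INTER_LINEAR)
        acc += warped
        n += 1
    return acc / n  # longer "virtual exposure" with reduced noise
```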
The signal curves in perfusion dynamic contrast-enhanced MRI (DCE-MRI) of cancerous breast tissue reveal valuable information about tumor angiogenesis. Pathological studies have shown that breast tumors develop distinct sub-regions, each with relatively homogeneous properties, during their growth. If the characteristics of these sub-regions are related to perfusion and angiogenesis, the differences should be identifiable in the DCE-MRI signal curves. We introduce a stepwise clustering method whose first step uses a new similarity measure (PM) that quantifies how parallel the washout phases of two curves are. To identify the starting point of the washout phase, a linear regression is fitted piecewise to each curve. The minimum signal value of the washout phase is then normalized to zero, and PM is calculated as the maximal variation among the point-wise differences during the washout phases. In the second clustering step, the groups of signal curves with parallel washout are clustered using the Euclidean distance. The introduced method is evaluated on 15 DCE-MRI breast datasets with different types of breast tumors. The proposed heterogeneity analysis is feasible in single-patient examinations and improves breast MR diagnostics.
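A sketch of the parallelism measure PM described above, under the simplifying assumption that the washout phase starts at the curve maximum (the abstract locates it with a piecewise linear-regression fit, which is not reproduced here); function names are illustrative.

```python
# Sketch of the parallelism measure (PM) between two DCE-MRI signal curves.
import numpy as np

def washout(curve):
    """Return the washout part of a signal-time curve, shifted so that its
    minimum is zero (the normalization step described in the abstract).
    Simplification: the washout is assumed to start at the curve maximum."""
    start = int(np.argmax(curve))
    tail = np.asarray(curve[start:], dtype=float)
    return tail - tail.min()

def parallelism_measure(curve_a, curve_b):
    """PM: maximal variation among the point-wise differences of the
    normalized washout phases; small PM means nearly parallel washout."""
    wa, wb = washout(curve_a), washout(curve_b)
    m = min(len(wa), len(wb))
    diff = wa[:m] - wb[:m]
    return float(diff.max() - diff.min())
```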
Mesoscopic epifluorescence tomography is a novel technique that resolves the fluorescence bio-distribution in small animals by tomographic means in reflectance geometry. A collimated laser beam is scanned over the skin surface to excite fluorophores hidden within the tissue while a CCD camera acquires an image of the fluorescence emission for each source position. This configuration is highly efficient in the visible spectral range, where trans-illumination imaging of small animals is not feasible due to the high tissue absorption and scattering in biological organisms. The reconstruction algorithm is similar to the one used in fluorescence molecular tomography. However, diffusion theory cannot be employed, since the source-detector separation for most image pixels is comparable to or below the scattering length of the tissue. Instead, Monte Carlo simulations are utilized to predict the sensitivity functions. In a phantom study we show the effect of using enhanced source grid arrangements during the data acquisition and the reconstruction process to minimize boundary artifacts. Furthermore, we present ex vivo data that show high spatial resolution and quantitative accuracy in heterogeneous tissues using GFP-like fluorescence in B6-albino mice up to a depth of 1100 μm.
Obtaining quantified optoacoustic reconstructions is an important and longstanding challenge, caused mainly by the complex heterogeneous structure of biological tissues as well as the lack of accurate and robust reconstruction algorithms. The recently introduced model-based inversion approaches were shown to eliminate some of the reconstruction artifacts associated with the commonly used back-projection schemes, while providing an excellent platform for obtaining quantified maps of optical energy deposition in experimental configurations of varying complexity. In this work, we introduce a weighted model-based approach capable of overcoming reconstruction challenges caused by per-projection variations of the object's illumination and other partial-illumination effects. The universal weighting procedure is also shown to reduce reconstruction artifacts associated with other experimental imperfections, such as non-uniform transducer sensitivity fields. Significant improvements in image fidelity and quantification are showcased both numerically and experimentally on tissue phantoms.
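A sketch of how a weighting procedure can enter a model-based inversion: the linear acoustic forward model is solved in a weighted least-squares sense, with diagonal weights down-weighting measurements affected by poor illumination or low transducer sensitivity. The matrix names, weights, and regularization are illustrative assumptions, not the authors' exact scheme.

```python
# Sketch of a weighted model-based inversion: the forward model p = A x is
# inverted in a weighted least-squares sense, where the per-measurement
# weights w can down-weight projections/time samples affected by partial
# illumination or non-uniform transducer sensitivity.
import numpy as np

def weighted_model_based_inversion(A, p, w, reg=1e-3):
    """Solve min_x ||W^(1/2) (A x - p)||^2 + reg ||x||^2 for the optical
    energy deposition x, with W = diag(w)."""
    sw = np.sqrt(np.asarray(w, dtype=float))
    Aw = A * sw[:, None]          # row-wise weighting of the model matrix
    pw = p * sw                   # matching weighting of the measured data
    lhs = Aw.T @ Aw + reg * np.eye(A.shape[1])
    rhs = Aw.T @ pw
    return np.linalg.solve(lhs, rhs)
```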
The recent development of hybrid imaging scanners that integrate fluorescence molecular tomography (FMT) and x-ray computed tomography (XCT) allows the utilization of x-ray information as image priors for improving optical tomography reconstruction. To fully capitalize on this capacity, we consider a framework for the automatic and fast detection of different anatomic structures in murine XCT images. To accurately differentiate between structures such as bone, lung, and heart, a combination of image processing steps including thresholding, seed growing, and signal detection is found to offer optimal segmentation performance. The algorithm and its utilization in an inverse FMT scheme that uses priors are demonstrated on mouse images.
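An illustrative sketch of the kind of thresholding and seeded-growing steps such a segmentation pipeline could contain, using SciPy connected-component labeling. The thresholds and helper names are placeholders, not the values or exact algorithm used by the authors.

```python
# Illustrative sketch of two XCT segmentation steps: global thresholding with
# largest-component selection, and seed growing realized as keeping the
# thresholded component that contains a seed voxel.
import numpy as np
from scipy import ndimage

def segment_bone(ct_volume, threshold=300):
    """Return the largest connected component above a bone-like threshold
    (the threshold value is a placeholder)."""
    mask = ct_volume > threshold
    labels, n = ndimage.label(mask)
    if n == 0:
        return np.zeros_like(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    return labels == (int(np.argmax(sizes)) + 1)

def grow_from_seed(ct_volume, seed, low, high):
    """Keep only the thresholded component that contains the seed voxel."""
    mask = (ct_volume > low) & (ct_volume < high)
    labels, _ = ndimage.label(mask)
    return labels == labels[seed]   # seed is a (z, y, x) index tuple
```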
A novel approach is introduced for clustering tumor regions with similar signal-time series measured by dynamic contrast-enhanced (DCE) MRI, in order to segment the tumor area in breast cancer. Each voxel of the DCE-MRI dataset is characterized by a signal-time curve, and the clustering process uses two descriptor values per voxel. The first value is the L2-norm of the time series. The second value, r, is calculated as the sum of the differences between each pair S(n-i) and S(i) for i = 0, ..., n/2, where S is the signal intensity and n the number of samples in the time series; we call r the reverse value of the time series. Each time series is considered as a vector in an n-dimensional space, and its L2-norm and reverse value are used as similarity measures: curves with similar L2-norms and similar reverse values are clustered together. The method is tested on breast cancer DCE-MRI datasets with a spatial resolution of 256 x 256 and n = 128 temporal samples. The quality of each cluster is described through the variance of the Euclidean distances of its vectors to the cluster's mean vector. The combination of both similarity measures improves the segmentation compared to using either measure alone.
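A sketch of the two per-voxel descriptors defined above, together with a simple grouping step. The k-means call here merely stands in for the clustering of similar descriptor pairs and is not necessarily what the authors used; the index handling is a 0-indexed adaptation of the definition.

```python
# Sketch: L2-norm and "reverse value" descriptors of a signal-time curve,
# plus an illustrative clustering of the resulting 2D feature vectors.
import numpy as np
from scipy.cluster.vq import kmeans2

def descriptors(curve):
    """Return (L2-norm, reverse value r) of a curve S of length n, with
    r = sum_i [S(n-1-i) - S(i)] for i = 0..n/2 (0-indexed adaptation)."""
    s = np.asarray(curve, dtype=float)
    n = len(s)
    half = n // 2
    r = float(np.sum(s[n - 1 - np.arange(half)] - s[:half]))
    return float(np.linalg.norm(s)), r

def cluster_voxels(curves, k=4):
    """Group voxels whose (L2-norm, reverse value) pairs are similar."""
    feats = np.array([descriptors(c) for c in curves])
    _, labels = kmeans2(feats, k, minit="points")
    return labels
```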
We describe an improved optoacoustic tomography method that utilizes a diffusion-based photon propagation model to obtain quantified reconstructions of targets embedded deep in heterogeneous scattering and absorbing tissue. For the correction, we employ an iterative finite-element solution of the light diffusion equation to model photon propagation. We demonstrate the image improvements achieved by this method using tissue-mimicking phantom measurements. The particular strength of the method is its ability to achieve quantified reconstructions in non-uniform illumination configurations resembling whole-body small animal imaging scenarios.
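A minimal sketch of the fluence-normalization idea behind such a correction: the reconstructed absorbed-energy image is divided by the light fluence predicted by the diffusion model. The fluence is assumed to come from an external FEM solver (not shown), and the function is one step of what the abstract describes as an iterative procedure.

```python
# Sketch: convert the reconstructed absorbed energy H = mu_a * phi into an
# absorption-like image by normalizing with the modeled photon fluence phi.
import numpy as np

def correct_for_fluence(absorbed_energy, fluence, eps=1e-6):
    """Normalize an optoacoustic absorbed-energy image by the fluence map
    produced by a diffusion-equation solver (single correction step)."""
    phi = np.maximum(np.asarray(fluence, dtype=float), eps)  # avoid /0
    return np.asarray(absorbed_energy, dtype=float) / phi
```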
KEYWORDS: Image registration, 3D modeling, 3D image processing, Magnetic resonance imaging, Kinematics, Data modeling, Data conversion, Magnetism, Medical imaging, Rigid registration
The most frequent reasons for revision of total knee arthroplasty are loosening and abnormal axial alignment leading to unphysiological kinematics of the knee implant. To assess the postoperative kinematics of the implant, it is essential to determine the position and orientation of the tibial and femoral prostheses. We therefore developed a registration method for fitting 3D CAD models of knee joint prostheses into a 3D MR image. This rigid registration is the basis for a quantitative analysis of the kinematics of knee implants. First, the surface data of the prosthesis models are converted into a voxel representation; a recursive algorithm determines all boundary voxels of the original triangular surface data. Second, an initial preconfiguration of the implants by the user is required: the user performs a rough preconfiguration of both prosthesis models so that the subsequent fine matching process starts from a reasonable initial pose. An automated gradient-based fine matching process then determines the best absolute position and orientation: this iterative process changes all six parameters (three rotational and three translational) of a model by a minimal amount until a maximum of the matching function is reached. To examine the spread of the final registration solutions, the interobserver variability was measured in a group of testers. This variability, expressed as the relative standard deviation, improved from about 50% (purely manual registration) to 0.5% (rough manual preconfiguration followed by the automatic fine matching process).
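A sketch of the iterative six-parameter fine matching described above, written as coordinate-wise hill climbing on an abstract matching score. The score function, step sizes, and stopping criteria are illustrative assumptions; the abstract does not specify them.

```python
# Sketch: maximize a matching score over the 6-DOF pose of a prosthesis model
# (three rotations, three translations) by small coordinate-wise steps,
# starting from the user's rough preconfiguration.
import numpy as np

def fine_match(score, pose0, step=0.5, max_iter=200):
    """Hill-climb score(pose) for pose = (rx, ry, rz, tx, ty, tz)."""
    pose = np.asarray(pose0, dtype=float).copy()
    best = score(pose)
    for _ in range(max_iter):
        improved = False
        for i in range(6):
            for sign in (+1.0, -1.0):
                trial = pose.copy()
                trial[i] += sign * step
                val = score(trial)
                if val > best:
                    pose, best, improved = trial, val, True
        if not improved:
            step *= 0.5            # shrink the step once no move helps
            if step < 1e-3:
                break
    return pose, best
```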
The aim of this study was to demonstrate the possibilities, advantages and limitations of virtual bronchoscopy using data sets from positron emission tomography (PET) and computed tomography (CT). Eight consecutive patients with lung cancer underwent PET/CT. PET was performed with 2-[fluorine-18]-fluoro-2-deoxy-D-glucose (18F-FDG). The tracheobronchial system was segmented with a volume-growing algorithm, using the CT data sets, and visualized with a shaded-surface rendering method. The primary tumours and the lymph node metastases were segmented for virtual CT-bronchoscopy using the CT data set and for virtual PET/CT-bronchoscopy using the PET/CT data set. Virtual CT-bronchoscopy using the low-dose or diagnostic CT facilitates the detection of anatomical/morphological structure changes of the tracheobronchial system. Virtual PET/CT-bronchoscopy was superior to virtual CT-bronchoscopy in the detection of lymph node metastases (P=0.001), because it uses the CT information together with the molecular/metabolic information from PET. Virtual PET/CT-bronchoscopy with a transparent colour-coded shaded-surface rendering model is expected to improve the diagnostic accuracy of identification and characterization of malignancies, assessment of tumour staging, differentiation of viable tumour tissue from atelectases and scars, verification of infections, evaluation of therapeutic response, and detection of early recurrence that is not detectable or is misjudged with virtual CT-bronchoscopy alone.
This paper presents new methods for knowledge extraction and visualization, applied to datasets selected from the astronomical literature. One of the objectives is to detect correlations between concepts extracted from the documents. Concepts are generally meta-information which may be defined a priori, or may be extracted from the document contents and are organised along domain ontologies or concept hierarchies.
The study illustrated in the paper uses a data collection of about 10,000 articles extracted from the NASA ADS, corresponding to all publications for which at least one author is a French astronomer, for the years 1996 to 2000. The study presents new approaches for visualizing relationships between institutes, co-authorships, scientific domains, astronomical object types, etc.
To support ophthalmologists in their routine and enable the quantitative assessment of vascular changes in color fundus photographs, a multi-resolution approach was developed which segments the vessel tree efficiently and precisely in digital images of the retina. The algorithm starts at seed points found in a preprocessing step and then follows each vessel, iteratively adjusting the direction of the search and finding the center line of the vessel. In addition, vessel branches and crossings are detected and stored in detailed lists. Every iteration of the Directional Smoothing Based (DSB) tracking process starts at a given point in the middle of a vessel. First, rectangular windows for several directions in a neighborhood of this point are smoothed in the assumed direction of the vessel. The window that results in the best contrast is taken to indicate the true direction of the vessel. The center point is then moved in that direction by 1/8th of the vessel width, and the algorithm continues with the next iteration. The vessel branch and crossing detection uses a list with unique vessel segment IDs and branch point IDs. During tracking, when another vessel is crossed, the tracking is stopped; the newly traced vessel segment is stored in the vessel segment list, and the vessel that had been traced before is broken up at the crossing or branch point and stored as two separate vessel segments. This approach has several advantages:
- With directional smoothing, noise is eliminated while the edges of the vessels are kept.
- DSB works on high-resolution images (3000 x 2000 pixels) as well as on low-resolution images (900 x 600 pixels), because a large area of the vessel is used to find the vessel direction.
- For the detection of venous beading, the vessel width is measured at every step of the traced vessel.
- With the lists of branch and crossing points, we obtain a network of connected vessel segments that can be used for further processing of the retinal vessel tree.
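A sketch of the direction-selection step of one DSB iteration described above: windows around the current center point are sampled along candidate directions, averaged along each direction, and the direction with the best cross-profile contrast is kept. Window sizes, the number of angles, and the contrast criterion are illustrative assumptions.

```python
# Sketch of one DSB iteration's direction search around a vessel center point.
import numpy as np
from scipy.ndimage import map_coordinates

def dsb_direction(image, center, length=15, width=7, n_angles=16):
    """Return the candidate angle whose direction-smoothed window shows the
    highest peak-to-peak contrast across the vessel."""
    cy, cx = center
    best_angle, best_contrast = 0.0, -np.inf
    s = np.linspace(-length / 2, length / 2, length)     # along the vessel
    t = np.linspace(-width / 2, width / 2, width)        # across the vessel
    for angle in np.linspace(0, np.pi, n_angles, endpoint=False):
        d = np.array([np.sin(angle), np.cos(angle)])     # along-direction
        nrm = np.array([-d[1], d[0]])                    # normal direction
        ys = cy + s[:, None] * d[0] + t[None, :] * nrm[0]
        xs = cx + s[:, None] * d[1] + t[None, :] * nrm[1]
        window = map_coordinates(image, [ys, xs], order=1)
        profile = window.mean(axis=0)    # smoothing along the direction
        contrast = profile.max() - profile.min()
        if contrast > best_contrast:
            best_angle, best_contrast = angle, contrast
    return best_angle
```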
The purpose of this study was the development of a method for the fast and efficient analysis of dynamic MR images of the female breast. The image data sets were acquired with a saturation-recovery turbo-FLASH sequence, facilitating the detection of the kinetics of the contrast agent concentration in the whole breast with high temporal and spatial resolution. In addition, a morphological 3D-FLASH data set was acquired. The dynamic image data sets were analyzed by tracer kinetic modeling in order to describe the physiological processes underlying the contrast enhancement in mathematical terms and thus to enable the estimation of functional, tissue-specific parameters reflecting the status of microcirculation. In order to display morphological and functional tissue information simultaneously, we developed a multidimensional visualization system, which provides a practical and intuitive human-computer interface in virtual reality. The quality of real-time volume visualization (using 3D-texture mapping) mainly depends on two factors: the number and the spatial resolution of the sampling slices. Furthermore, to allow the representation of both the MR signal and the relevant set of model parameters, an adaptation of these quantities to the size of the texture element might be necessary. Detection and localization of multiple breast lesions may be an important application.
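A sketch of a per-voxel tracer-kinetic fit of the kind referred to above. The abstract does not name the kinetic model; a standard Tofts model is used here purely as an example, and all parameter names and bounds are illustrative.

```python
# Sketch: fit an example tracer-kinetic model (standard Tofts) to one voxel's
# contrast-agent concentration curve; the actual model used in the study is
# not specified in the abstract.
import numpy as np
from scipy.optimize import curve_fit

def tofts_concentration(t, ktrans, ve, aif):
    """C(t) = Ktrans * integral_0^t AIF(u) * exp(-(Ktrans/ve)(t-u)) du,
    discretized with a causal convolution on a uniform time grid t."""
    dt = t[1] - t[0]
    kernel = np.exp(-(ktrans / ve) * t)
    return ktrans * np.convolve(aif, kernel)[: len(t)] * dt

def fit_voxel(t, concentration, aif):
    """Estimate (Ktrans, ve) for one voxel's concentration-time curve."""
    model = lambda tt, ktrans, ve: tofts_concentration(tt, ktrans, ve, aif)
    popt, _ = curve_fit(model, t, concentration, p0=[0.1, 0.3],
                        bounds=([0.0, 0.01], [5.0, 1.0]))
    return popt
```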
KEYWORDS: Cartilage, Image segmentation, Magnetic resonance imaging, 3D image processing, Biological research, 3D modeling, 3D acquisition, Image acquisition, Image processing algorithms and systems, Bone
We developed 3D MR-based image processing methods for the biomechanical analysis of joints. These methods provide quantitative data on the morphological distribution of the joint cartilage as well as a biomechanical analysis of the relative translation and rotation of joints. After image data acquisition in an open MR system, the segmentation of the different joint structures was performed by a semi-automatic technique based on a gray-value-oriented region growing algorithm. After segmentation, 3D reconstructions of the cartilage and bone surfaces were performed. Principal axis decomposition is used to calculate a reproducible coordinate system based on the tibia plateau, which allows the determination of the relative rotation and translation of the condyles and menisci with respect to the tibia plateau. The analysis of the femoral movement is based on a reproducible, semi-automatically calculated epicondylar axis. The analysis showed a posterior translation of the meniscus and, even more, of the femoral condyles in healthy knees and in knees with an insufficiency of the anterior cruciate ligament (ACL).
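A minimal sketch of the principal-axis step: the voxel coordinates of the segmented tibia plateau are decomposed into their principal axes to define a reproducible coordinate system. The function name and voxel-spacing handling are illustrative.

```python
# Sketch: principal axis decomposition of a binary segmentation mask to
# derive a reproducible, anatomy-based coordinate system.
import numpy as np

def principal_axes(mask, spacing=(1.0, 1.0, 1.0)):
    """Return the centroid and the three principal axes (unit vectors,
    ordered by decreasing extent) of a binary mask."""
    coords = np.argwhere(mask).astype(float) * np.asarray(spacing)
    centroid = coords.mean(axis=0)
    centered = coords - centroid
    cov = centered.T @ centered / len(coords)
    eigval, eigvec = np.linalg.eigh(cov)     # eigenvalues in ascending order
    order = np.argsort(eigval)[::-1]
    return centroid, eigvec[:, order].T      # rows are the principal axes
```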
To improve diagnosis and therapy planning with additional information in an easy-to-use and fast way, a virtual endoscopy system was developed. From a technical viewpoint, virtual endoscopy can be generated using image sequences acquired with CT or MRI. It requires appropriate software for image processing and endoluminal visualization, as well as hardware capabilities for immersive virtual reality. This means, first, that intuitive user interaction is supported by data gloves, position tracking systems, and stereo display devices. Second, the virtual environment requires real-time visualization supported by high-end graphics engines to enable continuous operation and interaction. To enable the endoluminal view, the precise segmentation of inner lumina such as the tracheobronchial tree, the inner ear, or vessels is necessary. In addition, pathological findings must be defined. We use automatic segmentation techniques such as volume growing as well as semiautomatic techniques such as deformable models in a virtual environment. The surfaces of the segmented volumes are then reconstructed. This is the basis for our multidimensional display system, which visualizes volumes, surfaces, and computation results simultaneously. The developed method of virtual endoscopy enables the interactive, immersive, and endoluminal inspection of complex anatomical structures. It is based on intensive image processing such as 3D segmentation and a so-called hybrid technique which displays all the information by volume and surface rendering. The system was applied to virtual bronchoscopy, colonoscopy, and angioscopy, as well as to the endoluminal representation of the inner ear.
Purpose: To improve the diagnosis of pathologically modified airways, a visualization system was developed and tested, based on digital image analysis, synthesis of spiral CT data, and visualization by methods of virtual reality. Materials and Methods: 20 patients with pathological modifications of the airways (tumors, obstructions) were examined with spiral CT. The three-dimensional shape of the airways and the lung tissue is defined by a semiautomatic volume growing method followed by a geometric surface reconstruction. This is the basis of a multidimensional display system which visualizes volumes, surfaces, and computation results simultaneously. To enable intuitive and immersive inspection of the airways, a virtual reality system consisting of two graphics engines, a head-mounted display, data gloves, and specialized software was integrated. Results: In 20 cases the extent of the pathological modification of the airways could be visualized with virtual bronchoscopy. The user interacts with and manipulates the 3D model of the airways in an intuitive and immersive way. In contrast to previously proposed virtual bronchoscopy systems, the described method permits truly interactive navigation and detailed exploration of anatomical structures. The system enables a user-oriented and fast inspection of the volumetric image data. Conclusion: To support radiological diagnosis with additional information in an easy-to-use and fast way, a virtual bronchoscopy system was developed. It enables immersive and intuitive interaction with 3D spiral CT data through truly 3D navigation within the airway system. The complex anatomy of the central tracheobronchial system could be clearly visualized. Peripheral bronchi are displayed up to the fifth order.
To support ophthalmologists in their daily routine and enable the quantitative assessment of vascular changes in color fundus photographs, vessel extraction techniques have been improved. Within a model-based approach, steerable filters have been tested for efficient and precise segmentation of the vessel tree. The global model comprises the detection of the optic disc, the finding of starting points close to the optic disc, the tracking of the vessel course, the extraction of the vessel contour, and the identification of branching points. This supports the evaluation of image quality and of pathological changes to the retina and thus improves diagnosis and therapy. The optic disc location is first estimated and then extracted more precisely with the help of a hierarchical filter scheme based on first-order Gaussian kernels at varying orientations. Vessel points are automatically identified around the optic disc, and the vessel course is tracked in the current direction by second-order Gaussian kernels at varying orientations and scales. Using this backbone, differently oriented first-order Gaussian kernels approximate the vessel contour. Thus, the direction and diameter of each vessel segment are determined. Steerable filters enable a highly efficient implementation. The developed methods have been applied to color fundus photographs showing different levels of diabetic retinopathy.
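A sketch of the steerability property exploited here: the response of a first-order Gaussian derivative at an arbitrary orientation is a linear combination of the x- and y-derivative responses. Scales, angle sampling, and the per-point search are illustrative simplifications of the approach described above.

```python
# Sketch: first-order Gaussian derivative filters steered to arbitrary
# orientations, and a brute-force orientation/scale search at one point.
import numpy as np
from scipy.ndimage import gaussian_filter

def steered_first_order(image, theta, sigma=2.0):
    """Response of a first-order Gaussian kernel steered to angle theta."""
    gx = gaussian_filter(image.astype(float), sigma, order=(0, 1))  # d/dx
    gy = gaussian_filter(image.astype(float), sigma, order=(1, 0))  # d/dy
    return np.cos(theta) * gx + np.sin(theta) * gy

def best_orientation(image, point, sigmas=(1.0, 2.0, 4.0), n_angles=18):
    """Pick the (angle, scale) with the strongest steered response at a point."""
    y, x = point
    best = (0.0, sigmas[0], -np.inf)
    for sigma in sigmas:
        for theta in np.linspace(0, np.pi, n_angles, endpoint=False):
            r = abs(steered_first_order(image, theta, sigma)[y, x])
            if r > best[2]:
                best = (theta, sigma, r)
    return best[:2]
```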
Virtual bronchoscopy is one method that uses virtual reality equipment to improve the diagnostic information drawn from CT data. As a preprocessing step, the bronchial tract has to be extracted from the data. For this purpose, a method has been developed which allows automatic segmentation of the bronchial tract. In the preprocessing step a backbone of the bronchial tract is extracted, which allows more sophisticated methods to be used for the subsequent segmentation. In the second step, either minimal-variance region growing or an active contour model is used. The former uses statistical information to extend the presegmented area according to the local grey-value distribution; the latter automatically fits a predefined contour to the local gradient information.
Modern 3D scanning techniques like magnetic resonance imaging (MRI) or computed tomography (CT) produce high-quality images of the human anatomy. Virtual environments open new ways to display and analyze these tomograms. Compared with today's inspection of 2D image sequences, physicians are empowered to recognize spatial coherencies and examine pathological regions more easily, so that diagnosis and therapy planning can be accelerated. For that purpose a powerful human-machine interface is required, which offers a variety of tools and features to enable both exploration and manipulation of the 3D data. Man-machine communication has to be intuitive and efficient to avoid long accustoming times and to enhance familiarity with and acceptance of the interface. Hence, interaction capabilities in virtual worlds should be comparable to those in the real world to allow our natural experiences to be utilized. In this paper the integration of hand gestures and visual focus, two important aspects of modern human-computer interaction, into a medical imaging environment is shown. With the presented human-machine interface, including virtual reality displaying and interaction techniques, radiologists can be supported in their work. Furthermore, virtual environments can even facilitate communication between specialists from different fields or in educational and training applications.
The assessment of blood flow in the gastrointestinal mucosa might be an important factor for the diagnosis and treatment of several diseases such as ulcers, gastritis, colitis, or early cancer. The quantity of blood flow is roughly estimated by computing the spatial hemoglobin distribution in the mucosa. The presented method enables a practical realization by calculating approximately the hemoglobin concentration based on a spectrophotometric analysis of endoscopic true-color images, which are recorded during routine examinations. A system model based on the reflectance spectroscopic law of Kubelka-Munk is derived which enables an estimation of the hemoglobin concentration by means of the color values of the images. Additionally, a transformation of the color values is developed in order to improve the luminance independence. Applying this transformation and estimating the hemoglobin concentration for each pixel of interest, the hemoglobin distribution can be computed. The obtained results are mostly independent of luminance. An initial validation of the presented method is performed by a quantitative estimation of the reproducibility.
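For illustration only: the paper derives its estimate from a Kubelka-Munk reflectance model and a luminance-reducing color transformation, which are not reproduced here. The sketch below instead uses the widely known hemoglobin index IHb = 32 * log2(R/G) as a simple stand-in for mapping endoscopic color values to a hemoglobin-related quantity per pixel.

```python
# Illustration: a simple per-pixel hemoglobin index from an endoscopic
# true-color image (stand-in for the Kubelka-Munk-based estimate above).
import numpy as np

def hemoglobin_index(rgb_image, eps=1.0):
    """IHb = 32 * log2(R / G), computed per pixel from the red and green
    channels of an RGB endoscopic image."""
    rgb = rgb_image.astype(float)
    r, g = rgb[..., 0] + eps, rgb[..., 1] + eps   # eps avoids division by zero
    return 32.0 * np.log2(r / g)
```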
A method for three-dimensional surface reconstruction of the retina in the area of the papilla is presented. The surface reconstruction is based on a sequence of discrete gray-level images of the retina recorded by a scanning laser ophthalmoscope (SLO). The underlying assumption of the developed surface reconstruction algorithm is that depth information is encoded in the brightness values of the individual pixels in addition to the ordinary 2D spatial information. The brightness at an image position also depends on the degree of reflection of the confocal laser beam: only surface structures located directly in the focal plane of the confocal laser beam produce a high response of the focused laser light. The disparities occurring between the individual images of a sequence are considered to be approximately linear and are corrected by applying the cepstrum technique. The depth information is estimated from the volumetric representation of the image sequence by searching for the maximal brightness value within a computed depth profile at every image position. In the resulting range images, disturbances which occur during the recording cause erroneous local estimates of the depth information. These local disturbances are corrected by applying specially developed surface improvement processes. The work is completed by investigating several different approaches to reducing the noise and disturbances of SLO image data.
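A minimal sketch of the depth-estimation step described above: after the image sequence has been aligned, the depth at each pixel is taken as the focal-plane index with maximal brightness along the depth profile. The plane spacing used to convert the index to a metric depth is an illustrative value.

```python
# Sketch: depth map from a registered SLO focal-plane stack by taking the
# argmax of brightness along the depth axis at every pixel.
import numpy as np

def depth_from_stack(aligned_stack, plane_spacing_um=50.0):
    """aligned_stack: array of shape (n_planes, H, W) of registered SLO images.
    Returns a range image of estimated depths in micrometers."""
    depth_index = np.argmax(aligned_stack, axis=0)
    return depth_index.astype(float) * plane_spacing_um
```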
In order to enable the interaction with and manipulation of 3-D data sets in the realm of medical diagnosis and therapy planning, we developed a modified Z-merging algorithm that includes transparency and texture mapping features. For this, an extended shape-based interpolation model creates an isotropic grayscale data volume in the case of spatial image sequences. Interesting anatomical regions such as soft tissue, organs, and bones are detected by automatic and interactive segmentation procedures. Following that, a fully automatic surface construction algorithm detects the 3-D object boundaries by fitting geometric primitives to the binary data. The surface representations provide the user with a fast overview of the structure of the 3D scene. Texture mapping is implemented as the projection of the gray values of the isotropic voxels onto a polygonal surface. Adaptive refinement, Phong's normal interpolation, and transparency are the most important features of this ray tracer. The described technique enables the simultaneous display of multimodal 3D image data.