The prostate is known to move between daily fractions during the course of radiation therapy using external beams. This
movement causes problems for 3D conformal or intensity-modulated radiation therapy, in which tight margins are used
for treatment planning. To minimize the adverse effect of this motion on dose delivery, daily localization of the prostate
with respect to the planning CT is necessary. Current ultrasound-based localization systems require manual alignment of
ultrasound images with the planning CT. The resulting localization is subjective and has high interobserver variability.
To reduce the alignment uncertainty and increase the setup efficiency, we proposed an automatic prostate alignment
method using a volume subdivision-based elastic image registration algorithm. The algorithm uses normalized mutual
information as the measure of image similarity between the daily 3D ultrasound images and the planning CT. The
prostate contours on the CT are mapped to the ultrasound space by applying the transformation fields from image
registration. The displacement of the center-of-mass of the mapped contours is calculated for automatic patient setup. For
validation purposes, six experts independently and manually aligned the archived CT and 3D ultrasound images using
the SonArray system and reported their readings as shifts along the three principal axes. The mean shift and standard
deviation of the readings along each axis were calculated. We regarded the automatic alignment as acceptable if
the difference between the mean expert shift and the automatic shift was within twice the standard deviation. Three of the
five patients were aligned successfully by the automatic method; the remaining two cases failed.
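To make the setup computation concrete, the sketch below (Python) shows one way the center-of-mass shift of the mapped prostate contours and the two-standard-deviation acceptance check could be computed. It is a minimal illustration under stated assumptions, not the system's actual implementation: the array shapes, the millimetre units of the displacement field, the use of SciPy trilinear interpolation, and the helper names are all assumptions rather than details taken from the abstract.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def com_shift(contour_pts_ct, disp_field, spacing):
        """Map CT contour points through a CT-to-ultrasound displacement
        field and return the center-of-mass shift along the three axes (mm).

        contour_pts_ct : (N, 3) contour points in CT voxel coordinates (z, y, x)
        disp_field     : (3, Z, Y, X) displacement field in mm (assumed output
                         of the elastic registration)
        spacing        : (3,) voxel spacing in mm (z, y, x)
        """
        coords = contour_pts_ct.T  # (3, N), sampling locations in voxel space
        # Trilinearly sample each displacement component at the contour points.
        disp = np.stack([map_coordinates(disp_field[a], coords, order=1)
                         for a in range(3)], axis=1)        # (N, 3), mm
        pts_mm = contour_pts_ct * spacing                    # CT points in mm
        mapped_mm = pts_mm + disp                            # mapped to US space
        return mapped_mm.mean(axis=0) - pts_mm.mean(axis=0)  # COM displacement

    def acceptable(auto_shift, expert_shifts):
        """Acceptance rule from the abstract: the automatic shift is accepted
        if, along each axis, it lies within two standard deviations of the
        experts' mean shift."""
        mean = expert_shifts.mean(axis=0)
        sd = expert_shifts.std(axis=0, ddof=1)
        return bool(np.all(np.abs(auto_shift - mean) <= 2 * sd))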
Minimally invasive laparoscopic surgeries are known to lead to improved outcomes, less scarring, and significantly
faster patient recovery compared with conventional open surgeries. Laparoscopes, used to visualize internal
anatomy and guide laparoscopic surgeries, however, remain limited in visualization capability. Not only do they provide
a relatively flat representation of the three-dimensional (3D) anatomy, but they also show only the exposed surfaces. A surgeon
is thus unable to see inside a structure, which limits the precision of current-generation minimally invasive surgeries and
is often a source of complications. To see inside a structure before dissecting it has been a long-standing need in
minimally invasive laparoscopic surgeries, a need that laparoscopy is fundamentally limited in meeting. In this work we
propose to use continuous computed tomography (CT) of the surgical field as a supplementary imaging tool to guide
laparoscopic surgeries. The recent emergence of 64-slice CT and its continuing evolution make it an ideal candidate for
four-dimensional (3D space + time) intraoperative imaging. We also propose a novel, elastic image registration-based
technique to keep the net radiation dose within acceptable levels. We have successfully created 3D renderings from
multislice CT corresponding to anatomy visible within the field of view of the laparoscope in a swine. These renderings
show the underlying vasculature in its latest intraoperative orientation. With additional developments, our
research has the potential to help improve the precision of laparoscopic surgeries further, reduce complications, and
expand the scope of minimally invasive surgeries.
Radiofrequency ablation (RFA) is emerging as the primary mode of treatment of unresectable malignant liver tumors.
With current intraoperative imaging modalities, quick, precise, and complete localization of lesions remains a challenge
for liver RFA. Fusion of intraoperative CT and preoperative PET images, which relies on PET and CT registration, can
produce a new image with complementary metabolic and anatomic data and thus greatly improve the targeting accuracy.
Unlike neurological images, alignment of abdominal images by combined PET/CT scanner is prone to errors as a result
of large nonrigid misalignment in abdominal images. Our use of a normalized mutual information-based 3D nonrigid
registration technique has proven powerful for whole-body PET and CT registration. We demonstrate here that this
technique is capable of acceptable abdominal PET and CT registration as well. In five clinical cases, both qualitative and
quantitative validation showed that the registration is robust and accurate. Quantitative accuracy was evaluated by
comparing the algorithm's result with alignments performed by clinical experts. The registration error is well within the
allowable margin for liver RFA. Study findings show the technique's potential to enable the augmentation of
intraoperative CT with preoperative PET to reduce procedure time, avoid repeating procedures, provide clinicians with
complementary functional/anatomic maps, avoid omitting dispersed small lesions, and improve the accuracy of tumor
targeting in liver RFA.
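As an illustration of the similarity measure driving the registration, the following Python sketch estimates normalized mutual information from a joint intensity histogram of two aligned image volumes. The bin count and the particular NMI variant, (H(A) + H(B)) / H(A, B), are common choices assumed here for illustration, not settings reported in the abstract.

    import numpy as np

    def normalized_mutual_information(img_a, img_b, bins=64):
        """NMI = (H(A) + H(B)) / H(A, B), estimated from a joint intensity
        histogram of two spatially aligned images or volumes.
        Bin count and NMI variant are illustrative assumptions."""
        hist, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
        p_ab = hist / hist.sum()          # joint probability estimate
        p_a = p_ab.sum(axis=1)            # marginal of image A
        p_b = p_ab.sum(axis=0)            # marginal of image B

        def entropy(p):
            p = p[p > 0]
            return -np.sum(p * np.log(p))

        return (entropy(p_a) + entropy(p_b)) / entropy(p_ab)

In a registration loop, this value would be maximized over the transformation parameters; higher NMI indicates better statistical correspondence between the PET and CT intensities.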
Real-time acquisition of 3D volumes is an emerging trend in medical imaging. True real-time 3D ultrasonic imaging is particularly valuable for echocardiography and trauma imaging as well as an intraoperative imaging technique for surgical navigation. Since the frame rate of ultrasonic imaging is fundamentally limited by the speed of sound, many schemes for forming multiple receive beams with a single transmit event have been proposed. With the advent of parallel receive beamforming, several architectures that form multiple (4-8) scan lines at a time have been suggested. Most of these architectures employ uniform sampling and input memory banks to store the samples acquired from all the channels. Some recent developments, such as crossed electrode arrays, coded excitation, and synthetic aperture imaging, facilitate forming an entire 2D plane with a single transmit event. These techniques are steadily raising frame rates toward true real-time 3D ultrasonic imaging. We present an FPGA-based scalable architecture capable of forming a complete scan plane in the time it usually takes to form a single scan line. Our current implementation supports 32 input channels per FPGA and up to 128 dynamically focused beam outputs. The desired focusing delay resolution is achieved using a hybrid scheme that combines nonuniform sampling of the analog channels with linear interpolation for nonsparse delays within a user-specified minimum sampling interval. Overall, our pipelined architecture is capable of processing the input RF data in an online fashion, thereby reducing the input storage requirements and potentially providing better image quality.
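The following Python sketch is a simplified software analogue of the receive-focusing step: it computes per-channel arrival times for one focal point and forms a single focused output sample by delay-and-sum, realizing fractional delays by linear interpolation between neighboring RF samples. It assumes uniformly sampled RF data and a 1D linear array, so it does not model the nonuniform analog sampling, the parallel beam formation, or the FPGA pipeline of the actual architecture; all names and parameters are illustrative.

    import numpy as np

    C = 1540.0  # assumed speed of sound in tissue, m/s

    def arrival_times(elem_x, focus):
        """Two-way time of flight for a point at `focus` = (x, z) in metres:
        transmit path from the array centre plus return path to each element."""
        fx, fz = focus
        tx_path = np.sqrt(fx ** 2 + fz ** 2)             # array centre -> focus
        rx_path = np.sqrt((elem_x - fx) ** 2 + fz ** 2)   # focus -> each element
        return (tx_path + rx_path) / C

    def focused_sample(rf, fs, times):
        """Form one dynamically focused output sample by delay-and-sum.
        rf    : (channels, samples) RF data, uniformly sampled at fs (Hz)
        times : per-channel arrival times (s)
        Fractional sample delays are realized by linear interpolation between
        the two nearest samples, a software stand-in for the hybrid
        coarse-delay + interpolation scheme described above."""
        n_ch, n_s = rf.shape
        t = times * fs
        i0 = np.clip(np.floor(t).astype(int), 0, n_s - 2)
        frac = t - np.floor(t)
        ch = np.arange(n_ch)
        return np.sum((1.0 - frac) * rf[ch, i0] + frac * rf[ch, i0 + 1])

Sweeping the focal point along depth for each beam direction would yield dynamically focused scan lines; in the hardware, all beams of a scan plane are formed concurrently rather than in this sequential fashion.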
Three-dimensional ultrasonic imaging, especially the emerging real-time version of it, is particularly valuable in medical applications such as echocardiography and surgical navigation. A known problem with ultrasound images is their high level of speckle noise. Anisotropic diffusion filtering has been shown to be effective in enhancing the visual quality of 3D ultrasound images and as preprocessing prior to advanced image processing. However, due to its arithmetic complexity and the sheer size of 3D ultrasound images, it is not possible to perform online, real-time anisotropic diffusion filtering using standard software implementations. We present an FPGA-based architecture that allows performing anisotropic diffusion filtering of 3D images at acquisition rates, thus enabling the use of this filtering technique in real-time applications, such as visualization, registration and volume rendering.
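For reference, a minimal Perona-Malik style anisotropic diffusion loop for a 3D volume is sketched below in Python/NumPy. The exponential edge-stopping function, the conductance parameter, the step size, and the iteration count are generic textbook choices assumed for illustration, not the settings of the hardware implementation.

    import numpy as np

    def anisotropic_diffusion_3d(vol, n_iter=10, kappa=30.0, lam=1.0 / 6.0):
        """Explicit Perona-Malik diffusion of a 3D volume.
        Each iteration lets intensity flow along the six face-neighbor
        directions, weighted by an edge-stopping conductance so that strong
        edges diffuse less than homogeneous, speckle-dominated regions."""
        vol = vol.astype(np.float32, copy=True)
        for _ in range(n_iter):
            update = np.zeros_like(vol)
            for axis in range(3):
                # Forward and backward differences with replicated borders.
                grad_f = np.diff(vol, axis=axis,
                                 append=np.take(vol, [-1], axis=axis))
                grad_b = np.diff(vol, axis=axis,
                                 prepend=np.take(vol, [0], axis=axis))
                # Edge-stopping conductance g = exp(-(|grad| / kappa)^2).
                c_f = np.exp(-(grad_f / kappa) ** 2)
                c_b = np.exp(-(grad_b / kappa) ** 2)
                update += c_f * grad_f - c_b * grad_b
            vol += lam * update  # lam <= 1/6 keeps the 3D explicit scheme stable
        return vol

Even this straightforward formulation shows why software struggles at acquisition rates: every iteration touches every voxel six times, which is the arithmetic load the FPGA pipeline absorbs in hardware.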
KEYWORDS: Digital filtering, 3D image processing, Convolution, Image processing, Field programmable gate arrays, Image filtering, Clocks, Surgery, Linear filtering, 3D acquisition
Median filtering and convolution constitute 60-70% of the preprocessing operations performed on digital images. Software implementations of 3D filters on general-purpose processors do not meet the speed requirements for real-time performance. Field Programmable Gate Arrays (FPGAs) support reconfigurable architectures that are sufficiently flexible to implement more than one operation in the same hardware, yielding higher speed for real-time execution. We present a linear systolic array architecture for median filtering that implements bit-serial searching and majority voting. The unique arrangement of line delay units endows the bit-serial median-finding algorithm with parallelism. A convolution unit, based on the fast embedded multiplier units in the FPGA and an optimized carry-save adder array, is also presented. The application of the above designs to 3D image preprocessing is described. A voxel rate of 220 MHz is achieved for median filtering and 277 MHz for convolution.
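A software model of the bit-serial majority-voting median scheme mentioned above is sketched below in Python. It processes the window values one bit plane at a time from the most significant bit down, which mirrors what the systolic array does in hardware; the function and variable names are illustrative, and the hardware details (line delay units, pipelining) are not modeled.

    def bitserial_median(window, bits=8):
        """Median of an odd-sized window of unsigned integers, found
        bit-serially from MSB to LSB by majority voting."""
        vals = list(window)
        n = len(vals)
        assert n % 2 == 1, "window size must be odd"
        forced = [None] * n   # None = value still matches the median prefix
        median = 0
        for b in range(bits - 1, -1, -1):
            # Each value votes with its current bit, or its forced bit once
            # it has diverged from the median prefix.
            votes = [(v >> b) & 1 if forced[i] is None else forced[i]
                     for i, v in enumerate(vals)]
            m = 1 if sum(votes) > n // 2 else 0   # majority bit = median bit
            median = (median << 1) | m
            for i, v in enumerate(vals):
                if forced[i] is None and ((v >> b) & 1) != m:
                    # Value has left the running prefix; from now on it always
                    # votes on its own side (0 if smaller, 1 if larger).
                    forced[i] = (v >> b) & 1
        return median

For example, bitserial_median([5, 2, 6, 1, 4], bits=3) returns 4, matching the sorted median, while using only per-bit majority decisions that map naturally onto the systolic hardware.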