In this paper, we present an on-the-fly surgical support system that provides augmented-reality guidance and can be used in quasi-unprepared operating rooms. The proposed system builds upon a multi-modality marker and a simultaneous localization and mapping technique to co-calibrate an optical see-through head-mounted display to a C-arm fluoroscopy system. Annotations on the 2-D X-ray images can then be rendered as virtual objects in 3-D, providing surgical guidance. In a feasibility study on a semi-anthropomorphic phantom, we found the accuracy of our system to be comparable to the traditional image-guided technique while substantially reducing both the number of acquired X-ray images and the procedure time. These promising results encourage further research on the interaction between virtual and real objects, which we believe will directly benefit the proposed method. Furthermore, we would like to explore the capabilities of our on-the-fly augmented-reality support system in a larger study directed towards common orthopaedic interventions.
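As a rough illustration of the described pipeline, the sketch below lifts a 2-D annotation from two calibrated X-ray views to a 3-D point and expresses it in the display's coordinate frame. This is a minimal sketch under stated assumptions, not the authors' implementation: the projection matrices P1 and P2, the co-calibration transform T_hmd_from_carm, and both helper functions are hypothetical placeholders, and standard linear (DLT) triangulation stands in for whatever method the system actually uses.

```python
# Hypothetical sketch: lift a 2-D X-ray annotation to 3-D and express it
# in the head-mounted display (HMD) frame. P1, P2 are assumed 3x4
# projection matrices of two C-arm views; T_hmd_from_carm is an assumed
# 4x4 rigid transform from the co-calibration step.
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Triangulate one 3-D point (homogeneous) from two 2-D annotations."""
    # Standard linear (DLT) triangulation: each view contributes two rows.
    A = np.vstack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X / X[3]                       # normalize homogeneous coordinate

def annotation_to_hmd(P1, P2, x1, x2, T_hmd_from_carm):
    """Map a 2-D annotation pair to a 3-D point in the HMD frame."""
    X_carm = triangulate(P1, P2, x1, x2)  # point in C-arm coordinates
    return T_hmd_from_carm @ X_carm       # point in HMD coordinates
```

Once expressed in the HMD frame, such a point could be handed to the display's rendering engine as a virtual object anchored to the patient.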
We previously introduced four fiducial-marker-based strategies to compensate for involuntary knee-joint motion during weight-bearing C-arm CT scanning of the lower body. The 2-D methods significantly reduced motion-related artifacts, but the 3-D methods performed best.
However, these methods increased examination time and patient discomfort because of the marker attachment process. Moreover, suboptimal marker placement may reduce marker detectability and therefore yield unstable motion estimates. To reduce overall patient discomfort, we developed a new image-based 2-D projection shifting method.
A C-arm cone-beam CT system was used to acquire projection images of five healthy volunteers at various flexion angles. Projection matrices for the horizontal scanning trajectory were calibrated using the Siemens standard PDS-2 phantom. The initial reconstruction was forward-projected using maximum intensity projections (MIP), yielding an estimate of a static scan. The measured projections were then registered to this estimate to obtain the 2-D projection shifts.
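A minimal sketch of this registration step is shown below: each measured projection is registered to the corresponding forward-projected MIP image to estimate a per-view 2-D shift, and the projections are resampled accordingly before reconstruction. Phase cross-correlation is an assumption standing in for the registration method, which the text does not specify, and both function names are placeholders.

```python
# Hypothetical sketch of image-based 2-D projection shift estimation.
# `measured` and `mip_estimate` are assumed to be sequences of 2-D
# arrays of equal shape, one per projection view.
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def estimate_projection_shifts(measured, mip_estimate, upsample=10):
    """Estimate a (rows, cols) shift for each projection view."""
    shifts = []
    for meas, ref in zip(measured, mip_estimate):
        # Subpixel shift of the measured projection relative to the
        # static-scan estimate, via phase cross-correlation.
        s, _, _ = phase_cross_correlation(ref, meas, upsample_factor=upsample)
        shifts.append(s)
    return np.asarray(shifts)

def apply_shifts(measured, shifts):
    """Resample each projection by its estimated shift before reconstruction."""
    return np.stack([nd_shift(p, s, order=1, mode="nearest")
                     for p, s in zip(measured, shifts)])
```

The upsampling factor trades runtime for subpixel shift resolution; since the compensation is a pure in-plane translation per view, no re-calibration of the projection matrices is needed.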
For the scan with the most motion, the proposed method reproduced the marker-based results with a mean error of 2.90 mm ± 1.43 mm, compared to a mean error of 4.10 mm ± 3.03 mm in the uncorrected case. The bone contours surrounding the modeling-clay layer were also improved. The proposed method is a first step towards automatic, image-based, marker-free motion compensation.