This PDF file contains the front matter associated with SPIE-IS&T Proceedings Volume 6814, including the Title Page, Copyright information, Table of Contents, Introduction (if any), and the Conference Committee listing.
We survey applications of classical sampling theory and of time-sequential sampling theory and its recent extensions, respectively, in two complementary areas: first, to reduce acquisition requirements for dynamic imaging below those predicted by classical theory; and second, to reduce the computation for tomographic reconstruction from O(N³) to O(N² log N) for an N × N image, with similar acceleration for 3D images. In both areas, the savings demonstrated in practical examples exceed an order of magnitude.
In the remote sensing context, the goal of imaging polarimetry is to map the state of polarization of a scene of interest. The polarization state of a scene can be represented by the Stokes parameters. Since the Stokes parameters are not directly measurable, one must first make several individual measurements and then infer the Stokes parameters from them. We approach the Stokes parameter construction problem using penalized-likelihood estimation. Given the measured linearly polarized images, what is the optimal means by which to deblur, denoise, and construct the Stokes parameters? In traditional image restoration one attempts to restore the blurred and noise-corrupted data directly. In the case of imaging polarimetry, we must therefore answer the question of whether it is optimal to restore the measured data and then form the Stokes images, or to restore the Stokes images directly. An alternative approach is to estimate the Stokes parameters directly. We define our cost function for reconstruction as the sum of a weighted least-squares data-fit penalty and a regularization penalty. We show that for quadratic regularization the estimators based on the Stokes and intensity images can be made equal by an appropriate choice of regularization parameters. It is empirically shown that, when using edge-preserving regularization, estimating the Stokes parameters directly leads to somewhat lower error.
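For concreteness, a penalized weighted least-squares cost of the kind described above can be written as follows; the notation is ours rather than the authors' (A is the measurement/blurring operator, W a noise-derived weighting matrix, R the regularizer, and β the regularization parameter):

```latex
% Hedged sketch of the cost function: weighted least-squares data fit plus
% a regularization penalty on the Stokes images s, given measurements y.
\hat{s} \;=\; \arg\min_{s}\; \tfrac{1}{2}\,(y - A s)^{T} W\, (y - A s) \;+\; \beta\, R(s)
```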
Medical imaging typically requires the reconstruction of a limited region of interest (ROI) to obtain a high-resolution image of the anatomy of interest. Although targeted reconstruction is straightforward for analytical reconstruction methods, it is more complicated for statistical iterative techniques, which must reconstruct all objects in the field of view (FOV) to account for all sources of attenuation along the ray paths from x-ray source to detector. A brute-force approach would require reconstruction of the full field of view at high resolution, but with prohibitive computational cost. In this paper, we propose a multi-resolution approach to accelerate targeted iterative reconstruction using the non-homogeneous ICD (NH-ICD) algorithm. NH-ICD aims at speeding up convergence of the coordinate descent algorithm by preferentially selecting those voxels most in need of updating. To further optimize ROI reconstruction, we use a multi-resolution approach which combines three separate improvements. First, we introduce the modified weighted NH-ICD algorithm, which weights the voxel selection criterion according to the position of the voxel relative to the ROI to speed up convergence within the ROI. Second, we propose a simple correction to the error sinogram to correct for inconsistencies between resolutions when the forward model is not scale invariant. Finally, we leverage the flexibility of the ICD algorithm to add selected edge pixels outside the ROI to the ROI reconstruction in order to minimize transition artifacts at the ROI boundary. Experiments on clinical data illustrate how each component of the method improves convergence speed and image quality.
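A minimal sketch of the weighted voxel-selection idea is given below, assuming a per-voxel update-magnitude map and an ROI mask; the function name, the weighting scheme, and all parameter values are illustrative, not taken from the paper.

```python
# Hedged sketch (not the authors' code): one way to implement a weighted
# NH-ICD voxel-selection step. `update_mag` holds the magnitude of the last
# update for every voxel, and voxels inside the ROI are favored.
import numpy as np

def select_voxels(update_mag, roi_mask, frac=0.05, outside_weight=0.1):
    """Return flat indices of the voxels to visit in the next sub-iteration."""
    # Weight the selection criterion: full weight inside the ROI,
    # down-weighted (but not zero) outside so the background still converges.
    weights = np.where(roi_mask, 1.0, outside_weight)
    score = weights * update_mag
    n_select = max(1, int(frac * score.size))
    # Pick the voxels most "in need of updating" according to the weighted score.
    return np.argsort(score.ravel())[::-1][:n_select]

# Example usage with a toy 2D slice:
update_mag = np.random.rand(64, 64)
roi_mask = np.zeros((64, 64), dtype=bool)
roi_mask[20:44, 20:44] = True
indices = select_voxels(update_mag, roi_mask)
```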
We present an algorithm to generate samples from probability distributions on the space of curves. Traditional curve evolution
methods use gradient descent to find a local minimum of a specified energy functional. Here, we view the energy
functional as a negative log probability distribution and sample from it using a Markov chain Monte Carlo (MCMC) algorithm.
We define a proposal distribution by generating smooth perturbations to the normal of the curve, update the curve
using level-set methods, and show how to compute the transition probabilities to ensure that we compute samples from the
posterior. We demonstrate the benefits of sampling methods (such as robustness to local minima, better characterization
of multi-modal distributions, and access to some measures of estimation error) on medical and geophysical applications.
We then use our sampling framework to construct a novel semi-automatic segmentation approach which takes in partial
user segmentations and conditionally simulates the unknown portion of the curve. This allows us to dramatically lower the
estimation variance in low-SNR and ill-posed problems.
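The sketch below illustrates the sampling idea on a discretized closed curve with a symmetric, smoothed normal-perturbation proposal; the paper's implementation uses level-set methods and carefully computed transition probabilities, so this is only a structural analogue with a placeholder energy.

```python
# Hedged sketch (our simplification, not the paper's level-set implementation):
# Metropolis-Hastings sampling of a discretized closed curve, where proposals
# perturb each vertex along its normal by a smoothed random field.
import numpy as np

def energy(curve):
    # Placeholder energy: curve length (stand-in for the paper's functional).
    return np.sum(np.linalg.norm(np.roll(curve, -1, axis=0) - curve, axis=1))

def normals(curve):
    tang = np.roll(curve, -1, axis=0) - np.roll(curve, 1, axis=0)
    n = np.stack([tang[:, 1], -tang[:, 0]], axis=1)
    return n / (np.linalg.norm(n, axis=1, keepdims=True) + 1e-12)

def smooth_noise(n_pts, sigma=0.5, width=5):
    noise = sigma * np.random.randn(n_pts)
    kernel = np.ones(width) / width
    return np.convolve(np.concatenate([noise, noise[:width]]), kernel, 'same')[:n_pts]

def mh_step(curve, temperature=1.0):
    proposal = curve + smooth_noise(len(curve))[:, None] * normals(curve)
    # Symmetric proposal, so the acceptance ratio reduces to the energy difference.
    accept = np.log(np.random.rand()) < (energy(curve) - energy(proposal)) / temperature
    return (proposal, True) if accept else (curve, False)

theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
curve = np.stack([np.cos(theta), np.sin(theta)], axis=1)
for _ in range(1000):
    curve, _ = mh_step(curve)
```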
Most basic change detection algorithms use a pixel-based approach. While such an approach is well suited to monitoring large-area changes (such as urban growth) in low-resolution images, an object-based approach seems more relevant when the change detection is specifically aimed at targets such as small buildings and vehicles.
In this paper, we present an approach that mixes radiometric and geometric features to qualify the changed zones. The goal is to establish links (appearance, disappearance, substitution, ...) between the detected changes and the underlying objects. We proceed by first clustering the change map (containing each pixel's bitemporal radiosity) into different classes using the entropy-kmeans algorithm. Assuming that most man-made objects have a polygonal shape, a polygonal approximation algorithm is then used to characterize the shapes of the resulting zones, allowing us to refine the initial rough classification by integrating the polygon orientations into the state space. Tests are currently being conducted on Quickbird data.
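The clustering step might look like the following sketch, which substitutes scikit-learn's standard k-means for the entropy-kmeans variant used in the paper and uses synthetic data throughout.

```python
# Hedged sketch: clustering a bitemporal change map with standard k-means
# (the paper uses an entropy-based k-means variant; sklearn's KMeans is a
# stand-in to illustrate the first step only).
import numpy as np
from sklearn.cluster import KMeans

# Toy bitemporal data: two co-registered radiometric images, shape (H, W).
img_t1 = np.random.rand(128, 128)
img_t2 = np.random.rand(128, 128)

# Each pixel is described by its bitemporal vector (value at t1, value at t2).
features = np.stack([img_t1.ravel(), img_t2.ravel()], axis=1)

labels = KMeans(n_clusters=4, n_init=10, random_state=0).fit_predict(features)
change_classes = labels.reshape(img_t1.shape)  # per-pixel change class map
```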
In this paper, we propose a new topology extraction approach for 3D objects. We choose a normalized robust
and simplified geodesic-based Morse function to define skeletal Reeb graphs of 3D objects. In addition to scale
invariance, we ensure, by using a geodesic distance, the invariance of these graphs to all isometric transforms.
In our Reeb graph construction procedure, we introduce important improvements and advantages over existing
techniques. We define an efficient sampling rate based on the characteristic resolution intrinsic to each 3D object.
Then, we provide a geometry preserving approach by replacing the traditional intervals of a Morse function by
its exact level curves. Moreover, we take advantage of the resulting ordered adjacency matrices that describe our Reeb graphs to introduce a new measure of similarity between the corresponding objects. Experimental results illustrate the computational simplicity and efficiency of the proposed technique for the extraction of topological Reeb graphs. The experiments also show the robustness of this approach against noise and object remeshing.
Improvised explosive devices (IEDs) are common and lethal instruments of terrorism, and linking a terrorist entity to a
specific device remains a difficult task. In the effort to identify persons associated with a given IED, we have
implemented a specialized content based image retrieval system to search and classify IED imagery. The system makes
two contributions to the art. First, we introduce a shape-based matching technique exploiting shape, color, and texture
(wavelet) information, based on novel vector field convolution active contours and a novel active contour initialization
method which treats coarse segmentation as an inverse problem. Second, we introduce a unique graph theoretic approach
to match annotated printed circuit board images for which no schematic or connectivity information is available. The
shape-based image retrieval method, in conjunction with the graph theoretic tool, provides an efficacious system for
matching IED images. For circuit imagery, the basic retrieval mechanism has a precision of 82.1% and the graph based
method has a precision of 98.1%. As of the fall of 2007, the working system has processed over 400,000 case images.
We apply stabilized inverse diffusion equations (SIDEs) to segment microscopy images of materials to aid in
analysis of defects. We extend SIDE segmentation methods and demonstrate the effectiveness of our approaches
to two material analysis tasks. We first develop a method to successfully isolate the textured area of a solidification
defect to pixel accuracy. The second task involves utilizing multiple illuminations of the same structure of a
polycrystalline alloy. Our novel approach features the fusion of data extracted from each of these images to
create a composite segmentation which effectively represents all texture boundaries visible in any of the images.
These two methods both propose new techniques to incorporate multiple images to produce segmentations.
By omitting local decay and phase evolution,
traditional MRI models each datum as a sample
from k-space so that reconstruction can be implemented
by FFTs. Single-shot parameter assessment by
retrieval from signal encoding (SS-PARSE) acknowledges
local decay and phase evolution, so it models
each datum as a sample from (k, t)-space rather than
k-space. Local decay and frequency vary continuously
in space. Because of this, discrete models in space
can cause artifacts in the reconstructed parameters.
Increasing the resolution of the reconstructed parameters
can more accurately capture the spatial variations,
but the resolution is limited not only by computational
complexity but also by the size of the acquired data.
For a limited data set used for reconstruction, simply
increasing the resolution may cause the reconstruction
to become an underdetermined problem. This paper
presents a solution to this problem based on cubic
convolution interpolation.
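For reference, the sketch below implements the standard (Keys) cubic convolution kernel in one dimension; how the paper embeds this interpolation in the SS-PARSE reconstruction is not shown here, and the example data are illustrative.

```python
# Hedged sketch: the standard cubic convolution (Keys) interpolation kernel,
# shown here for 1-D resampling; the paper applies the idea to represent
# continuously varying parameter maps on a finer grid than the acquired data.
import numpy as np

def cubic_kernel(s, a=-0.5):
    """Keys' cubic convolution kernel with free parameter a (commonly -0.5)."""
    s = np.abs(s)
    out = np.zeros_like(s, dtype=float)
    near = s <= 1
    far = (s > 1) & (s < 2)
    out[near] = (a + 2) * s[near]**3 - (a + 3) * s[near]**2 + 1
    out[far] = a * s[far]**3 - 5 * a * s[far]**2 + 8 * a * s[far] - 4 * a
    return out

def interpolate(samples, x):
    """Interpolate uniformly spaced `samples` at (possibly fractional) positions x."""
    idx = np.arange(len(samples))
    # Each output value is a weighted sum of samples within two grid spacings.
    weights = cubic_kernel(x[:, None] - idx[None, :])
    return weights @ samples

coarse = np.sin(np.linspace(0, np.pi, 16))
fine = interpolate(coarse, np.linspace(0, 15, 61))
```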
This paper presents the application of the expectation-maximization/maximization of the posterior marginals
(EM/MPM) algorithm to signal detection for functional MRI (fMRI). On the basis of assumptions about fMRI 3-D image
data, a novel analysis method is proposed and applied to synthetic data and human brain data. Synthetic data analysis is
conducted using two statistical noise models (white and autoregressive of order 1) and, for low contrast-to-noise ratio
(CNR) data, reveals better sensitivity and specificity for the new method than for the traditional General Linear Model
(GLM) approach. When applied to human brain data, functional activation regions are found to be consistent with those
obtained using the GLM approach.
We have previously developed a novel framework for the analysis of single-slice functional magnetic resonance imaging
(fMRI) data that identifies multi-pixel regions of activation through iterative segmentation-based optimization over
hemodynamic response (HDR) estimates, generated at the level of both individual pixels and regional groupings.
Through the addition of a correction for the disparate sampling times associated with multi-slice acquisitions in fMRI,
the algorithm has been extended to permit analysis of full volumetric data. Additional improvement in performance is
achieved through inclusion of an estimate of the covariance matrix of the fMRI data, previously assumed to be
proportional to the identity matrix across all regions. Simulations using synthetic activation embedded in autoregressive
noise reveal the proposed procedure to be more sensitive and selective than conventional fMRI analysis methods
(reference set: general linear model test, GLM; independent component analysis, ICA; principal component analysis,
PCA) in identification of active regions over the range of average contrast-to-noise ratios of 0.7 to 2.
Recent technological advances have contributed to the advent of phase contrast magnetic resonance imaging
(PCMRI) as standard practice in clinical environments. In particular, decreased scan times have made using the modality
more feasible. PCMRI is now a common tool for flow quantification, and for more complex vector field analyses that
target the early detection of problematic flow conditions. Segmentation is one component of this type of application that
can impact the accuracy of the final product dramatically. Vascular segmentation, in general, is a long-standing problem
that has received significant attention. Segmentation in the context of PCMRI data, however, has been explored less and
can benefit from object-based image processing techniques that incorporate fluid-specific information. Here we present a fuzzy rule-based adaptive vector median filtering (FAVMF) algorithm that, in combination with active contour modeling, facilitates high-quality PCMRI segmentation while mitigating the effects of noise.
The FAVMF technique was tested on 111 synthetically generated PCMRI slices and on 15 patients with congenital heart disease. The results were compared to those of other multi-dimensional filters, namely the adaptive vector median filter, the adaptive vector directional filter, and the scalar low-pass filter commonly used in PCMRI applications. FAVMF significantly outperformed the standard filtering methods (p < 0.0001). Two conclusions can be drawn from these results: a) filtering should be performed after vessel segmentation of PCMRI; b) vector-based filtering methods should be used instead of scalar techniques.
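The sketch below shows a plain vector median filter for a two-component velocity field; FAVMF adds fuzzy rule-based adaptation on top of this basic operation, which is not reproduced here.

```python
# Hedged sketch: a plain (non-adaptive, non-fuzzy) vector median filter for a
# 2-D velocity field, to illustrate the vector filtering idea behind FAVMF.
import numpy as np

def vector_median_filter(field, window=3):
    """field: (H, W, C) array of velocity vectors; returns filtered field."""
    h, w, c = field.shape
    r = window // 2
    padded = np.pad(field, ((r, r), (r, r), (0, 0)), mode='edge')
    out = np.empty_like(field)
    for i in range(h):
        for j in range(w):
            nbhd = padded[i:i + window, j:j + window].reshape(-1, c)
            # The vector median is the neighborhood vector minimizing the sum
            # of distances to all other vectors in the window.
            dists = np.linalg.norm(nbhd[:, None, :] - nbhd[None, :, :], axis=2).sum(axis=1)
            out[i, j] = nbhd[np.argmin(dists)]
    return out

velocity = np.random.randn(32, 32, 2)          # toy 2-component velocity field
smoothed = vector_median_filter(velocity)
```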
This paper seeks to bridge the two major algorithmic approaches to sparse signal recovery from an incomplete set of linear measurements: L1-minimization methods and iterative methods (Matching Pursuits). We find a simple regularized version of Orthogonal Matching Pursuit (ROMP) which has advantages of both approaches: the speed and transparency of OMP and the strong uniform guarantees of L1-minimization. Our algorithm ROMP reconstructs a sparse signal in a number of iterations linear in the sparsity, and the reconstruction is exact provided the linear measurements satisfy the Uniform Uncertainty Principle. In the case of inaccurate measurements and approximately sparse signals, the noise level of the recovery is proportional to √(log n) ‖e‖₂, where e is the error vector.
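As a point of reference, plain Orthogonal Matching Pursuit, the greedy baseline that ROMP regularizes, can be sketched as follows; ROMP's regularized selection step is omitted, and the problem sizes are illustrative.

```python
# Hedged sketch: plain Orthogonal Matching Pursuit. ROMP adds a regularized
# selection step (choosing a set of comparable-magnitude correlations per
# iteration); this simplified baseline shows the shared greedy structure.
import numpy as np

def omp(A, y, sparsity):
    """Recover an approximately `sparsity`-sparse x from y ≈ A x."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    for _ in range(sparsity):
        # Greedy step: pick the column most correlated with the residual.
        correlations = np.abs(A.T @ residual)
        support.append(int(np.argmax(correlations)))
        # Least-squares fit on the current support, then update the residual.
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x[support] = coef
    return x

# Toy example: recovery of a 5-sparse signal from 60 random measurements.
rng = np.random.default_rng(0)
A = rng.standard_normal((60, 256)) / np.sqrt(60)
x_true = np.zeros(256)
x_true[rng.choice(256, 5, replace=False)] = rng.standard_normal(5)
x_hat = omp(A, A @ x_true, sparsity=5)
```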
We consider the image reconstruction problem when the original image is assumed to be sparse and when partial
knowledge of the point spread function (PSF) is available. In particular, we are interested in recovering the
magnetization density given magnetic resonance force microscopy (MRFM) data, and we present an iterative
alternating minimization algorithm (AM) to solve this problem. A smoothing penalty is introduced on allowable
PSFs to improve the reconstruction. Simulations demonstrate its performance in reconstructing both the image and the unknown point spread function. In addition, we develop an optimization transfer approach to the total variation (TV) blind deconvolution algorithm presented by Chan and Wong. We compare the performance of the AM algorithm to the blind TV algorithm as well as to a TV-based majorization-minimization algorithm developed by Figueiredo et al.
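A structural sketch of such an alternating minimization is shown below, using simple quadratic penalties and Fourier-domain updates; the paper's actual penalties (and the TV-based comparisons) differ, so this is only meant to illustrate the alternation between image and PSF updates.

```python
# Hedged sketch: alternating minimization for blind deconvolution with simple
# quadratic penalties, solved in the Fourier domain. The paper's AM algorithm
# uses sparsity and PSF-smoothness penalties; this is only a structural analogue.
import numpy as np

def am_blind_deconv(y, iters=50, lam_x=1e-2, lam_h=1e-2):
    Y = np.fft.fft2(y)
    H = np.ones_like(Y)            # initial PSF spectrum (identity blur)
    for _ in range(iters):
        # Image step: minimize ||H.X - Y||^2 + lam_x ||X||^2 (element-wise).
        X = np.conj(H) * Y / (np.abs(H) ** 2 + lam_x)
        # PSF step: minimize ||H.X - Y||^2 + lam_h ||H||^2 given X.
        H = np.conj(X) * Y / (np.abs(X) ** 2 + lam_h)
    x = np.real(np.fft.ifft2(X))
    h = np.real(np.fft.ifft2(H))
    return x, h

# Toy usage: deblur a random image blurred by a kernel unknown to the algorithm.
rng = np.random.default_rng(1)
x_true = rng.random((64, 64))
kernel = np.zeros((64, 64))
kernel[:3, :3] = 1 / 9.0
y = np.real(np.fft.ifft2(np.fft.fft2(x_true) * np.fft.fft2(kernel)))
x_hat, h_hat = am_blind_deconv(y)
```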
Maximum a posteriori (MAP) estimation has been shown to be an effective method for reconstructing images
from optical diffusion tomography data. However, one disadvantage of MAP reconstruction is that it typically
requires the use of iterative methods, which are computationally intensive. Fortunately, direct reconstruction
of MAP images is possible when the forward model is linear (or linearized) and the noise and image prior are
assumed Gaussian. These non-iterative MAP reconstruction techniques only require the multiplication of an
inverse matrix by a data vector to compute the reconstruction, but they depend on a combination of lossy source
coding techniques and sparse matrix transforms to make the required matrix-vector product computation both
computationally and memory efficient.
In this paper, we show examples of how non-iterative MAP reconstruction methods can be used to dramatically
reduce computation and storage for MAP reconstruction. Simulations of fluorescence optical diffusion
tomography (FODT) measurements and corresponding reconstructions are used to demonstrate the potential
value of these techniques. Numerical examples show the non-iterative MAP reconstruction can substantially
reduce both storage and computation, as compared to traditional iterative reconstruction methods.
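For reference, under a linear forward model A with Gaussian noise (covariance Λ) and a zero-mean Gaussian image prior (covariance Σx), the MAP estimate reduces to the single matrix-vector product below, which is the quantity the non-iterative approach computes efficiently (notation ours, not the paper's):

```latex
% MAP estimate under a linear forward model with Gaussian noise and prior
% (zero-mean prior assumed for brevity; notation is ours, not the paper's).
\hat{x}_{\mathrm{MAP}} \;=\; \bigl(A^{T}\Lambda^{-1}A + \Sigma_x^{-1}\bigr)^{-1} A^{T}\Lambda^{-1}\, y
```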
The Non-Local Means (NLM) method of denoising has received considerable attention in the image processing
community due to its performance, despite its simplicity. In this paper, we show that NLM is a zero-th order
kernel regression method, with a very specific choice of kernel. As such, it can be generalized. The original
method of NLM, we show, implicitly assumes local constancy of the underlying image data. Once put in the
context of kernel regression, we extend the existing Non-Local Means algorithm to higher orders of regression, which allows us to approximate the image data locally by a polynomial or other localized basis of a given order.
These extra degrees of freedom allow us to perform better denoising in texture regions. Overall the higher order
method displays consistently better denoising capabilities compared to the zero-th order method. The power of
the higher order method is amply illustrated with the help of various denoising experiments.
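The zero-th order baseline discussed above, classic Non-Local Means with a patch-similarity kernel, can be sketched as follows; the higher-order extensions of the paper are not reproduced here, and all parameter values are illustrative.

```python
# Hedged sketch: classic (zero-th order) Non-Local Means, the baseline that the
# paper reinterprets as kernel regression with a patch-similarity kernel.
import numpy as np

def nlm(img, patch=3, search=7, h=0.1):
    r, s = patch // 2, search // 2
    pad = r + s
    padded = np.pad(img, pad, mode='reflect')
    out = np.zeros_like(img, dtype=float)
    H, W = img.shape
    for i in range(H):
        for j in range(W):
            ci, cj = i + pad, j + pad
            ref = padded[ci - r:ci + r + 1, cj - r:cj + r + 1]
            num, den = 0.0, 0.0
            for di in range(-s, s + 1):
                for dj in range(-s, s + 1):
                    ni, nj = ci + di, cj + dj
                    cand = padded[ni - r:ni + r + 1, nj - r:nj + r + 1]
                    # Kernel weight from patch (dis)similarity.
                    w = np.exp(-np.sum((ref - cand) ** 2) / (h ** 2))
                    num += w * padded[ni, nj]
                    den += w
            out[i, j] = num / den
    return out

noisy = np.clip(np.random.rand(32, 32) + 0.1 * np.random.randn(32, 32), 0, 1)
denoised = nlm(noisy)
```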
In this work we wish to recover an unknown image from a blurry version. We solve this inverse problem by energy minimization and regularization. We seek a solution of the form u + v, where u is a function of bounded variation (cartoon component), while v is an oscillatory component (texture), modeled by a Sobolev function with negative degree of differentiability. Experimental results show that this cartoon + texture model better recovers textured details in natural images, by comparison with the more standard models where the unknown is restricted only to the space of functions of bounded variation.
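Schematically, and in our notation rather than the authors' (K the blur operator, f the observed image, s > 0 the order of the negative Sobolev norm, λ and μ weights), the decomposition is obtained by minimizing an energy of the form:

```latex
% Hedged sketch of a cartoon + texture deblurring energy: u in BV (cartoon),
% v in a negative-order Sobolev space (texture), K the blurring operator,
% f the observed blurry image; mu and lambda are regularization weights.
\inf_{u,\,v}\; \mu \,|u|_{BV} \;+\; \lambda \,\|v\|_{H^{-s}} \;+\; \tfrac{1}{2}\,\|f - K(u+v)\|_{L^2}^{2}
```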
In this paper, we propose two methods to determine the canonical views of 3D objects: the minimum description length (MDL) criterion and a compressive sensing method. The MDL criterion searches for the description length that achieves a balance between model accuracy and parsimony. It takes the form of the sum of a likelihood term and a penalizing term, where the likelihood favors model accuracy, in the sense that more views assist the description of an object, while the second term penalizes lengthy descriptions to prevent overfitting of the model. In order to devise the likelihood term, we propose a model that represents a 3D object as the weighted sum of multiple range images, which is also used in the second method to determine the canonical views. The compressive sensing method offers an intelligent way of parsimoniously sampling an object. We make a direct inference from the work of Donoho1 and Candes,2 and adapt it to our model. Each range image is viewed as a projection, or a sample, of a 3D model, and by using compressive sensing theory we are able to reconstruct the object with overwhelming probability by sensing the object sparsely in a random manner. Compressive sensing differs from traditional compression in that the former compresses at the sampling stage, whereas the latter collects a large number of samples and carries out compression afterwards. The compressive sensing scheme is particularly useful when the number of sensors is limited or when the sampling machinery costs significant resources or time.
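In the spirit of the description above, an MDL-style criterion for a set V of canonical views can be written as a likelihood term plus a penalty that grows with the number of views; the exact form used in the paper is not reproduced here, and the notation below is ours:

```latex
% Hedged sketch of an MDL-style criterion for selecting a set V of canonical
% views: data-fit (likelihood) term plus a penalty growing with the number of
% views |V|; alpha is a trade-off constant (notation ours).
\hat{V} \;=\; \arg\min_{V}\; \Bigl[\, -\log p\bigl(\text{object} \mid V\bigr) \;+\; \alpha\,|V|\,\log N \Bigr]
```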
Multi-photon microscopy has provided biologists with unprecedented opportunities for high resolution imaging deep
into tissues. Unfortunately, deep-tissue multi-photon microscopy images are in general noisy since they are acquired at low photon counts. To aid in the analysis and segmentation of such images, it is sometimes necessary to first enhance
the acquired images. One way to enhance an image is to find the maximum a posteriori (MAP) estimate of each pixel
comprising an image, which is achieved by finding a constrained least squares estimate of the unknown distribution. In
arriving at the distribution it is assumed that the noise is Poisson distributed, the true but unknown pixel values assume a
probability mass function over a finite set of non-negative values, and since the observed data also assumes finite values
because of low photon counts, the sum of the probabilities of the observed pixel values (obtained from the histogram of
the acquired pixel values) is less than one. Experimental results demonstrate that it is possible to closely estimate the
unknown probability mass function with these assumptions.
In this paper, we present a new methodology for calculation of a 2D projection operator for emission tomography
using a content-adaptive mesh model (CAMM). A CAMM is an efficient image representation based on adaptive
sampling and linear interpolation, wherein non-uniform image samples are placed most densely in regions having fine
detail. We have studied CAMM in recent years and shown that a CAMM is an efficient tool for data representation and
tomographic reconstruction. In addition, it can also provide a unified framework for tomographic reconstruction of organs
(e.g., the heart) that undergo non-rigid deformation. In this work we develop a projection operator model suitable for a
CAMM representation such that it accounts for several major degradation factors in data acquisition, namely object
attenuation and depth-dependent blur in detector-collimator response. The projection operator is calculated using a ray-tracing
algorithm. We tested the developed projection operator by using Monte Carlo simulation for single photon
emission tomography (SPECT). The methodology presented here can also be extended to transmission tomography.
Tunable diode laser absorption spectroscopy (TDLAS) shows promise for in situ monitoring in high-speed flows.
However, the dynamic nature of typical flows in supersonic combustors, gas turbine engines, and augmenters can also lead to inhomogeneities that cannot be captured by a single line-of-sight TDLAS measurement. Instead, multiple measurements varied over several spatial locations need to be made. In the current study, the shock train structure in the isolator section of the Research Cell 18 supersonic combustion facility at Wright-Patterson AFB is measured. Although only two view angles are available for measurement, multiple absorption features along with a priori computational fluid dynamics (CFD) simulations enable estimates of two-dimensional flow features to be formed. Vector quantization/k-means data clustering is used to identify key flow features from the temporal history of the raw sinograms. Through the
use of multiple absorption features that are measured nearly simultaneously, an approximate two-dimensional image can
be formed. This image can be further refined through the use of an optimal set of basis functions that can be derived
from a set of CFD simulations that describes the flow shapes.
Single particle reconstruction is often employed for 3-D reconstruction of diverse macromolecules. However, the
algorithm requires a good initial guess from a priori information to guarantee the convergence to the correct
solution. This paper describes a novel model-free 3-D reconstruction algorithm that exploits the symmetry and sparsity of the unknown structure. In particular, we develop an accurate and fully automatic iterative algorithm for 3-D reconstruction of unknown helix structures. Because the macromolecule structure has only sparse support in real space and the helical symmetry provides several symmetric views from a single micrograph, a reasonable-quality 3-D reconstruction can be obtained from the limited views using compressed sensing theory. Furthermore, the correct helix parameters usually provide the maximal variance of the reconstructed volume, facilitating the parameter estimation. Remarkably, the search space of helix parameters can be drastically reduced by exploiting the diffraction pattern. With the estimated helix parameters and an additional 3-D registration, the multiple helix segments can be combined for an optimal-quality reconstruction. Experimental results using synthetic and real helix data confirm that our algorithm provides superior reconstruction of 3-D helical structures.
A new iterative algorithm for quantitative image reconstruction for differential-interference contrast (DIC) microscopy
is presented, along with simulations demonstrating key properties of the algorithm. The algorithm is an
alternating minimization (AM) algorithm based on a diffraction imaging model for DIC images. The algorithm
computes a specimen's complex transmittance function (magnitude and phase) from the DIC images. This new
method extends the rotational diversity (RD) method developed by Preza that is based on the assumption that
the specimen does not absorb light (or that its magnitude is constant and can be ignored) and thus only the
specimen's phase function is computed from rotationally-diverse DIC images. The AM approach lifts the optimization
problem to a higher dimensional space that is intimately tied to the underlying physics. The variables
that are introduced in this lifting process are the unknown, unmeasurable phases on the data. The AM algorithm
iterates between estimating the image given the phase of the data and estimating the phase given the image.
The framework allows constraints and penalties on the magnitude and phase estimates to be incorporated in a
principled manner. The performance of the AM method is evaluated using simulated noiseless data. Simulation
results compare the specimen magnitude (absorption) and phase estimated with the AM method to the true specimen parameters, showing good quantitative agreement in terms of mean square error and difference images. The estimated phase is a significant improvement over the phase estimated with the RD method from the same DIC images, as shown in a comparison between the two methods. The AM method provides a novel approach to extracting quantitative phase and amplitude information of the same field of view from a single microscope mode.
It is quite common for acquired images to be noisy, and image filtering is then a necessary enhancement step. Usually, image filtering effectiveness is characterized in terms of MSE or PSNR, although it is now well understood that these criteria do not always correspond adequately to the visual perception of processed images. Recently, several new measures of image quality have been proposed. In particular, two metrics, called PSNR-HVS and PSNR-HVS-M, were designed and successfully tested. Both take into account the varying sensitivity of the human eye to spatial frequencies, and the latter also accounts for masking effects. Using these two metrics as well as traditional PSNR and the DCTune metric used by NASA, we have analyzed the performance of five different filters (standard mean, median, sigma, Lee, and DCT-based filters) for a set of test images corrupted by additive Gaussian noise with a wide range of variance values. It is shown that there are many situations in which PSNR improves after filtering while one or all of the other metrics indicate decreasing image quality. Most often this happens when the noise variance is small and/or the image contains texture. Comparisons show that the DCT-based filter commonly outperforms the other considered filters in terms of the visual quality of the denoised image. At the same time, the standard mean filter produces the worst visual quality of the processed images, even when its scanning window size is 3x3.
In the past decade, information theory has been studied extensively in computational imaging. In particular,
image matching by maximizing mutual information has been shown to yield good results in multimodal image
registration. However, there have been few rigorous studies to date that investigate the statistical aspect of
the resulting deformation fields. Different regularization techniques have been proposed, sometimes generating
deformations very different from one another. In this paper, we present a novel model for multimodal image
registration. The proposed method minimizes a purely information-theoretic functional consisting of mutual
information matching and unbiased regularization. The unbiased regularization term measures the magnitude of
deformations using either asymmetric Kullback-Leibler divergence or its symmetric version. The new multimodal
unbiased matching method, which allows for large topology-preserving deformations, was tested using pairs of two- and three-dimensional serial MRI images. We compared the results obtained using the proposed model to
those computed with a well-known mutual information based viscous fluid registration. A thorough statistical
analysis demonstrated the advantages of the proposed model over the multimodal fluid registration method when
recovering deformation fields and corresponding Jacobian maps.
Dietary intake provides valuable insights for mounting intervention programs for prevention of disease. With
growing concern for adolescent obesity, the need to accurately measure diet becomes imperative. Assessment
among adolescents is problematic as this group has irregular eating patterns and less enthusiasm for recording
food intake. Preliminary studies among adolescents suggest that innovative use of technology may improve
the accuracy of diet information from young people. In this paper, we propose a novel food record method
using a mobile device that will provide an accurate account of daily food and nutrient intake among adolescents.
Our approach includes the use of image analysis tools for identification and quantification of food consumption.
Images obtained before and after food is consumed can be used to estimate the diet of an individual. In this
paper we describe our initial results and indicate the potential of the proposed system.
We propose a new method for analyzing fMRI (functional magnetic resonance imaging) data based on locally
linear embeddings (LLE). The LLE method is useful for analyzing data when there is a local structure
intrinsic to the measurements, allowing each measurement to be reconstructed from its neighboring points only.
We develop a method to extract the underlying temporal signal in fMRI experiments based on LLE.
Simulations show that improved results can be obtained under certain conditions when compared to
traditional methods such as the principal component analysis (PCA) and independent component analysis
(ICA) methods.
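A minimal sketch of applying LLE to fMRI-like data with scikit-learn is shown below; the data, dimensions, and parameter choices are illustrative only and do not reflect the authors' pipeline.

```python
# Hedged sketch: applying locally linear embedding (LLE) to fMRI-like data with
# scikit-learn, to illustrate the kind of low-dimensional temporal structure the
# method extracts (this is not the authors' pipeline).
import numpy as np
from sklearn.manifold import LocallyLinearEmbedding

# Toy data: 500 voxel time courses of length 120, driven by one latent signal.
rng = np.random.default_rng(0)
latent = np.sin(np.linspace(0, 8 * np.pi, 120))            # underlying temporal signal
mixing = rng.standard_normal((500, 1))
voxels = mixing @ latent[None, :] + 0.3 * rng.standard_normal((500, 120))

# Embed the time points (rows = time samples, columns = voxels) into 2 dimensions.
lle = LocallyLinearEmbedding(n_neighbors=12, n_components=2)
embedding = lle.fit_transform(voxels.T)                     # shape (120, 2)
```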
We compute AM-FM models for infrared video frames depicting military targets immersed in structured clutter
backgrounds. We show that independent correlation based detection processes can be implemented in the pixel
and modulation domains and used to construct useful online track consistency checks that indicate when the
detection process has been degraded due to nonstationary evolution of the target signature. Throughout the
paper, we use the well-known AMCOM closure sequences as exemplars.
In the field of computer vision, image mosaicking is achieved using image features, such as textures, colors, and shapes
between corresponding images, or local descriptors representing neighborhoods of feature points extracted from
corresponding images. However, image mosaicking based on feature points has attracted more recent attention due to
the simplicity of the geometric transformation, regardless of distortion and differences in intensity generated by camera
motion in consecutive images. Yet, since most feature-point matching algorithms extract feature points using gray
values, identifying corresponding points becomes difficult in the case of changing illumination and images with a
similar intensity. Accordingly, to solve these problems, this paper proposes a method of image mosaicking based on
feature points using color information of images. Essentially, the digital values acquired from a real digital color camera
are converted to values of a virtual camera with distinct narrow bands. Values based on the surface reflectance and
invariant to the chromaticity of various illuminations are then derived from the virtual camera values and defined as color-invariant values. The validity of these color-invariant values is verified in a
test using a Macbeth Color-Checker under simulated illuminations. The test also compares the proposed method using
the color-invariant values with the conventional SIFT algorithm. The accuracy of the matching between the feature
points extracted using the proposed method is increased, while image mosaicking using color information is also
achieved.
In this research, we consider the use of the inverse perspective transformation in video surveillance applications that observe (and possibly influence) scenes consisting of moving and stationary objects, e.g., people in a parking area. In previous research, objects were detected in video streams and identified as moving or stationary. Subsequently, distance maps were generated by the Fast Exact Euclidean Distance (FEED) transformation, which uses frame-to-frame information to generate distance maps for video frames in a fast manner. From the resulting distance maps, different kinds of surveillance parameters can be derived. The camera was placed above the scene, and hence no inverse perspective transformation was needed. In this work, we consider the case in which the camera is placed at an arbitrary angle at the side of the scene, which may be a more feasible placement than overhead. We show that an image taken by a camera at the side can be easily and quickly converted to an image as it would be taken by a camera overhead. This allows the use of the previously developed methods after converting each frame of a video stream, or only the objects of interest detected in them.
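A minimal sketch of such a side-to-top conversion via a ground-plane homography is given below, using OpenCV; the reference points and image sizes are illustrative placeholders, not values from this work.

```python
# Hedged sketch: inverse perspective mapping with OpenCV. Four ground-plane
# points seen in the side-view image are mapped to their known top-view
# positions; the resulting homography rectifies every frame. The coordinates
# below are illustrative placeholders, not values from the paper.
import cv2
import numpy as np

# Pixel coordinates of ground-plane reference points in the side-view frame.
src = np.float32([[420, 680], [880, 690], [760, 380], [520, 375]])
# Corresponding coordinates in the desired top-view ("bird's eye") image.
dst = np.float32([[200, 600], [600, 600], [600, 200], [200, 200]])

H = cv2.getPerspectiveTransform(src, dst)        # 3x3 homography

def to_top_view(frame, size=(800, 800)):
    """Warp a side-view frame to the synthetic overhead view."""
    return cv2.warpPerspective(frame, H, size)

frame = np.zeros((720, 1280, 3), dtype=np.uint8)  # stand-in for a video frame
top = to_top_view(frame)
```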
A problem with face recognition is that lighting conditions can affect performance since facial features are easily
distorted by varying illumination conditions. To address this problem, we present a new preprocessing method using the
census transform. In the proposed method, we generate a number of binary values from the current pixel by comparing it
with its neighboring pixels, and encode the binary values into a single gray-scale image. Finally, we apply principal component analysis (PCA) to the gray-scale images. Experiments show that the proposed method provides promising
results.
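A minimal sketch of a 3x3 census transform of the kind described above is shown below; the exact neighborhood, encoding order, and preprocessing used in the paper may differ.

```python
# Hedged sketch: a 3x3 census transform. Each pixel is compared with its eight
# neighbors and the resulting bits are packed into one 8-bit value, producing a
# gray-scale image that is robust to monotonic illumination changes.
import numpy as np

def census_transform(img):
    img = img.astype(np.float32)
    h, w = img.shape
    out = np.zeros((h - 2, w - 2), dtype=np.uint8)
    center = img[1:-1, 1:-1]
    bit = 0
    for di in (-1, 0, 1):
        for dj in (-1, 0, 1):
            if di == 0 and dj == 0:
                continue  # skip the center pixel itself
            neighbor = img[1 + di:h - 1 + di, 1 + dj:w - 1 + dj]
            out |= ((neighbor < center).astype(np.uint8) << bit)
            bit += 1
    return out

face = (255 * np.random.rand(64, 64)).astype(np.uint8)  # stand-in face image
encoded = census_transform(face)                          # input to PCA in the paper
```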
Multimedia systems are required to provide proper synchronization of various components for intelligible presentation.
However, it is challenging to accommodate the heterogeneity of different media characteristics. Audio-video
synchronization is, for instance, required for presenting video chunks with audio frames where video chunk size is
generally large and variable, but audio frame size is small and fixed. This audio-video synchronization problem has been widely studied in the literature. The problem involves the proper definition and preservation of the temporal relationship
between audio and video. Moreover, it is also important to take into account the processing complexity, since the
computational resources and processing power on embedded platforms, such as cell phones and other handheld devices,
are very limited. In this paper, we present the implementation of three audio-video synchronization methods on an
embedded system. We discuss the performance as well as the advantages and disadvantages of each of these techniques.
Based on our evaluation, we reason why one of the presented techniques is superior to the other two.