This PDF file contains the front matter associated with SPIE
Proceedings Volume 7698, including the Title Page, Copyright
information, Table of Contents, and the Conference Committee listing.
This paper presents an adaptive Order-Statistic Filter (OSF) that can operate in the real and the complex data
domains to maximize the gain in signal to noise and/or clutter ratio. This distribution-independent non-linear
filter approximates the optimal filter when the background is not Gaussian (e.g., speckle-type clutter, Gamma
noise, etc.), producing a "Gaussianized" residual that ensures the near-optimality of subsequent processing
stages that assume Gaussian statistics (e.g., background-normalization/CFAR, signal classification, etc.).
Furthermore, the residual resulting from an adaptive OSF stage can implicitly be re-filtered, driving the
ensuing residuals ever closer to being Gaussian-distributed. The output of such a recursive version of our
adaptive OSF can thus approximate optimality in the maximum likelihood sense (e.g., in the case of signal
detection, by maximizing the probability of detection while minimizing the probability of false alarm).
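As a concrete illustration of the idea (our own sketch, not the paper's implementation), a minimal one-dimensional rank-order filter subtracts a windowed order statistic, by default the median, from each sample; on speckle-like clutter the residual is far more symmetric than the raw data, so a point target stands out. All names and parameters below are hypothetical:

```python
import numpy as np

def osf_residual(x, window=9, rank=None):
    """Sliding-window order-statistic filter residual: subtract the
    rank-th order statistic of each window (default: the median, a
    robust background estimate) from the central sample."""
    if rank is None:
        rank = window // 2
    pad = window // 2
    xp = np.pad(x, pad, mode="edge")
    bg = np.array([np.sort(xp[i:i + window])[rank] for i in range(len(x))])
    return x - bg  # residual fed to Gaussian-assuming stages (e.g. CFAR)

# Speckle-like (exponential) clutter with a point target at index 50.
rng = np.random.default_rng(0)
clutter = rng.exponential(1.0, 100)
clutter[50] += 8.0
res = osf_residual(clutter)
```

An adaptive version would choose `window` and `rank` from the local data statistics rather than fixing them.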
We present a computationally efficient track-before-detect algorithm that achieves more than 50% true detection
at a 10⁻⁶ false alarm rate for an unknown number of pixel-sized targets when the signal-to-noise ratio
is less than 7 dB. Without making any assumptions about the distribution functions, we select a small number
of cells, so-called needles, and generate motion hypotheses using the target state transition model. We
accumulate cell likelihoods along each hypothesis in the temporal window and append the accumulated values
to the corresponding queues of the cell positions in the most recent image. We assign a target in case the queue
maximum is greater than a threshold that produces the specified false alarm rate.
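A toy sketch of the accumulation step (our own simplification, not the authors' code): per-cell scores are summed along constant-velocity hypotheses ending at each cell of the most recent frame, the best accumulated value per end cell plays the role of the queue maximum, and cells exceeding the threshold are declared targets:

```python
import numpy as np

def tbd_1d(frames, velocities, threshold):
    """Toy 1-D track-before-detect: sum per-cell scores along
    constant-velocity hypotheses ending at each cell of the most
    recent frame; keep the best accumulated value per end cell and
    declare a target wherever it exceeds the threshold."""
    T, N = frames.shape
    best = np.full(N, -np.inf)
    for v in velocities:                               # motion hypotheses
        acc = np.zeros(N)
        for t in range(T):
            # cell occupied at time t by the hypothesis ending at each cell
            idx = np.clip(np.arange(N) - v * (T - 1 - t), 0, N - 1)
            acc += frames[t, idx]
        best = np.maximum(best, acc)
    return np.flatnonzero(best > threshold)

rng = np.random.default_rng(1)
frames = rng.normal(0.0, 1.0, (8, 64))                 # noise-only scores
for t in range(8):                                     # weak target, velocity +1
    frames[t, 20 + t] += 2.0
hits = tbd_1d(frames, velocities=[-1, 0, 1], threshold=12.0)
```

The threshold would in practice be calibrated on noise-only data to yield the specified false alarm rate.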
Recent research in motion detection has shown that various outlier detection methods could be used for efficient
detection of small moving targets. These algorithms detect moving objects as outliers in a properly defined attribute
space, where outlier is defined as an object distinct from the objects in its neighborhood. In this paper, we compare the
performance of two incremental outlier detection algorithms, namely the incremental connectivity-based outlier factor
and the incremental local outlier factor to a modified Stauffer-Grimson algorithm. Each video sequence is represented
with spatial-temporal blocks extracted from the raw video. Principal component analysis (PCA) is applied on these
blocks in order to reduce the dimensionality of extracted data. Extensive experiments performed on several data sets,
including infrared sequences from OSU Thermal Pedestrian Database repository, and data collected at Delaware State
University from FLIR Systems PTZ cameras have shown promising results in using outlier detection for detection of
small moving targets.
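The PCA dimensionality-reduction step can be sketched with a plain SVD (a generic reduction of flattened spatio-temporal blocks, not the paper's specific pipeline; block sizes are illustrative):

```python
import numpy as np

def pca_reduce(blocks, k):
    """Project flattened spatio-temporal blocks onto the top-k
    principal components, computed via SVD of the centered data."""
    X = blocks - blocks.mean(axis=0)       # center each feature
    U, S, Vt = np.linalg.svd(X, full_matrices=False)
    return X @ Vt[:k].T                    # k-dimensional representation

rng = np.random.default_rng(2)
blocks = rng.normal(size=(500, 75))        # e.g. 5x5x3 blocks, flattened
Z = pca_reduce(blocks, k=10)
```

The reduced vectors `Z` would then be fed to the incremental outlier detector.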
In this paper, we consider the problem of multiple-input multiple-output (MIMO) radars employing the generalized detector
based on the generalized approach to signal processing in noise and using the space-time coding to achieve desired
diversity. To that end, we derive a suitable generalized detector structure after briefly outlining the model of the received
target return signal. The generalized detector performance is expressed in closed form as a function of the clutter statistical
properties and of the space-time code matrix. We investigate a particular case in which the generalized detector requires
a priori knowledge of the clutter covariance; nevertheless, the decision statistic, under the null hypothesis of no target, is an ancillary
statistic, in the sense that it depends on the actual clutter covariance matrix, but its probability density function (pdf)
is functionally independent of such a matrix. Therefore, threshold setting is feasible with no a priori knowledge as to the
clutter power spectrum. As to the detection performance, a general integral form of the detection probability is provided,
holding independent of the target fluctuation model. The formula is not analytically manageable, nor does it appear to
admit general approximate expressions that would give insight into the system behavior. We thus restrict
our attention to the case of Rayleigh-distributed target attenuation (Swerling-1 model). For code construction we use an
information-theoretic approach. This approach offers a methodological framework for space-time coding in MIMO radar
systems, as well as simple and intuitive bounds for performance prediction.
The fusion of Chemical, Biological, Radiological, and Nuclear (CBRN) sensor readings from both point and
stand-off sensors requires a common space in which to perform estimation. In this paper we suggest a common
representational space that allows us to properly assimilate measurements from a variety of different sources
while still maintaining the ability to correctly model the structure of CBRN clouds. We design this space with
sparse measurement data in mind in such a way that we can estimate not only the location of the cloud but also
our uncertainty in that estimate. We contend that a treatment of the uncertainty of an estimate is essential in
order to derive actionable information from any sensor system; especially for systems designed to operate with
minimal sensor data. A companion paper1 further extends and evaluates the uncertainty management introduced
here for assimilating sensor measurements into a common representational space.
Raman spectroscopy has been a powerful means of chemical identification in a variety of fields, partly because
of its non-contact nature and the speed at which measurements can be taken. Given a library of known Raman
spectra, a common detection approach is to first estimate the relative amount of each chemical present, and
then compare the estimated mixing coefficients to an ad hoc threshold. We present a more rigorous detection
scheme by formulating the problem as one of Multiple Hypothesis Detection (MHD) and using the maximum a
posteriori (MAP) decision rule to minimize the probability of classification error. The probability that a specific
target chemical is present is estimated by summing the estimated probabilities of all the hypotheses containing
it. Alternatively, since we do not typically have reasonable priors for the hypotheses, it is perhaps preferable to
interpret the result as an abstract score corresponding to the Minimum Description Length (MDL) approach. The
resulting detection performance of this approach is compared to that of several other classification algorithms.
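The marginalization over hypotheses can be sketched as follows (a toy with hypothetical chemicals and scores; the scores stand in for log-likelihood plus log-prior of each hypothesis):

```python
import numpy as np

def presence_probability(scores, hypotheses, target):
    """Normalize per-hypothesis scores (log-likelihood + log-prior)
    into posterior probabilities and sum those of every hypothesis
    containing the target chemical."""
    p = np.exp(scores - scores.max())      # numerically stable softmax
    p /= p.sum()
    return float(sum(pi for pi, h in zip(p, hypotheses) if target in h))

# Hypothetical two-chemical library: hypotheses are subsets of {A, B}.
hyps = [set(), {"A"}, {"B"}, {"A", "B"}]
scores = np.array([-5.0, -1.0, -4.0, -1.5])
pA = presence_probability(scores, hyps, "A")
```

With uninformative priors, `pA` is the kind of abstract score the abstract suggests interpreting in the MDL sense.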
Various chemical agents have been known to provide unique Raman spectrum signatures. Practical methods for
chemical detection have to deal with cluttered data where the desired agent's signature is mixed with those of
other chemicals in the immediate environment. It has been found that unmixing is affected by strong background
signatures, such as those from the substrate, and noise. This work investigates use of wavelet transform based
techniques for denoising and baseline correction for the purpose of enhancing the probability of detection of a
desired agent.
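As a minimal stand-in for the multilevel wavelet schemes investigated here (not the authors' method), a one-level Haar decomposition with soft thresholding of the detail coefficients already illustrates the denoising mechanism:

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar wavelet soft-threshold denoising for an
    even-length signal: shrink the detail coefficients, keep the
    approximation band, then invert the transform."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)                   # approximation
    d = (x[0::2] - x[1::2]) / np.sqrt(2)                   # detail
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)   # soft threshold
    y = np.empty_like(x)
    y[0::2] = (a + d) / np.sqrt(2)                         # inverse Haar
    y[1::2] = (a - d) / np.sqrt(2)
    return y

rng = np.random.default_rng(3)
clean = np.sin(np.linspace(0, 4 * np.pi, 256))             # smooth "spectrum"
noisy = clean + 0.3 * rng.normal(size=256)
den = haar_denoise(noisy, thresh=0.3)
```

A practical scheme would use several decomposition levels and a smoother wavelet, plus a separate baseline-correction step for the substrate background.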
Reliable detection of hazardous materials is a fundamental requirement of any national security program. Such
materials can take a wide range of forms including metals, radioisotopes, volatile organic compounds, and
biological contaminants. In particular, detection of hazardous materials in highly challenging conditions - such
as in cluttered ambient environments, where complex collections of analytes are present, and with sensors lacking
specificity for the analytes of interest - is an important part of a robust security infrastructure. Sophisticated
single sensor systems provide good specificity for a limited set of analytes but often have cumbersome hardware
and environmental requirements. On the other hand, simple, broadly responsive sensors are easily fabricated
and efficiently deployed, but such sensors individually have neither the specificity nor the selectivity to address
analyte differentiation in challenging environments. However, arrays of broadly responsive sensors can provide
much of the sensitivity and selectivity of sophisticated sensors but without the substantial hardware overhead.
Unfortunately, arrays of simple sensors are not without their challenges - the selectivity of such arrays can only
be realized if the data is first distilled using highly advanced signal processing algorithms. In this paper we will
demonstrate how the use of powerful estimation algorithms, based on those commonly used within the target
tracking community, can be extended to the chemical detection arena. Herein our focus is on algorithms that
not only provide accurate estimates of the mixture of analytes in a sample, but also provide robust measures of
ambiguity, such as covariances.
Thomas H. Chyba, Brian Fisk, Christin Gunning, Kevin Farley, Amber Polizzi, David Baughman, Steven Simpson, Mohamed-Adel Slamani, Robert Almassy, et al.
A procedure to evaluate and optimize the performance of a chemical identification algorithm is presented. The Joint
Contaminated Surface Detector (JCSD) employs Raman spectroscopy to detect and identify surface chemical
contamination. JCSD measurements of chemical warfare agents, simulants, toxic industrial chemicals, interferents and
bare surface backgrounds were made in the laboratory and under realistic field conditions. A test data suite, developed
from these measurements, is used to benchmark algorithm performance throughout the improvement process. In any one
measurement, one of many possible targets can be present along with interferents and surfaces. The detection results are
expressed as a 2-category classification problem so that Receiver Operating Characteristic (ROC) techniques can be
applied. The limitations of applying this framework to chemical detection problems are discussed along with means to
mitigate them. Algorithmic performance is optimized globally using robust Design of Experiments and Taguchi
techniques. These methods require figures of merit to trade off between false alarms and detection probability. Several
figures of merit, including the Matthews Correlation Coefficient and the Taguchi Signal-to-Noise Ratio are compared.
Following the optimization of global parameters which govern the algorithm behavior across all target chemicals, ROC
techniques are employed to optimize chemical-specific parameters to further improve performance.
Raman spectroscopy has received a great deal of attention in recent years in the chemical and biological detection
research community because of its unique ability to determine the chemical composition of substances. This has led to
development of fast and numerically efficient algorithms for Raman spectra estimation. There are two types of
algorithms for Raman spectra estimation, namely supervised and unsupervised. In the supervised approach, a number of
reference spectra for known chemicals are used. It is also assumed that the measured spectra of one or more unknown
substances belong to one of the individual substances in the reference library, or that they originate from a linear
combination of a number of reference spectra. The mixing coefficients for a measured spectrum are often estimated
using the nonnegative least squares (NNLS) or nonnegative weighted least squares (NNWLS) algorithms. This is a
constrained parameter estimation problem due to the inherent nonnegativity of the mixing coefficients.
Some previous researchers have used the NNLS method, in which no weight matrix is used, or all measurement error
variances are treated as equal. In our Fusion 2009 paper, we found that the measurement error variances or weights
vary significantly with the wavenumber and that it is therefore necessary to use non-uniform weights in parameter
estimation. Previously we used the true weights and performed only a limited study with estimated weights. In this paper, we
conduct an extensive study of Raman spectra estimation using WLS and NNWLS for one, two, and three chemicals, using
simulated data and Monte Carlo simulations.
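The NNWLS step can be sketched with SciPy's `nnls` by absorbing the per-wavenumber weights into the data (a generic formulation; the two reference spectra below are synthetic Gaussian peaks, not library spectra):

```python
import numpy as np
from scipy.optimize import nnls

def nnwls(library, measured, weights):
    """Nonnegative weighted least squares:
    min ||W^(1/2) (library @ c - measured)||^2  subject to c >= 0,
    solved by scaling rows with the square-root weights."""
    w = np.sqrt(weights)
    c, _ = nnls(library * w[:, None], measured * w)
    return c

# Two synthetic reference spectra and a noiseless 60/40 mixture.
wn = np.linspace(0, 1, 200)
lib = np.column_stack([np.exp(-((wn - 0.3) / 0.05) ** 2),
                       np.exp(-((wn - 0.7) / 0.05) ** 2)])
true_c = np.array([0.6, 0.4])
meas = lib @ true_c
coef = nnwls(lib, meas, np.ones_like(wn))
```

With uniform weights this reduces to plain NNLS; the paper's point is that realistic, wavenumber-dependent weights change the estimates appreciably.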
When tracking a target particle that is interacting with nearest neighbors in a known way, positional data of the
neighbors can be used to improve the state estimate. Effects of the accuracy of such positional data on the target
track accuracy are investigated in this paper, in the context of dusty plasmas. In kinematic simulations, notable
improvement in the target track accuracy was found when including all nearest neighbors in the state estimation
filter and tracking algorithm, whereas the track accuracy was not significantly improved by higher-accuracy
measurement techniques. The state estimation algorithm, involving an extended Kalman filter, was shown to
either remove or significantly reduce errors due to "pixel-locking". For the purposes of determining the precise
particle locations, it is concluded that the simplified state estimation algorithm can be a viable alternative to
using more computationally-intensive measurement techniques.
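A scalar sketch of how state estimation mitigates pixel-locking (our own toy, not the paper's extended Kalman filter): when quantized position measurements are dithered by genuine noise, even a simple random-walk Kalman filter averages them into a sub-pixel estimate:

```python
import numpy as np

def kf_1d(zs, q=1e-4, r=0.35):
    """Random-walk Kalman filter for a scalar position. r roughly
    models the measurement variance: dither noise (0.25) plus
    uniform quantization noise (1/12) for a unit pixel."""
    x, P = zs[0], 1.0
    est = []
    for z in zs:
        P += q                    # predict (random-walk process noise)
        K = P / (P + r)           # Kalman gain
        x += K * (z - x)          # measurement update
        P *= 1.0 - K
        est.append(x)
    return np.array(est)

# Sub-pixel position observed through rounding ("pixel-locking").
true_pos = 3.3
rng = np.random.default_rng(4)
zs = np.round(true_pos + rng.normal(0.0, 0.5, 200))
est = kf_1d(zs)
```

The estimate converges near 3.3 even though every individual measurement is an integer pixel index.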
In a real tracking system, track breakages can occur due to highly maneuvering targets, low detection probability,
or clutter. Previously, a track segment association approach (TSA) was developed for an airborne early warning
(AEW) system to improve track continuity by "stitching" broken track segments pertaining to the same target.
However, this technique cannot provide satisfactory association performance when a GMTI radar tracks
ground moving targets employing evasive move-stop-move maneuvers. To avoid detection by a GMTI radar,
targets can deliberately stop for some time before moving again. Since a GMTI radar does not detect a target
when the radial velocity (along the line-of-sight from the sensor) falls below a certain minimum detectable velocity
(MDV), the move-stop-move maneuvers of the targets usually lead to broken tracks as a result. We present a
new TSA technique which employs a dummy track to formulate a complete association. By using an IMM
estimator with state-dependent mode transition probabilities (IMM-SDP) for track segment prediction (forward
and backward), the proposed algorithm can effectively stitch both "regular" broken tracks and broken tracks due
to targets' move-stop-move maneuvers. Comparisons are given to show the advantages of the proposed algorithm
in reducing broken tracks and improving track continuity.
The authors have developed a simple system for characterizing the muzzle flash duration of common military small-arms
ammunition as an input to system design configurations. This paper is a synopsis of that effort and its results in
characterizing the broadband optical signature of modern small arms.
In a previous conference paper the first author addressed the problem of devising CPHD and PHD filters
that are capable of multitarget detection and tracking in unknown, dynamically changing clutter. That paper
assumed that the clutter process is Poisson with an intensity function that is a finite mixture with unknown
parameters. The measurement-update equations for these CPHD/PHD filters involved combinatorial sums over
all partitions of the current measurement-set. This paper describes an approach that avoids combinatorial sums
and is therefore potentially computationally tractable. Clutter is assumed to be a binomial i.i.d. cluster process
with unknown parameters. Given this, three different and successively more tractable CPHD/PHD filters are
derived, all with multitarget track-before-detect capability. The first assumes that the entire intensity
function of the clutter process is unknown. The second and third assume that the clutter spatial distribution is
known but that the clutter rate (number of clutter returns per scan) is unknown.
We propose a new "quasi-global" observer design to determine the distance and velocity of projectiles in real-time. The
detection of the projectile is realized by a low power Doppler radar at short distances (5m-10m). The advantage of a
global observer design is the possibility to deal with large initial errors, which is important because of the usually
unknown initial state of the observed system. The transformation to the nonlinear observability canonical form (NOCF)
leads to linear error dynamics and this allows a traceable influence on the dynamic behaviour of the observer, which is
essential for the time critical implementation on a real-time system. To benefit from these two advantages, the nonlinear
system has to be transformed to the NOCF and an explicit expression of the inverse transformation has to be found.
Since the given problem cannot be solved under this severe restriction, we propose a numerically approximated inverse
transformation in a bounded region of physical interest to allow the design of a quasi-global observer with linear error
dynamics in the presented case. Based on this design the class of systems where this kind of observer can be applied has
been enlarged considerably.
In this paper an efficient approach to nonlinear non-Gaussian state estimation based on spline filtering is presented.
The estimation of the conditional probability density of the unknown state can be ideally achieved
through Bayes rule. However, the associated computational requirements make it impossible to implement this
online filter in practice. In the general particle filtering problem, estimation accuracy increases with the number
of particles at the expense of increased computational load. In this paper, B-Spline interpolation is used to
represent the density of the state pdf through a low order continuous polynomial. The motivation is to reduce
the computational cost. The motion of spline control points and corresponding coefficients is achieved through
implementation of the Fokker-Planck equation, which describes the propagation of state probability density
function between measurement instants. This filter is applicable for a general state estimation problem as no
assumptions are made about the underlying probability density.
We generalize the theory of particle flow to stabilize the nonlinear filter. We have
invented a new nonlinear filter that is vastly superior to the classic particle filter and the extended
Kalman filter (EKF). In particular, the computational complexity of the new filter is many orders of
magnitude less than the classic particle filter with optimal estimation accuracy for problems with
dimension greater than 4. Our accuracy is typically several orders of magnitude better than the
EKF for nonlinear problems. We do not resample, and we do not use any proposal density from
an EKF or UKF or other filter. Moreover, our new algorithm is deterministic, and we do not use
any MCMC methods; this is a radical departure from other particle filters. The new filter
implements Bayes' rule using particle flow rather than with a pointwise multiplication of two
functions; this avoids one of the fundamental and well known problems in particle filters, namely
"particle degeneracy." In addition, we explicitly stabilize our particle filter using negative
feedback, unlike standard particle filters, which are generally very inaccurate for plants with slow
mixing or unstable dynamics. This stabilization improves performance by several orders of
magnitude for difficult problems.
The Gaussian Mixture Cardinalized PHD (GM-CPHD) tracker was applied to the corrected TNO-Blind dataset,
the SNR-adjusted datasets in SEABAR07, and to the Metron dataset generated for the MSTWG (Multistatic
Tracking Working Group). The increasing difficulty of the datasets is handled by improvements to the tracker.
The tracking results (plots and metrics of performance) are included.
In the early 1990's, the Maximum Likelihood Probabilistic Data Association (ML-PDA) tracker was developed
in a passive sonar framework, and subsequent research has shown it to be effective for tracking very low SNR
targets. This was done with both active and passive sonar, for targets that have some given type of deterministic
motion. Recent work has focused on applying ML-PDA to bistatic sonar data. Here, we apply ML-PDA in a
sliding window implementation to three bistatic data sets used by the MSTWG (Multistatic Tracking Working
Group): the SEABAR 2007 data set, the TNO Blind 2008 data set, and a new blind data set provided by Metron
in 2009.
Data association is the key component in single- or multiple-target tracking algorithms with measurement origin uncertainty.
Probabilistic Data Association (PDA), in which all validated measurements are associated probabilistically to
the predicted estimate, is a well-known method to handle the measurement origin uncertainty. In PDA, the
effect of measurement origin uncertainty is incorporated into the updated covariance by adding the spread of the
innovations term. In high-clutter scenarios the updated covariance may become very large after a few time steps
due to the spread of the innovations term. A large covariance results in a large gate, which is used to limit the possible
measurements that could have originated from the target. Hence, the track will be lost and the estimate will just
follow the prediction. Also, a large gate makes the well-separated-target assumption invalid even if the targets
are well separated. Hence, after a few time steps all the targets in the surveillance region fall into the same
cluster, forcing Joint Probabilistic Data Association (JPDA). In this paper, adaptive gating techniques are
proposed to avoid the steady increase in the updated covariance in high clutter. The effectiveness of the proposed
techniques is demonstrated on simulated data.
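The standard ellipsoidal validation gate that such techniques adapt can be sketched as follows (generic textbook form, not the paper's adaptive scheme):

```python
import numpy as np
from scipy.stats import chi2

def in_gate(z, z_pred, S, prob=0.99):
    """Accept measurement z if its normalized innovation squared
    lies inside the chi-square gate of the given probability mass."""
    nu = z - z_pred                          # innovation
    d2 = float(nu @ np.linalg.solve(S, nu))  # Mahalanobis distance squared
    return d2 <= chi2.ppf(prob, df=len(z))

S = np.diag([4.0, 4.0])                      # innovation covariance
z_pred = np.zeros(2)
near = in_gate(np.array([1.0, 1.0]), z_pred, S)
far = in_gate(np.array([20.0, 20.0]), z_pred, S)
```

An adaptive variant would cap the growth of `S` (or shrink `prob`) in dense clutter so the gate cannot expand without bound.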
Association and fusion of passive direction finding (DF) reports with active radar tracks from airborne targets is challenging
because of the low dimensionality of the common kinematic measurement space. Often, multi-target scenarios lead to
significant data association ambiguity. Classically, the approach to this problem is a simple hypothesis test wherein a
batch of DF sensor measurements is associated with either zero or one of the radar tracks; assignment of multiple DF
tracks to a single radar track is allowed without regard to compatibility, and this can lead to detrimental results. This
paper develops a new approach for managing the ambiguity. The problem is formulated as a two-dimensional assignment,
and any association ambiguity is determined from the k best solutions. Firm association decisions are made only when
the ambiguity is at an acceptable level. The ambiguity information is also available in real time as an output to the
system operator. An improved batch association score, relative to previous works, is formulated that addresses statistical
correlations between individual measurement-to-track residuals; this new score is a likelihood ratio generated from Kalman
Filter residuals. Where previous scoring methods lead to incorrect ambiguity assessments in certain scenarios, the new
approach yields accurate results. Because the score is recursive, the batch may be extended over an arbitrary number of
measurements, helping to manage association ambiguities over time. Simulation results are shown to demonstrate the
algorithm.
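The two-dimensional assignment at the core of the approach can be sketched with SciPy's Hungarian-algorithm solver (illustrative costs only; the k-best enumeration used for ambiguity assessment is not shown):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Hypothetical negative log-likelihood association costs
# (rows: DF measurement batches, cols: radar tracks).
cost = np.array([[1.0, 8.0, 9.0],
                 [7.5, 1.2, 6.0],
                 [8.0, 7.0, 1.5]])
rows, cols = linear_sum_assignment(cost)   # globally optimal 2-D assignment
best_cost = cost[rows, cols].sum()
```

Comparing `best_cost` against the costs of the k next-best assignments quantifies the association ambiguity before any firm decision is made.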
Most modern maximum likelihood multiple target tracking systems (e.g., Multiple Hypothesis Tracking (MHT) and Numerica's
Multiple Frame Assignment (MFA)) need to determine how to separate their input measurements into subsets
corresponding to the observations of individual targets. These observation sets form the tracks of the system, and the
process of determining these sets is known as data association. Real-time constraints frequently force the use of only the
maximum likelihood choice for data association (over some time window), although alternative data association choices
may have been considered in the process of choosing the most likely.
This paper presents a Tracker Adjunct Processing (TAP) system that captures and manages the uncertainty encountered
in making data association decisions. The TAP combines input observation data and the data association alternatives
considered by the tracker into a dynamic Bayesian network (DBN). The network efficiently represents the combined
alternative tracking hypotheses. Bayesian network evidence propagation methods are used to update the network in light of
new evidence, which may consist of new observations, new alternative data associations, newly received late observations,
hypothetical connections, or other flexible queries. The maximum likelihood tracking hypothesis can then be redetermined,
which may result in changes to the best tracking hypothesis. The recommended changes can then be communicated back
to the associated tracking system, which can then update its tracks. In this manner, the TAP's interpretation makes the firm,
fixed (formerly maximum likelihood) decisions of the tracker "softer," i.e., less absolute. The TAP can also assess (and
reassess) track purity regions by ambiguity level.
We illustrate the working of the TAP with several examples, one in particular showing the incorporation of critical, late
or infrequent data. These data are critical in the sense that they are very valuable in resolving ambiguities in tracking and
combat identification; thus, the motivation to use these data is high even though there are complexities in applying them. Some
data may be late because of significant network delays, while other data may be infrequently reported because they come
from "specialized" sensors that provide updates only occasionally.
Target classification is of great importance for modern tracking systems. The classification results could be
fed back to the tracker to improve tracking performance. Also, classification results can be applied for target
identification, which is useful in both civil and military applications. While some work has been done on
Joint Tracking and Classification (JTC), which can enhance tracking results and make target identification
feasible, a common assumption is that the statistical description of classes is predefined or known a priori.
This is not true in general. In this paper, two automatic multiple target classification algorithms, which can
automatically classify targets without prior information, are proposed. The algorithms learn the class description
from the target behavior history. The input to the algorithm is the noisy target state estimate, which in turn
depends on target class. Thus, class description is learnt from the target behavior history rather than being
predefined. This motivates the proposed two-level tracking and classification formulation for automatic multiple
target classification. The first level consists of a common tracking algorithm such as the Joint Probabilistic Data
Association (JPDA), the Multiple Hypothesis Tracking (MHT) or the Probability Hypothesis Density (PHD)
filter. In the second level, a Mean-Shift (MS) classifier and a PHD classifier are applied to learn the class
descriptions, respectively, based on the state estimates from the first-level tracker. The proposed algorithms
only require the kinematic measurements from common radar. However, feature information can be easily
integrated. Besides theoretical derivations, extensive experiments based on both simulated and real data are
performed to verify the effectiveness of the proposed techniques.
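As a toy illustration of the second-level Mean-Shift classifier (the 1-D feature, bandwidth, and data below are invented; the paper's classifier operates on full state estimates), mean shift moves each sample to a local density mode, and samples sharing a mode share a class:

```python
import numpy as np

def mean_shift_modes(points, bandwidth=1.0, iters=50):
    """Shift each point toward the Gaussian-kernel-weighted mean of the data;
    points converging to the same mode belong to the same class."""
    shifted = points.astype(float).copy()
    for _ in range(iters):
        for i, x in enumerate(shifted):
            w = np.exp(-0.5 * ((points - x) / bandwidth) ** 2)
            shifted[i] = np.sum(w * points) / np.sum(w)
    return shifted

# Speed estimates from two behaviour classes (slow ~1.0, fast ~9.0).
speeds = np.array([0.9, 1.0, 1.1, 8.9, 9.0, 9.1])
modes = mean_shift_modes(speeds, bandwidth=0.5)
labels = (modes > 5.0).astype(int)   # samples sharing a mode share a label
```

No class description is predefined here; the two modes emerge purely from the behaviour history, which is the spirit of the automatic classification above.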
Target tracking is limited by the resolution of the sensors providing the measurements. If two objects are in close
proximity, they will return just one measurement in most instances. When these two objects separate enough to return
two distinct measurements, the question then is: where did the objects actually separate? The actual separation point
may be of interest and finding that separation point is the topic of this paper. Using the Probabilistic Multi-Hypothesis
Tracking (PMHT) algorithm allows measurements to be "shared" between tracks, making it an excellent
choice when there are closely spaced, unresolved measurements. In this paper, we give an overview of how we
apply the PMHT algorithm to this separation estimation problem, and then we apply the algorithm to two aircraft flying
in formation and then separating. The results are obtained from a high-fidelity simulation environment and provide a
good test for this developing approach.
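The measurement-sharing idea can be sketched in isolation (a hypothetical one-dimensional E-step, not the full PMHT used here): each measurement receives a posterior weight for every track, so a single unresolved return is split between the two separating aircraft rather than hard-assigned.

```python
import numpy as np

def pmht_weights(z, track_means, track_cov, priors):
    """PMHT-style E-step for one measurement: posterior probability that z
    originated from each track; every track gets a fractional share of z."""
    likes = np.array([
        priors[t] * np.exp(-0.5 * (z - m) @ np.linalg.solve(track_cov, z - m))
        for t, m in enumerate(track_means)
    ])
    return likes / likes.sum()

# One return midway between two separating aircraft is shared 50/50.
w = pmht_weights(np.array([0.0]),
                 [np.array([-0.5]), np.array([0.5])],
                 np.array([[1.0]]), [0.5, 0.5])
```

As the aircraft separate, these weights polarize toward 0 and 1, which is what makes the separation point observable in retrospect.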
Target ambiguity is a major problem in dense urban tracking environments with closely spaced targets. Target
classification, action recognition, and 3D feature-aiding can be used to resolve this ambiguity in situations where
traditional 2D feature-aiding techniques alone are ineffective. Knowledge of target location, track state, and
sensor orientation can be coupled with these techniques to improve accuracy and tracking performance even
further. A combination of synthetic and real data is used to demonstrate these concepts.
Digital signal processing techniques employed in modern radar systems enable the generation of measurements
consisting of target 3D position and range rate. However, two crucial issues concerning the utilization of range rate
measurements remain unresolved: under what conditions, and to what extent, a performance improvement can be attained by
incorporating range rate measurements into the tracking filter. This paper addresses both issues. Firstly, the
state space models of tracking systems without and with range rate measurements are formulated. Secondly, the lower
bounds, i.e., Posterior Cramér-Rao Bounds (PCRBs), for target state estimation errors of the two tracking systems are
derived. Thirdly, the primary parameters relevant to the PCRB are discussed, and an effective method for analyzing their
effects on state estimation is proposed by comparing the PCRBs of the two tracking systems. Finally, an application
example is presented under a typical tracking scenario. The method and results should be of great interest to those
developing tracking systems with range rate measurements.
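For a linear-Gaussian special case the PCRB comparison can be sketched directly (the models and noise values below are invented, and "range rate" reduces to a direct velocity measurement in this 1-D radial geometry):

```python
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # constant-velocity model, T = 1 s
Q = 0.01 * np.eye(2)                      # process noise
H_pos = np.array([[1.0, 0.0]])            # position-only measurement
H_prr = np.eye(2)                         # position + range rate (radial velocity)
R_pos = np.array([[1.0]])
R_prr = np.diag([1.0, 0.25])

def pcrb(H, R, steps=20):
    """Linear-Gaussian PCRB information recursion:
    J_{k+1} = (Q + F J_k^{-1} F')^{-1} + H' R^{-1} H."""
    J = np.eye(2)
    for _ in range(steps):
        J = np.linalg.inv(Q + F @ np.linalg.inv(J) @ F.T) + H.T @ np.linalg.inv(R) @ H
    return np.linalg.inv(J)               # bound on the state error covariance

bound_without = pcrb(H_pos, R_pos)
bound_with = pcrb(H_prr, R_prr)
```

Comparing the diagonals of the two bounds quantifies how much the rate measurement can tighten both the position and the velocity estimates.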
Compared to the optimal track-to-track fusion (T2TF) algorithm under linear Gaussian assumption and the
information matrix fusion, the major advantage of the covariance intersection (CI) method for the problem of
T2TF is that it neither needs the cross-covariances between the local tracks, nor does it use local information
increments which are required to be independent. This allows the CI method to be used in scenarios where the
optimal T2TF and the information matrix fusion algorithms are difficult or impractical to implement. However,
a significant drawback of the original CI algorithm is that it is too conservative and yields an unnecessary loss
of fusion accuracy. Even worse, as shown in this paper, this loss increases with the number of
local tracks being fused. This greatly limits the usefulness of the CI algorithm. In this paper, a new "sampling
CI" algorithm is proposed, which is simple to implement and does not have the above problematic feature of the
original CI. Simulation results from various scenarios demonstrate the effectiveness of the proposed algorithm.
The initiation of a consistent state covariance or uncertainty which accurately reflects the discrepancy from
truth is a prerequisite to achieving correct data association in tracking. In this paper, the treatment of non-
Gaussian states in the initial orbit determination (IOD) problem for space surveillance or more general track
initiation problem is considered and the accurate and consistent computation of such non-Gaussian uncertainties
is addressed. The main contribution is a framework for achieving uncertainty (covariance) consistency in the
IOD problem based on efficient Gauss-Hermite quadrature methods. The formalism is applicable in general
tracking settings, in particular, multisensor data fusion. Additionally, a series of real-time metrics constructed
from tensors of higher-order cumulants are developed which provide a tool for assessing uncertainty consistency.
The effectiveness of the proposed track initiation method is illustrated through various case studies in space
surveillance tracking.
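The quadrature ingredient can be illustrated on a scalar toy problem (the paper applies it to full orbit states): Gauss-Hermite nodes propagate a Gaussian through a nonlinearity and recover the transformed moments.

```python
import numpy as np

def gh_moments(f, mean, sigma, order=10):
    """Propagate a scalar Gaussian N(mean, sigma^2) through a nonlinearity f
    via Gauss-Hermite quadrature; returns the transformed mean and variance."""
    nodes, weights = np.polynomial.hermite_e.hermegauss(order)  # probabilists' rule
    weights = weights / np.sqrt(2.0 * np.pi)                    # normalize to a pdf
    y = f(mean + sigma * nodes)
    m = np.sum(weights * y)
    v = np.sum(weights * (y - m) ** 2)
    return m, v

# Known reference: for x ~ N(0,1), E[x^2] = 1 and Var[x^2] = 2.
m, v = gh_moments(np.square, 0.0, 1.0, order=10)
```

An order-10 rule is exact for polynomial nonlinearities up to degree 19, which is why the toy moments come out exact; for orbital dynamics the quadrature is an approximation whose order controls the consistency of the resulting covariance.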
Networking radars to form a common air picture has provided a significant leap forward in tracking
capability. These advances have existed largely without any capability for coordinating the resources of the
networked sensors. In sensor-networking applications, multi-function radars, which have the ability to
allocate resources to different radar tasks such as surveillance and tracking, operate largely in a sensor-centric
fashion. That is, they make resource decisions based on a local-only tracking capability, and then pass
valid measurements to a sensor-networking function that compiles a common air picture. As the list of
required missions grows, sensors may no longer be able to operate in such a sensor-centric fashion, and will
need to leverage contributions of other networked sensors to meet all performance requirements.
This paper discusses the use of self-organizing principles for managing radar resources in a network-centric
fashion. Radars make resource allocation decisions relative to the common, multi-sensor track picture versus
a local track picture. By proper construction of the resource decision rules, the sensors adapt to an efficient
global resource allocation using indirect coordination. That is, knowledge of other sensors' contributions to
the common air picture is sufficient for the local sensor to apply local resources to tasks where it has a
competitive advantage. This approach can offer significant resource savings to the individual sensors and
improved tracking performance across the network. Further, the ability to coordinate tracking resources
across the network allows for much greater scalability as network size increases.
In this paper, we studied the performance metrics for evaluating Network-Centric Warfare (NCW) battlefield awareness.
We developed a set of novel information awareness metrics to enable responsive situation assessment under mission-critical
conditions. The awareness metrics model (AMM) reflects the global information values of event locations such
as position, terrain information, dangerousness, survivability, cell difficulty, and mission importance. Based on the
enhanced awareness model, we developed an in-network cooperative multi-sensor search and track (ICMS) algorithm by
solving a unified optimization problem in which each cell is searched and all detected objects are tracked for at least a
desired track-lifetime period. We utilize a track-lifetime surface metric to represent the spatial and temporal aspects of
object movements over a region of interest that requires frequent sampling of the known and estimated object positions
(track maintenance) as well as possible object arrivals (track initiation). To demonstrate the effectiveness of our
approach, we implemented our ICMS algorithm in a numerical example and found that it is effective in the sense that
most cells with high activity are well-searched.
Most existing track-to-track fusion (T2TF) algorithms for distributed tracking systems are given assuming that
the local trackers are synchronized. However, in the real world, synchronization is rarely achieved, and
local trackers usually operate asynchronously: local measurements are obtained and local tracks
are updated at different times with different rates. Communication delays between local trackers and the fusion
center (FC) also cause delays in the arrival of the local tracks at the FC. This paper presents the optimal
asynchronous T2TF algorithm for distributed tracking systems under the linear Gaussian (LG) assumption,
which is also the linear minimum mean square error (LMMSE) fuser without the Gaussian assumption. The
information configuration of asynchronous T2TF with partial information feedback (AT2TFpf) is used. This is
the most practical configuration for AT2TF with time delays, since communication delays make full information
feedback very complicated. To illustrate the algorithm, a basic scenario of the fusion of two asynchronous local
tracks is used, where one is available at the FC with no delay and the other is transmitted from a local tracker
with a time delay. The algorithm can be extended to scenarios with more than two local trackers. The optimal
asynchronous T2TF algorithm is compared with the approximate algorithms proposed by Novoselsky (denoted
as AT2TFpfApprA-C) and is shown to have performance benefits in both consistency and fusion accuracy.
The drawback of the optimal fusion algorithm is that it has high communication and computational cost.
Two new approximate algorithms, AT2TFpfApprD and AT2TFpfApprE, are also proposed which significantly
reduce the cost of the optimal algorithm with little loss in fusion performance.
The coordinated use of multiple distributed sensors by network communication has the potential to substantially
improve three-dimensional track state estimates even though each individual sensor might offer only low-dimensional
measurements. For example, in the modern electronic warfare environment, hostile electronic countermeasures
will endeavor to deny range and range rate information, leaving friendly sensors to depend on passive
angle information for tracking. Other sensors may provide only range and bearing but not elevation. Herein we
demonstrate algorithms for initiating and maintaining tracks in such operating environments with a focus on
efficient representations of non-Gaussian density functions.
In many surveillance problems the observed objects are so closely spaced that they cannot always be resolved
by the sensor(s). Typical examples for partially unresolved measurements are the surveillance of aircraft in
formation, and convoy tracking for ground surveillance. Ignoring the limited sensor resolution in a tracking
system may lead to degraded tracking performance, in particular unwanted track-losses. In this paper we extend
the resolution model by Koch and van Keuk, given for two partially unresolved objects, to the case of arbitrary
object numbers. We also derive the effects of the resolution model on the multi-target likelihood function and
the possible data associations. Further, it is shown how the model can be integrated into the Joint Probabilistic
Data Association Filter (JPDAF).
In the design of a target track processor, specific models and parameters are selected for use in the processing of the
sensor data. Some of the parameters may be known and some are not known with certainty so estimates of the values are
used. The values of some of the design parameters may be significantly in error and so adjustments are needed. In the
field of Kalman filters for example, adaptive methods (sometimes called adaptive filters) have been implemented to
estimate in real time (sometimes slowly) more appropriate values for some of the design parameters. While existing
adaptive filter methods might be considered for use in estimating some of the parameters in tracking, the possibility of
misassociations of measurements to tracks introduces substantial anomalies that may require alternative adaptive methods
(possibly more complex). Tracking with data from multiple sensors as in sensor data fusion introduces both additional
advantages and challenges for adaptive methods relative to single sensor tracking. Fusion also involves functions beyond
those typical of tracking and many fusion functions depend on the estimated tracks and/or influence the tracking
performance.
To date, most tracker design efforts have been directed toward improving the accuracy of the estimated target state using
fixed design parameters (and models), and little effort has been directed toward real-time adaptive processing to improve the
estimates of the design parameters. This paper addresses the degradation in tracker and fusion performance caused by
inaccurate design parameters and addresses some considerations for adaptive tracking in sensor data fusion.
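A classical ingredient of such adaptive processing is estimating a design parameter from the innovation sequence; a minimal sketch (assuming a linear model, correct measurement-to-track associations, and zero-mean innovations; all values are invented):

```python
import numpy as np

def estimate_R(innovations, H, P_pred):
    """Innovation-based estimate of the measurement noise covariance:
    since cov(innovation) = H P_pred H' + R, subtract the predicted part."""
    nu = np.atleast_2d(innovations)
    C = (nu.T @ nu) / nu.shape[0]         # sample covariance of zero-mean innovations
    return C - H @ P_pred @ H.T

rng = np.random.default_rng(0)
H = np.array([[1.0, 0.0]])
P_pred = np.diag([0.5, 1.0])
R_true = 2.0                              # innovations then have variance 0.5 + 2.0
nu = rng.normal(0.0, np.sqrt(2.5), size=(5000, 1))
R_hat = estimate_R(nu, H, P_pred)
```

As the paper notes, misassociations break the zero-mean, white assumption behind this estimator, which is precisely why tracking may require more complex adaptive methods than classic adaptive filtering.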
In recent years the first author has developed a unified, computationally tractable approach to multisensor-multitarget
sensor management. This approach consists of closed-loop recursion of a PHD or CPHD filter with
maximization of a "natural" sensor management objective function called PENT (posterior expected number of
targets). In this paper we extend this approach so that it can be used in unknown, dynamic clutter backgrounds.
Conventional tracking algorithms rely on the assumption that the targets under observation are point source
objects. However, due to increasing resolution capabilities of modern sensors, the point source assumption is
often not suitable and estimating the target extension becomes a crucial aspect. Recently, a Bayesian approach
to extended target tracking using random matrices has been proposed. Within this approach, ellipsoidal object
extensions are modeled by random matrices and treated as additional state variables to be estimated. However,
only a single-target solution has been presented so far. In this work we present the multi-target extension of
this approach. We derive a new variant of Probabilistic Multi-Hypothesis Tracking (PMHT) that simultaneously
estimates the ellipsoidal shape and the kinematics of each target. For this purpose, the PMHT auxiliary function
is extended by random matrices representing the target ellipsoids. Both the ellipsoids and the kinematic states
are iteratively optimized by specific Kalman filter formulae arising directly from the auxiliary function. The new
method is demonstrated and evaluated in simulation examples.
In this paper we look at various options for calculating target-measurement association probabilities and updating
the state estimates in the Joint Probabilistic Data Association Filter (JPDAF). In addition to the "standard"
methods, we look at other methods that try to improve the estimation accuracy by coupling the states, discarding
certain joint association events, or by applying random finite set theory to change how the states are updated.
We compare the performance of trackers based on several of these concepts to each other and to the PMHT,
the MHT, and the GNN tracker. We also single out approaches that are "snake oil", in that they are either not
suited for practical use, or their complexity is higher than that of calculating the probabilities exactly.
Additionally we show how the JPDAF* can be implemented to have a lower worst-case complexity than the
regular JPDAF when the number of targets and/or observations is large. We also review some oft-overlooked
references on gating that are useful for implementations in real systems.
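For concreteness, the exact "standard" JPDAF marginal association probabilities can be computed by brute-force enumeration of joint events; this is the combinatorial baseline that approximate methods try to beat. The clutter model and likelihood values below are illustrative.

```python
import numpy as np
from itertools import product

def jpda_marginals(L, pd, lam):
    """Exact JPDA marginals beta[j, t] = P(measurement j originated from track t),
    with row m holding the missed-detection probability, by enumerating every
    joint event; each measurement may feed at most one track. Cost is O((m+1)^n)."""
    m, n = L.shape
    beta = np.zeros((m + 1, n))
    for event in product(range(-1, m), repeat=n):   # event[t] = meas index or -1
        used = [j for j in event if j >= 0]
        if len(used) != len(set(used)):
            continue                                # infeasible: shared measurement
        p = lam ** (m - len(used))                  # unassigned returns are clutter
        for t, j in enumerate(event):
            p *= pd * L[j, t] if j >= 0 else (1.0 - pd)
        for t, j in enumerate(event):
            beta[j if j >= 0 else m, t] += p
    return beta / beta.sum(axis=0)

L = np.array([[1.0, 0.1],       # measurement 0 fits track 0 well
              [0.1, 1.0]])      # measurement 1 fits track 1 well
beta = jpda_marginals(L, pd=0.9, lam=0.1)
```

The exponential growth of this enumeration with the number of tracks is what motivates both the approximations and the lower worst-case complexity of the JPDAF* discussed above.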
The goal of this paper is to demonstrate the coordination in real-time of the operation of multiple sensors in
such a way that those best-equipped for certain missions should perform those missions for the entire network,
while other sensors fill in the gaps with their capabilities. The networked system of sensors must search, detect,
track, classify, and engage targets of high value in a timely fashion. The information transmitted should be that
which contributes the most toward achieving the performance goals (e.g., track accuracy, track completeness,
and a consistent operational picture or single integrated air picture (SIAP)) subject to the network bandwidth
constraints and the capabilities of the sensors.
We present an overview of an assignment based sensor resource manager, a distributed algorithm for coordinating
the assignment problem, and simulation results that validate this approach. While the assignment
formulation and algorithms could include both sensor resource and bandwidth constraints with versions for single
and multiple time periods, i.e., myopic and non-myopic, the distributed prototype formulation and algorithms
developed for these experiments were restricted to the tasking of certain sensors to make measurements and
transmit them over the network based on the current air picture. The number of measurements put on the
network was controlled by limiting the number of sensors that could transmit measurements on each target.
The communication loading was then measured to demonstrate that indeed one can design a distributed sensor
resource manager capable of achieving the objectives of significantly reducing the communication loading and
maintaining SIAP.
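The tasking step can be sketched with a standard 2-D assignment solver (the gain matrix is invented and stands in for each sensor's expected contribution to each target's track quality):

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# gain[s, t]: hypothetical expected track-quality gain if sensor s measures target t
gain = np.array([[5.0, 1.0, 2.0],
                 [1.0, 4.0, 1.0],
                 [2.0, 1.0, 6.0]])
sensors, targets = linear_sum_assignment(-gain)   # negate: the solver minimizes cost
tasking = dict(zip(sensors.tolist(), targets.tolist()))
total_gain = gain[sensors, targets].sum()
```

Each sensor is tasked where it has a comparative advantage, and limiting how many sensors may transmit per target is then a simple constraint layered on top of this assignment.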
This paper discusses the integration of a passive ranging method that uses multiple initial assumed range models within a
Multiple Hypothesis Tracking (MHT) data association framework. The paper begins with a description of the use of
multiple assumed range models along with a two model Interacting Multiple Model (IMM) approach for tracking
maneuvering targets. It also discusses the choice of own-ship maneuver required to provide observability for an air-to-air
encounter and how the own-ship maneuver can be chosen to reduce the effects of target maneuver. Unscented
filtering methods designed for use with the nonlinear angle-only measurements are also described.
Computational feasibility is a major issue with any MHT application and can be particularly important for the typical
high false alarm environment envisioned for EO systems, such as the IRST. Thus, multiple range filters are
employed only for selected tracks based upon the track score. This and other logic designed to ensure computational
feasibility are discussed and simulation results are presented.
We show numerical experiments on a new theory of exact particle flow for
nonlinear filters. This generalizes our theory of particle flow, which was already many orders
of magnitude faster than standard particle filters and several orders of magnitude
more accurate than the extended Kalman filter for difficult nonlinear problems. The new
theory generalizes our recent log-homotopy particle flow filters in three ways: (1) the
particle flow corresponds to the exact flow of the conditional probability density; (2)
roughly speaking, the old theory was based on incompressible flow (like subsonic flight in
air), whereas the new theory allows compressible flow (like supersonic flight in air); (3) the
old theory suffers from obstruction of particle flow as well as singularities in the equations
for flow, whereas the new theory has no obstructions and no singularities. Moreover, our
basic filter theory is a radical departure from all other particle filters in three ways: (a) we
do not use any proposal density; (b) we never resample; and (c) we compute Bayes' rule
by particle flow rather than as a pointwise multiplication.
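For a scalar linear-Gaussian measurement, the exact flow has a well-known closed form, dx/dλ = A(λ)x + b(λ); the toy integration below (our sketch, not the authors' code) moves particles from the prior to the Kalman posterior without a proposal density or resampling.

```python
import numpy as np

def exact_flow_update(particles, xbar, P, H, R, z, steps=200):
    """Exact particle flow for a linear-Gaussian measurement: integrate
    dx/dlambda = A(lambda) x + b(lambda) over lambda in [0, 1], with
    A = -1/2 P H'(lam H P H' + R)^{-1} H and
    b = (I + 2 lam A)[(I + lam A) P H' R^{-1} z + A xbar]."""
    x = particles.copy()
    dl = 1.0 / steps
    I = np.eye(P.shape[0])
    for k in range(steps):
        lam = (k + 0.5) * dl              # midpoint of the sub-interval
        S = lam * H @ P @ H.T + R
        A = -0.5 * P @ H.T @ np.linalg.solve(S, H)
        b = (I + 2 * lam * A) @ ((I + lam * A) @ P @ H.T @ np.linalg.solve(R, z)
                                 + A @ xbar)
        x = x + dl * (x @ A.T + b)        # Euler step applied to every particle
    return x

# Prior N(0, 1), measurement z = 2 with unit noise: posterior mean is 1.0.
particles = np.array([[-1.0], [0.0], [1.0]])
moved = exact_flow_update(particles, xbar=np.array([0.0]), P=np.array([[1.0]]),
                          H=np.array([[1.0]]), R=np.array([[1.0]]),
                          z=np.array([2.0]))
```

The particles migrate toward the posterior mean while their spread contracts to the posterior covariance, illustrating Bayes' rule computed as a flow rather than a pointwise multiplication.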
Raman spectroscopy is a powerful technique for determining the chemical composition of a substance. Our objective
is to determine the chemical composition of an unknown substance given a reference library of Raman spectra. The
unknown spectrum is expressed as a linear combination of the reference library spectra, and the non-zero mixing
coefficients indicate which individual substances are present. This approach is known as the
supervised learning method. The mixing coefficients are usually estimated using nonnegative least squares (NNLS)
or nonnegative weighted least squares (NNWLS). This problem is a constrained estimation problem due to the presence
of the nonnegativity constraint. In this paper, we present a swarm-based algorithm, particle swarm optimization
(PSO), to estimate the mixing coefficients and Raman spectra; PSO efficiently finds an optimum solution.
Results are presented for simulated data obtained from the Jennifer Kelly
Raman spectra library. The reference library consists of Raman spectra for nine minerals and the measured spectrum is
simulated by using spectrum/spectra of single/multiple minerals. We compare the root mean square error (RMSE) for
parameter estimation and measurement residual and computational time of the NNWLS and nonnegative weighted PSO
(NNWPSO) algorithms.
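The NNLS baseline can be sketched with a simple projected-gradient solver on toy data. The "spectra" below are made-up five-channel vectors, not entries from the Jennifer Kelly library, and projected gradient is only one of several ways to enforce the nonnegativity constraint:

```python
# Hypothetical 5-channel "spectra" for three reference substances; the
# measured spectrum mixes substances 0 and 2 with coefficients 0.7 and 0.3.
library = [
    [1.0, 0.8, 0.1, 0.0, 0.2],
    [0.0, 0.5, 1.0, 0.5, 0.0],
    [0.2, 0.0, 0.1, 0.9, 1.0],
]
true_c = [0.7, 0.0, 0.3]
measured = [sum(true_c[j] * library[j][i] for j in range(3)) for i in range(5)]

def nnls_pg(A, b, iters=20000, step=0.05):
    """Nonnegative least squares by projected gradient descent:
    minimize 0.5 * ||sum_j c_j A_j - b||^2 subject to c_j >= 0."""
    n, m = len(A), len(b)
    c = [0.0] * n
    for _ in range(iters):
        # residual of the current mixture
        r = [sum(c[j] * A[j][i] for j in range(n)) - b[i] for i in range(m)]
        # gradient step followed by projection onto the nonnegative orthant
        c = [max(0.0, c[j] - step * sum(A[j][i] * r[i] for i in range(m)))
             for j in range(n)]
    return c

coeffs = nnls_pg(library, measured)
```

On this noise-free toy problem the solver recovers the mixing coefficients, with the coefficient of the absent substance driven to zero by the projection.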
In this paper, we present a visual obstacle detection and tracking system based on a dense stereo vision method. We
combine a global stereo matcher with a correlation-based cost function to generate a reliable disparity map. An NCC
algorithm is robust to illumination variation, and a BP-based global disparity computation algorithm is efficient for
recovering the disparity information of a large textureless area in real driving scenes. Then an obstacle detector and a
tracker module are implemented and tested under actual driving conditions. Using U-V disparity representation, a road
profile is efficiently extracted, and obstacle ROI can be detected. In the process of obstacle detection, a few heuristic
constraints are applied to exclude wrong candidates, and a further verification step is performed by the tracker.
The implemented system offers accurate and reliable range images under various noisy imaging conditions, resulting in
robust detection and tracking performance.
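The illumination-invariance property claimed for NCC is easy to verify on a toy patch: the score is unchanged under any positive gain/offset change of one patch (the patch values below are made up):

```python
import statistics

def ncc(patch_a, patch_b):
    """Normalized cross-correlation of two equal-size patches (flattened).
    The score is invariant to gain/offset (illumination) changes."""
    ma, mb = statistics.fmean(patch_a), statistics.fmean(patch_b)
    num = sum((a - ma) * (b - mb) for a, b in zip(patch_a, patch_b))
    da = sum((a - ma) ** 2 for a in patch_a) ** 0.5
    db = sum((b - mb) ** 2 for b in patch_b) ** 0.5
    return num / (da * db)

left = [10, 20, 30, 40, 50, 60, 70, 80, 90]    # 3x3 patch, flattened
right = [2 * v + 5 for v in left]              # brighter, higher-contrast copy
score = ncc(left, right)
```

Because both patches are mean-subtracted and normalized, `score` is exactly 1 despite the gain and offset change, which is why NCC is a popular matching cost under varying illumination.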
In this paper, the previous work multipath-assisted multitarget tracking using multiframe assignment is extended
to the case where there are uncertainties in multipath reflection points at the receiver. An algorithm is proposed
for initiating and tracking multiple targets using multiple transmitters and receivers. This algorithm is capable of
exploiting multipath target returns from distinct and unknown propagation modes. When multipath returns are
not utilized appropriately within the tracker (e.g., discarded as clutter or incorporated under an incorrect
propagation-mode assumption), the potential information in the multipath returns is lost. In real scenarios, it is more
appropriate to assume that the locations of the reflection points/surfaces are not accurately known.
Integrating multipath information into the tracker by correctly identifying the propagation mode and the
reflection point can help improve tracking accuracy. The challenge in improving tracking results using
multipath measurements is the fusion of direct and multipath measurements from the common target when the
multipath-reflection mode is unknown. The problem becomes even more challenging with false alarms and missed
detections. We propose an algorithm to track targets with uncertainty in multipath reflection points/surfaces
using the multiframe assignment technique. Simulation results are presented to show the effectiveness of the
proposed algorithm on a ground target tracking problem.
This paper addresses multi-sensor surveillance where some sensors provide intermittent, feature-rich information.
Effective exploitation of this information in a multi-hypothesis tracking context requires computationally-intractable
processing with deep hypothesis trees. This paper introduces two approaches to address this problem and compares
these to single-stage, track-while-fuse processing. The first is a track-before-fuse approach that provides computational
efficiency at the cost of reduced track continuity; the second is a track-break-fuse approach that is computationally
efficient without sacrificing track continuity. Simulation and sea trial results are provided.
In this paper, we study the mathematics of Synthetic Aperture Radar (SAR) imaging in the cross-range as well as a mathematical
model for SAR imaging. An introduction to the cross-range image scenario is given to establish the relationship
between the received signal and the desired target function. More importantly, we present a detailed derivation of the
slow-time sample spacing, which corrects the formulation by Soumekh.1 Based on our slow-time sample spacing, we give a
cross-range imaging algorithm and a comparison of the results with those previously obtained. The method of stationary
phase, which is used to approximate the spectrum of the echoed signal, is discussed in an appendix. Finally, we
discuss a mathematical model for 3D SAR and ISAR imaging - Maxwell's equations for SAR image reconstruction.
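For reference, the first-order stationary-phase approximation invoked above takes the standard form (for a rapidly oscillating integrand with an isolated stationary point t_0):

```latex
\int f(t)\, e^{j\phi(t)}\, dt
  \;\approx\;
  f(t_0)\,\sqrt{\frac{2\pi}{\lvert \phi''(t_0) \rvert}}\;
  e^{\, j\phi(t_0) + j\frac{\pi}{4}\operatorname{sgn}\phi''(t_0)},
  \qquad \phi'(t_0) = 0 .
```

The contribution to the integral is concentrated where the phase is stationary, which is what makes the echoed-signal spectrum tractable in closed form.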
This paper is Part VIa of a comprehensive survey of maneuvering target tracking without addressing the so-called
measurement-origin uncertainty. It covers theoretical results of density-based exact nonlinear filtering for handling the uncertainties
induced by potential target maneuvers as well as nonlinearities in the dynamical systems commonly encountered
in target tracking. An emphasis is given to the results of significance for practical considerations, especially those of good
potential for tracking applications.
This paper is Part VIb of a comprehensive survey of maneuvering target tracking without addressing the so-called
measurement-origin uncertainty. It provides an in-depth coverage of various approximate density-based nonlinear filters
in mixed time developed particularly for handling the uncertainties induced by potential target maneuvers as well as nonlinearities
in the dynamical systems commonly encountered in target tracking. An emphasis is given to the more recent results,
especially those with good potential for tracking applications. Approximate nonlinear filtering techniques for point estimation
have been covered in a previous part. Approximate nonlinear filtering in discrete time and sampling-based nonlinear filters
will be surveyed in forthcoming parts.
This paper presents an interacting multiple model (IMM) based procedure to estimate the state of thrusting
ballistic projectiles in the atmosphere for the purpose of impact point prediction (IPP). The modes of the IMM
estimator correspond to the thrusting and ballistic phases, and extended Kalman filters (EKFs) with state vectors of
different dimensions are used as the mode-matched filters. The IPP is achieved by using the IMM-predicted most
probable mode at the mid-point of the trajectory.
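The mode-probability part of an IMM cycle can be sketched in a few lines; the two-mode transition matrix and likelihood values below are illustrative, not the paper's tuned parameters:

```python
def imm_mode_probs(mu, Pi, likelihoods):
    """One cycle of IMM mode-probability mixing and Bayesian update.
    mu[i]: prior mode probabilities; Pi[i][j]: P(switch i -> j);
    likelihoods[j]: measurement likelihood from mode j's matched filter."""
    n = len(mu)
    # mixing step: predicted mode probabilities under the Markov chain
    pred = [sum(mu[i] * Pi[i][j] for i in range(n)) for j in range(n)]
    # Bayes update with each mode-matched filter's measurement likelihood
    post = [pred[j] * likelihoods[j] for j in range(n)]
    s = sum(post)
    return [p / s for p in post]

# Two modes (thrusting, ballistic) with a sticky transition matrix
Pi = [[0.95, 0.05], [0.05, 0.95]]
mu = imm_mode_probs([0.5, 0.5], Pi, [0.9, 0.1])  # measurement favors mode 0
```

The updated probabilities then weight the mode-matched EKF outputs; selecting the most probable mode, as done for the IPP here, is the hard-decision variant of that step.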
This paper presents a navigation algorithm based on an aided strapdown inertial navigation system (INS) for an
autonomous underwater vehicle (AUV). The AUV is equipped with a long baseline (LBL) acoustic positioning
system, an acoustic Doppler current profiler (ADCP), and a depth sensor to aid the INS; these sensors have, however,
much slower data rates than the INS. A linearized, quaternion-based dynamic model and measurement model of
the INS output errors are presented. Data from the different sensors are fused by applying the extended Kalman filter
(EKF) to estimate and correct the errors. Due to the difficulty of generating realistic simulation scenarios, real
data (raw INS measurement) collected from AUV field experiments are processed to test the algorithm. Without
knowing the ground truth, however, performance evaluation becomes much more complicated and needs further
research. In this paper, the problem is circumvented by considering the post-processed real data as the "ground
truth" and noisy raw measurements are generated from this "ground truth" to feed the algorithm. The simulation
results demonstrate the algorithm's applicability and show that by incorporating readings from the ADCP and
the depth sensor, the (horizontal) position errors still increase, but at a significantly lower rate than in
stand-alone operation. If the LBL sensor is further included, the navigation errors can be constrained within a
certain bound.
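A toy scalar covariance recursion (not the paper's quaternion error model; all noise values are made up) illustrates why slow-rate position fixes bound the error growth while dead reckoning alone does not:

```python
def predict(P, q):
    """Dead-reckoning step: error variance grows by the process noise q."""
    return P + q

def update(P, r):
    """Kalman update with a direct position fix of variance r."""
    K = P / (P + r)          # Kalman gain
    return (1 - K) * P

q, r = 0.1, 0.5

P_ins = 1.0                  # INS-only: variance grows without bound
for _ in range(100):
    P_ins = predict(P_ins, q)

P_aided = 1.0                # a position fix every 10 steps bounds it
for k in range(100):
    P_aided = predict(P_aided, q)
    if (k + 1) % 10 == 0:
        P_aided = update(P_aided, r)
```

After 100 steps the unaided variance has grown tenfold, while the aided variance settles near a fixed point well below its initial value, mirroring the bounded-error behavior reported when LBL fixes are included.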
In this paper, we look at various algorithms for approximating the target-measurement association probabilities
of the Joint Probabilistic Data Association Filter (JPDAF). We consider their computational complexity and
compare their performance with respect to the Mean Optimal Subpattern Assignment (MOSPA) statistic in a
scenario involving closely-spaced targets.
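For intuition, the exact association probabilities can be computed by brute-force enumeration of joint association events; the approximations compared in the paper exist precisely because this enumeration grows combinatorially. A minimal sketch that ignores clutter and missed detections (the likelihood values are made up):

```python
from itertools import permutations

def jpda_betas(G):
    """Exact association probabilities beta[t][m] for n targets and n
    measurements by enumerating all joint events (no clutter, no missed
    detections); G[t][m] is the likelihood of measurement m under target t."""
    n = len(G)
    beta = [[0.0] * n for _ in range(n)]
    total = 0.0
    for perm in permutations(range(n)):   # perm[t]: measurement given to t
        w = 1.0
        for t in range(n):
            w *= G[t][perm[t]]
        total += w
        for t in range(n):
            beta[t][perm[t]] += w
    return [[b / total for b in row] for row in beta]

# Two closely spaced targets with ambiguous measurements
beta = jpda_betas([[0.9, 0.4], [0.3, 0.8]])
```

Each row of `beta` sums to one, and the enumeration over permutations is what the approximate algorithms replace with cheaper computations.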