The present paper concerns the statistical analysis of limits to achievable resolution in a so-called "diffraction-limited" imaging system. The canonical case study is incoherent imaging of two closely spaced sources of possibly unequal intensities. The objective is to study how far beyond the classical Rayleigh limit of resolution one can reach at a given signal-to-noise ratio (SNR). We define the resolution limit statistically, as the ability of the imaging system to distinguish two closely located sources in the presence of additive noise. This problem can be stated in a hypothesis testing framework where the hypotheses consider whether one or two point sources are present. In terms of signal detection/estimation, this leads to a composite detection/estimation problem in which a deterministic signal with unknown parameters is sought. To solve this problem, we use locally optimal statistical tests with respect to a desired range of (small) separations between the point sources. Specifically, we derive explicit relationships between the minimum detectable distance between two point sources and the required SNR. For a specific point spread function, the required SNR can be expressed as a function of the probabilities of detection and false alarm and the distance between the point sources.
A particular method of detecting unresolved targets using simulated generic monopulse radar data is examined in detail. The system is assumed to be incorrectly calibrated, i.e., the decision boundary is calculated from erroneous values governing the hypothesis that only a single target is present in the range cell. System performance is analyzed for varying target ranges, angles between the beam-pointing direction and the actual off-boresight angles of the targets, waveform power, and number of pulses. Design strategies are advanced to maintain good detection probabilities under miscalibrated decision boundaries.
In this paper we give a general system setup that allows the formulation of a wide range of Track-Before-Detect (TBD) problems, together with a general basic particle filter algorithm for this system. TBD is a technique in which tracks are produced directly from raw (radar) measurements, e.g., power or IQ data, without intermediate processing and decision making. The advantage over classical tracking is that the full information is integrated over time, which leads to better detection and tracking performance, especially for weak targets. We look at both the filtering and the detection aspects of TBD, and formulate a detection result that allows the user to implement any optimal detector in terms of the weights of a running particle filter; we give a theoretical as well as a numerical (experimental) justification for this. Furthermore, we show that the TBD setup chosen in this paper allows a straightforward extension to the multi-target case. This easy extension is also due to the fact that the solution is implemented by means of a particle filter.
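A minimal sketch of the TBD idea, under illustrative assumptions not taken from the paper: a 1-D cell grid, exponentially distributed power measurements, and a nearly constant-velocity target. The unnormalized particle weights are per-scan likelihood ratios, so an integrated detection statistic falls out of the running filter itself.

```python
import numpy as np

def tbd_filter(frames, n_part=400, q=0.2, snr_amp=2.0, seed=0):
    """Track-before-detect sketch on 1-D power frames (scans x cells).
    Unnormalised weights are per-scan likelihood ratios under an
    exponential power model (noise mean 1, target mean 1 + snr_amp);
    the running sum of their log-means is the detection statistic."""
    rng = np.random.default_rng(seed)
    n_cells = frames.shape[1]
    pos = rng.uniform(0, n_cells, n_part)      # particle positions
    vel = rng.normal(0.0, 0.3, n_part)         # particle velocities
    log_lr = 0.0                               # integrated log likelihood ratio
    for z in frames:
        pos = np.clip(pos + vel + rng.normal(0, q, n_part), 0, n_cells - 1)
        cell = pos.astype(int)
        # likelihood ratio: exp(z*s/(1+s))/(1+s) for exponential powers
        lr = np.exp(z[cell] * snr_amp / (1 + snr_amp)) / (1 + snr_amp)
        log_lr += np.log(lr.mean())            # detector built from raw weights
        idx = rng.choice(n_part, n_part, p=lr / lr.sum())   # resampling
        pos, vel = pos[idx], vel[idx]
    return log_lr
```

Thresholding `log_lr` gives a detector; the particle cloud itself carries the track estimate, so detection and tracking come out of the same recursion.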
A method to detect and track point targets in infrared imagery is briefly described. The final output of the algorithm is a single image containing scores related to the probability of a target located at any particular pixel. This image will be analyzed to determine the presence and location of suspected targets, based on CFAR requirements.
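The CFAR thresholding step might look like the following 1-D cell-averaging sketch. The paper's score statistics are unspecified; the threshold multiplier below assumes exponentially distributed scores, and a 2-D score image would be scanned with a 2-D training window instead.

```python
import numpy as np

def ca_cfar_1d(scores, guard=2, train=8, pfa=1e-3):
    """Cell-averaging CFAR on a 1-D slice of the score image (sketch).
    For each cell under test (CUT), the noise level is estimated from
    `train` cells on each side, skipping `guard` cells; the classic
    alpha keeps the false-alarm rate at `pfa` for exponential scores."""
    scores = np.asarray(scores, float)
    n = 2 * train                                   # training cells per CUT
    alpha = n * (pfa ** (-1.0 / n) - 1.0)           # CA-CFAR scaling factor
    hits = []
    for i in range(guard + train, len(scores) - guard - train):
        lead = scores[i - guard - train : i - guard]
        lag = scores[i + guard + 1 : i + guard + 1 + train]
        noise = (lead.sum() + lag.sum()) / n        # local noise estimate
        if scores[i] > alpha * noise:
            hits.append(i)
    return hits
```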
The Tanner Research Wave Process is a moving point target detection algorithm that uses the spatio-temporal correlation of points from a target trajectory to build a large aggregate response, thereby increasing the probability of detection for dim and low-contrast point targets moving amidst dense background and noise. The Wave Process is naturally represented as a 2-D array of linear passive analog components, with each node directly stimulated by its focal plane detector. The Wave Process can be implemented in compact, low-power hardware: analog VLSI for near-focal-plane integration, and dedicated digital for near-term applications, both with a fine-grain parallel architecture that can accommodate fast-frame-rate sensors. The Wave Process generates a real-time Region of Interest to window focal planes, reducing the data rate and sensor processing throughput requirements, thereby also reducing the overall sensor processor power, weight, and size requirements.
In telecommunication systems, receiver diversity combining is used for reliable data transmission. Among the many diversity techniques, the selection combining scheme and the equal gain combining scheme are widely used because of their simplicity and high performance. In this paper, we view and analyze the imaging sensor fusion problem (for point target detection) in terms of diversity combining. We show that the intensity variance of the matched-filtered image is similar to its mean, so the intensity of each pixel after matched filtering fluctuates severely about its mean and the target can be missed with high probability. Our suggestion is to use another statistically identical imaging sensor (in the same spectral band) and to combine the two images using the proposed diversity combining schemes. Under the assumptions that the additive noise is spatially white and the target shape is deterministic, the detection error probabilities are derived analytically under no diversity combining, selection combining, and equal gain combining. Furthermore, the accuracy of the derived error probabilities is verified through extensive Monte Carlo simulations. Our results clearly reveal the superiority of the diversity combining schemes and identify particular situations where the performance gain is especially significant for each of the combining techniques.
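A minimal sketch of the two combining rules applied to matched-filtered images; the peak-picking detection rule and the threshold here are illustrative, not the paper's derivation.

```python
import numpy as np

def equal_gain(img1, img2):
    """Equal gain combining: average the two matched-filtered images,
    halving the noise variance at the (common) target location."""
    return 0.5 * (img1 + img2)

def selection(img1, img2):
    """Selection combining: keep, per pixel, the stronger response."""
    return np.maximum(img1, img2)

def detect(img, threshold):
    """Declare the target at the peak pixel if it clears the threshold."""
    peak = np.unravel_index(np.argmax(img), img.shape)
    return peak if img[peak] > threshold else None
```

With two statistically identical sensors, both rules suppress the single-image fluctuation that causes the missed detections described above.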
In this paper we present an algorithm for identifying the parameters of a proportional navigation guidance missile (pursuer) pursuing an airborne target (evader) using angle-only measurements from the latter. This is done for the purpose of classifying the missile so that appropriate countermeasures can be taken. In the literature, there have been numerous studies on how a pursuer tracks an evader and what the optimal guidance law should be. However, not much has been done on identifying/classifying the pursuer from the evader's point of view using angle-only measurements, which provides the motivation for our current work. Mathematical models are constructed for a pursuer with a changing velocity, i.e., a direction change and a speed change. Assuming the pursuer is launched from the ground with an acceleration, its motion can be described by a four-dimensional parameter vector consisting of its proportional navigation constant and three parameters related to thrusting (initial net specific thrust, the relative mass ejection rate and its maximum speed). Consequently, the problem can be solved as a parameter estimation problem rather than a state estimation problem. In this paper, we provide an estimator based on Maximum Likelihood (ML) to solve this identification problem. The parameter estimates obtained can be mapped into the time-to-go until intercept, so the time-to-go estimate can also be obtained from the above estimator. Estimation results are presented for different scenarios together with the Cramer-Rao Lower Bound (CRLB), which quantifies the best achievable estimation accuracy. The accuracy of the time-to-go estimate is also obtained. Simulation results demonstrate that the proposed estimator is efficient by meeting the CRLB.
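A toy version of the parameter-estimation idea, under strong simplifying assumptions not taken from the paper: a 2-D constant-speed pursuer with a known launch state, and only the proportional navigation constant estimated, by grid-search ML over noisy bearings (Gaussian bearing noise makes ML a least-squares fit).

```python
import numpy as np

def simulate_bearings(nav_const, steps=40, dt=0.1, noise=0.0, seed=0):
    """Toy 2-D proportional-navigation pursuit: the evader flies straight,
    the pursuer turns at rate N * (LOS rate).  Returns the bearings the
    evader measures to the pursuer (angle-only data)."""
    rng = np.random.default_rng(seed)
    p = np.array([0.0, 0.0]); pv = 300.0; heading = 0.3        # pursuer
    e = np.array([2000.0, 500.0]); ev = np.array([-100.0, 0.0])  # evader
    los_prev = np.arctan2(*(e - p)[::-1])
    bearings = []
    for _ in range(steps):
        p = p + pv * dt * np.array([np.cos(heading), np.sin(heading)])
        e = e + ev * dt
        los = np.arctan2(*(e - p)[::-1])
        heading += nav_const * (los - los_prev)    # PN guidance law
        los_prev = los
        bearings.append(np.arctan2(*(p - e)[::-1]) + rng.normal(0, noise))
    return np.array(bearings)

def ml_nav_const(bearings, grid=np.linspace(2.0, 6.0, 41)):
    """Grid-search ML: with Gaussian bearing noise, ML reduces to least
    squares between measured and model-predicted bearing sequences."""
    costs = [np.sum((bearings - simulate_bearings(n, noise=0.0)) ** 2)
             for n in grid]
    return grid[int(np.argmin(costs))]
```

The paper's four-dimensional parameter vector (including the thrust parameters) would replace the scalar grid with a multidimensional search or a gradient-based ML solver.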
A surveillance system needs to accurately locate and identify not only single targets, but also groups of targets engaged in a common activity. Most existing tracking systems are capable of tracking individual targets quite accurately; however, they fail to use information related to group behavior to improve these estimates. Furthermore, in wide area surveillance situations a military operator is required to sort through hundreds to thousands of individual targets in order to develop an understanding of the situation. Having the ability to collapse the behavior of individual targets into a common, coordinated motion can greatly enhance the productivity and situational awareness of the operator. Our long-term approach to solving this problem is to develop an understanding of how to define a group and then to understand the inter-relationships between the various characteristics that describe a group. Using this information, we will then be able to partition the set of targets into groups that can be aggregated over the entire military force hierarchy. The goal of this paper is to describe an approach based upon genetic algorithms for solving the military force hierarchy problem. The paper describes the underlying genetic algorithm, the scoring function, and some initial results.
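A minimal genetic-algorithm sketch for partitioning targets into groups; the chromosome encoding (one group label per target), the centroid-spread scoring function, and all GA settings below are illustrative assumptions, not the paper's actual design.

```python
import numpy as np

def group_score(assign, pos, k):
    """Fitness: negative sum of squared distances to each group centroid
    (a stand-in for the paper's unspecified scoring function)."""
    s = 0.0
    for g in range(k):
        pts = pos[assign == g]
        if len(pts):
            s -= ((pts - pts.mean(axis=0)) ** 2).sum()
    return s

def ga_partition(pos, k=2, pop=40, gens=60, seed=0):
    """Minimal GA: tournament selection, one-point crossover,
    point mutation, and elitism on label-vector chromosomes."""
    rng = np.random.default_rng(seed)
    n = len(pos)
    popn = rng.integers(0, k, (pop, n))
    for _ in range(gens):
        fit = np.array([group_score(c, pos, k) for c in popn])
        new = [popn[int(np.argmax(fit))].copy()]           # elitism
        while len(new) < pop:
            a, b = rng.integers(0, pop, 2), rng.integers(0, pop, 2)
            p1 = popn[a[np.argmax(fit[a])]]                # tournament pick
            p2 = popn[b[np.argmax(fit[b])]]
            cut = rng.integers(1, n)
            child = np.concatenate([p1[:cut], p2[cut:]])   # crossover
            m = rng.random(n) < 0.05                       # mutation mask
            child[m] = rng.integers(0, k, m.sum())
            new.append(child)
        popn = np.array(new)
    fit = np.array([group_score(c, pos, k) for c in popn])
    return popn[int(np.argmax(fit))]
```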
In ballistic missile defense, knowledge of the threat type during the boost phase is important for accurate prediction, especially in the presence of threat-specific variations (e.g., staging). A combined boost-phase tracking/threat-typing algorithm based on multiple tracking filters is presented. Each filter employs a threat-specific thrust-acceleration profile for state compensation, and model probabilities computed using Bayes' rule provide the threat typing.
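The Bayes-rule model-probability update might look like the following sketch, assuming Gaussian filter residuals (the threat-specific thrust profiles themselves are not modeled here).

```python
import numpy as np

def update_model_probs(probs, residuals, variances):
    """One Bayes-rule update of threat-type probabilities from each
    filter's measurement residual, using a scalar Gaussian likelihood."""
    lik = np.exp(-0.5 * residuals ** 2 / variances) / np.sqrt(2 * np.pi * variances)
    post = probs * lik            # prior times likelihood, per model
    return post / post.sum()      # normalize across the model bank
```

The filter whose thrust profile matches the threat produces small residuals, so its probability grows scan by scan.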
Tracking moving objects in video can be carried out by correlating a template containing object pixels with the current frame. This approach may produce erroneous results under noise. We determine a set of significant pixels on the object by analyzing the wavelet transform of the template and correlate only these pixels with the current frame to determine the position of the object. These significant pixels are easily trackable features of the image and increase the performance of the tracker.
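A sketch of the idea, using one level of Haar-like detail coefficients as a stand-in for the paper's wavelet analysis, and a brute-force masked correlation search over the frame.

```python
import numpy as np

def significant_pixels(template, frac=0.1):
    """Pick the most 'trackable' pixels by magnitude of first-difference
    (Haar-like) detail coefficients in both directions."""
    dy = np.abs(np.diff(template, axis=0, prepend=template[:1]))
    dx = np.abs(np.diff(template, axis=1, prepend=template[:, :1]))
    detail = dx + dy
    k = max(1, int(frac * template.size))
    thresh = np.sort(detail.ravel())[-k]      # keep the top fraction
    return detail >= thresh

def correlate_masked(frame, template, mask):
    """Slide the template over the frame; the score at each offset is the
    correlation computed over the significant pixels only."""
    th, tw = template.shape
    best, score = None, -np.inf
    for i in range(frame.shape[0] - th + 1):
        for j in range(frame.shape[1] - tw + 1):
            s = np.sum(frame[i:i + th, j:j + tw][mask] * template[mask])
            if s > score:
                best, score = (i, j), s
    return best
```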
In many tracking applications, and particularly those in ballistic missile defense, one concern is the continuous tracking of an object that separates into two objects. Reliable tracking without track breaks demands early recognition of such a split, preferably well before the two objects become resolvable by the radar. In previous work, signal processing techniques for detecting the presence of unresolved objects and for angle-of-arrival (AOA) estimation of unresolved targets have been developed for monopulse radars. In this paper, these techniques are reviewed and extended. Techniques for detecting the presence of unresolved objects are first treated for the case of idealized resolution, in which all of the energy for a target is returned in a single range resolution sample or cell; these approaches work solely on monopulse angle statistics. The requirement for idealized range resolution is then relaxed by using joint statistics with adjacent matched filter returns, and AOA estimation and detection of unresolved objects under non-ideal resolution are addressed. Performance is demonstrated using a high-fidelity software simulation tool for target tracking.
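One of the simplest detectors built on monopulse angle statistics can be sketched as a variance test on the per-pulse monopulse ratios; this is an illustrative reduction of the techniques reviewed above, not a specific algorithm from the paper.

```python
import numpy as np

def unresolved_test(ratios, noise_var, threshold=2.0):
    """Flag a possible unresolved pair from monopulse angle statistics:
    two unresolved targets make the per-pulse monopulse ratio fluctuate
    (their returns add with varying relative phase), so a sample variance
    well above the receiver-noise prediction suggests a split."""
    return np.var(ratios, ddof=1) / noise_var > threshold
```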
In most conventional tracking systems, only target kinematic information is used in measurement-to-track association. Target class information, which is typically used in postprocessing, can also be used to improve data association and thus tracking accuracy. In addition, the use of target class information in data association can improve discrimination by yielding purer tracks and preserving their continuity. In this paper, we present the integrated use of target classification information and target kinematic information for target tracking. In our approach, target class information is integrated into the data association process using the two-dimensional (one track list and one measurement list) as well as the multiframe (one track list and multiple measurement lists) assignment; the latter is an optimization-based MHT. A generic model of the classifier output is considered and its use in association likelihoods is discussed. The multiframe association likelihood is developed to include the classification results based on the confusion matrix that specifies the accuracy of the target classifier. The objective is to improve association results using class information when the kinematic likelihoods are similar for different targets, i.e., when there is ambiguity in using kinematic information alone. Performance with and without the use of class information in data association is compared on a ground target tracking problem where targets move in an open field and their tracks can merge, branch, and cross. Simulation results quantify the benefits of classification-aided data association for improved target tracking, especially in the presence of association uncertainty in kinematic measurements. The benefit of S-D (multiframe) association versus 2-D association is also investigated for classifiers of different quality. The main contribution is the development of a methodology to incorporate the classification information exactly into multidimensional (multiframe) association.
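The way class information can enter a 2-D assignment can be sketched as follows, with the confusion-matrix likelihood added to the kinematic cost; this brute-force assignment over permutations is illustrative, and the paper's likelihood model is more general.

```python
import numpy as np
from itertools import permutations

def class_aided_assign(kin_cost, track_class, meas_class, confusion,
                       use_class=True):
    """2-D assignment sketch: total cost per track-measurement pairing is
    the kinematic cost minus the log-likelihood of the classifier
    declaring `meas_class` given the track's class (confusion matrix)."""
    n = kin_cost.shape[0]
    def cost(perm):
        c = 0.0
        for t, m in enumerate(perm):
            c += kin_cost[t, m]
            if use_class:
                c -= np.log(confusion[track_class[t], meas_class[m]])
        return c
    return min(permutations(range(n)), key=cost)    # brute-force optimum
```

When kinematic costs are nearly equal (the ambiguous case described above), the class term breaks the tie.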
Tracking ground targets with airborne GMTI sensor measurements proves to be a challenging task due to high target density, high clutter, and low visibility. The exploitation of non-standard background information such as road maps and terrain information is therefore highly desirable for enhancing track quality and track continuity. This paper presents a Bayesian approach for incorporating such information consistently; it is particularly suited to winding roads and networks of roads. The key issues are modeling the target dynamics in quasi one-dimensional road coordinates and mapping onto ground coordinates using linear road segments. The case of several intersecting roads with different characteristics, such as mean curvature, slope, or visibility, is treated within an Interacting Multiple Model scheme. The iterative filter equations are formulated within a framework of Gaussian sum approximations on the one hand and a numerically exact Particle Filter approach on the other. Simulation results for single targets taken from a realistic ground scenario show strongly reduced target location errors compared with the case of neglecting road-map information. Using a realistic GMTI sensor model, early detection of stopping targets is demonstrated.
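The mapping from a quasi one-dimensional road coordinate onto ground coordinates via linear road segments can be sketched as:

```python
import numpy as np

def road_to_ground(s, waypoints):
    """Map a 1-D road (arc-length) coordinate s onto 2-D ground
    coordinates along a polyline of linear road segments."""
    waypoints = np.asarray(waypoints, float)
    seg = np.diff(waypoints, axis=0)                 # segment vectors
    lengths = np.linalg.norm(seg, axis=1)
    cum = np.concatenate([[0.0], np.cumsum(lengths)])  # arc length at nodes
    s = np.clip(s, 0.0, cum[-1])                     # stay on the road
    i = min(int(np.searchsorted(cum, s, side="right") - 1), len(seg) - 1)
    frac = (s - cum[i]) / lengths[i]                 # position within segment
    return waypoints[i] + frac * seg[i]
```

Target dynamics run in the scalar coordinate s (with road-dependent process noise), and this mapping carries predictions and particles back onto the ground plane.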
Hybrid motion models are useful for deriving algorithms for tracking maneuvering targets. Performance analysis of such trackers usually focuses on the accuracy of estimates of the kinematic states; e.g., location and velocity. The performance metric discussed here emphasizes motion mode estimation. It is shown that a well-studied tracker can be used for modal interval estimation and smoothing. An example displays the striking advantage of the approach.
This paper provides a brief history of some operational particle filters used by the U.S. Coast Guard and U.S. Navy. Starting in 1974, the Coast Guard system provided Search and Rescue planning advice for objects lost at sea. The Navy systems were used to plan searches for Soviet submarines in the Atlantic, Pacific, and Mediterranean starting in 1972.
The systems operated in a sequential, Bayesian manner. A prior distribution for the target’s location and movement was produced using both objective and subjective information. Based on this distribution, the search assets available, and their detection characteristics, a near-optimal search was planned. Typically, this involved visual searches by Coast Guard aircraft and sonobuoy searches by Navy antisubmarine warfare patrol aircraft. The searches were executed, and the feedback, both detections and lack of detections, was fed into a particle filter to produce the posterior distribution of the target’s location. This distribution was used as the prior for the next iteration of planning and search.
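The Bayesian update after an unsuccessful search can be sketched with a particle filter as follows; the detection probability `pd` and the search-region predicate are illustrative assumptions.

```python
import numpy as np

def search_update(particles, weights, searched, pd=0.8):
    """Posterior update after a search with no detection: particles inside
    the searched region keep only the miss probability (1 - pd); particles
    outside are unchanged.  (A detection would instead concentrate the
    weight in the searched cell.)"""
    w = weights * np.where(searched(particles), 1.0 - pd, 1.0)
    return w / w.sum()
```

Iterating this update with each executed search shifts probability mass away from well-searched areas, which is exactly the prior the next search plan is optimized against.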
Out-of-sequence measurements (OOSMs) frequently arise in a multi-platform central tracking system due to delays in communication networks and varying pre-processing times at the sensor platforms. During the last few years, multiple-lag OOSM filtering algorithms have received a great deal of attention. However, a comparative analysis of these algorithms for multiple OOSMs is lacking. This paper analyzes a number of multiple-lag OOSM filtering algorithms in terms of optimality, accuracy, statistical consistency, and computational speed. These factors are important for realistic multi-target multi-sensor tracking systems. We examine the performance of these algorithms using a number of examples with Monte Carlo simulations. We present numerical results using simulated data, which includes two-dimensional position and velocity measurements.
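As a baseline for what the one-pass OOSM algorithms approximate, the naive optimal approach buffers measurements and reprocesses them in time order; a scalar constant-position sketch:

```python
import numpy as np

def kf_run(meas, q=0.1, r=1.0):
    """Scalar constant-position Kalman filter over time-ordered (t, z) pairs."""
    x, p, t_prev = 0.0, 100.0, None
    for t, z in meas:
        if t_prev is not None:
            p += q * (t - t_prev)          # process noise over elapsed time
        k = p / (p + r)                    # Kalman gain
        x, p = x + k * (z - x), (1 - k) * p
        t_prev = t
    return x

def oosm_reprocess(buffered, oosm):
    """Naive optimal OOSM handling: insert the late measurement in time
    order and rerun the filter from the buffer -- the accuracy yardstick
    the one-pass multi-lag OOSM algorithms are compared against."""
    return kf_run(sorted(buffered + [oosm]))
```

The one-pass algorithms surveyed in the paper trade some of this optimality for bounded memory and computation.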
Batch maximum likelihood (ML) and maximum a posteriori (MAP) estimation with process noise is now more than thirty-five years old, and its use in multiple target tracking has long been considered to be too computationally intensive for real-time applications. While this may still be true for general usage, it is ideally suited for special needs such as bias estimation, track initiation and spawning, long-term prediction of track states, and state estimation during periods of rapidly changing target dynamics. In this paper, we examine the batch estimator formulation for several cases: nonlinear and linear models, with and without a prior state estimate (MAP vs. ML), and with and without process noise. For the nonlinear case, we show that a single pass of an extended Kalman smoother-filter over the data corresponds to a Gauss-Newton step of the corresponding nonlinear least-squares problem. Even the iterated extended Kalman filter can be viewed within this framework. For the linear case, we develop a compact least squares solution that can incorporate process noise and the prior state when available. With these new views on the batch approach, one may reconsider its usage in tracking because it provides a robust framework for the solution of the aforementioned problems. Finally, we provide some examples comparing linear batch initiation with and without process noise to show the value of the new approach.
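For the linear case without process noise, batch track initiation reduces to weighted least squares over stacked measurement rows, with an optional prior entering as extra rows (MAP vs. ML); a sketch for a constant-velocity target observed in position. The process-noise formulation is more involved and is not reproduced here.

```python
import numpy as np

def batch_init(times, zs, r=1.0, prior=None):
    """Linear batch ML/MAP track initiation: stack H_k = [1, t_k] per
    position measurement and solve the weighted normal equations for
    [position at t=0, velocity].  If `prior` = (x0, P0) is given, it is
    appended as two extra pseudo-measurement rows (MAP estimate)."""
    times = np.asarray(times, float)
    zs = np.asarray(zs, float)
    H = np.column_stack([np.ones(len(times)), times])
    W = np.full(len(times), 1.0 / r)                 # measurement weights
    if prior is not None:
        x0, p0 = prior
        H = np.vstack([H, np.eye(2)])
        zs = np.concatenate([zs, x0])
        W = np.concatenate([W, 1.0 / np.diag(p0)])   # prior weights
    A = H.T @ (W[:, None] * H)
    b = H.T @ (W * zs)
    return np.linalg.solve(A, b)
```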
This is the fifth part of a series of papers that provide a comprehensive survey of techniques for tracking maneuvering targets without addressing the so-called measurement-origin uncertainty. Part I and Part II deal with target motion models. Part III covers measurement models and associated techniques. Part IV is concerned with tracking techniques that are based on decisions regarding target maneuvers. This part surveys the multiple-model methods, i.e., the use of multiple models (and filters) simultaneously, which has been the prevailing approach to maneuvering target tracking in recent years. The survey is presented in a structured way, centered around three generations of algorithms: autonomous, cooperating, and variable structure. It emphasizes the underpinnings of each algorithm and covers various issues in algorithm design, application, and performance.
In a Bayesian framework, all single target tracking problems reduce to recursive computation of the posterior density of the target state. Particle filters approximate the optimal Bayesian recursion by propagating a set of random samples with associated weights. In the last decade, there have been numerous contributions to the theory and applications of particle filters. Much study has focussed on design issues such as appropriate selection of the importance density, the use of resampling techniques which mitigate sample degeneracy and the choice of a suitable random variable space upon which to implement the particle filter in order to minimise numerical complexity. Although the effect of these design choices is, in general, well known, their relevance to target tracking problems has not been fully established. These design issues are considered for single target tracking applications involving target manoeuvres and clutter. Two choices of importance density are studied and methods for enhancing particle diversity through the avoidance of particle duplication in the resampling step are considered for each importance density. The possibility of reducing the dimension of the space over which the particle filter is implemented is considered. Based on simulation results, a few key observations are drawn about which aspects of particle filter design most influence their performance in target tracking applications. The numerical simulations also provide insights into the relationship between the state dimension and the number of particles needed to improve upon the performance of the standard tracking filters.
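One of the design choices discussed, the resampling scheme, can be illustrated with systematic resampling, a common low-variance alternative to multinomial resampling that also produces fewer duplicated particles:

```python
import numpy as np

def systematic_resample(weights, rng):
    """Systematic resampling: a single uniform draw positions a stratified
    comb of n points over [0, 1); each comb point selects the particle
    whose cumulative weight interval it falls in."""
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n    # one draw, n strata
    cum = np.cumsum(weights)
    cum[-1] = 1.0                                    # guard against round-off
    return np.searchsorted(cum, positions)
```

Each particle with weight w is copied either floor(n*w) or ceil(n*w) times, which bounds the resampling variance and mitigates the degeneracy/diversity trade-off discussed above.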
Highly maneuvering threats are a major concern for the Navy and the DoD, and the technology discussed in this paper is intended to help address this issue. A neural extended Kalman filter algorithm has been embedded in an interacting multiple model architecture for target tracking. The neural extended Kalman filter is used to improve motion model prediction during maneuvers; with a better target motion model, noise reduction can be achieved through a maneuver. Unlike the standard interacting multiple model architecture, which uses a high process noise model to hold a target through a maneuver at the cost of poor velocity and acceleration estimates, the neural extended Kalman filter predicts corrections to the velocity and acceleration states of a target through a maneuver. It estimates the weights of a neural network, which in turn are used to modify the state estimate predictions of the filter as measurements are processed; the neural network training is thus performed on-line. In this paper, simulation results of a tracking problem using a neural extended Kalman filter embedded in an interacting multiple model tracking architecture are shown. Preliminary results on the 2nd Benchmark Problem are also given.
The target tracking literature has traditionally been most interested in the "hit" model for the observations process, and the community has developed many techniques for data association. Matched field processing (MFP) research, on the other hand, has focused on signal processing, with the main emphasis on target detection and localization. Treatments of combined tracking/MFP systems are not common, and most concentrate on signal processing, with the idea that a "track" is really a sequence, or track segment, of detections that make sense from dynamics considerations. Here we explore the MFP tracking problem, the key being that we attempt to use traditional target-tracking algorithms. In particular, we use an IMMPDAF-AI (interacting multiple-model probabilistic data association filter with amplitude information). It is shown that the use of such an advanced tracking algorithm, plus a number of MFP-specific refinements, produces tracking performance far superior to that of a more traditional tracker (a strongest-neighbor Kalman filter), with the added advantage of a significantly reduced numerical load as measured by the number of MFP replicas to be computed.
Initializing and maintaining a track for a low observable (low SNR, low target detection probability and high false alarm rate) target can be very challenging because of the low information content of measurements. In addition, in some scenarios, target-originated measurements might not be present in many consecutive scans because of mispointing, target maneuvers or erroneous preprocessing. That is, one might have a set of noninformative scans that could result in poor track initialization and maintenance. In this paper an algorithm
based on the Expectation-Maximization (EM) algorithm combined with Maximum Likelihood (ML) estimation is presented for tracking slowly maneuvering targets in heavy clutter and possibly non-informative scans. The adaptive sliding-window EM-ML approach, which operates in batch mode, tries to reject or weight down non-informative scans using the Q-function in the M-step of the EM algorithm. A track validation technique is used to confirm the validity of the EM-ML estimates. It is shown that target features in the form of, for
example, amplitude information, can also be used to improve the estimates. In addition, performance bounds based on the supplemented EM (SEM) technique are presented. The effectiveness of the new algorithm is first demonstrated on a 78-frame Long Wave Infrared (LWIR) data sequence consisting of an F1 Mirage fighter jet in
heavy clutter. Previously, this scenario has been used as a benchmark for evaluating the performance of other track initialization algorithms. The new EM-ML estimator confirms the track by frame 20 while the ML-PDA (Maximum Likelihood estimator combined with Probabilistic Data Association) algorithm, the IMM-MHT (Interacting
Multiple Model estimator combined with Multiple Hypothesis Tracking) and the IMM-PDA estimator previously required 28, 38 and 39 frames, respectively. The benefits of the new algorithm in terms of accuracy,
early detection and computational load are illustrated using simulated scenarios as well.
This paper addresses the problem of tracking multiple moving targets by estimating their joint multitarget probability density (JMPD). The JMPD technique is a Bayesian method for tracking multiple targets that allows nonlinear, non-Gaussian target motions and measurement to state coupling. JMPD simultaneously estimates both the target states and the number of targets. In this paper, we give a new grid-free implementation of JMPD based on particle filtering techniques and explore several particle proposal strategies, resampling techniques, and particle diversification methods. We report the effect of these techniques on tracker performance in terms of tracks lost, mean squared error, and computational burden.
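The resampling techniques the abstract mentions exploring can be illustrated with systematic resampling, a standard variance-reduction choice in particle-filter implementations. The sketch below is a minimal illustration, not the paper's implementation; the weight vector is an assumed example.

```python
import numpy as np

def systematic_resample(weights, rng=None):
    """Systematic resampling: map N normalized weights to N particle
    indices using a single uniform offset, which reduces resampling
    variance compared with independent multinomial draws."""
    rng = rng or np.random.default_rng(0)
    n = len(weights)
    positions = (rng.random() + np.arange(n)) / n   # evenly spaced strata
    cumulative = np.cumsum(weights)
    cumulative[-1] = 1.0                            # guard against round-off
    return np.searchsorted(cumulative, positions)

# One dominant particle is duplicated; low-weight particles are culled.
w = np.array([0.05, 0.05, 0.8, 0.05, 0.05])
idx = systematic_resample(w)
```

After resampling, all particles carry equal weight 1/N, and the dominant particle typically occupies most of the index slots.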
Tracking midcourse objects in multiple IR-sensor environments is a significant and difficult scientific problem that must be solved to provide a consistent set of tracks to discrimination. For IR
sensors, the resolution is limited due to the geometry and distance
from the sensors to the targets. Viewed on the focal plane for a
single IR sensor, the targets appear to transition from an unresolved phase (merged measurements) involving pixel clusters into a mostly resolved phase, often through a long, partially unresolved phase. What is more, targets can appear in different resolution phases at the same time for different sensors. These resolution problems make multi-sensor tracking particularly difficult. Considering a centralized multi-sensor tracking architecture, we discuss robust methods for identifying merged measurements at the fusion node and develop a method for pixel-cluster decomposition that allows the tracking system to re-process focal-plane image data for improved tracking performance. The resulting system can avoid inconsistent measurement data at the fusion node. We then present a more general multiple-hypothesis pixel-cluster decomposition approach based on finding the k best assignments and solving a number of n-dimensional assignment problems over n frames, to find the decomposition, among several pixel-cluster decomposition hypotheses, that best represents a frame of data based on the information from n frames of data.
Closely spaced targets can result in merged measurements, which complicate data association. Such merged measurements violate any assumption that each measurement relates to a single target. As a result, it is not possible to use the auction algorithm in its simplest form (or other two-dimensional assignment algorithms) to solve the two-dimensional target-to-measurement assignment problem. We propose an approach that uses the auction algorithm together with Lagrangian relaxation to incorporate the additional constraints resulting from the presence of merged measurements. We conclude with some simulated results displaying the concepts introduced, and discuss the application of this research within a particle filter context.
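The core solver here is the auction algorithm itself. The sketch below shows a plain forward auction for a square track-to-measurement reward matrix, without the merged-measurement constraints that the paper's Lagrangian relaxation adds on top; the function name and reward values are illustrative assumptions.

```python
import numpy as np

def auction_assign(reward, eps=1e-3):
    """Forward auction for the 2-D assignment problem: each unassigned
    track bids for its best-value measurement, raising its price by the
    margin over the second-best value plus eps. Returns
    assign[track] = measurement index (square reward matrix assumed)."""
    n = reward.shape[0]
    prices = np.zeros(n)
    owner = -np.ones(n, dtype=int)    # measurement -> owning track
    assign = -np.ones(n, dtype=int)   # track -> measurement
    unassigned = list(range(n))
    while unassigned:
        t = unassigned.pop()
        values = reward[t] - prices
        j = int(np.argmax(values))
        second = np.partition(values, -2)[-2] if n > 1 else values[j] - eps
        prices[j] += values[j] - second + eps   # bid increment
        if owner[j] >= 0:                       # evict previous owner
            assign[owner[j]] = -1
            unassigned.append(owner[j])
        owner[j], assign[t] = t, j
    return assign

# Example: a strong diagonal preference yields the identity assignment.
assign = auction_assign(np.array([[10.0, 1.0], [1.0, 10.0]]))
```

With eps-scaling, the auction terminates with an assignment within n*eps of optimal; the merged-measurement case relaxes the one-track-per-measurement constraint that `owner` enforces here.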
In most solutions to state estimation problems such as target tracking, it is generally assumed that the state evolution and measurement models are known a priori. The model parameters include process and measurement matrices or functions as well as the corresponding noise statistics. However, there are situations where the model parameters are not known a priori or are known only partially (i.e., with some uncertainty). Moreover, there are situations where the measurement is biased. In these scenarios, standard estimation algorithms such as the Kalman filter and the extended Kalman filter (EKF), which assume perfect knowledge of the model parameters, are not accurate. Here, the problem of uncertain model parameters is treated as a special case of maximum likelihood estimation with incomplete data, for which a standard solution, the expectation-maximization (EM) algorithm, exists. A new extension to the EM algorithm is proposed to solve the more general problem of joint state estimation and model parameter identification for nonlinear systems with possibly non-Gaussian noise. In the expectation (E) step, it is shown that the best
variational distribution over the state variables is the conditional posterior distribution of the states given all the available measurements and inputs. Therefore, a particular type of particle filter is used to estimate and update the posterior distribution. In the maximization (M) step, the nonlinear measurement process parameters are approximated using a nonlinear regression method that adjusts the parameters of a mixture of Gaussians (MoG). The proposed algorithm is used to solve a nonlinear bearing-only tracking problem, similar to one reported recently, with an uncertain measurement process. It is shown that the algorithm is capable of accurately tracking the state vector while identifying the unknown measurement dynamics. Simulation results show the advantages of the new technique over standard algorithms such as the EKF, whose performance degrades rapidly in the presence of uncertain models.
Previous papers (1997, 1999, 2000, 2001) have described a tracking approach that utilized a Combined Kalman Filter (CKF), adaptive tracking for maneuver tracking, the JVC association algorithm for association, and Interacting Multiple Model (IMM) tracking for use in Airborne Early Warning (AEW) applications. In this paper we present our incorporation of IFF measurement data, in addition to radar measurement data, into tracking.
First, our previous AEW radar tracking approach is briefly reviewed, as most of it is still utilized when incorporating IFF measurement data alongside radar measurements. We then describe IFF sensor data and how we modeled it, and our modifications of correlation and association to account for the different types of radar and IFF sensor data and the different track types (radar, IFF, or combined radar/IFF). We then introduce the notion of a combined cost for associating IFF measurements to tracks.
Finally, we present simulation results for radar-only tracking, IFF-only tracking, and combined radar/IFF tracking, with performance measures of effectiveness (MOEs).
The concept of selecting the best hypothesis in the minimum mean square error (MMSE) sense was introduced in 1999 to provide alternative data association algorithms for data association with hard decisions using data from one or more sensors. The motivations for using the estimate based on the best hypothesis in the MMSE sense are two-fold. First, there are situations where there is a natural preference to make hard decisions rather than soft decisions. Secondly, given that a state estimate is based on a single hypothesis as in a typical hard decision, there is the desire to minimize the mean square of the estimation error, since that is a common metric in evaluating performance. For example, for estimation that involves data association, the traditional MMSE criterion leads to so called soft decisions that may not be appropriate for an interceptor with a small region of lethality while, in contrast, hard decisions might increase the probability of a successful engagement. In addition, in processing features for use in target typing, classification or discrimination, soft decisions may degrade performance more than would a reasonable hard decision.
While the best hypothesis method may be preferred for certain applications, the improved performance might be at the expense of increased processing load. Since the capability of available processors is increasing rapidly, emphasis can be expected to lean toward algorithms that take advantage of this enhanced capability to provide improved performance based on the specific needs of a target tracking application.
The emphasis of this paper is on the use of data from multiple sensors in multiple-frame methods for data association, such as multiple hypothesis tracking, using as the criterion the best hypothesis in the MMSE sense rather than the most probable hypothesis or the traditional MMSE estimate that leads to soft decisions.
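The distinction between the most probable hypothesis and the best hypothesis in the MMSE sense can be made concrete. Given hypothesis-conditioned estimates x_i with probabilities p_i, the MMSE-best hard decision minimizes sum_j p_j * ||x_j - x_i||^2 (the within-hypothesis covariance term is common to every choice and can be dropped). The sketch and numbers below are illustrative assumptions, not the paper's algorithm.

```python
import numpy as np

def best_hypothesis_mmse(estimates, probs):
    """Pick the single hypothesis estimate x_i minimizing the expected
    squared error sum_j p_j ||x_j - x_i||^2 over the hypothesis mix:
    a hard decision scored by the MMSE criterion rather than by
    hypothesis probability alone."""
    estimates = np.asarray(estimates, float)
    cost = [np.sum(probs * np.sum((estimates - x) ** 2, axis=1))
            for x in estimates]
    return int(np.argmin(cost))

# Two clustered hypotheses vs. one more probable outlier hypothesis:
ests = [[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]]
probs = np.array([0.3, 0.3, 0.4])
best = best_hypothesis_mmse(ests, probs)
```

Here the single most probable hypothesis is the outlier (p = 0.4), but the MMSE-best hard decision is one of the clustered hypotheses, since the cluster carries 0.6 of the probability mass.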
We discuss a number of mechanisms in complex systems that can lead to unexpected behavior; all are related to graph-theoretic models of interactions. In addition, the theory of random interval graphs can be applied to the characterization of simultaneously occurring events of variable finite length. The theory is sufficiently general to provide a simple parametric description of the natural relationships between a system's task processing rate and the associated decision and communication rates required to control it. One can estimate conditions that induce temporal decoupling and gain general insight into control methodologies that can be used to avoid decoupling. The use of intervals to characterize resource allocation problems has widespread applications to processing and decision-making algorithms that consume finite, bounded time intervals. Finally, various aspects of track quality are discussed as a measure of the strength of interaction.
This paper describes a Wide Area Integrated Maritime Surveillance (WAIMS) system based on High Frequency Surface Wave Radar (HFSWR) for monitoring surface and low-level air activity within the 200 nm Exclusive Economic Zone (EEZ). The system is designed to help authorities locate and identify illegal activities. In particular, we investigate the bias problem arising from the nonlinear latitude and longitude transformation. The geometric relationship between a sensor and a target position is formulated. Then, using a Taylor series expansion, the nonlinear transformation bias is analyzed and calculated results are compared with simulations.
In this paper, we build on an established framework for dynamic multi-sensor resource management. The framework is based on the control of the posterior Cramer-Rao lower bound, which provides a bound on the achievable accuracy of target state estimation. We further develop this framework to increase the utility of each sensor by deploying new sensors ahead of the target and in its path. The modified framework is demonstrated in the deployment and utilization of azimuth-only sensors in tracking a single target. Simulation results show that the enhanced strategy is able to accurately track the
target while significantly reducing the number of sensors required. However, this comes at the cost of a significant increase in computation time, which may make the technique unsuitable for some sensor management applications.
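For a linear-Gaussian model, the posterior Cramer-Rao lower bound that drives such frameworks has a simple information-matrix recursion. The scalar example below is a hedged sketch under that assumption; the paper's azimuth-only measurement model is nonlinear, where the corresponding terms become expectations over the trajectory.

```python
import numpy as np

def pcrlb_step(J, F, Q, H, R):
    """One posterior Cramer-Rao bound recursion step for a
    linear-Gaussian model:  J' = (Q + F J^-1 F^T)^-1 + H^T R^-1 H.
    inv(J') lower-bounds the achievable state estimation covariance;
    each extra sensor adds another H^T R^-1 H term, shrinking the bound."""
    pred_info = np.linalg.inv(Q + F @ np.linalg.inv(J) @ F.T)
    return pred_info + H.T @ np.linalg.inv(R) @ H

# Scalar example: random-walk state observed directly with unit noises.
I1 = np.eye(1)
J1 = pcrlb_step(I1, I1, I1, I1, I1)   # 1/(1+1) + 1 = 1.5
```

A sensor-management criterion can then score candidate sensor deployments by the trace of inv(J') they would produce.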
In this contribution, the maximum likelihood estimation of sensor registration parameters, such as range, azimuth and elevation biases in radar measurements, using airlane information is proposed and studied. The motivation for using airlane information for sensor registration is that it is freely available as a source of reference and it provides an alternative to conventional techniques that rely on synchronised and correctly associated measurements from two or more sensors. In the paper, the problem is first formulated in terms of a measurement model that is a nonlinear function of the unknown target state and sensor parameters, plus sensor noise. A probabilistic model of the target state is developed based on airlane information. The maximum likelihood and also maximum a posteriori solutions are given. The Cramer-Rao lower bound is derived and simulation results are presented for the case of estimating the biases in radar range, azimuth and elevation measurements. The accuracy of the proposed method is compared against the Cramer-Rao lower bound and that of an existing two-sensor alignment method. It is concluded that sensor registration using airlane information is a feasible alternative to existing techniques.
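In the simplest degenerate case, where the reference-derived predicted measurements are known exactly and the bias is purely additive with i.i.d. Gaussian noise, the ML bias estimate reduces to the mean residual per channel. The sketch below shows only this special case, not the paper's nonlinear formulation with a probabilistic airlane-based target-state model; the data are hypothetical.

```python
import numpy as np

def ml_additive_bias(measured, predicted):
    """ML estimate of a constant additive sensor bias under i.i.d.
    Gaussian noise: the per-channel mean residual between measured
    values and values predicted from reference (e.g. airlane-derived)
    target positions."""
    return np.mean(np.asarray(measured) - np.asarray(predicted), axis=0)

# Hypothetical (range, azimuth) data with a constant bias of (5.0, 0.01).
pred = np.array([[100.0, 0.10], [200.0, 0.20], [150.0, 0.15]])
bias_hat = ml_additive_bias(pred + np.array([5.0, 0.01]), pred)
```

In the noise-free case the estimator recovers the injected bias exactly; with noise, its variance shrinks as 1/N with the number of measurements, which is what the Cramer-Rao analysis in the paper quantifies.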
We present a recursive Bayesian solution for the problem of joint tracking and classification of airborne targets. In our system, we allow for complications due to multiple targets, false alarms, and missed detections. More importantly, though, we utilize the full benefit of a joint approach by implementing our tracker using an aerodynamically valid flight model that requires aircraft-specific coefficients such as wing area and vehicle mass, which are provided by our classifier. A key feature that bridges the gap between tracking and classification is radar cross section (RCS). By modeling the true deterministic relationship that exists between RCS and target aspect, we are able to gain both valuable class information and an estimate of target orientation. However, the lack of a closed-form relationship between RCS and target aspect prevents us from using the Kalman filter or its variants. Instead, we rely upon a sequential Monte Carlo-based approach known as particle filtering. In addition to allowing us to include RCS as a measurement, the particle filter also simplifies the implementation of our nonlinear non-Gaussian flight model.
In tracking applications, target dynamics are usually modeled in Cartesian coordinates, while target measurements are directly available in the original sensor coordinates. Measurement conversion is widely used to linearize the problem so that the Kalman filter can be applied in Cartesian coordinates. A number of improved measurement-conversion techniques have been proposed recently. However, they have fundamental limitations, resulting in performance degradation, as pointed out in Part III of a recent survey by the authors. This paper proposes a recursive filter that is theoretically optimal in the sense of minimizing the mean-square error among all linear unbiased filters in Cartesian coordinates. The proposed filter is free of the fundamental limitations of the measurement-conversion approach. Results of an approximate implementation for measurements in spherical coordinates are compared with those obtained by two state-of-the-art conversion techniques. Simulation results are provided.
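For context, the classical debiased conversion used in the measurement-conversion literature (the approach whose limitations the paper addresses) divides the plain polar-to-Cartesian conversion by the bias factor exp(-sigma_theta^2/2), since E[cos w] = exp(-sigma^2/2) for zero-mean Gaussian angle noise w. A minimal two-dimensional sketch, as an illustration only:

```python
import numpy as np

def debiased_polar_to_cartesian(r, theta, sigma_theta):
    """Classical debiased measurement conversion: plain conversion of
    (range, azimuth) to Cartesian is biased by the multiplicative
    factor exp(-sigma_theta**2 / 2); dividing it out makes the
    converted measurement unbiased in expectation."""
    lam = np.exp(-sigma_theta ** 2 / 2.0)
    return r * np.cos(theta) / lam, r * np.sin(theta) / lam

# With zero angle noise the conversion is exact.
x, y = debiased_polar_to_cartesian(100.0, np.pi / 4, 0.0)
```

The converted-measurement covariance must be debiased consistently as well, which is one source of the fundamental limitations the paper's conversion-free filter avoids.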
The real world presents a much messier tracking environment than the usual pristine world of tracking simulations. In particular, simulations often do not properly account for the presence of closely spaced objects (CSOs) in the vicinity of the objects being tracked, thereby producing potentially misleading results. CSOs have several nasty effects on trackers which, if not mitigated, may result in show-stopping surprises when attempting to use a tracker operationally. This paper describes and quantifies some of the limitations that CSOs impose on single-sensor and/or multisensor trackers. These limitations include delaying the expected time at which a given object can be confidently resolved, interfering with the ability to properly associate objects between sensors, contaminating track files with spurious signature information, and forcing some form of cluster tracking to be employed. Analytic predictions of these limitations, based on local object density over time, are presented, backed up by various Monte Carlo simulations. A more robust metric is proposed to allow the prediction of a more operationally meaningful probability of correct target association in a multisensor environment.
When fusing measurements from multiple misaligned sensors, the specific characteristics of MHT (multiple hypothesis tracking) also yield superior tracking performance compared with less complex algorithms. The parameters of the sensors, the scenario and the tracking process define the extent and type of degradation caused by misalignment; their influence is analyzed here. In addition, the effect of misalignment on MHT tracking is compared with the NN (nearest neighbor) and PDAF tracking algorithms. Because the sensor modeling and the sensor data processing, which includes track initiation and cancellation, are detailed, the results correspond closely to real-world applications. The study covers centralized and decentralized tracking (track correlation with measurement fusion) for collocated and distributed fusion. The detection and correction of the misalignment itself is not addressed here.
A key assumption in monopulse-based angle-of-arrival (AOA) estimators is that at most one return from a single object is present in each range cell, or equivalently in each sample of the matched filter output. These algorithms break down if the data consist of merged measurements, i.e., multiple target returns contained in the same range cell. The proposed technique makes use of data from a three-channel monopulse radar to estimate the AOA of two targets from merged measurements. Specifically, the technique capitalizes on the structure of the squint beams in conjunction with multiple range samples to resolve the multiple targets. The paper focuses on the development of the new algorithm along with results from computer simulations that demonstrate its viability.
To illustrate the utility of this technique to target tracking problems, comparative Monte Carlo results of performance of a tracker with the new technique and conventional monopulse AOA estimates are provided.
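The conventional single-target estimator that the merged-measurement case breaks is the monopulse ratio: the real part of the difference/sum channel ratio, divided by the calibrated monopulse slope. A noise-free sketch (the slope and channel values are illustrative assumptions):

```python
import numpy as np

def monopulse_aoa(sum_ch, diff_ch, slope):
    """Conventional single-target monopulse AOA estimate: the real part
    of the delta/sigma channel ratio divided by the calibrated
    monopulse slope. Merged returns from two targets corrupt this
    ratio, which is what motivates dedicated two-target techniques."""
    return np.real(diff_ch / sum_ch) / slope

# Noise-free single target 0.05 beamwidths off boresight, slope 1.6:
s = 2.0 + 1.0j                       # complex sum-channel return
theta = monopulse_aoa(s, 1.6 * 0.05 * s, 1.6)
```

When two returns fall in the same cell, the ratio becomes an amplitude-weighted mix of the two angles and no longer recovers either target.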
We describe a new hybrid particle filter that has two novel features: (1) it uses quasi-Monte Carlo samples rather than the conventional Monte Carlo sampling, and (2) it implements Bayes’ rule exactly using smooth densities from the exponential family. Theory and numerical experiments over the last decade have shown that quasi-Monte Carlo sampling is vastly superior to Monte Carlo samples for certain high dimensional integrals, and we exploit this fact to reduce the computational complexity of our new particle filter. The main
problem with conventional particle filters is the curse of dimensionality. We mitigate this issue by avoiding particle depletion, which we achieve by implementing Bayes’ rule exactly using smooth densities from the exponential family.
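One-dimensional quasi-Monte Carlo samples are typically generated by low-discrepancy constructions such as the Halton (van der Corput) sequence; since the abstract does not specify its construction, the sketch below is an assumed illustration of the general idea.

```python
def halton(n, base):
    """First n points of the one-dimensional Halton (van der Corput)
    low-discrepancy sequence in the given base: each index is written
    in that base and its digits are mirrored about the radix point,
    filling the unit interval far more evenly than i.i.d. draws."""
    seq = []
    for i in range(1, n + 1):
        x, f, k = 0.0, 1.0, i
        while k:
            f /= base
            x += f * (k % base)
            k //= base
        seq.append(x)
    return seq

pts = halton(4, 2)   # 0.5, 0.25, 0.75, 0.125
```

Multi-dimensional quasi-Monte Carlo point sets pair coprime bases across dimensions; the even coverage is what drives the improved error rates for smooth high-dimensional integrands.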
Radar signal processing is a key part of tracking closely spaced targets and targets in the presence of sea-surface-induced multipath. These issues are the salient features of the benchmark problem for tracking unresolved targets combined with radar management, for which this paper presents the only complete solution to date. A modified version of a recently developed "superresolution" maximum likelihood (ML) angle estimator for closely spaced targets, as well as for targets in the presence of multipath, is presented. Efficient radar resource allocation algorithms for two closely spaced targets and for targets flying close to the sea surface are also presented. Finally, the IMMPDAF (interacting multiple model estimator with probabilistic data association filter modules) is used to track these targets. It is found that a two-model IMMPDAF performs better than the three-model version used in the previous benchmark. Also, the IMMPDAF with a coordinated-turn model works better than the one using a Wiener process acceleration model. The signal processing and tracking algorithms presented here, operating in a feedback manner, form a comprehensive solution to the most realistic tracking and radar management problem to date.
In this paper, we apply ordered-statistics filters to hyperspectral data to enable the detection of anomalous targets whose signatures are known. From each frame we subtract a background estimate produced by an ordered-statistics filter; the resulting frames are then combined optimally based on the covariance data of the cube and the spectral signature of the target. We show that the effect of the ordered-statistics filter is to eliminate false alarms at edge points.
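A minimal sketch of ordered-statistic background subtraction, assuming a 3x3 neighborhood whose middle order statistic (the median, rank 4 of 9) serves as the background estimate; the frame is an illustrative toy, not the paper's hyperspectral data.

```python
import numpy as np

def os_background_subtract(frame, size=3, rank=4):
    """Subtract an ordered-statistic background estimate (the rank-th
    sorted value of each pixel's size-x-size neighborhood) from every
    pixel. Unlike a local mean, the rank estimate does not smear
    across edges, which suppresses edge false alarms."""
    padded = np.pad(frame, size // 2, mode='edge')
    out = np.empty(frame.shape, dtype=float)
    for i in range(frame.shape[0]):
        for j in range(frame.shape[1]):
            window = np.sort(padded[i:i + size, j:j + size].ravel())
            out[i, j] = frame[i, j] - window[rank]
    return out

# A point target on a flat background survives; the background cancels.
frame = np.zeros((5, 5)); frame[2, 2] = 10.0
residual = os_background_subtract(frame)
```

The single bright pixel does not shift the median of any neighborhood, so the residual is zero everywhere except at the target itself.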
This paper presents a method for modeling fractional Brownian motion (fBm) type data sets with ordinary differential equations (ODEs) and a novel fractal operator. To achieve such modeling, a new method is introduced that uses Turlington polynomials to obtain continuous and differentiable functions. These functions are then fractal-interpolated to yield fine structure. Spectral decomposition is used to obtain a differential equation model, which is then fractal-interpolated to forecast an fBm trajectory. This paper presents an overview of the theory and our modeling approach, along with example results.
With multiple radar targets, the matched-filter sidelobes of the stronger targets become interference for the detection of weaker targets. An effective approach based on the traditional CLEAN algorithm is proposed to reduce sidelobes in the matched-filter output of binary-coded radar signals. A novel multilevel sidelobe reduction method is also developed for closely located multiple targets with nearly equally strong scattering. The effectiveness of the sidelobe reduction algorithms is demonstrated through numerous processing results.
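The CLEAN idea this approach builds on can be sketched in a few lines: find the strongest matched-filter peak, record it as a detection, subtract a scaled and centered copy of the code autocorrelation (main lobe and sidelobes), and repeat. The autocorrelation and scene below are illustrative assumptions, not the paper's waveform.

```python
import numpy as np

def clean_sidelobes(mf_out, autocorr, n_iter=2):
    """CLEAN-style sidelobe reduction: repeatedly take the strongest
    peak of the matched-filter output as a detection, then subtract a
    scaled, centered copy of the code autocorrelation so this target's
    sidelobes no longer mask weaker targets."""
    residual = np.asarray(mf_out, float).copy()
    half = len(autocorr) // 2
    detections = []
    for _ in range(n_iter):
        k = int(np.argmax(np.abs(residual)))
        amp = residual[k] / autocorr[half]      # estimated amplitude
        detections.append((k, amp))
        for m, a in enumerate(autocorr):        # subtract shifted copy
            idx = k + m - half
            if 0 <= idx < len(residual):
                residual[idx] -= amp * a
    return detections, residual

# A strong target at bin 5 (with sidelobes at 4 and 6) and a weak one
# at bin 8 whose amplitude is below the strong target's sidelobe level.
ac = np.array([0.2, 1.0, 0.2])
mf = np.zeros(12); mf[4], mf[5], mf[6], mf[8] = 2.0, 10.0, 2.0, 1.0
dets, res = clean_sidelobes(mf, ac)
```

After the first subtraction the strong target's sidelobes are gone, and the second iteration recovers the weak target cleanly.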
Uncooled infrared focal plane arrays (FPAs) have developed considerably in recent years, and the performance of the drive circuit directly influences the FPA's characteristics. Starting from an introduction to the structure of a 320x240 microbolometer uncooled focal plane array (UFPA), we emphasize the analysis and design of the drive-circuit system for the UFPA. XC9500-series devices are adopted in this design, offering high integration, high precision, low cost and a simple layout. Finally, the output timing sequence of the drive circuit is presented, indicating that the drive circuit satisfies the miniaturization requirement and performs well.
Sequential Monte Carlo methods have attracted the attention of the tracking community as a solution to Bayesian estimation particularly for nonlinear problems. Several attributes of particle filters support their use in jointly tracking and identifying ground targets in a road-constrained network. First, since the target dynamics are simulated, propagating a target within a constrained state space is handled quite naturally since the particle filter is not restricted to propagating Gaussian PDFs. Furthermore, a particle filter can approximate a PDF which is a mixture of continuous random variables (the target kinematic state) and discrete random variables (the target ID) which is necessary for the joint tracking and identification problem. Given HRRGMTI measurements of a target, we propose to jointly estimate a target's kinematic state and identification by propagating the joint PDF of the target kinematic state (position and velocity) and target ID. In this way, we capitalize on the inherent coupling between the target's feature measurement (the HRR profile) and the target's kinematic state. In addition to the coupling between a target's feature measurement and the target's kinematic state, there exists a coupling between a target's dynamics and the target's ID which can also be exploited through particle filtering methods. We develop the particle filtering algorithm for tracking and identifying ground targets in a road-constrained environment and present simulation results for a two-class problem.
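The coupling between target dynamics and target ID described above can be exploited by giving each particle a discrete class label and propagating it with that class's own dynamics model. A hedged sketch with two hypothetical classes; the models, names and numbers are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def propagate_joint(states, class_ids, dyn_models):
    """Joint state/ID particle propagation: each particle carries a
    discrete class label and is moved by that class's own dynamics
    model, so kinematics and identity stay coupled through the
    prediction step of the particle filter."""
    out = np.empty_like(states)
    for c, model in enumerate(dyn_models):
        mask = class_ids == c
        if mask.any():
            out[mask] = model(states[mask])
    return out

# Two hypothetical classes: a slow mover (+1 per step) and a fast one (+3).
states = np.zeros((4, 2))
class_ids = np.array([0, 1, 0, 1])
out = propagate_joint(states, class_ids,
                      [lambda s: s + 1.0, lambda s: s + 3.0])
```

The measurement update would then weight each particle jointly by its kinematic likelihood and its class-conditioned feature (e.g. HRR profile) likelihood, and the posterior class probabilities fall out as the summed weights per label.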
An obvious use for feature and attribute data is in target classification (discrimination, type, identification, or recognition) and in combat identification. An additional use is in the data (or track) association process, to reduce misassociations. In target tracking, the data association function is often decomposed into two steps. The first step, the gating process, is a preliminary threshold process to eliminate unlikely measurement-track pairs. This is followed by the second step, the process of selecting measurement-track pairs or assigning weights to measurement-track pairs so that the tracks can be updated by a filter. Previous papers discussed methods for integrating features and attributes into target track processing. The primary concern of this paper is to further clarify the distinction between simple features and categorical features and the differences in the processing methods in the data association process for tracking small targets with data from one or multiple sensors.
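The gating step described above can be sketched with the standard chi-square (Mahalanobis) gate on the measurement-track innovation; the 2-D geometry, unit innovation covariance, and 0.99 gate probability below are assumptions for the example:

```python
# Chi-square gating threshold for 2 degrees of freedom at Pg = 0.99.
GATE = 9.21

def mahalanobis2(nu, S):
    """Squared Mahalanobis distance nu' S^-1 nu for a 2-D innovation."""
    (a, b), (c, d) = S
    det = a * d - b * c
    inv = [[d / det, -b / det], [-c / det, a / det]]
    x, y = nu
    return (x * (inv[0][0] * x + inv[0][1] * y)
            + y * (inv[1][0] * x + inv[1][1] * y))

def gate(track_pred, meas, S):
    """Keep the measurement-track pair only if it falls inside the gate."""
    nu = [meas[0] - track_pred[0], meas[1] - track_pred[1]]
    return mahalanobis2(nu, S) < GATE

S = [[1.0, 0.0], [0.0, 1.0]]          # assumed innovation covariance
near = gate([0.0, 0.0], [1.0, 1.0], S)  # d^2 = 2, inside the gate
far = gate([0.0, 0.0], [4.0, 0.0], S)   # d^2 = 16, rejected
```

Pairs surviving the gate are then passed to the second (assignment or weighting) step; categorical features would enter through the likelihoods used there rather than through this purely kinematic gate.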
We present in this paper an information-based method for sensor management, in which the sensor is tasked to make the measurement that maximizes the expected gain in information. The method is applied to the problem of tracking multiple targets. The underlying tracking methodology is a multiple target tracking scheme based
on recursive estimation of a Joint Multitarget Probability Density (JMPD), which is implemented using particle filtering methods. This Bayesian method for tracking multiple targets allows nonlinear, non-Gaussian target motion and measurement-to-state coupling. The sensor management scheme is predicated on maximizing the expected Renyi Information Divergence between the current JMPD and the JMPD after a measurement has been made. The Renyi Information Divergence, a generalization of the Kullback-Leibler Distance, provides a way to measure the dissimilarity between two densities. We use the Renyi Information Divergence to evaluate the expected information gain for each of the possible measurement decisions, and select the measurement that maximizes the expected information gain for each sample.
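The Renyi Information Divergence at the heart of the scheme can be computed directly for discrete densities. The two-sensor decision below is a minimal sketch with invented prior and posterior densities, not the paper's JMPD; alpha = 0.5 is an arbitrary choice of the Renyi order:

```python
import math

def renyi_divergence(p, q, alpha):
    """Renyi divergence D_alpha(p || q) between two discrete densities;
    as alpha -> 1 it approaches the Kullback-Leibler divergence."""
    if abs(alpha - 1.0) < 1e-12:
        return sum(pi * math.log(pi / qi)
                   for pi, qi in zip(p, q) if pi > 0)
    s = sum(pi ** alpha * qi ** (1.0 - alpha)
            for pi, qi in zip(p, q) if pi > 0)
    return math.log(s) / (alpha - 1.0)

prior = [0.5, 0.5]
post_informative = [0.9, 0.1]      # a measurement that sharpens the density
post_uninformative = [0.55, 0.45]  # a measurement that barely helps

gain_a = renyi_divergence(post_informative, prior, 0.5)
gain_b = renyi_divergence(post_uninformative, prior, 0.5)
# Task the sensor whose (expected) post-measurement density diverges most.
best = max([("sensor A", gain_a), ("sensor B", gain_b)],
           key=lambda t: t[1])[0]
```

In the full scheme the gain for each candidate measurement is an expectation over possible outcomes weighted by their predicted probabilities; the sketch shows only the divergence evaluation and the arg-max selection.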
Large-scale sensor array management has applications in a number of target tracking problems. For example, in ground target tracking, hundreds or even thousands of unattended ground sensors (UGS) may be dropped over a large surveillance area. At any one time it may then only be possible to utilize a very small number of the available sensors at the fusion center because of bandwidth limitations. A similar situation may arise in tracking sea surface or underwater targets using a large number of sonobuoys. The general problem is then to select a subset of the available sensors in order to optimize tracking performance. The Posterior Cramer-Rao Lower Bound (PCRLB), which quantifies the obtainable accuracy of target state estimation, is used as the basis for network management. In a practical scenario with even hundreds of sensors, the number of possible sensor
combinations would make it impossible to enumerate all possibilities in real-time. Efficient local (or greedy) search techniques must then be used to make the computational load manageable. In this paper we introduce an efficient search strategy for selecting a subset of the sensor array for use during each sensor change interval in multi-target tracking. Simulation results illustrating the performance of the sensor array manager are also presented.
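A greedy search of the kind described can be sketched as follows. Rather than enumerating all C(n, k) subsets, one sensor is added at a time, each time picking the sensor that most improves a tracking-accuracy proxy. The inverse-squared-distance "information" score used here is an illustrative stand-in for the PCRLB, not the paper's actual criterion:

```python
def greedy_select(sensors, targets, k):
    """Greedily choose k sensors from the array, maximising a proxy for
    the Fisher information delivered to the set of targets."""
    def info(subset):
        # For each target, credit the best-placed sensor in the subset.
        return sum(
            max(1.0 / ((sx - tx) ** 2 + (sy - ty) ** 2 + 1.0)
                for sx, sy in subset)
            for tx, ty in targets)
    chosen = []
    remaining = list(sensors)
    for _ in range(k):
        best = max(remaining, key=lambda s: info(chosen + [s]))
        chosen.append(best)
        remaining.remove(best)
    return chosen

sensors = [(0, 0), (10, 0), (5, 5), (20, 20)]  # assumed UGS positions
targets = [(1, 0), (9, 1)]
picked = greedy_select(sensors, targets, 2)
```

Each greedy round costs O(n) proxy evaluations, so k rounds over n sensors scale as O(kn) rather than the combinatorial cost of exhaustive enumeration, which is what makes the approach feasible for hundreds of sensors.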
Rescuing operators of small recreational vessels is a constant resource drain on the limited operating budget of the Canadian Coast Guard. As a result, a new and innovative application of small target surveillance techniques is being developed at the Department of Geodesy and Geomatics Engineering, UNB, Canada. This work supports the development of a strategic decision-making tool, based on risk modeling, to predict where in Canadian waters marine incidents are most likely to occur, enabling the best allocation of resources.
Previous research in the use of hyperspectral imaging for search and rescue resulted in the development of a fast, nonparametric "spatio-spectral" template subpixel object detection algorithm. The results of this work are being adapted and enhanced for use with the new, commercially available spaceborne high-resolution optical imagery. Investigations are being made regarding the utility of the Minkowski distance metrics for use in small target detection within a multispectral imagery environment. Further, research is being performed on the employment of the Mahalanobis distance metric to enhance the "spatio-spectral" template by exploiting the variance/covariance information surrounding a potential target.
The detection results for the two target vessels were excellent using the Manhattan and Euclidean distance metrics. The best results were obtained with the Manhattan distance and a 5x5 kernel: all 16 yachts were detected, with no false negatives and six false positives.
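A minimal sketch of Minkowski-distance template matching follows (p = 1 gives the Manhattan metric, p = 2 the Euclidean). The four-band pixel "signature", template, and threshold are invented for illustration and are not the paper's spatio-spectral template:

```python
def minkowski(u, v, p):
    """Minkowski distance of order p between two spectral vectors;
    p = 1 is the Manhattan metric, p = 2 the Euclidean."""
    return sum(abs(a - b) ** p for a, b in zip(u, v)) ** (1.0 / p)

def detect(pixels, template, p, threshold):
    """Flag each multispectral pixel whose Minkowski distance to the
    target template falls below a detection threshold."""
    return [i for i, px in enumerate(pixels)
            if minkowski(px, template, p) < threshold]

template = [0.8, 0.6, 0.2, 0.1]     # hypothetical vessel signature (4 bands)
scene = [
    [0.10, 0.20, 0.70, 0.80],       # open water
    [0.78, 0.61, 0.22, 0.12],       # close to the template -> detection
    [0.30, 0.30, 0.30, 0.30],       # mixed background
]
hits_manhattan = detect(scene, template, p=1, threshold=0.2)
```

A spatio-spectral template extends this per-pixel test by matching a small spatial kernel (e.g., 5x5) of spectra at once; the distance function itself is unchanged.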
In this paper, we implement a previously developed sensor management framework within the domain of multistatic radar resource management. The methodology is based on controlling the Posterior Cramér-Rao Lower Bound (PCRLB), which provides a bound on the performance of any unbiased target state estimator. In the second part of the paper, the additional complication of the Doppler Blind Zone, inside which the target cannot be detected, is considered. In the case of missed
detections, the PCRLB has been shown to be overly optimistic, so we use a performance measure which more accurately accounts for missed detections. However, existing measures fail to accommodate constraints arising from the blind zone, so we modify the measure to incorporate this extra information. The modified measure is shown to give a more accurate estimate of tracking performance, facilitating efficient resource management.
Here we discuss intervisibility - the existence of an unobstructed line of sight (LOS) between two points - accounting for the vertical and horizontal errors in the estimated locations of both points as well as elevation errors in the database of terrain that could obstruct the LOS between these points. The errors are first treated simply as a "white" noise sequence: we assume no correlation between the intervisibility at two different times, and in this case only the probability of the intervisibility event is developed. This is useful; but perhaps of greater concern is whether or not a target remains visible long enough and/or often enough that something can be done about it. Consequently, we present a second treatment in which the errors are stochastic processes of given width, and both the probability density function of the intervisible time and the average number of intervisible intervals over a certain time interval are developed.
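The white-noise case can be illustrated with a small Monte Carlo sketch: each terrain post receives an independent Gaussian elevation error and the LOS is re-tested. The terrain profile, observer/target geometry, and error standard deviation below are assumed for the example:

```python
import random

random.seed(1)

def los_clear(obs, tgt, terrain):
    """True if the straight line from observer to target clears every
    terrain post between them (heights sampled at unit spacing)."""
    (x0, h0), (x1, h1) = obs, tgt
    for x in range(x0 + 1, x1):
        line_h = h0 + (h1 - h0) * (x - x0) / (x1 - x0)
        if terrain[x] >= line_h:
            return False
    return True

def p_intervisible(obs, tgt, terrain, sigma, trials=5000):
    """Monte Carlo probability of intervisibility when each terrain
    elevation carries an independent Gaussian error (std dev sigma)."""
    clear = 0
    for _ in range(trials):
        noisy = [h + random.gauss(0.0, sigma) for h in terrain]
        if los_clear(obs, tgt, noisy):
            clear += 1
    return clear / trials

terrain = [0, 0, 8, 0, 0, 0]     # a single ridge of height 8 at x = 2
prob = p_intervisible((0, 10), (5, 10), terrain, sigma=1.0)
```

With both endpoints at height 10 and the ridge at 8, intervisibility fails only when the elevation error at the ridge exceeds 2 sigma, so the estimated probability comes out close to, but below, one. The second treatment in the paper replaces the independent draws with correlated processes, which this sketch does not attempt.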
Multiple frame data association, whether it is based on multiple hypothesis tracking or multi-dimensional assignment problems, has established itself as the method of choice for difficult tracking problems, principally due to the ability to hold difficult data association decisions in abeyance until additional information is available. Over the last twenty years, these methods have focused on one-to-one, many-to-one, and many-to-many assignments. Group tracking, on the other hand, introduces new complexity into the association process, especially if some soft decision making capability is desired. Thus, the goal of this work is to combine multiple grouping hypotheses for each frame of data (tracks or measurements) with matching these hypotheses across multiple frames of data using one-to-one, many-to-one, or many-to-many assignments to determine the correct hypothesis on each frame of data and connectivity across the frames. The resulting formulation is sufficiently general to cover four broad classes of problems in multiple target tracking, namely (a) group cluster tracking, (b) pixel (clump) IR cluster tracking, (c) the merged measurement problem, and (d) MHT for track-to-track fusion. What is more, the cluster assignment problem for either two or multiple dimensions represents a generalized data association problem in the sense that it reduces to the classical assignment problems when there are no overlapping groups or clusters. The formulation of the assignment problem for resolved object tracking and candidate group methods for use in multiple frame group tracking are briefly reviewed. Then, three different formulations of the group assignment problem are developed.
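The reduction to the classical assignment problem (no overlapping groups or clusters) can be illustrated with a brute-force 2-D assignment over an invented cost matrix; a real tracker would use an optimal solver such as the auction or JVC algorithm rather than enumerating permutations:

```python
from itertools import permutations

def best_assignment(cost):
    """Brute-force 2-D assignment: match each track to a measurement so
    that the total association cost is minimised - the classical problem
    the group formulation reduces to when no clusters overlap."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return list(best_perm), best_cost

# cost[i][j] = cost of associating track i with measurement j (toy values)
cost = [[1.0, 9.0, 7.0],
        [8.0, 2.0, 9.0],
        [7.0, 8.0, 3.0]]
assignment, total = best_assignment(cost)
```

In the group formulation, the "objects" being assigned become grouping hypotheses rather than individual tracks and measurements, and the one-to-one constraint relaxes to many-to-one or many-to-many; the cost minimisation structure is unchanged.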
Recently, a general framework for sensor resource management, which has been shown to allow efficient and effective utilization of a multisensor system, was introduced in [5]. The basis of this technique is to use the Posterior Cramer-Rao Lower Bound (PCRLB) to quantify and control the optimal achievable accuracy of target state estimation. In the current paper we extend this framework by addressing the issues of imperfect sensor placement and uncertain sensor movement (e.g., sensor drift). In contrast, the previous work considered only the case where the sensor location is known exactly. The crucial consideration is then how these two forms of uncertainty affect the sensor management strategy. If unaccounted for, these uncertainties will render the output of the resource manager useless. We adjust the PCRLB to account for sensor location uncertainty, and we also allow for measurement origin uncertainty (missed target-originated detections and false alarms). The work is motivated by the problem of tracking a submarine by adaptively deploying sonobuoys from a helicopter. Simulation results are presented to show the advantages of accounting for sensor location uncertainty within this focal domain of anti-submarine warfare. The same technique can be used for tracking using unattended ground sensors (UGS) or unmanned aerial vehicles (UAVs).
This paper provides a solution for sensor bias estimation based on local tracks at a single time without a priori association for a decentralized multiple sensor tracking system. Each local tracker generates its own local state estimates ignoring the bias. The fusion center then performs track-to-track fusion occasionally after estimating the sensor biases based on the common targets tracked by different sensors. The likelihood function of the bias in a multisensor-multitarget scenario is derived. Using this likelihood, it is shown that the difference of the local estimates is the sufficient statistic for estimating the biases. A least squares solution of the bias estimates and corresponding Cramer-Rao Lower Bound (CRLB) are presented assuming uncorrelatedness as well as accounting for the crosscorrelation between the local estimation errors. Two approaches to estimate the sensor biases in the absence of known track-to-track association, namely, the Maximum Likelihood estimator combined with Probabilistic Data Association (ML-PDA) and an estimator based on soft data association, are proposed. These methods are compared with the baseline solution with known (perfect) track-to-track association by Monte Carlo simulations. The experimental results indicate that the bias estimator based on the soft data association provides nearly optimal performance and has less computational load than the one using ML-PDA.
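The sufficient-statistic result can be illustrated in one dimension: with additive biases, the difference of the two local estimates depends only on the relative bias, and its least-squares estimate is the sample mean of the differences. The numbers below are invented, and the sketch assumes the track-to-track association is already known (the baseline case):

```python
def relative_bias_ls(track_pairs):
    """LS estimate of the relative sensor bias from paired local track
    estimates: with x1_k = x_k + b1 + v1 and x2_k = x_k + b2 + v2, the
    difference d_k = x1_k - x2_k depends only on b = b1 - b2 plus noise,
    so the least-squares estimate of b is the sample mean of the d_k."""
    diffs = [a - b for a, b in track_pairs]
    return sum(diffs) / len(diffs)

# Common targets tracked by both sensors (positions in km); sensor 1 is
# biased by +0.5 km relative to sensor 2, plus small estimation noise.
pairs = [(10.52, 10.01), (20.47, 19.99), (30.55, 30.03), (40.49, 40.00)]
b_hat = relative_bias_ls(pairs)
```

The ML-PDA and soft-association estimators in the paper handle the harder case where it is not known which local tracks correspond to the same target, replacing the fixed pairing above with probabilistically weighted ones.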
The authors have developed a toolbox for hybrid estimator evaluation which allows rapid comparison of algorithms in different scenarios. The toolbox is flexible in implementing, simulating, and evaluating various algorithms, particularly those for hybrid estimation - state estimation under parametric and/or structural uncertainties. While the toolbox is extensible, numerous models, filters, estimators, and error measures are provided by default. In this paper, examples are given of short programs written in Matlab that illustrate some of the benefits that such a toolbox can bring to researchers.
Tracking using ground moving target indicator (GMTI) sensor measurements plays an important role in battlefield situation awareness, surveillance, and precision tracking of ground moving targets. The GMTI sensor measurements (range, azimuth, and range rate) are nonlinear functions of the target state. The extended Kalman filter (EKF) is widely used to solve the GMTI filtering problem. Since the GMTI measurement model is nonlinear, the use of an EKF is sub-optimal; the sub-optimality depends on the degree of nonlinearity of the measurement function and the GMTI measurement error covariance. We can convert the polar measurements (range and azimuth) to Cartesian measurements and approximately treat the range rate as a linear function of the target velocity by considering the radar line-of-sight (RLOS) vector as a constant. This allows the use of a linear Kalman filter (KF) with linearized measurements in an approximate way. The unscented Kalman filter (UKF) and particle filter (PF) have recently been shown to be robust alternative algorithms for a wide range of nonlinear estimation problems. This paper compares the performance of the KF with linearized measurements, EKF, iterated EKF (IEKF), UKF, and PF for the GMTI measurement filtering problem over a wide range of operating conditions. Estimation accuracy, statistical consistency, and computational speed and storage are used to evaluate the performance of these estimators. We use Monte Carlo simulations and calculate the average mean square error (MSE) matrix, normalized estimation error squared (NEES), and normalized innovation squared (NIS) to analyze the accuracy and statistical consistency.
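The polar-to-Cartesian conversion that enables the linearized KF can be sketched as follows. The first-order (Jacobian) covariance propagation shown is the standard approximation; the range/azimuth accuracies are illustrative values:

```python
import math

def polar_to_cartesian(r, az, sigma_r, sigma_az):
    """Convert a (range, azimuth) measurement to Cartesian and propagate
    the measurement covariance through the Jacobian - the first-order
    approximation that lets a linear KF operate on the converted report."""
    x, y = r * math.cos(az), r * math.sin(az)
    # Jacobian of (x, y) with respect to (r, az).
    J = [[math.cos(az), -r * math.sin(az)],
         [math.sin(az),  r * math.cos(az)]]
    Rp = [[sigma_r ** 2, 0.0], [0.0, sigma_az ** 2]]
    # R_cart = J * Rp * J'
    JR = [[sum(J[i][k] * Rp[k][j] for k in range(2)) for j in range(2)]
          for i in range(2)]
    R = [[sum(JR[i][k] * J[j][k] for k in range(2)) for j in range(2)]
         for i in range(2)]
    return (x, y), R

# Target at 1000 m broadside: cross-range variance = (r * sigma_az)^2,
# down-range variance = sigma_r^2.
(x, y), R = polar_to_cartesian(1000.0, math.pi / 2,
                               sigma_r=10.0, sigma_az=0.001)
```

The elongation of the converted covariance with range (cross-range standard deviation r * sigma_az) is exactly the effect that degrades this approximation at long range and large azimuth errors, motivating the EKF/UKF/PF comparison in the paper.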
Electronically scanned narrowband radar systems detect non-extended targets in one or two range cells depending on whether the object straddles the range cell boundary. For two detections, the range estimate may be refined using a fusion process. However, for scenarios with multiple closely spaced objects ambiguity exists in how many objects are present and how the range cells should be paired to produce the refined estimates. In this paper, we present a new algorithm that first segments the primitive radar measurements, and second fuses paired measurements to produce object reports used by a tracking system. The segmentation algorithm is developed by forming a hypothesis partition model for a set of consecutive range cells with detections, and then evaluating the joint likelihood function for each feasible partition of the cells into pairs or singletons. Simulation results that demonstrate the utility of the algorithm are provided using a modern missile tracking simulation environment.
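The pair/singleton partitioning can be sketched by enumerating the feasible partitions of a run of consecutive detection cells and scoring each. The amplitude-ratio "likelihood" (a straddling return splits its energy across two cells) and the singleton prior below are toy stand-ins for the paper's joint likelihood function:

```python
def partitions(cells):
    """Enumerate all ways to split a run of consecutive detection cells
    into adjacent pairs (straddling object) or singletons (one object)."""
    if not cells:
        return [[]]
    out = [[[cells[0]]] + rest for rest in partitions(cells[1:])]
    if len(cells) >= 2:
        out += [[cells[:2]] + rest for rest in partitions(cells[2:])]
    return out

def best_partition(cells, amplitudes):
    """Pick the feasible partition with the highest toy score: a pair is
    rewarded when its two cell amplitudes are comparable (an even energy
    split), a singleton gets an assumed fixed prior score."""
    def score(part):
        s = 0.0
        for group in part:
            if len(group) == 2:
                a, b = amplitudes[group[0]], amplitudes[group[1]]
                s += min(a, b) / max(a, b)   # near 1 for an even split
            else:
                s += 0.4                      # assumed singleton prior
        return s
    return max(partitions(cells), key=score)

# Four consecutive cells: cells 0-1 share one object's energy evenly,
# while cells 2 and 3 hold two distinct returns of unequal amplitude.
amps = {0: 5.0, 1: 5.2, 2: 9.0, 3: 2.0}
part = best_partition([0, 1, 2, 3], amps)
```

The number of feasible partitions of an n-cell run grows as a Fibonacci number, so exhaustive evaluation of the joint likelihood stays tractable for the short runs produced by non-extended targets.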