There is currently considerable development of small, lightweight lidar systems for applications in autonomous cars. This development makes it possible to equip small UAVs with this type of sensor. Adding an active sensor component, beside the more common passive UAV sensors, can provide additional capabilities. This paper gives experimental examples of lidar data and discusses applications and capabilities for the platform and sensor concept, including the combination with data from other sensors. The lidar can be used for accurate 3D measurements and has potential for detection of partly occluded objects. Additionally, positioning of the UAV can be obtained by combining lidar data with data from other low-cost sensors (such as inertial measurement units). The capabilities are attainable for both indoor and outdoor short-range applications.
Single-photon counting lidar using Geiger-mode avalanche photodiode (GmAPD) arrays can provide high-resolution 3D images at kilometer stand-off distances through coincidence processing. 3D data are useful for detection and identification of targets, especially those so occluded by vegetation that only small patches, smaller than the instantaneous field-of-view of a sensor pixel, have free line-of-sight. To cover an area of interest, e.g. the edge of a forest, with spatial resolution high enough to identify targets, a multi-megapixel 3D image is needed. Current GmAPD arrays are limited to tens of kilopixels. Even if the technical challenges of larger arrays could be solved, the necessary pulse energy per pixel will still limit the effective number of pixels at longer ranges, especially if nominal ocular hazard distance (NOHD) is a concern or if short-pulse fiber lasers are to be used. Thus, scanning of the sensor field-of-view will probably always be necessary. In this paper we describe activities at FOI to explore the potential of single-photon counting panoramic 3D imaging using a GmAPD array detector. Results from outdoor experiments at up to 1.2 km stand-off distances, in day and night conditions, are shown. The impact of background light, and how this is handled by changing the aperture stop size, is considered. Signal processing techniques to go from scattered photon detections via 3D point clouds to voxel-based scene analysis are described. The results support the position that single-photon counting with GmAPD arrays is suitable for 3D imaging in military applications at kilometer stand-off distances.
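As a rough illustration of the coincidence-processing idea described above, the sketch below accumulates single-photon times-of-arrival for one pixel over many pulses into a range histogram and keeps bins that rise significantly above the uniform background floor. All names, bin widths and thresholds are illustrative assumptions, not the processing chain used in the experiments.

```python
import numpy as np

def coincidence_process(toa_ns, bin_ns=0.5, sigma_thresh=5.0):
    """Accumulate photon times-of-arrival (one pixel, many pulses) into a
    range histogram and keep bins that rise significantly above the
    uniform background expected from noise and solar photons."""
    edges = np.arange(0.0, toa_ns.max() + bin_ns, bin_ns)
    counts, _ = np.histogram(toa_ns, bins=edges)
    # Background estimate: the median bin count approximates the noise floor.
    bg = np.median(counts)
    thresh = bg + sigma_thresh * np.sqrt(max(bg, 1.0))  # Poisson std. dev.
    hits = np.flatnonzero(counts > thresh)
    # Convert surviving bin centres from nanoseconds to metres (two-way path).
    ranges_m = (edges[hits] + bin_ns / 2) * 0.299792458 / 2
    return ranges_m, counts[hits]
```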
KEYWORDS: Single photon detectors, LIDAR, Sensors, Detection and tracking algorithms, Expectation maximization algorithms, Signal detection, Target detection, Signal to noise ratio, Optical engineering, Signal processing
Time-correlated single-photon counting lidar provides very high-resolution range measurements, making the technology interesting for 3D imaging of objects behind foliage or other obscuration. We study six peak detection approaches and compare their performance from several perspectives: detection of double surfaces within the instantaneous field of view, range accuracy, performance under sparse sampling, and the number of outliers. The results presented are based on reference measurements of a characterization target. Special consideration is given to the possibility of resolving two surfaces closely separated in range within the field of view of a single pixel. An approach based on fitting a linear combination of impulse response functions to the collected data showed the best overall performance.
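The best-performing approach, fitting a linear combination of impulse response functions, can be sketched as follows. Here the system impulse response is approximated by a Gaussian and a fixed two-return model is fitted with scipy; the actual study uses the measured impulse response and handles more general cases, so this is a minimal illustration under stated assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def irf(t, t0, fwhm):
    """System impulse response, approximated here by a Gaussian."""
    s = fwhm / 2.355
    return np.exp(-0.5 * ((t - t0) / s) ** 2)

def two_return_model(t, a1, t1, a2, t2, fwhm):
    """Linear combination of two shifted impulse responses."""
    return a1 * irf(t, t1, fwhm) + a2 * irf(t, t2, fwhm)

# Synthetic histogram: two surfaces 0.3 ns apart (~4.5 cm) plus Poisson noise.
t = np.arange(0, 30, 0.1)                       # bin centres in ns
truth = two_return_model(t, 80, 14.0, 50, 14.3, 0.5)
counts = np.random.poisson(truth + 2)           # +2: background level

p0 = [60, 13.8, 40, 14.5, 0.5]                  # coarse initial guess
popt, _ = curve_fit(two_return_model, t, counts, p0=p0)
print("fitted return times (ns):", popt[1], popt[3])
```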
Time-correlated single-photon-counting (TCSPC) lidar provides very high resolution range measurements. This makes the technology interesting for three-dimensional imaging of complex scenes with targets behind foliage or other obscurations. TCSPC is a statistical method that demands integration of multiple measurements toward the same area to resolve objects at different distances within the instantaneous field-of-view. Point-by-point scanning will demand significant overhead for the movement, increasing the measurement time. Here, the effect of continuously scanning the scene row-by-row is investigated and signal processing methods to transform this into low-noise point clouds are described. The methods are illustrated using measurements of a characterization target and an oak and hazel copse. Steps between different surfaces of less than 5 cm in range are resolved as two surfaces.
The purpose of this study is to present and evaluate the benefits and capabilities of high-resolution 3D data from unmanned aircraft, especially in conditions where existing methods (passive imaging, 3D photogrammetry) have limited capability. Examples of applications are detection of obscured objects under vegetation, change detection, detection in dark or shadowed environments, and immediate geometric documentation of an area of interest. Applications are exemplified with experimental data from our small UAV test platform 3DUAV with an integrated rotating laser scanner, and with ground truth data collected with a terrestrial laser scanner. We process lidar data combined with inertial navigation system (INS) data to generate a highly accurate point cloud. The combination of INS and lidar data is achieved in a dynamic calibration process that compensates for the navigation errors from the low-cost and lightweight MEMS-based (microelectromechanical systems) INS. This system allows for studies of the whole data collection-processing-application chain and also serves as a platform for further development. We evaluate the applications in relation to system aspects such as survey time, resolution and target detection capabilities. Our results indicate that several target detection/classification scenarios are feasible within reasonable survey times, from a few minutes (cars, persons and larger objects) to about 30 minutes for detection and possibly recognition of smaller targets.
A UAV (Unmanned Aerial Vehicle) with an integrated lidar can be an efficient system for collection of high-resolution
and accurate three-dimensional (3D) data. In this paper we evaluate the accuracy of a system consisting of a lidar sensor
on a small UAV. High geometric accuracy in the produced point cloud is a fundamental prerequisite for detection and
recognition of objects in a single-flight dataset as well as for change detection using two or several data collections over
the same scene. Our work presented here has two purposes: first, to relate the point cloud accuracy to data processing
parameters and, second, to examine the influence of the UAV platform parameters on accuracy. In our work, the
accuracy is numerically quantified as local surface smoothness on planar surfaces, and as distance and relative height
accuracy using data from a terrestrial laser scanner as reference. The UAV lidar system used is the Velodyne HDL-32E
lidar on a multirotor UAV with a total weight of 7 kg. For processing of data into a geographically referenced point
cloud, positioning and orientation of the lidar sensor is based on inertial navigation system (INS) data combined with
lidar data. The combination of INS and lidar data is achieved in a dynamic calibration process that minimizes the
navigation errors in six degrees of freedom, namely the errors of the absolute position (x, y, z) and the orientation (pitch,
roll, yaw) measured by GPS/INS. Our results show that low-cost and lightweight MEMS-based
(microelectromechanical systems) INS equipment with a dynamic calibration process can achieve significantly improved
accuracy compared to processing based solely on INS data.
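A minimal sketch of the idea behind such a dynamic calibration is given below: a six-degree-of-freedom correction to the GPS/INS solution is scored by local surface smoothness (plane-fit residuals) on a patch assumed planar, such as a road segment, and optimized numerically. The cost function, parameterization and optimizer are illustrative assumptions; the actual calibration process is more elaborate.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.spatial.transform import Rotation

def plane_residual(points):
    """RMS distance of points to their best-fit plane (local smoothness)."""
    c = points.mean(axis=0)
    # The smallest singular value of the centred cloud measures the
    # out-of-plane spread.
    _, s, _ = np.linalg.svd(points - c)
    return s[-1] / np.sqrt(len(points))

def cost(x, scan_points):
    """Apply a 6-DOF correction (tx, ty, tz, roll, pitch, yaw in radians)
    to the georeferenced scan and score it by surface smoothness on a
    patch assumed planar."""
    R = Rotation.from_euler("xyz", x[3:]).as_matrix()
    corrected = scan_points @ R.T + x[:3]
    return plane_residual(corrected)

# x0 = np.zeros(6)   # start from the raw GPS/INS solution
# res = minimize(cost, x0, args=(patch,), method="Nelder-Mead")
```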
KEYWORDS: Unmanned aerial vehicles, Sensors, LIDAR, 3D modeling, 3D acquisition, Visualization, Target detection, Clouds, Signal processing, Data modeling
This paper summarizes on-going work on 3D sensing and imaging with laser sensors carried by unmanned aerial vehicles (UAVs). We study sensor concepts, UAVs suitable for carrying the sensors, and signal processing for mapping and target detection applications. We also perform user studies together with the Swedish Armed Forces to evaluate usage in their mission cycle, and conduct interviews to clarify how to present data.
Two ladar sensor concepts for mounting on UAVs are studied. The discussion is based on known performance in commercial ladar systems today and predicted performance in future UAV applications. The small UAV is equipped with a short-range scanning ladar. This system is aimed at quick situational analysis of small areas and at documentation of a situation. The large UAV is equipped with a high-performing photon counting ladar with a matrix detector. Its purpose is to support large-area surveillance, intelligence and mapping operations. Based on these sensors and their performance, signal and image processing support for data analysis is analyzed. Generated data amounts are estimated, and demands on data storage capacity and data transfer are analyzed.
We have tested the use of 3D mapping together with military rangers, both in the planning phase and as a last-minute intelligence update on the target. Feedback from these tests is presented. We are conducting interviews with various military professions to gain a better understanding of how 3D data are used and interpreted. We discuss approaches for how to present data from a 3D imaging sensor to a user.
We present algorithm evaluations for ATR of small sea vessels. The targets are at kilometer distance from the sensors, which
means that the algorithms have to deal with images affected by turbulence and mirage phenomena. We evaluate
previously developed algorithms for registration of 3D-generating laser radar data. The evaluations indicate that our
probabilistic registration method provides some robustness to turbulence- and mirage-induced uncertainties.
We also assess methods for target classification and target recognition on these new 3D data.
An algorithm for detecting moving vessels in infrared image sequences is presented; it is based on optical flow
estimation. Detection of a moving target with an unknown spectral signature in a maritime environment is a challenging
problem due to camera motion, background clutter, turbulence and the presence of mirage. First, the optical flow caused
by the camera motion is eliminated by estimating the global flow in the image. Second, connected regions containing
significant motion that differs from the camera motion are extracted. It is assumed that motion caused by a moving vessel is
more temporally stable than motion caused by mirage or turbulence. Furthermore, it is assumed that the motion caused
by the vessel is more homogeneous, with respect to both magnitude and orientation, than motion caused by mirage and
turbulence. Sufficiently large connected regions with a flow of acceptable magnitude and orientation are considered
target regions. The method is evaluated on newly collected sequences of SWIR and MWIR images, with varying targets,
target ranges and background clutter.
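The core of the detection step can be sketched as follows, using OpenCV's Farneback optical flow: the global (camera) flow is estimated as the per-axis median, subtracted, and sufficiently large connected regions of residual motion are kept. The temporal-stability and orientation-homogeneity tests described above are omitted here, and all parameter values are illustrative assumptions.

```python
import cv2
import numpy as np

def detect_moving_regions(prev, curr, mag_thresh=1.5, min_area=50):
    """Flag regions whose motion differs from the global (camera) motion.
    prev, curr: consecutive 8-bit grayscale frames."""
    flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    # Global flow from camera motion, estimated as the per-axis median.
    global_flow = np.median(flow.reshape(-1, 2), axis=0)
    residual = flow - global_flow
    mag = np.linalg.norm(residual, axis=2)
    mask = (mag > mag_thresh).astype(np.uint8)
    # Keep sufficiently large connected regions as candidate target regions.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    boxes = [stats[i, :4] for i in range(1, n)
             if stats[i, cv2.CC_STAT_AREA] >= min_area]
    return boxes
```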
Finally, we discuss a concept for combining passive and active imaging in an ATR process. The main steps are passive
imaging for target detection, active imaging for target/background segmentation and a fusion of passive and active
imaging for target recognition.
KEYWORDS: Sensors, 3D modeling, 3D acquisition, Image registration, Data modeling, 3D metrology, Clouds, Image segmentation, Reflection, 3D applications
The new generation of laser-based imaging sensors enables collection of range images at video rate at the expense of
somewhat low spatial and range resolution. Combining several successive range images, instead of having to analyze
each image separately, is a way to improve the performance of feature extraction and target classification. In the robotics
community, occupancy grids are commonly used as a framework for combining sensor readings into a representation
that indicates passable (free) and non-passable (occupied) parts of the environment. In this paper we demonstrate how
3D occupancy grids can be used for outlier removal, registration quality assessment and measuring the degree of
unexplored space around a target, which may improve target detection and classification. Examples using data from a
maritime scene, acquired with a 3D FLASH sensor, are shown.
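A minimal 3D occupancy grid of the kind referred to above might look like the sketch below, where voxels hit in several registered frames are kept and single-frame hits are treated as outliers. The class and parameter names are assumptions for illustration, not the representation used in the paper.

```python
import numpy as np

class OccupancyGrid3D:
    """Minimal 3-D occupancy grid: counts hits per voxel over several
    registered range images; rarely hit voxels are treated as outliers."""
    def __init__(self, origin, size, voxel=0.2):
        self.origin = np.asarray(origin, float)
        self.voxel = voxel
        self.counts = np.zeros(size, dtype=np.int32)

    def add_cloud(self, points):
        """Insert one registered (N, 3) point cloud into the grid."""
        idx = ((points - self.origin) / self.voxel).astype(int)
        ok = np.all((idx >= 0) & (idx < self.counts.shape), axis=1)
        np.add.at(self.counts, tuple(idx[ok].T), 1)

    def occupied(self, min_hits=3):
        """Voxels supported by several frames; single-frame hits are
        likely outliers and are suppressed."""
        return np.argwhere(self.counts >= min_hits)
```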
Object detection and material classification are two central tasks in electro-optical remote sensing and hyperspectral
imaging applications. These are challenging problems as the measured spectra in hyperspectral images
from satellite or airborne platforms vary significantly depending on the light conditions at the imaged surface,
e.g., shadow versus non-shadow. In this work, a Digital Surface Model (DSM) is used to estimate different
components of the incident light. These light components are subsequently used to predict what a measured
spectrum would look like under different light conditions. The derived method is evaluated using an urban
hyperspectral data set with 24 bands in the wavelength range 381.9 nm to 1040.4 nm and a DSM created from
LIDAR 3D data acquired simultaneously with the hyperspectral data.
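The prediction step can be illustrated with the following sketch, assuming a Lambertian surface and per-band irradiance split into a direct (sun) and a diffuse (sky) component, with shadow and sky-view factors derived from the DSM. The function and its inputs are hypothetical simplifications of the derived method.

```python
import numpy as np

def predict_spectrum(measured, sun_frac_src, sky_view_src,
                     sun_frac_dst, sky_view_dst, e_sun, e_sky):
    """Predict how a measured spectrum would look under different light
    conditions, assuming a Lambertian surface and per-band irradiance
    split into a direct (sun) and a diffuse (sky) component.

    sun_frac_*: 0 in full shadow, 1 in full sun (from DSM ray casting).
    sky_view_*: fraction of the sky hemisphere visible (from the DSM).
    e_sun, e_sky: per-band direct and diffuse irradiance spectra."""
    irr_src = sun_frac_src * e_sun + sky_view_src * e_sky
    irr_dst = sun_frac_dst * e_sun + sky_view_dst * e_sky
    # The reflectance factor cancels: scale by the irradiance ratio per band.
    return measured * irr_dst / irr_src
```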
The new generation of laser-based FLASH 3D imaging sensors enables data collection at video rate. This opens up for real-time
data analysis but also sets demands on the signal processing. In this paper the possibilities and challenges with this
new data type are discussed. The commonly used focal plane array based detectors produce range estimates that vary
with the target's surface reflectance and target range, and our experience is that the built-in signal processing may not
compensate fully for that. We propose a simple adjustment that can be used even if some sensor parameters are not
known. The cost for the instantaneous image collection is, compared to scanning laser radar systems, lower range accuracy.
By gathering range information from several frames the geometrical information of the target can be obtained. We
also present an approach of how range data can be used to remove foreground clutter in front of a target. Further, we
illustrate how range data enables target classification in near
real-time and that the results can be improved if several
frames are co-registered. Examples using data from forest and maritime scenes are shown.
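The abstract does not detail the proposed adjustment; one common form of such a correction, shown below as an assumption-laden sketch, subtracts an intensity-dependent range bias ("range walk") interpolated from a calibration lookup table built by imaging a reference target at known distance.

```python
import numpy as np

def correct_range_walk(range_m, intensity, lut_intensity, lut_bias):
    """Generic range-walk correction: subtract an intensity-dependent
    range bias interpolated from a calibration lookup table
    (lut_intensity ascending, lut_bias the measured bias in metres)."""
    bias = np.interp(intensity, lut_intensity, lut_bias)
    return range_m - bias
```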
A Bayesian approach for data reduction based on spatial filtering is proposed that enables detection of targets partly occluded by natural forest. The framework aims at creating a synergy between terrain mapping and target detection. It is demonstrated how spatial features can be extracted and combined in order to detect target samples in cluttered environments. In particular, it is illustrated how a priori scene information and assumptions about targets can be translated into algorithms for feature extraction. We also analyze the coupling between features and assumptions, because it gives knowledge about which features are general enough to be useful in other environments and which are tailored for a specific situation. Two types of features are identified: nontarget indicators and target indicators. The filtering approach is based on a combination of several features. A theoretical framework for combining the features into a maximum likelihood classification scheme is presented. The approach is evaluated using data collected with a laser-based 3-D sensor in various forest environments with vehicles as targets. Over 70% of the target points are detected at a false-alarm rate of <1%. We also demonstrate how selecting different feature subsets influences the results.
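The maximum likelihood combination of features can be sketched as below under a naive-Bayes (conditional independence) assumption, with per-feature log-densities learned from training data. The names and the independence assumption are illustrative; the paper presents the full theoretical framework.

```python
import numpy as np

def classify_points(features, logpdf_target, logpdf_clutter, prior=0.5):
    """Maximum likelihood / naive-Bayes combination of several features.
    features: (N, F) array; logpdf_*: lists of per-feature log-density
    functions learned from training data (assumed available)."""
    llr = np.log(prior) - np.log(1 - prior)
    score = np.full(len(features), llr)
    for f, lp_t, lp_c in zip(features.T, logpdf_target, logpdf_clutter):
        # Conditional independence assumption: log-likelihoods add.
        score += lp_t(f) - lp_c(f)
    return score > 0.0   # True for points classified as target samples
```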
This paper describes the development of a high resolution waveform recording laser scanner and presents results
obtained with the system. When collecting 3-D data on small objects, high range and transverse resolution is needed. In
particular, if the objects are partly occluded by sparse materials such as vegetation, multiple returns from a single laser
pulse may limit the image quality. The ability to resolve multiple echoes depends mainly on the laser pulse width and the
receiver bandwidth. To achieve high range resolution for multiple returns, we have developed a high-performance
3-D LIDAR, called HiPer, with a short-pulse fibre laser (500 ps), fast detectors (70 ps rise time) and a 20
GS/s oscilloscope for fast sampling. HiPer can acquire the full waveform, which can be used for off-line processing. This
paper will describe the LIDAR system and present some image examples. The signal processing will also be described,
with some examples from the off-line processing and the benefit of using the complete waveform.
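As an illustration of off-line processing of such recorded waveforms, the sketch below matched-filters a sampled waveform with the emitted pulse and picks peaks to extract multiple returns. The sampling-rate handling and thresholds are assumptions, not HiPer's actual processing chain.

```python
import numpy as np
from scipy.signal import fftconvolve, find_peaks

def extract_returns(waveform, pulse, fs_gs=20.0, min_prominence=0.1):
    """Detect multiple returns in a full-waveform record sampled at
    fs_gs gigasamples/s by matched filtering with the emitted pulse
    followed by peak picking."""
    mf = fftconvolve(waveform, pulse[::-1], mode="same")
    mf /= mf.max()
    peaks, _ = find_peaks(mf, prominence=min_prominence)
    # Sample index -> two-way time -> one-way range in metres.
    return peaks / (fs_gs * 1e9) * 2.99792458e8 / 2
```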
Laser-based 3D sensors measure range with high accuracy and allow for detection of objects behind various types of occlusion, e.g., tree canopies. Range information is valuable for detection of small objects that are typically represented by 5-10 pixels in the data set. Range information is also valuable in tracking problems when the tracked object is occluded during parts of its movement and when there are several objects in the scene. In this paper, on-going work on detection and tracking is presented. Detection of partly occluded vehicles is discussed. To detect partly occluded objects we take advantage of the range information for removing foreground clutter. The target detection approach is based on geometric features, for example local surface detection, shadow analysis and height-based detection. Initial results on tracking of humans are also presented. The benefits of range information are discussed. Results are illustrated using outdoor measurements with a 3D FLASH LADAR sensor and a 3D scanning LADAR.
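Foreground clutter removal using range information can be as simple as the gate sketched below, which keeps only returns near an assumed target range (e.g. taken from a range-histogram peak). This is a minimal illustration under stated assumptions, not the full approach.

```python
import numpy as np

def remove_foreground(points, target_range, gate=1.5):
    """Suppress foreground clutter (e.g. branches) by keeping only
    points within a range gate centred on the target's range.
    points: (N, 3) array in sensor coordinates; ranges in metres."""
    r = np.linalg.norm(points, axis=1)
    keep = np.abs(r - target_range) < gate
    return points[keep]
```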
Laser-based 3D sensors measure range with high accuracy and allow for detection of several reflecting surfaces for each
emitted laser pulse. This makes them particularly suitable for sensing objects behind various types of occlusion, e.g.
camouflage nets and tree canopies. Nevertheless, automatic detection and recognition of targets in forested areas is a
challenging research problem, especially since foreground objects often cause targets to appear fragmented.
In this paper we propose a sequential approach for detection and recognition of man-made objects in natural forest
environments using data from laser-based 3D sensors. First, ground samples and samples too far above the ground (that
cannot possibly originate from a target) are identified and removed from further processing. This step typically results in
a dramatic data reduction. Possible target samples are then detected using a local flatness criterion, based on the
assumption that targets are among the most structured objects in the remaining data. The set of samples is reduced
further through shadow analysis, where any possible target locations are found by identifying regions that are occluded
by foreground objects. Since we anticipate that targets appear fragmented, the remaining samples are grouped into a
set of larger segments, based on general target characteristics such as maximal dimensions and generic shape. Finally,
the segments, each of which corresponds to a target hypothesis, undergo automatic target recognition in order to find the
best match from a model library. The approach is evaluated in terms of ROC on real data from scenes in forested areas.
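The local flatness criterion can be illustrated with an eigenvalue-based sketch: for each point, the smallest eigenvalue of the neighbourhood covariance relative to the total variance is close to zero on locally planar, structured surfaces. The radius and neighbour-count thresholds below are assumptions for illustration.

```python
import numpy as np
from scipy.spatial import cKDTree

def flatness_scores(points, radius=0.3):
    """Local flatness from the eigenvalues of each point's neighbourhood
    covariance: a small lambda_min / sum ratio means locally planar
    (structured), a cue for man-made surfaces."""
    tree = cKDTree(points)
    scores = np.full(len(points), np.nan)
    for i, p in enumerate(points):
        nbrs = tree.query_ball_point(p, radius)
        if len(nbrs) < 5:
            continue                      # too few neighbours to judge
        cov = np.cov(points[nbrs].T)
        w = np.linalg.eigvalsh(cov)       # ascending eigenvalues
        scores[i] = w[0] / w.sum()        # ~0 for a perfect plane
    return scores
```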
In this paper we study the potential of using deconvolution techniques on full-waveform laser radar data for pulse
detection in cluttered environments, e.g. when a land-mine is partly occluded by vegetation. A pulse width greater than
the distance between the reflecting surfaces within the footprint results in a signal that is composed of overlapping
reflections that may be very difficult to analyze successfully with standard pulse detection techniques. We demonstrate
that deconvolution improves the chance of successful decomposition of waveform signals into the components
corresponding to the reflecting objects in the path of the laser beam. Experimental data were analyzed in terms of pulse
extraction capability and distance accuracy. It was found that deconvolution increases the pulse extraction performance,
but that surfaces closer than about 40% of the laser pulse width are still very difficult to detect and that the number of
spurious, erroneously extracted points is the price to pay for increased pulse detection probability.
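One possible deconvolution choice, shown here as a sketch, is Richardson-Lucy deconvolution of the waveform with the measured system pulse to sharpen overlapping reflections before peak detection. The abstract does not state which deconvolution technique was used, so this is illustrative only.

```python
import numpy as np
from scipy.signal import fftconvolve

def richardson_lucy_1d(waveform, pulse, n_iter=50):
    """Richardson-Lucy deconvolution of a full-waveform return with the
    (measured) system pulse shape. The waveform is assumed
    background-subtracted and non-negative."""
    pulse = pulse / pulse.sum()
    est = np.full_like(waveform, waveform.mean(), dtype=float)
    flipped = pulse[::-1]
    for _ in range(n_iter):
        conv = fftconvolve(est, pulse, mode="same")
        conv = np.maximum(conv, 1e-12)     # avoid division by zero
        est *= fftconvolve(waveform / conv, flipped, mode="same")
    return est
```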
In this paper, we present techniques related to registration and change detection using 3D laser radar data. First, an experimental evaluation of a number of registration techniques based on the Iterative Closest Point algorithm is presented. As an extension, an approach for removing noisy points prior to the registration process by keypoint detection is also proposed. Since the success of accurate registration typically depends on a sufficiently accurate starting estimate, coarse registration is an important functionality. We address this problem by proposing an approach for coarse 2D registration, which is based on detecting vertical structures (e.g. trees) in the point sets and then finding the transformation that gives the best alignment. Furthermore, a change detection approach based on voxelization of the registered data sets is presented. The 3D space is partitioned into a cell grid and a number of features for each cell are computed. Cells for which the features have changed significantly (statistical outliers) then correspond to significant changes.
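The voxel-based change detection step can be sketched as below: each registered data set is reduced to per-cell features (here just point count and mean height), and cells whose features differ markedly between the data sets are flagged. The fixed thresholds stand in for the statistical outlier test and are assumptions for illustration.

```python
import numpy as np

def voxel_features(points, voxel=0.5):
    """Point count and mean height per occupied voxel (two simple
    per-cell features; the paper computes several)."""
    keys = np.floor(points / voxel).astype(int)
    feats = {}
    for k, p in zip(map(tuple, keys), points):
        cnt, zsum = feats.get(k, (0, 0.0))
        feats[k] = (cnt + 1, zsum + p[2])
    return {k: (c, z / c) for k, (c, z) in feats.items()}

def changed_cells(ref, new, count_ratio=3.0, dz=0.5):
    """Cells whose features differ significantly between the two data
    sets (statistical-outlier test simplified to fixed thresholds)."""
    changes = []
    for k in set(ref) | set(new):
        if k not in ref or k not in new:
            changes.append(k)             # cell appeared or vanished
            continue
        (c0, z0), (c1, z1) = ref[k], new[k]
        if max(c0, c1) / min(c0, c1) > count_ratio or abs(z0 - z1) > dz:
            changes.append(k)
    return changes
```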
The objective of this paper is to present the Swedish land mine and UXO detection project "Multi Optical Mine Detection System" (MOMS). The goal of MOMS is to provide knowledge and competence for fast detection of mines, especially surface-laid mines. The first phase, running 2005-2009, is essentially a feasibility study which focuses on the possibilities and limitations of a multi-sensor system with both active and passive EO sensors. Sensor concepts used, alone or in different combinations, include 3-D imaging, retro-reflection detection, multi-spectral imaging, thermal imaging, polarization and fluorescence. The aim of the MOMS project is presented, and the research and investigations carried out during the first years are described.
In this paper, a number of techniques for segmentation and classification of airborne laser scanner data are presented. First, a method for ground estimation is described that is based on region growing starting from a set of ground seed points. In order to prevent misclassification of buildings and vegetation as ground, a number of non-ground regions are first extracted, in which seed points are discarded. Then, a decision-level fusion approach for building detection is proposed, in which the outputs of different classifiers are combined in order to improve the final classification results. Finally, a technique for building reconstruction is briefly outlined. In addition to being a tool for creating 3D building models, it also serves as a final step in the building classification process, since it excludes regions not belonging to any roof segment in the final building model.
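The ground-estimation step can be illustrated with a minimal region-growing sketch over a gridded surface model: growth proceeds from seed cells to 4-neighbours whose height difference stays below a step threshold. The grid representation and threshold are assumptions for illustration.

```python
import numpy as np

def grow_ground(dsm, seeds, max_step=0.3):
    """Grow a ground mask over a gridded surface model from seed cells,
    accepting a 4-neighbour if its height differs by less than max_step
    metres. Seeds inside previously extracted non-ground regions are
    assumed to have been discarded already."""
    ground = np.zeros(dsm.shape, bool)
    stack = list(seeds)                   # seeds: [(row, col), ...]
    for s in stack:
        ground[s] = True
    while stack:
        r, c = stack.pop()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            rr, cc = r + dr, c + dc
            if (0 <= rr < dsm.shape[0] and 0 <= cc < dsm.shape[1]
                    and not ground[rr, cc]
                    and abs(dsm[rr, cc] - dsm[r, c]) < max_step):
                ground[rr, cc] = True
                stack.append((rr, cc))
    return ground
```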
As a part of the Swedish mine detection project MOMS, an initial field trial was conducted at the Swedish EOD and
Demining Centre (SWEDEC). The purpose was to collect data on surface-laid mines, UXO, submunitions, IEDs, and
background with a variety of optical sensors, for further use in the project. Three terrain types were covered: forest,
gravel road, and an area which had recovered after total removal of all vegetation some years before. The sensors used in
the field trial included UV, VIS, and NIR sensors as well as thermal, multi-spectral, and hyper-spectral sensors, 3-D laser
radar and polarization sensors. Some of the sensors were mounted on an aerial work platform, while others were placed
on tripods on the ground. This paper describes the field trial and presents some initial results obtained from the
subsequent analysis.