Multi-modal data fusion for situational awareness is of interest because fused data can provide more information than the individual modalities alone. However, many questions remain, including which data are beneficial, which algorithms perform best or fastest, and where in the processing pipeline data should be fused. In this paper, we explore some of these questions through a processing pipeline designed for multi-modal data fusion in an autonomous UAV landing scenario. We assess landing zone identification methods using two data modalities: hyperspectral imagery and LIDAR point clouds. Using hyperspectral image and LIDAR data from two datasets, one of Maui and one of a university campus, we assess the accuracies of different landing zone identification methods, compare rule-based and machine-learning-based classifications, and show that, depending on the dataset, fusion does not always increase performance. However, we show that machine learning methods can be used to ascertain the usefulness of individual modalities, and their resulting attributes, when used to perform classification.
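The abstract's closing claim, that machine learning can gauge the usefulness of individual modalities, is commonly realized through per-feature importance scores on a feature-level fusion. The sketch below illustrates that general idea only; the random-forest choice, the array names `hsi_features` and `lidar_features`, the attribute counts, and the synthetic labels are all assumptions for illustration, not the paper's actual pipeline.

```python
# Minimal sketch: random-forest feature importances as a proxy for how much
# each modality contributes to landing-zone classification. All feature
# names, shapes, and labels here are illustrative assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n_pixels = 5000

# Hypothetical per-pixel attributes: 10 hyperspectral band statistics and
# 3 LIDAR-derived terrain attributes (e.g., slope, roughness, height).
hsi_features = rng.normal(size=(n_pixels, 10))
lidar_features = rng.normal(size=(n_pixels, 3))
labels = rng.integers(0, 2, size=n_pixels)  # 1 = safe landing zone (synthetic)

# Fuse at the feature level by stacking the modalities column-wise.
fused = np.hstack([hsi_features, lidar_features])
X_train, X_test, y_train, y_test = train_test_split(fused, labels, random_state=0)

clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_train, y_train)

# Group the learned importances by modality to see which data source the
# classifier actually relies on for this dataset.
importances = clf.feature_importances_
print("HSI importance:  ", importances[:10].sum())
print("LIDAR importance:", importances[10:].sum())
print("Fused accuracy:  ", clf.score(X_test, y_test))
```

On real data, comparing these grouped importances across datasets would show why fusion helps in some scenes and not others, consistent with the abstract's finding.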
We have developed a prototype real-time computer for a bathymetric lidar capable of producing point clouds attributed with total propagated uncertainty (TPU). This real-time computer employs a “mixed-mode” architecture comprising an FPGA, a CPU, and a GPU. Noise reduction and ranging are performed in the digitizer’s user-programmable FPGA, and coordinates and TPU are calculated on the GPU. A Keysight M9703A digitizer with user-programmable Xilinx Virtex-6 FPGAs digitizes as many as eight channels of lidar data, performs ranging, and delivers the data to the CPU via PCIe. The floating-point-intensive coordinate and TPU calculations are performed on an NVIDIA Tesla K20 GPU. Raw data and computed products are written to an SSD RAID, and an attributed point cloud is displayed to the user. This prototype computer has been tested using waveforms measured to a depth of 7 m at a water tank on the Georgia Tech campus, and with simulated waveforms to a depth of 20 m. Preliminary results show the system can compute, store, and display about 20 million points per second.
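For context, the floating-point work the abstract assigns to the GPU, converting each range/angle measurement into coordinates and propagating the measurement uncertainties to a per-point TPU, has roughly the shape sketched below. This is an illustrative, vectorized NumPy stand-in for GPU kernels; the flat sensor-relative geometry, the variable names, and the first-order variance propagation are assumptions, not the system's actual equations (a real bathymetric system would also georeference against platform position/attitude and refract rays through the water surface).

```python
# Illustrative sketch of a per-return coordinate + TPU computation.
# Geometry and uncertainty model are simplifying assumptions.
import numpy as np

def coords_and_tpu(r, theta, sigma_r, sigma_theta):
    """r: slant range (m); theta: off-nadir scan angle (rad).
    sigma_r / sigma_theta: 1-sigma range and angle uncertainties.
    Returns (x, z) coordinates and first-order propagated uncertainties."""
    x = r * np.sin(theta)
    z = -r * np.cos(theta)
    # First-order variance propagation through x = r*sin(theta):
    #   sigma_x^2 = (dx/dr)^2 * sigma_r^2 + (dx/dtheta)^2 * sigma_theta^2
    sigma_x = np.hypot(np.sin(theta) * sigma_r, r * np.cos(theta) * sigma_theta)
    sigma_z = np.hypot(np.cos(theta) * sigma_r, r * np.sin(theta) * sigma_theta)
    return x, z, sigma_x, sigma_z

# Throughput-style usage: millions of returns in one vectorized call,
# mimicking the batch, data-parallel character of the GPU workload.
n = 2_000_000
r = np.random.uniform(5.0, 25.0, n).astype(np.float32)
theta = np.random.uniform(-0.35, 0.35, n).astype(np.float32)
x, z, sx, sz = coords_and_tpu(r, theta, sigma_r=0.05, sigma_theta=1e-4)
```

Because every return is independent, this computation maps naturally onto one GPU thread per return, which is why offloading it to a Tesla-class GPU supports the reported rate of roughly 20 million points per second.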