Joint data learning panel summary (8 June 2022)
Abstract
Artificial Intelligence/Deep Learning (AI/DL) techniques are based on learning a model from large available data sets. The data sets typically come from a single modality (e.g., imagery), and hence each model is based on a single modality. Likewise, multiple models are often built separately for a common scenario (e.g., video analysis and natural language processing of text describing the situation). Issues of robustness, efficiency, and explainability need to be addressed. A second modality can improve efficiency (e.g., cueing), robustness (e.g., results that are harder to fool with adversarial techniques), and explainability (e.g., corroboration from different sources). The challenge is how to organize the data needed for joint data training and model building. For example, what is needed is (1) a structure for indexing data as an object file, (2) recording of metadata to support effective correlation, and (3) supporting methods of analysis for model interpretability for users. The panel presents a variety of questions and responses discussed, explored, and analyzed for data fusion-based AI tools.
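To make the three needs concrete, the sketch below shows one possible way to index a multimodal sample as a single object with per-modality metadata for correlation. This is a minimal illustration under stated assumptions: the JointSample class, its field names, and the example values are hypothetical and are not drawn from the panel's material.

```python
from dataclasses import dataclass, field
from typing import Any, Dict


@dataclass
class JointSample:
    """Hypothetical record indexing multimodal data as one object (illustrative assumption)."""
    sample_id: str
    modalities: Dict[str, Any] = field(default_factory=dict)  # e.g., {"sar": array, "text": str}
    metadata: Dict[str, Dict[str, Any]] = field(default_factory=dict)  # per-modality correlation info

    def add_modality(self, name: str, data: Any, **meta: Any) -> None:
        # Store the data plus the metadata needed to correlate it with other modalities
        # (timestamps, sensor pose, geolocation, provenance for interpretability).
        self.modalities[name] = data
        self.metadata[name] = meta


# Usage sketch: pair a SAR chip with a text report for joint training.
sample = JointSample(sample_id="scene_0001")
sample.add_modality("sar", data=[[0.1, 0.4], [0.3, 0.2]],
                    timestamp="2022-06-08T12:00Z", sensor="spotlight")
sample.add_modality("text", data="Vehicle observed near the bridge.",
                    source="analyst_report")
```

Keeping the metadata alongside each modality in one object is one way to support both cross-modal correlation during training and traceability when explaining a fused model's output.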
Conference Presentation
© (2022) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Erik Blasch, Andreas Savakis, Yufeng Zheng, Genshe Chen, Ivan Kadar, Uttam Majumder, and Ali K. Raz "Joint data learning panel summary", Proc. SPIE 12122, Signal Processing, Sensor/Information Fusion, and Target Recognition XXXI, 121220K (8 June 2022); https://doi.org/10.1117/12.2619537
KEYWORDS
Synthetic aperture radar
Data modeling
Data fusion
Systems engineering
Information fusion
Systems modeling
Image fusion