8 June 2024 Adapting self-supervised learning to the hyperspectral domain: methods, challenges, and lessons learned
Noriaki Kono, Eleanor B. Byler, Charles W. Godfrey, Timothy J. Doster, Tegan H. Emerson
Abstract
Given the scale and complexity of forthcoming hyperspectral imaging (HSI) data, producing labeled datasets at the scale required to improve state-of-the-art performance is impractical and prohibitively costly. Unsupervised pre-training algorithms have revolutionized deep learning for natural language processing and computer vision by tapping into vast troves of unlabeled data, but these advances have seen little adoption in the HSI domain. We present early results from self-supervised pre-training for hyperspectral imagery using masked autoencoders, and compare different pre-training approaches and masking techniques: specifically, mask size, masking dimension (spatial, spectral, or both), mask fraction, and mask coherence (spatially independent or consistent). We summarize our lessons learned and highlight the most promising approaches toward building a foundation model for hyperspectral data.
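The masking axes enumerated in the abstract (dimension, fraction, coherence) can be illustrated with a simple mask generator for a hyperspectral cube. The sketch below is not the authors' implementation; the function name, parameters, and per-element (rather than patch-based) masking are assumptions for illustration, and the mask-size knob is omitted for brevity.

```python
import numpy as np

def make_mask(bands, height, width, fraction=0.75,
              dimension="spatial", coherent=True, rng=None):
    """Illustrative mask for a hyperspectral cube of shape (bands, H, W).

    dimension: "spatial"  -> hide pixel locations,
               "spectral" -> hide whole bands,
               "both"     -> hide independent voxels.
    coherent:  for spatial masks, True hides the same pixels in every band
               (spatially consistent); False draws an independent mask per
               band (spatially independent).
    Returns a boolean array; True marks masked (hidden) elements.
    """
    rng = np.random.default_rng(rng)
    if dimension == "spectral":
        band_mask = rng.random(bands) < fraction
        return np.broadcast_to(band_mask[:, None, None],
                               (bands, height, width)).copy()
    if dimension == "spatial":
        if coherent:
            pix = rng.random((height, width)) < fraction
            return np.broadcast_to(pix, (bands, height, width)).copy()
        return rng.random((bands, height, width)) < fraction
    if dimension == "both":
        return rng.random((bands, height, width)) < fraction
    raise ValueError(f"unknown dimension: {dimension}")

# Example: hide ~75% of pixel locations, consistently across all bands.
mask = make_mask(64, 16, 16, fraction=0.75,
                 dimension="spatial", coherent=True)
```

In a masked-autoencoder setup, the encoder would see only the unmasked elements and the decoder would be trained to reconstruct the hidden ones; sweeping these parameters is what the comparison in the paper amounts to.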
Conference Presentation
© (2024) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE). Downloading of the abstract is permitted for personal use only.
Noriaki Kono, Eleanor B. Byler, Charles W. Godfrey, Timothy J. Doster, and Tegan H. Emerson "Adapting self-supervised learning to the hyperspectral domain: methods, challenges, and lessons learned", Proc. SPIE 13031, Algorithms, Technologies, and Applications for Multispectral and Hyperspectral Imaging XXX, 130310J (8 June 2024); https://doi.org/10.1117/12.3013487
KEYWORDS
Data modeling
Performance modeling
Computer vision technology
Deep learning
Hyperspectral imaging
Library classification systems