This PDF file contains the front matter associated with SPIE Proceedings Volume 12519, including the Title Page, Copyright information, Table of Contents, and Conference Committee listings.
Hyperspectral imaging is not only a very special type of imaging but also a special type of spectroscopy, and hyperspectral imagers come in many different architectures. While this paper concentrates on the push-broom architecture only, there are still several parameters for which a hyperspectral camera may be optimized in design. This optimization means that the best-fitting use cases are in effect chosen simultaneously with the design decisions. It is not always self-evident how hyperspectral camera parameters affect the measured data and its interpretation. A push-broom hyperspectral camera essentially consists of two separate optical parts and a camera, each of which processes the information originating from the target. The selection of the optimization parameters is based on camera design principles, which may differ significantly from one manufacturer or instrument to another. Comparing these instruments is not easy, but it is often also unnecessary, as not all parameters are equally important in every practical use case. When choosing a hyperspectral camera, users should study the specified performance parameters with the intended application in mind: favoring one parameter in the design results in a loss of performance in another. These parameter pairs include spectral sampling vs. light collection efficiency, number of pixels vs. frame rate, and optical sharpness vs. sampling artifacts in the image.
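The pixel-count vs. frame-rate pair can be illustrated with a back-of-the-envelope readout budget; the pixel counts and readout rate below are assumed for illustration, not taken from any particular camera.

```python
# Illustrative trade-off: spatial x spectral pixel count vs. achievable frame
# rate for a push-broom sensor limited purely by detector readout.
# All numbers are assumed, not specifications of a real instrument.

def max_frame_rate(spatial_px, spectral_px, pixel_rate_hz):
    """Readout-limited frame rate: each frame reads spatial * spectral pixels."""
    return pixel_rate_hz / (spatial_px * spectral_px)

rate_a = max_frame_rate(1024, 256, 100e6)   # modest format: ~381 lines/s
rate_b = max_frame_rate(2048, 512, 100e6)   # 4x the pixels: ~95 lines/s
```

Quadrupling the pixel count quarters the achievable line rate at a fixed readout bandwidth, which is exactly the kind of coupled design decision the paper describes.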
In our method, we calibrate the high-resolution frequency-domain optical coherence tomography (FD-OCT) spectrometer using a single-mode fiber-optic Michelson interferometer coupled with the broadband super-luminescent light-emitting diode (SLED) source used in the very same FD-OCT system. The SLED light filtered by the Michelson interferometer exhibits fringes that are equally spaced in the frequency domain, with a spacing controlled by the difference in the interferometer's arm lengths. By performing measurements for several optical path differences between the interferometer arms, we find the nonlinear dispersion of the grating spectrometer. We use this dispersion when employing the FD-OCT system as a distance gauge.
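The underlying relation is standard interferometry: a Michelson interferometer with arm-length difference ΔL imposes fringes with frequency-domain period c/(2ΔL). The sketch below simulates such a fringe pattern and recovers that spacing; the arm-length difference and frequency grid are assumed values for illustration.

```python
import numpy as np

# A Michelson interferometer with arm-length difference dL filters a broadband
# source into fringes with frequency-domain period c / (2 * dL).
c = 3.0e8            # speed of light, m/s
dL = 1.0e-3          # arm-length difference, 1 mm (assumed)

nu = np.linspace(2.00e14, 2.10e14, 200_001)          # optical frequency grid, Hz
fringes = 0.5 * (1.0 + np.cos(2.0 * np.pi * nu * 2.0 * dL / c))

# Locate fringe maxima and measure their average spacing in frequency.
d = np.diff(fringes)
peaks = np.where((d[:-1] > 0) & (d[1:] <= 0))[0] + 1
measured_spacing = np.mean(np.diff(nu[peaks]))

expected_spacing = c / (2.0 * dL)                    # 150 GHz for dL = 1 mm
```

Measuring where these known-spacing fringes land on the detector, for several values of ΔL, is what lets the nonlinear pixel-to-frequency mapping of the grating spectrometer be recovered.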
Reflectance spectroscopy, especially at infrared wavelengths, is often used for contact, standoff, and remote sensing of solid materials. The reflectance spectra of solids, however, are complex, depending on many factors even for the same material. Such phenomena can be modeled if the optical constants as a function of wavelength or wavenumber, n(ν) and k(ν), are known. Measuring the optical constants of solids, however, is challenging, particularly for powdered materials, for which a pressed pellet of the neat material is often used when a crystalline specimen is not available. In this work, three techniques (ellipsometry, single-angle reflectance, and KBr transmission spectroscopy) are applied to an organic material, acetaminophen, for comparison, and the effects on the modeled spectra are shown.
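One reason n(ν) and k(ν) are so useful is that simple observables follow from them directly. For example, the normal-incidence specular reflectance of a smooth surface is given by the Fresnel relation; the values below are illustrative, not measurements from this work.

```python
# Normal-incidence Fresnel reflectance from the complex refractive index n + ik.
# Illustrative values only; not data from the paper.

def fresnel_reflectance(n, k):
    """Specular reflectance at normal incidence for a smooth surface."""
    return ((n - 1.0)**2 + k**2) / ((n + 1.0)**2 + k**2)

r_glasslike = fresnel_reflectance(1.5, 0.0)   # ~0.04, the familiar 4% of glass
```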
Spectral Remote Sensing for Space Situational Awareness I: Joint Session with Conferences 12519 and 12546
Multi-spectral infrared remote sensing is a powerful tool for characterizing space objects, but it is rarely applied due to a lack of applicable data. A few relevant historical databases, however, such as the NASA Wide-field Infrared Survey Explorer (WISE) space debris database, exist and permit investigation of the properties of space objects. This space debris database includes measurements in the mid-wave and long-wave infrared bands. In this work we demonstrate statistical characterization of unresolved multi-spectral infrared space debris data through analysis of the space object populations' observed infrared properties. Space objects are categorized according to derived thermal properties, and statistical tests are developed that allow classification based on new measurements of thermal properties.
This paper presents preliminary results on the characterization of the DirecTV-10 satellite with photometric observations during a glint season from 04:00 to 08:00 UTC on 23 February 2021 with the U.S. Air Force Academy's USAFA-16 telescope, together with simulations of the scene using the physics-based Digital Imaging and Remote Sensing Image Generation (DIRSIG™) simulator developed by the Rochester Institute of Technology (RIT). The objective of this work is to find the set of operator-tuned parameters needed by the simulator to match observations as closely as possible. To accomplish this, the parameters of the optical system, the latitude/longitude and altitude of the telescope, the two-line element (TLE) set of the satellite, and the atmospheric conditions at the time of the observation are input into DIRSIG™ to carry out the simulations; all parameters are assumed to remain constant throughout the observations. The optical system, USAFA-16, is a small-aperture telescope equipped with a filter wheel that provides photometric, spectroscopic, and polarimetric images of the satellite. The results reported in this paper consist of an effort to correlate wide-band photometric images of the satellite with simulated images of the same wavebands. We use a high-fidelity CAD model of the satellite together with material properties, such as pristine reflectance values and BRDF measurements of the model's many components, provided by the Air Force Research Laboratory (AFRL), as well as ancillary information. Our preliminary results demonstrate that DIRSIG™ may be used to characterize the satellite to some degree by correlating calibrated magnitude patterns observed in photometric images. Further investigation is required to search the parameter space systematically and move toward better agreement between observed and simulated data.
The United States Air Force Academy (USAFA) operates the Falcon Telescope Network (FTN) to support its research program on the utility of satellite optical signatures for Space Situational Awareness. In addition to collecting photometric, spectroscopic, and polarimetric data, the FTN sensors equipped with diffraction-grating elements also operate as slitless spectrographs. FTN spectroscopic data have been used to demonstrate that different stable geosynchronous (GEO) satellites can be effectively distinguished. Because the attitude of a GEO's unarticulated parts (e.g., bus) and the axis of rotation of its articulated parts (e.g., solar panel) are predominantly fixed, the light curves and time-resolved spectra are expected to be nearly repeatable from night to night. The spectra of GEOs may therefore be effective identifying signatures. To demonstrate the ability to distinguish GEOs using spectroscopic data, we reduce the spectra to feature vectors of smaller dimensionality. That can be accomplished by applying a linear dimensionality-reduction technique, e.g., Principal Component Analysis (PCA), or by using a physics-based transformation that consists of smoothing and under-sampling the spectra. The PCA features consist of up to the five most prominent principal components. The physics-based feature vector is the smoothed GEO spectral reflectance sampled at 37 fixed and equally spaced wavelengths. The first approach also generates a visualizable 2-dimensional representation using the first two PCA components, while the second approach preserves as much information as allowed by the spectrograph's effective resolution. Using satellite names or numbers as class labels, we trained a number of classifiers with the GEO feature vectors. Our analyses showed that multi-GEO classification can achieve accuracy as high as 98%.
We also demonstrated that, instead of collecting many spectra over a range of solar phase angles as training data, we can synthesize training spectra from a limited number of reference spectra and still achieve satisfactory classification accuracies.
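The PCA feature-extraction route described above can be sketched in a few lines. This is a minimal stand-in, not the authors' pipeline: the spectra are synthetic, and a simple nearest-centroid rule substitutes for the trained classifiers used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_wl = 200                                   # wavelength samples per spectrum

# Two synthetic "satellite" classes: distinct smooth base spectra plus noise.
base_a = np.sin(np.linspace(0, 3, n_wl)) + 2.0
base_b = np.cos(np.linspace(0, 3, n_wl)) + 2.0
X = np.vstack([base_a + 0.05 * rng.standard_normal((30, n_wl)),
               base_b + 0.05 * rng.standard_normal((30, n_wl))])
y = np.array([0] * 30 + [1] * 30)

# PCA via SVD of the mean-centered data; keep up to 5 components,
# matching the paper's "up to five most prominent principal components".
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
features = Xc @ Vt[:5].T

# Nearest-centroid classification in the reduced PCA space.
centroids = np.vstack([features[y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((features[:, None, :] - centroids)**2).sum(-1), axis=1)
accuracy = (pred == y).mean()
```

The first two columns of `features` are exactly the visualizable 2-D representation the abstract mentions.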
Spectral Remote Sensing for Space Situational Awareness II: Joint Session with Conferences 12519 and 12546
Hyperspectral remote sensing has been proposed as a method to extract quantitative information about resident space objects (RSOs) for space domain awareness (SDA). Measured spectral signatures can be used to extract information about material composition, satellite pose, satellite classification, and other quantities describing the state of an RSO. This is of particular interest for unresolved RSOs (URSOs), since high spectral resolution can help resolve an object spectrally even though it is not resolved spatially. A challenge is the limited amount of spectral data available for algorithm development, testing, and validation. Physics-based modeling and simulation tools such as the Digital Imaging and Remote Sensing Image Generation (DIRSIG™) model can help us develop an understanding of RSO spectral signatures and generate spectral signature databases for the design, testing, and validation of exploitation algorithms. This paper presents preliminary results of simulated resolved and unresolved imagery of the DirecTV-10 and AMC-1 satellites using DIRSIG™. Simulation results illustrate the spatial, spectral, and temporal variability of both multi-spectral and hyperspectral signatures, as well as mixing phenomena when going from resolved to unresolved imagery. Simulated data can help us develop an understanding of RSO behavior that can inform the design, development, and testing of algorithms for image exploitation for SDA.
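The mixing phenomenon when an object goes from resolved to unresolved is commonly described with a linear mixing model: the single unresolved pixel tends toward an area-weighted sum of the component material spectra. The materials, reflectance shapes, and fractions below are assumed for illustration.

```python
import numpy as np

# Linear mixing sketch: an unresolved pixel spectrum as the projected-area-
# weighted combination of component spectra (synthetic reflectances).
wavelengths = np.linspace(0.4, 2.5, 50)               # microns
solar_panel = 0.1 + 0.05 * wavelengths                # assumed reflectance shape
mli_blanket = 0.6 - 0.1 * wavelengths                 # assumed reflectance shape

fractions = np.array([0.7, 0.3])                      # projected-area fractions
components = np.vstack([solar_panel, mli_blanket])
unresolved = fractions @ components                   # mixed pixel spectrum
```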
Recent work demonstrates that convolutional neural networks can be trained to recognize artificial satellites from spatially unresolved ground-based observations (SpectraNet). SpectraNet enables space domain awareness (SDA) catalogs to be enriched with object identity, a critical source of information for space domain stakeholders. As learned spectral SDA matures, the conditions for training and deploying performant and calibrated neural network recognition algorithms must be measured. In this work we present a simulated three-year baseline of observations using a longslit spectrograph on a single telescope. We use this dataset to develop a framework for measuring baseline data requirements for performant SpectraNet models, and for testing the performance of those models after deployment. On this limited (single telescope, longslit spectrograph) setup, the presented framework returns a performant model after three weeks of collections. Further, we find that a model can be deployed for a full annual cycle after twenty-six weeks of data collection, and that the model reaches maximum sustained inference performance after a year. Thus a SpectraNet-powered longslit spectrograph can provide tactical inferences after a few weeks and be retrained to infer through seasonal variability during deployment. We find that the simulated system and dataset regularly exceed 82% classification accuracy, and we discuss performance improvements with enhanced instrumentation and/or multi-telescope networks.
Recent work demonstrates recognition of artificial satellites in spatially unresolved observations by utilizing learned spectroscopic classification (SpectraNet). That proof of concept exposes critical identifying information currently lacking in catalogs used by space domain awareness stakeholders. In this work we present experiments to increase the accessibility and efficiency of SpectraNet-enabled systems by probing the bandpass and resolution requirements for learned recognition of satellites. To enable affordable, off-the-shelf instrumentation, this work focuses on wavelength ranges accessible to silicon-based detectors (400-1000 nanometers). While the SpectraNet proof of concept utilized a medium-resolution spectrograph on a 3.6-meter telescope at 10,000 feet elevation, we show that the identifying spectral features relate to an object's overall spectral energy distribution and are accessible at significantly lower spectral resolution. This finding relaxes the need for large telescopes at high altitude. We further demonstrate that the technology can be utilized via simultaneous multi-band filter photometry, and we discuss design considerations for properly obtaining simultaneous photometry. Thus this work demonstrates that, in simulation, learned spectral recognition is an effective technology from high-resolution spectrographs through simultaneous multi-filter photometric instruments. We provide experiments to understand the minimum engineered system needed to perform effective learned recognition, such that the technology can be hardened and widely proliferated.
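The degradation from a high-resolution spectrum to simultaneous multi-band photometry can be approximated by integrating the spectrum over a few broad filter bands. The band edges and the synthetic spectrum below are assumed purely for illustration.

```python
import numpy as np

# Reduce a high-resolution spectrum to four broad-band photometric values
# over the silicon-detector range (400-1000 nm). Band edges are assumed.
wl = np.linspace(400.0, 1000.0, 1201)            # wavelength grid, nm
spectrum = 1.0 + 0.3 * np.sin(wl / 50.0)         # synthetic high-res spectrum

bands = [(400, 550), (550, 700), (700, 850), (850, 1000)]
photometry = np.array([spectrum[(wl >= lo) & (wl < hi)].mean()
                       for lo, hi in bands])     # one value per filter
```

If the identifying features live in the overall spectral energy distribution rather than narrow lines, the four-element `photometry` vector retains much of the discriminative information of the full `spectrum`.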
Applications of Machine Learning in Spectral Imaging I
The operationalization of precision agriculture imaging-based systems, especially in staple crops like maize (Zea mays L.), requires a quantitative comparison of yield forecast approaches toward improved crop management. Here, we compare linear- and exponential-based silage yield models for maize to machine learning (ML)-based yield models utilizing spaceborne multispectral imagery (MSI) and unmanned aerial system (UAS)-collected hyperspectral imagery (HSI), respectively. We collected UAS imagery in a maize field in upstate New York at the V10 growth stage using a Headwall Nano-Hyperspec 272-band visible and near-infrared imaging system to test the accuracy of a feed-forward neural network yield-estimation regression model as well as a support vector regression (SVR) model. Landsat imagery of the same field was collected on ten separate occasions throughout the season for use in the linear and exponential regressions, while ground-truth silage yield data were provided by an on-board yield monitor during harvest. The neural network regression produced a mean absolute error (MAE) between 4.6% and 13%, the linear and exponential regressions yielded a best performance of 5.5%, and the SVR model ranged from 1.16% to 4.56% MAE. These results bode well for future implementation of such silage maize yield modeling approaches leveraging hyperspectral data that include the spectral red edge. However, we suggest that model efficacy should be evaluated for use in other regions.
We are developing machine learning algorithms to identify chemicals of interest by their diffuse infrared (IR) reflectance signatures. To capture the signatures themselves, we are developing a cart-based mobile system for the detection of trace explosives on surfaces by active IR backscatter hyperspectral imaging (HSI). We refer to this technology as Infrared Backscatter Imaging Spectroscopy (IBIS). A wavelength-tunable multi-chip infrared quantum cascade laser (QCL) is used to interrogate a surface while a mercury cadmium telluride (MCT) focal plane array (FPA) collects backscattered images to build up an HSI cube. The HSI cube is processed, and the extracted spectral information is fed into an algorithm to detect and identify chemical traces. The machine learning algorithm utilizes a one-dimensional (1-D) convolutional neural network (CNN) that has been trained on augmented FTIR diffuse reflectance spectra. In this manuscript, we implement a 1-D CNN to identify chemicals within an IBIS hypercube. This demonstrates a form of active chemical imaging in which the CNN identifies a chemical within each pixel of an IBIS hypercube. Chemical imaging capability goes beyond point detection and identification, indicating where each chemical is within the field of view and identifying multiple target chemicals simultaneously.
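The per-pixel classification step of a 1-D CNN can be sketched as a single forward pass. This is a toy stand-in, not the authors' network: the weights are random, and the spectrum length, filter count, and class count are assumed values; a real deployment would use weights trained on the augmented FTIR spectra.

```python
import numpy as np

rng = np.random.default_rng(1)

def conv1d(x, kernels):
    """Valid-mode 1-D convolution: (n_filters, k) kernels over a 1-D signal."""
    n_f, k = kernels.shape
    out = np.empty((n_f, x.size - k + 1))
    for i in range(x.size - k + 1):
        out[:, i] = kernels @ x[i:i + k]
    return out

spectrum = rng.random(128)                      # one pixel's spectrum (assumed length)
kernels = rng.standard_normal((8, 5)) * 0.1     # 8 filters of width 5 (assumed)
w_out = rng.standard_normal((3, 8)) * 0.1       # 3 candidate chemicals (assumed)

feat = np.maximum(conv1d(spectrum, kernels), 0) # ReLU feature maps
pooled = feat.mean(axis=1)                      # global average pooling
logits = w_out @ pooled
probs = np.exp(logits) / np.exp(logits).sum()   # softmax class probabilities
```

Running this pass over every pixel of the hypercube yields the per-pixel chemical map the abstract describes.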
Spatial enhancement of low-resolution hyperspectral imagery using high-resolution multispectral imagery is often done via image fusion algorithms. Regardless of the algorithm used, pixels containing edges, corners, shadows, dark/low-contrast materials, etc., present the greatest challenge, and confidence in sharpening results is often low at these 'trouble pixels'. This paper presents our initial experiments and results in leveraging spatial information to drive and improve the fusion process. We present an adaptive algorithm workflow that adjusts to the spatial conditions identified for those pixels. We also designed a novel edge detection scheme based on spectral angle calculations on either high- or low-resolution imagery. Target signatures were synthetically implanted in pixels identified as strong edges, and an adaptive coherence estimator (ACE) detector was run on all fused and reference imagery. Our results show that, based on calculated ACE target detection ROC curves, modifying the NNDiffuse algorithm to include factors that leverage spatial features (i.e., spectral differences between neighboring pixels and differences in 'edgeness' of neighboring pixels) produced significant improvements in detection rates compared to the classical (non-modified) NNDiffuse algorithm.
Atmospheric correction is the process of removing atmospheric effects from spectral data, a necessary step for recovering salient spectral properties. The complex interactions between the atmosphere and light are dominated by absorbance and scattering physics. Existing methods for modeling atmospheric interactions typically rely on deep knowledge of relevant environmental conditions and high-fidelity numerical simulations of the governing physics in order to obtain accurate estimates of these effects. Additionally, existing approaches often require a subject matter expert for pre/post-processing of the data. Model-based approaches for removing atmospheric effects struggle in situations where such domain expertise is not available, and require significant human effort and computational power even when that expertise is available. In contrast, we propose a data-driven approach that uses Neural Differential Equations (NDEs) to accurately learn the interactions between electromagnetic radiation and the atmosphere without access to location-specific environmental information. Once trained, the NDE can be applied bi-directionally, to either apply or remove atmospheric effects. We demonstrate the effectiveness and utility of these techniques on an example multi-spectral scene.
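The bi-directional property can be illustrated with a toy neural ODE: model the effect as dynamics dz/dt = f(z) on a pixel spectrum, integrate forward in time to "apply" the atmosphere and in reverse to "remove" it. The tiny random MLP below stands in for a trained network, and explicit Euler integration keeps the sketch simple, so the round trip is only approximate.

```python
import numpy as np

rng = np.random.default_rng(2)
W1 = 0.1 * rng.standard_normal((16, 8))
W2 = 0.1 * rng.standard_normal((8, 16))

def f(z):
    """Stand-in for learned atmospheric dynamics (untrained random MLP)."""
    return W2 @ np.tanh(W1 @ z)

def integrate(z, t0, t1, steps=1000):
    """Explicit Euler integration of dz/dt = f(z) from t0 to t1."""
    h = (t1 - t0) / steps
    for _ in range(steps):
        z = z + h * f(z)
    return z

surface = rng.random(8)                          # "true" surface spectrum
at_sensor = integrate(surface, 0.0, 1.0)         # apply atmospheric effects
recovered = integrate(at_sensor, 1.0, 0.0)       # remove them (reverse time)
```

Integrating the same learned vector field in opposite time directions is what gives a trained NDE its apply/remove symmetry.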
From agriculture to mining to energy, surface water quality monitoring is an essential task. As oil and gas operators work to reduce the consumption of freshwater, it is increasingly important to actively manage fresh and non-fresh water resources over the long term. For large-scale monitoring, manual sampling at many sites has become too time-consuming and unsustainable, given the sheer number of dispersed ponds, small lakes, playas, and wetlands over a large area. Satellite-based environmental monitoring therefore presents great potential. Many existing satellite-based monitoring studies utilize index-based methods to monitor large water bodies such as rivers and oceans. However, these methods fail when monitoring small ponds: the reflectance signal received from small water bodies is too weak to detect. To address this challenge, we propose a new Water Quality Enhanced Index (WQEI) model, designed to enable users to determine contamination levels in water bodies with weak reflectance patterns. Our results show that 1) WQEI is a good indicator of water turbidity, validated with 1200 water samples measured in the laboratory, and 2) by applying our method to commonly available satellite data (e.g., Landsat 8), one can achieve highly accurate water quality monitoring efficiently over large regions. This provides a tool for operators to optimize the quality of water stored within surface storage ponds and to increase the readiness and availability of non-fresh water.
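The WQEI formula itself is not reproduced in this abstract. As a generic illustration of the index-based methods it builds on, turbidity-sensitive indices commonly combine two band reflectances as a normalized difference; the band choice (red vs. green) and reflectance values below are assumptions for illustration only.

```python
import numpy as np

def normalized_difference(b1, b2):
    """Generic normalized-difference index (b1 - b2) / (b1 + b2)."""
    return (b1 - b2) / (b1 + b2 + 1e-12)     # epsilon guards zero-sum pixels

red = np.array([0.12, 0.30, 0.05])           # assumed surface reflectances
green = np.array([0.10, 0.15, 0.08])
index = normalized_difference(red, green)    # higher values -> more turbid water
```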
The THIA instrument is a visible through extended short-wave infrared (SWIR) imaging spectrometer. Designed around a solid-block optical system and a single camera, the sensor is extremely compact with low power requirements. The spectrometer, manufactured by Corning, consists of reflective optical and grating surfaces diamond-turned onto a single block of CaF2. The system has been flown repeatedly on a Matrice 600 hexacopter and on small aircraft for data collections. It operates from 0.4 to 2.45 microns with high throughput, due to the fast f/1.5 optics, and has a total weight of 2.4 kg. The THIA signal-to-noise ratio (SNR) was designed to exceed 100 over the full spectral range from 400 to 2450 nanometers under normal operating conditions and to exceed 250 below 1700 nanometers. The first prototype system exhibits degraded throughput below 500 nanometers but meets the SNR threshold over the rest of the range. Stray light backgrounds in the initial prototype require software correction. Despite these issues, the system has been used to obtain meaningful data. Here we characterize THIA SNR in flight conditions and compare results to predicted and benchtop performance.
The study of land use and land cover (LULC) changes is essential to understanding the impact of human activities on the environment. The north of Algeria is a region that experiences high rates of LULC change, making it a suitable study area. In this research, the potential of Sentinel-2 attributes for LULC classification in this region is evaluated using a deep learning-based approach. To improve the efficiency of the model, six reflectance-based indices are calculated to highlight the region of interest. The results are compared to the USGS land cover change data and show promising LULC change detection. To verify the presence of missed classes in our LULC classification results, we employed a CNN-based object detection method using high-resolution PlanetScope images. This study demonstrates the potential of Sentinel-2 attributes for accurate LULC classification and change detection in the north of Algeria, which can be useful for monitoring land use patterns and planning sustainable land management practices.
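Reflectance-based indices of the kind used here are simple band combinations. The two below (NDVI and NDWI) are standard examples computed from Sentinel-2 band reflectances; whether they are among the six indices used in this study is not stated, and the reflectance values are assumed for illustration.

```python
def ndvi(nir, red):
    """Normalized Difference Vegetation Index, e.g. from Sentinel-2 B8 and B4."""
    return (nir - red) / (nir + red + 1e-12)

def ndwi(green, nir):
    """Normalized Difference Water Index, e.g. from Sentinel-2 B3 and B8."""
    return (green - nir) / (green + nir + 1e-12)

# One vegetated and one open-water pixel (illustrative reflectances).
veg = ndvi(nir=0.45, red=0.05)     # strongly positive over vegetation
wat = ndwi(green=0.08, nir=0.02)   # positive over open water
```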
The process water of the petrochemical industry contains particles and oil droplets which need to be removed by the water treatment process before the water can safely be reinjected into the well. For the detection and analysis of the particles and oil droplets, a particle analysis system based on hyperspectral imaging, fluorescence imaging, and white-light imaging has been developed. The particle and droplet size range for this application varies from 5 µm to 150 µm. Current particle analyzers use monochrome cameras with backlight illumination. The resulting monochromatic image is used to derive the shape and size of the particles and droplets, but no chemical information; the shape information is used to differentiate between solid particles and oil droplets. The solid particles appear black in the images, so a more detailed analysis of their material is not possible. The combination of different imaging systems presented in this work allows for a more detailed and robust analysis of the particles and oil droplets in the process water. The fluorescence imaging system is used for reliable detection of the oil droplets, while the white-light and hyperspectral imaging systems acquire high-quality color information on particles and oil droplets. This information is vital for process water treatment during oil production. An ultrasound particle manipulation system is used to guide the particles and oil droplets into the focal plane of the imaging systems. The alpha version of the particle analysis system and initial results of measurements on solid particles and oil droplets in aqueous suspension are presented.
Defence Research and Development Canada, through a Canadian government innovation support program, has tested the Telops Hyper-Cam Airborne Mini (HCAM) thermal infrared hyperspectral imager to assess its performance, including its noise-equivalent spectral radiance (NESR), its radiometric accuracy, and its ability to detect gas-phase targets both in the laboratory and in flight. The results are compared to those obtained with the Telops Hyper-Cam LW system.
Sharjah-Sat-2 is a 6U Earth observation (EO) CubeSat currently being developed by the Sharjah Academy for Astronomy, Space Sciences, and Technology (SAASST) and the University of Sharjah (UoS). The 6U CubeSat is being designed and integrated with two payloads on board: (1) a high-resolution hyperspectral imager with a ground sampling distance (GSD) of less than 5 meters and (2) an experimental payload consisting of a GNSS receiver. The mission's primary scientific objective is to capture high-resolution hyperspectral images of the United Arab Emirates to provide a constant stream of reliable data that will be utilized to improve the country's infrastructure and resource management. The secondary mission objective is to monitor and validate the integrity of GNSS signals. This paper provides insight into the preliminary mission design of Sharjah-Sat-2, highlighting the mission's payload, orbit determination, and coverage study.
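For a nadir-pointing imager, GSD follows the pinhole relation GSD = altitude × pixel pitch / focal length. The orbit altitude, pixel pitch, and focal length below are assumed for illustration and are not Sharjah-Sat-2 specifications.

```python
# Pinhole-camera ground sampling distance; all input values are assumptions.
def ground_sampling_distance(altitude_m, pixel_pitch_m, focal_length_m):
    """GSD at nadir in meters per pixel."""
    return altitude_m * pixel_pitch_m / focal_length_m

gsd = ground_sampling_distance(500e3, 5.5e-6, 0.6)   # ~4.6 m from 500 km
```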
Applications of Machine Learning in Spectral Imaging II
Including information from additional spectral bands (e.g., near-infrared) can improve deep learning model performance for many vision-oriented tasks. There are many possible ways to incorporate this additional information into a deep learning model, but the optimal fusion strategy has not yet been determined and can vary between applications. At one extreme, known as “early fusion,” additional bands are stacked as extra channels to obtain an input image with more than three channels. At the other extreme, known as “late fusion,” RGB and non-RGB bands are passed through separate branches of a deep learning model and merged immediately before a final classification or segmentation layer. In this work, we characterize the performance of a suite of multispectral deep learning models with different fusion approaches, quantify their relative reliance on different input bands, and evaluate their robustness to naturalistic image corruptions affecting one or more input channels.
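The two fusion extremes can be sketched with placeholder linear "branches" standing in for real network stages (numpy only; an actual model would use convolutional layers, and the layer widths here are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
rgb = rng.random((8, 8, 3))   # toy RGB patch
nir = rng.random((8, 8, 1))   # toy near-infrared band

# Early fusion: stack extra bands as input channels, then feed the
# 4-channel image through a single shared branch (a random linear map).
x_early = np.concatenate([rgb, nir], axis=-1)          # (8, 8, 4)
w_early = rng.standard_normal((4, 16))
feat_early = x_early.reshape(-1, 4) @ w_early          # one shared branch

# Late fusion: a separate branch per modality, merged just before the head.
w_rgb = rng.standard_normal((3, 8))
w_nir = rng.standard_normal((1, 8))
feat_rgb = rgb.reshape(-1, 3) @ w_rgb
feat_nir = nir.reshape(-1, 1) @ w_nir
feat_late = np.concatenate([feat_rgb, feat_nir], axis=-1)  # merged features
```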
In overhead image segmentation tasks, including additional spectral bands beyond the traditional RGB channels can improve model performance. However, it is still unclear how incorporating this additional data impacts model robustness to adversarial attacks and natural perturbations. For adversarial robustness, the additional information could improve the model’s ability to distinguish malicious inputs, or simply provide new attack avenues and vulnerabilities. For natural perturbations, the additional information could better inform model decisions and weaken perturbation effects, or have no significant influence at all. In this work, we seek to characterize the performance and robustness of a multispectral (RGB and near infrared) image segmentation model subjected to adversarial attacks and natural perturbations. While existing adversarial and natural robustness research has focused primarily on digital perturbations, we prioritize creating realistic perturbations designed with physical world conditions in mind. For adversarial robustness, we focus on data poisoning attacks, whereas for natural robustness, we focus on extending ImageNet-C common corruptions for fog and snow that coherently and self-consistently perturb the input data. Overall, we find both RGB and multispectral models are vulnerable to data poisoning attacks regardless of input or fusion architectures, and that while physically realizable natural perturbations still degrade model performance, the impact differs based on fusion architecture and input data.
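A perturbation that is coherent across bands can be sketched with a simplified physical haze model: every channel is attenuated by the same depth-dependent transmission. Note this is not the actual ImageNet-C fog implementation (which uses fractal noise), and the depth map and coefficients below are illustrative:

```python
import numpy as np

def add_fog(img, depth, beta=1.0, airlight=0.9):
    """Simplified fog/haze model: attenuate the scene by a
    depth-dependent transmission and blend in a constant airlight.
    img:   (H, W, C) image with values in [0, 1]
    depth: (H, W) scene depth map; beta controls fog density."""
    t = np.exp(-beta * depth)[..., None]   # per-pixel transmission
    return img * t + airlight * (1.0 - t)

rng = np.random.default_rng(0)
img = rng.random((16, 16, 4))              # RGB + NIR channels
depth = np.linspace(0.0, 2.0, 16)[None, :].repeat(16, axis=0)
foggy = add_fog(img, depth, beta=1.5)
```

Because all four channels share one depth map, the RGB and NIR bands are perturbed self-consistently rather than independently.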
Performance of hyperspectral target detection algorithms is determined by the spectral variability and separability of target and background materials within the scene. Practical matched filter detectors typically utilize only background statistics due to the assumed rarity of target materials. Background materials are additionally modeled scene-wide as Gaussian, which allows for straightforward estimation of statistics but oversimplifies the complex manifold on which spectra are typically distributed. These simplifications can lead to detection errors in the form of both missed detections and false alarms. The variational autoencoder (VAE) is a general neural network architecture that implements a generative probabilistic model for data. This is accomplished via a deep latent variable model in which data generation is modeled by the mapping of a lower dimensional isotropic Gaussian latent variable through a neural network to a conditional distribution on the observation. VAEs are thus capable of learning distributions over complex, high dimensional manifolds. We propose utilizing the VAE as a probabilistic model for hyperspectral data to aid in target detection, discrimination, and false alarm mitigation. We fit a VAE to a hyperspectral cube and make use of the learned latent space. The VAE encoder is trained to map each pixel to a posterior Gaussian distribution, which we compare to encoded library and scene background posteriors to determine the presence or absence of target materials. Comparative analysis of posteriors is placed in an information theoretic framework, and we establish a connection to standard detection statistics. We demonstrate detection and discrimination performance using two real hyperspectral datasets.
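For diagonal Gaussian posteriors, comparing encodings has a closed form: the KL divergence between the two distributions. The sketch below uses toy numbers and is not the paper's actual detection statistic:

```python
import numpy as np

def kl_diag_gauss(mu0, var0, mu1, var1):
    """KL( N(mu0, diag(var0)) || N(mu1, diag(var1)) ), closed form."""
    return 0.5 * np.sum(
        np.log(var1 / var0) + (var0 + (mu0 - mu1) ** 2) / var1 - 1.0
    )

# Toy posteriors: an encoded pixel compared against an encoded
# target-library spectrum and an encoded background spectrum.
mu_pix, var_pix = np.array([0.1, -0.2]), np.array([0.5, 0.4])
mu_tgt, var_tgt = np.array([0.1, -0.2]), np.array([0.5, 0.4])
mu_bkg, var_bkg = np.array([2.0, 1.5]), np.array([1.0, 1.0])

d_target = kl_diag_gauss(mu_pix, var_pix, mu_tgt, var_tgt)  # identical: 0
d_backgr = kl_diag_gauss(mu_pix, var_pix, mu_bkg, var_bkg)  # large
```

A pixel whose posterior is far closer to the library posterior than to the background posterior would be declared a target candidate.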
Point target detection algorithms in hyperspectral imaging commonly use the spectral inverse covariance matrix to whiten the natural noise of the image. Since the noise in hyperspectral data cubes often suffers from a lack of stationarity, segmentation appears to be an attractive preprocessing operation. However, the literature contains examples of successful and unsuccessful segmentation with no plausible explanation for why some succeed and others do not. Focusing on one representative algorithm and assuming a target additive model, this paper tracks the underlying causes of when segmentation does improve detection for different target spectra. It then characterizes a real dataset and concludes with ways to improve the detector performance.
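The whitened matched filter with global versus per-segment statistics can be sketched as follows (synthetic two-segment background; the target signature and noise levels are illustrative, not from the paper's dataset):

```python
import numpy as np

rng = np.random.default_rng(0)
n_bands = 20

# Non-stationary background: two segments with different statistics.
seg_a = rng.normal(0.0, 0.1, size=(500, n_bands))
seg_b = rng.normal(0.5, 0.4, size=(500, n_bands))
cube = np.vstack([seg_a, seg_b])

t = np.ones(n_bands)  # assumed target signature (additive model)

def matched_filter(pixels, target):
    """Whitened matched filter: (x - mu)^T C^-1 t / sqrt(t^T C^-1 t)."""
    mu = pixels.mean(axis=0)
    c_inv = np.linalg.inv(np.cov(pixels, rowvar=False))
    norm = np.sqrt(target @ c_inv @ target)
    return (pixels - mu) @ c_inv @ target / norm

# Scene-wide statistics vs. statistics estimated per segment.
scores_global = matched_filter(cube, t)
scores_seg = np.concatenate([matched_filter(seg_a, t),
                             matched_filter(seg_b, t)])
```

With per-segment statistics each segment's scores are normalized to unit variance, which is the effect segmentation is meant to provide when the background is non-stationary.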
In remote sensing image analysis, change detection approaches typically compare two images captured by the same airborne or spaceborne sensor at different points in time. However, as airborne and spaceborne imaging platforms have become increasingly more accessible, the variety of sensor designs has grown in tandem. The ability to combine these multi-modal remote sensing images for change detection would provide a far more frequent view of the earth, but traditional approaches are challenged by the intrinsic data variation across sensor designs. The recently introduced multi-sensor anomalous change detection (MSACD) framework addresses this challenge by using a data-driven machine learning approach that can effectively account for differences in sensor modality and design, and does not require any signal resampling of the pixels. This flexible framework enables the use of satellite image pairs from different sensor platforms. Here, we perform experiments to further evaluate the efficacy of the MSACD change detection framework; these experiments include augmenting the images with engineered features that seek to increase the mutual information of the image backgrounds and, in turn, better emphasize the anomalous changes. While these initial results are demonstrated on same-sensor spectral data, the experiments naturally extend to the multi-sensor domain.
The Reed-Xiaoli Detection (RX) algorithm is a classic algorithm commonly used to detect anomalies in hyperspectral image data, i.e. regions which are spectrally distinct from the image background. Such regions may represent interesting objects to human observers. We investigate the possibility of applying the RX algorithm to a VNIR pushbroom hyperspectral image sensor in real time onboard a small uncrewed aerial system (UAS). The generated anomaly information is much more concise and can be transmitted much faster than the raw hyperspectral data. This would enable anomalies to be automatically detected, then communicated to a ground station for immediate attention by a human observer. However, the UAS payload capacity imposes strict size, weight, and power constraints. We show in what contexts the algorithm can be successfully applied and how the UAS constraints bound algorithm performance and parameters.
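The global RX score is the squared Mahalanobis distance of each pixel spectrum from the scene mean, which can be sketched in a few lines of numpy (synthetic cube; an onboard pushbroom implementation would instead stream lines and maintain running statistics):

```python
import numpy as np

def rx_scores(cube):
    """Global RX anomaly detector: squared Mahalanobis distance of
    each pixel spectrum from the scene mean. cube: (H, W, B)."""
    h, w, b = cube.shape
    x = cube.reshape(-1, b)
    mu = x.mean(axis=0)
    c_inv = np.linalg.inv(np.cov(x, rowvar=False))
    d = x - mu
    scores = np.einsum('ij,jk,ik->i', d, c_inv, d)
    return scores.reshape(h, w)

rng = np.random.default_rng(0)
cube = rng.normal(0.0, 1.0, size=(32, 32, 10))
cube[5, 5] += 8.0                    # inject a spectrally distinct pixel
scores = rx_scores(cube)
anomaly = np.unravel_index(scores.argmax(), scores.shape)
```

Only the score map (or thresholded detections) would need to be downlinked, which is the bandwidth saving the abstract describes.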
Algorithms are derived for detecting targets in cluttered backgrounds, where the background is modeled as a product of univariate distributions independently fit to each of the principal component projections. Thus, fatter-than-Gaussian tails are fit to the data, with a different fatness parameter for each principal component. Comparisons are made to elliptically-contoured distributions (which, unlike these product distributions, are isotropic in the whitened space), including the multivariate t and the Gaussian. Numerical experiments are performed on hyperspectral data from the SHARE 2012 exercise, with target detection performance evaluated on both actual and simulated targets. Both direct and residual data are considered, with the residual data obtained from local background subtraction – these residual data are found to exhibit not only lower variance, but qualitatively different tail statistics. More direct target-agnostic measures are also employed to assess how well these models fit the different kinds of background clutter.
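The modeling idea, fitting a separately parameterized fat-tailed distribution to each principal component projection, can be sketched by choosing a Student-t degrees-of-freedom parameter per component from a coarse likelihood grid (toy data; the scale correction for the t variance and the paper's actual estimators are omitted):

```python
import math
import numpy as np

def t_loglik(x, df):
    """Mean log-likelihood of samples x under a unit-scale Student-t
    with `df` degrees of freedom."""
    c = (math.lgamma((df + 1) / 2) - math.lgamma(df / 2)
         - 0.5 * math.log(df * math.pi))
    return np.mean(c - (df + 1) / 2 * np.log1p(x ** 2 / df))

rng = np.random.default_rng(0)
data = rng.standard_t(df=4, size=(2000, 6))   # heavy-tailed toy "spectra"

# Principal component projections: decorrelate, then model each
# component independently (the product-distribution assumption).
cov = np.cov(data, rowvar=False)
vals, vecs = np.linalg.eigh(cov)
proj = (data - data.mean(axis=0)) @ vecs / np.sqrt(vals)

# Per-component tail "fatness": df maximizing the t log-likelihood.
grid = [2, 3, 4, 6, 10, 30, 1000]
best_df = [max(grid, key=lambda df: t_loglik(proj[:, k], df))
           for k in range(6)]
```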
While RGB imaging is reaching its limits, Hyperspectral Imaging (HSI) is being widely used, especially for medical applications. This study points out the ability of the HSI technique to help in planning the surgical procedure in orthopedic surgery by automatically identifying anatomical structures and surgical instruments thanks to their spectral signatures. Four segmentation methods have been explored: (i) an average-spectra method that uses the Euclidean distance between the spectrum of each pixel and the average spectrum of each specific structure, (ii) segmentation using k-means, (iii) segmentation based on indices, in which we identify reflectance ratios at specific wavelengths that allow materials to be correctly classified, and (iv) a pixel-based classification method based on neural networks. Experiments were carried out on anatomical objects whose physical characteristics are known. The selection of specific wavelengths to reduce the cost of the final device is also discussed.
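Method (i) can be sketched directly: assign each pixel to the class whose average spectrum is nearest in Euclidean distance (the class names and reference spectra below are invented placeholders, not measured signatures):

```python
import numpy as np

rng = np.random.default_rng(0)
n_bands = 30

# Placeholder average (reference) spectra per structure/instrument class.
references = {
    "bone":   np.linspace(0.2, 0.8, n_bands),
    "tissue": np.linspace(0.6, 0.3, n_bands),
    "metal":  np.full(n_bands, 0.9),
}

def classify_pixel(spectrum, refs):
    """Assign the class whose average spectrum is closest in
    Euclidean distance to the pixel spectrum."""
    return min(refs, key=lambda k: np.linalg.norm(spectrum - refs[k]))

# A noisy "bone" pixel should still map to the bone class.
pixel = references["bone"] + 0.02 * rng.normal(size=n_bands)
label = classify_pixel(pixel, references)
```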
Traditional hyperspectral unmixing is focused on subpixel material composition extraction for low- and moderate-resolution imagery. Technological advances are making hyperspectral imagers affordable for very high spatial resolution imaging in many applications. A question that we want to address in this work is whether a traditional hyperspectral image analysis technique like unmixing still has value in the context of very high spatial resolution hyperspectral imaging (VHSR-HSI). In this paper, we present preliminary results on how unsupervised hyperspectral unmixing algorithms can be used to extract spectral signatures of materials in a VHSR-HSI to map their spatial distribution and capture their spectral variability. Examples using hyperspectral images collected at close range with a standoff hyperspectral imager and an unmanned airborne system illustrate our approach.
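Under the linear mixing model, abundance estimation amounts to inverting the endmember matrix. The sketch below uses unconstrained least squares on synthetic endmembers and abundances; practical unmixing would additionally enforce nonnegativity and sum-to-one constraints on the abundances:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy endmember signatures (columns) for a 3-material, 12-band scene.
E = np.abs(rng.random((12, 3)))

# A pixel mixing the materials 60/30/10 under the linear mixing model.
true_abund = np.array([0.6, 0.3, 0.1])
pixel = E @ true_abund

# Unconstrained least-squares inversion of the mixing model.
abund, *_ = np.linalg.lstsq(E, pixel, rcond=None)
```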