The Consultative Committee for Space Data Systems (CCSDS) 121.0-B-2 lossless data compression standard defines a lossless adaptive source coding algorithm applicable to a wide range of imaging and non-imaging data. We introduce a field-programmable gate array (FPGA) implementation of CCSDS 121.0-B-2 as an intellectual property (IP) core with the following features: (a) it is enhanced with a two-dimensional (2-D) second-order predictor, making it more suitable for image compression; (b) it is enhanced with near-lossless compression functionality; (c) its parallel, pipelined architecture provides high data-rate performance, with a maximum achievable throughput of 205 Msamples/s (3.2 Gbps at 16 bits) when targeting the Xilinx Virtex-5QV FPGA; and (d) it requires very few FPGA resources. When mission requirements impose lossless image compression, the CCSDS 121.0-B-2 IP core provides a solution with very low implementation cost. Within the European Space Agency PROBA-3 Bridging Phase, the CCSDS 121.0-B-2 IP core will be implemented in a Microsemi RTAX2000 FPGA hosted in the data processing unit of the Coronagraph Control Box of the Association of Spacecraft for Polarimetric and Imaging Investigation of the Corona of the Sun coronagraph system payload. To the best of our knowledge, it is the fastest FPGA implementation of CCSDS 121.0-B-2 to date, and it also includes a 2-D second-order predictor that makes it more suitable for image compression.
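The adaptive entropy coder at the heart of CCSDS 121.0-B-2 is based on the Rice algorithm, which splits each mapped prediction residual into a unary-coded quotient and k verbatim low-order bits. As an illustration only (the standard's option selection, reference samples, and packet framing are omitted, and the function names are ours), a minimal Rice coder for a single split option might look like:

```python
def rice_encode(value: int, k: int) -> str:
    """Rice-code a non-negative integer with parameter k:
    unary-coded quotient terminated by '0', then k remainder bits."""
    q, r = value >> k, value & ((1 << k) - 1)
    bits = "1" * q + "0"
    if k:
        bits += format(r, f"0{k}b")  # k low-order bits, MSB first
    return bits

def rice_decode(bits: str, k: int) -> int:
    """Invert rice_encode for a single codeword."""
    q = bits.index("0")                              # length of the unary prefix
    r = int(bits[q + 1 : q + 1 + k], 2) if k else 0  # remainder bits
    return (q << k) | r
```

In the standard, signed prediction residuals are first mapped to non-negative integers and the coder adaptively picks, per block, the option (value of k) that minimizes the coded length; this sketch shows only the per-sample codeword structure.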
Multi-Chromatic Analysis (MCA) of SAR images relies on exploiting sub-band images obtained by processing portions of the range spectrum located at different frequency positions. It has been applied to interferometric pairs for phase unwrapping and height computation. This work investigates two promising applications: the comparison between frequency-persistent scatterers (PSfd) and temporal-persistent scatterers (PS), and the use of the inter-band coherence of a single SAR image for vessel detection. The MCA technique introduces the concept of frequency-stable targets, i.e., objects exhibiting stable radar returns across the frequency domain, a concept complementary to the temporal stability underlying PS interferometry.
at the base of PS interferometry. Both spotlight and stripmap TerraSAR-X images acquired on the Venice Lagoon have
been processed to identify PSfd and PS. Different populations have been analyzed to evaluate the respective
characteristics and the physical nature of PSfd and PS. Concerning the spectral coherence, it is derived by computing the
coherence between sub-images of a single SAR acquisition. In the presence of a random distribution of surface
scatterers, spectral coherence must be proportional to sub-band intersection of sub-images. This model is fully verified
when observing measured spectral coherence on open see areas. If scatterers distribution departs from this distribution,
as for manmade structures, spectral coherence is preserved. We investigated the spectral coherence to perform vessel
detection on sea background by using spotlight images acquired on Venice Lagoon. Sea background tends to lead to very
low spectral coherence while this latter is preserved on the targeted vessels, even for very small ones. A first analysis
shows that all vessels observable in intensity images are easily detected in the spectral coherence images which can be
used as a complementary information channel to constrain vessel detection.
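As a rough illustration of the detection statistic described above, the sample coherence between two co-registered sub-band images can be estimated over local windows; the sketch below uses non-overlapping blocks and hypothetical names, and does not reproduce the paper's sub-band formation or estimator details:

```python
import numpy as np

def spectral_coherence(s1, s2, block=8):
    """Sample coherence |<s1 s2*>| / sqrt(<|s1|^2><|s2|^2>) between two
    co-registered complex sub-band images, estimated over non-overlapping
    block x block windows."""
    h = (s1.shape[0] // block) * block
    w = (s1.shape[1] // block) * block

    def tiles(x):
        # cut to a multiple of the block size and expose block axes
        return x[:h, :w].reshape(h // block, block, w // block, block)

    num = np.abs(tiles(s1 * np.conj(s2)).sum(axis=(1, 3)))
    den = np.sqrt(tiles(np.abs(s1) ** 2).sum(axis=(1, 3)) *
                  tiles(np.abs(s2) ** 2).sum(axis=(1, 3)))
    return num / np.maximum(den, 1e-12)
```

Identical (deterministic) returns in both sub-bands give coherence near 1, while uncorrelated clutter averages toward 0 as the window grows, which is the contrast the vessel detector exploits.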
Next-generation instruments carried onboard spacecraft are collecting a large quantity of information at an increasing rate, owing to recent technological improvements. The huge amount of data generated onboard competes with the limited channel resources available for transmitting the data to the ground. The result of this scenario is the increasing importance that onboard payload data compression is gaining in the framework of spacecraft design.
There is a pressing need for new hardware architectures for the implementation of hyperspectral image compression algorithms on board satellites. Graphics processing units (GPUs) represent a very attractive option, offering the possibility to dramatically increase computation speed in applications that are data- and task-parallel. An algorithm for the lossy compression of hyperspectral images is implemented on a GPU using the Nvidia compute unified device architecture (CUDA) parallel computing architecture. The parallelization strategy is explained, with emphasis on the entropy coding and bit packing phases, for which a more sophisticated strategy is necessary owing to the existing data dependencies. Experimental results are obtained by comparing the performance of the GPU implementation with a single-threaded CPU implementation, showing speedups of up to 15.41. A profiling of the algorithm is provided, demonstrating the high performance of the designed parallel entropy coding phase. The accuracy of the GPU implementation is presented, as well as the effect of the configuration parameters on performance. The convenience of using GPUs for onboard processing is demonstrated, and solutions are proposed to the potential difficulties that would be encountered when accelerating hyperspectral compression algorithms, should space-qualified GPUs become a reality in the near future.
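The data dependency in bit packing mentioned above (each codeword's position depends on the lengths of all preceding codewords) is commonly broken on GPUs with a prefix sum: an exclusive scan of the code lengths yields every codeword's bit offset in advance, after which all writes are independent. The sketch below emulates this generic strategy serially in NumPy; it is an illustration of the idea, not the paper's CUDA kernel:

```python
import numpy as np

def pack_codewords(codes, lengths):
    """Bit-pack variable-length codewords into a flat bit array.
    An exclusive prefix sum of the lengths gives each codeword's bit
    offset up front, so on a GPU every write would be independent."""
    offsets = np.concatenate(([0], np.cumsum(lengths)[:-1]))  # exclusive scan
    out = np.zeros(int(np.sum(lengths)), dtype=np.uint8)      # one entry per bit, for clarity
    for code, length, off in zip(codes, lengths, offsets):
        for i in range(length):                               # MSB first
            out[off + i] = (code >> (length - 1 - i)) & 1
    return out
```

A real kernel would pack bits into machine words with atomic ORs at word boundaries, but the scan step shown here is what removes the serial dependency.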
In this paper we propose a lossless compression algorithm for hyperspectral images based on distributed source coding;
this algorithm represents a significant improvement over our prior work on the same topic, and has been developed during
a project funded by ESA-ESTEC. In particular, the algorithm achieves good compression performance with very low
complexity; moreover, it also features a very good degree of error resilience.
These features are obtained by taking inspiration from distributed source coding, particularly by employing coset codes and CRC-based decoding. Because the CRC can be used to decode blocks using a reference different from the one used to compress the image, the scheme provides error resilience. In particular, if a block is lost, decoding using the closest collocated block in the second previous band succeeds about 70% of the time.
This paper focuses on the 'spectral' aliasing phenomenon that may produce distortions in remotely sensed spectra acquired by hyper-spectral push-broom sensors and that arises because of an inadequate sampling rate. The analysis of the appearance of aliasing has been performed on a set of at-sensor radiance spectra derived from spectral libraries whose spectral resolution is sufficiently high for our purposes. A general procedure to evaluate aliasing in spectral remote sensing data has been proposed. A model of the system modulation transfer function of a hyper-spectral push-broom sensor (such as PRISM) has been developed by taking into account the different contributions due to the optics, electronics, detector, and spectrometer dispersion. Using this sensor model, the set of high-resolution spectra has been processed to obtain the corresponding set of simulated acquired spectra; a corresponding set of non-aliased spectra has also been produced. Several score indexes among those proposed in the literature have been considered, and the three most effective have been implemented and applied to evaluate the aliasing produced in the acquired data. The aliasing evaluation has first been performed on the simulated spectra without atmospheric and radiometric correction. Afterwards, an 'ideal' correction based on knowledge of the ground irradiance and the atmospheric transmittance spectrum has been implemented, and the aliasing evaluation has also been performed on the reconstructed (atmospherically and radiometrically corrected) set of spectra. Results are presented in the paper. To remove the constraints of the 'ideal' correction, a simple atmospheric correction model has been implemented and applied to the radiometrically corrected simulated spectra, and a qualitative evaluation of the reconstructed spectra has been performed.
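The folding mechanism behind the distortions discussed above can be stated compactly: a component at frequency f, sampled at a rate fs below the Nyquist requirement, appears folded onto the distance to the nearest multiple of fs. A one-line illustration (function name ours):

```python
def aliased_frequency(f, fs):
    """Apparent frequency of a component at f when sampled at rate fs:
    it folds onto the distance to the nearest multiple of fs."""
    return abs(f - fs * round(f / fs))
```

This is why components above fs/2 become indistinguishable from lower-frequency ones, and why the score indexes in the paper must compare the simulated acquisitions against reference non-aliased spectra rather than detect aliasing from a single spectrum alone.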
The paper focuses on the aliasing phenomenon that may produce distortions in remotely sensed images acquired by hyper-spectral push-broom sensors and that arises because of an inadequate sampling rate. The analysis is performed on images recorded over different targets at a resolution that is high for the sensor under investigation. A model of the system modulation transfer function of the PRISM hyper-spectral push-broom sensor is developed by taking into account the different contributions due to the optical layout, electronics, detector, and satellite motion. Using the sensor model, the high-resolution images are pre-filtered and spatially re-sampled to obtain simulated images of the sensor. These images are compared with those obtained by an ideal pre-filtering and re-sampling process in order to reveal possible aliasing effects. Quantitative indexes are adopted to assess the presence of aliasing. Filtering procedures are employed to mitigate aliasing effects; to this aim, multi-resolution filtering and a fuzzy filtering scheme are evaluated by means of the adopted indexes and by visual inspection. Quantitative and qualitative results show that, owing to the efficacy of the proposed filters, aliasing mitigation is obtained with negligible penalties on spatial resolution.
Recent improvements in CCD technology make hexagonal sampling attractive for practical applications and have renewed interest in this topic. In the following, the performance of hexagonal sampling is analyzed under general assumptions and compared with that of conventional rectangular sampling. This analysis takes into account both the lattice form (square, rectangular, hexagonal, and regular hexagonal) and the pixel shape. The analyzed hexagonal grid is not based a priori on a regular hexagonal tessellation, i.e., no constraints are imposed on the ratio between the sampling frequencies in the two spatial directions. By assuming an elliptic support for the spectrum of the signal being sampled, sampling conditions are expressed for a generic hexagonal sampling grid, and a comparison with the well-known sampling conditions for a comparable rectangular lattice is performed. Further, by considering for the sake of clarity a spectrum with circular support, the comparison is performed under the assumption of the same number of pixels per unit of surface, and the particular case of a regular hexagonal sampling grid is also considered. A regular hexagonal lattice with a regular hexagonal sensitivity shape of the detector elements emerges as the best trade-off among the proposed sampling requirements. Concerning the detector shape, the hexagonal one is more advantageous than the rectangular one. To show this, a figure of merit is defined which takes into account that the MTF (modulation transfer function) of a hexagonal detector is not separable, unlike that of a rectangular detector. As a final result, octagonal-shaped detectors are compared with rectangular- and hexagonal-shaped ones under the two hypotheses of equal and ideal fill factor, respectively.
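The density advantage of hexagonal sampling for a circularly band-limited spectrum is a classical result that the comparison above builds on: a rectangular lattice needs (2B)^2 samples per unit area for a spectrum of radius B, while an optimal regular hexagonal lattice needs only 2*sqrt(3)*B^2, about 13.4% fewer. A small sketch of the arithmetic (symbol and function names are ours):

```python
import math

def samples_per_unit_area(B, lattice):
    """Minimum sampling density for a signal whose spectrum has circular
    support of radius B, on a rectangular vs. an optimal hexagonal lattice
    (the classical Nyquist-density result for 2-D sampling)."""
    if lattice == "rectangular":
        return (2 * B) ** 2            # Nyquist rate 2B along each axis
    if lattice == "hexagonal":
        return 2 * math.sqrt(3) * B ** 2
    raise ValueError(lattice)

# fractional saving of the hexagonal lattice: 1 - sqrt(3)/2, about 13.4%
saving = 1 - samples_per_unit_area(1, "hexagonal") / samples_per_unit_area(1, "rectangular")
```

The saving comes from the hexagonal lattice packing the circular spectral replicas more tightly in the frequency plane, which is the same geometric fact the abstract's generic (elliptic-support) analysis generalizes.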