This PDF file contains the front matter associated with SPIE
Proceedings Volume 8157, including the Title Page, Copyright
information, Table of Contents, and the Conference Committee listing.
Past and current optical Earth observation systems designed by CNES use fixed-rate data compression performed at high rate in a pushbroom mode (also called scan-based mode). This process delivers fixed-length data to the mass memory, and the data downlink is also performed at a fixed rate. Because of on-board memory limitations and high data-rate processing needs, the rate allocation procedure is performed over a small image area called a "segment". For both the PLEIADES compression algorithm and the CCSDS Image Data Compression recommendation, this rate allocation is realised by truncating, to the desired rate, a hierarchical bitstream of coded and quantised wavelet coefficients for each segment. Because the quantisation induced by truncating the bit-plane description is the same for the whole segment, some parts of the segment suffer poor image quality. These artefacts generally occur in low-energy areas within a segment of otherwise high energy. In order to locally correct these areas, CNES has studied an "exceptional processing" targeted at DWT-based compression algorithms. According to a criterion computed for each part of the segment (called a block), the wavelet coefficients can be amplified before bit-plane encoding. As in usual Region-of-Interest handling, these amplified coefficients
will be processed earlier by the encoder than in the nominal case (without exceptional processing). The image quality
improvement brought by the exceptional processing has been confirmed by visual image analysis and fidelity criteria. The
complexity of the proposed improvement for on-board application has also been analysed.
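As a rough illustration of the idea rather than CNES's actual on-board implementation, the sketch below amplifies the wavelet coefficients of low-energy blocks by a power-of-two gain before bit-plane encoding, so that their bit planes reach significance earlier in the embedded bitstream; the block size, energy criterion, and gain are placeholder values.

```python
import numpy as np

def amplify_low_energy_blocks(coeffs, block=32, energy_ratio=0.1, gain=4):
    """Scale wavelet coefficients of low-energy blocks before bit-plane coding.

    coeffs       : 2-D array of quantised wavelet coefficients for one segment
                   (dimensions assumed to be multiples of `block`)
    block        : side of the square blocks the segment is split into (hypothetical)
    energy_ratio : a block whose mean energy is below this fraction of the
                   segment mean energy is treated as "exceptional" (hypothetical)
    gain         : power-of-two amplification, i.e. a simple bit-plane shift
    Returns the modified coefficients and a boolean map of amplified blocks,
    which the decoder would need in order to undo the scaling.
    """
    out = coeffs.astype(np.float64).copy()
    seg_energy = np.mean(out ** 2)
    flags = np.zeros((coeffs.shape[0] // block, coeffs.shape[1] // block), bool)
    for i in range(flags.shape[0]):
        for j in range(flags.shape[1]):
            sl = np.s_[i * block:(i + 1) * block, j * block:(j + 1) * block]
            if np.mean(out[sl] ** 2) < energy_ratio * seg_energy:
                out[sl] *= gain          # coefficients reach significance earlier
                flags[i, j] = True       # side information for the decoder
    return out, flags
```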
Recent compressed sensing (CS) results show that it is possible to accurately reconstruct images from a small
number of linear measurements via convex optimization techniques. In this paper, according to the correlation analysis
of linear measurements for hyperspectral images, a joint sparsity reconstruction algorithm based on interband prediction
and joint optimization is proposed. In the method, linear prediction is first applied to remove the correlations among
successive spectral band measurement vectors. The obtained residual measurement vectors are then recovered using the
proposed joint optimization based POCS (projections onto convex sets) algorithm with the steepest descent method. In
addition, a pixel-guided stopping criterion is introduced to stop the iteration. Experimental results show that the proposed
algorithm outperforms other known CS reconstruction algorithms in the literature at the same measurement rates, while also converging faster.
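The abstract does not give the exact prediction order or the POCS update, so the sketch below only illustrates the first stage under simple assumptions: a scalar least-squares predictor between successive band measurement vectors (the same measurement matrix is assumed for every band), producing the residual vectors that would then be recovered by the joint-optimization POCS step.

```python
import numpy as np

def interband_residuals(Y):
    """First-order linear prediction across band measurement vectors.

    Y : (num_bands, m) array; row k holds the m linear measurements of
        spectral band k.  Returns per-band scaling factors and the residual
        measurement vectors; the first band is kept as-is and would be
        recovered directly.
    """
    residuals = np.empty_like(Y, dtype=np.float64)
    scales = np.ones(Y.shape[0])
    residuals[0] = Y[0]
    for k in range(1, Y.shape[0]):
        prev = Y[k - 1]
        # least-squares scalar predictor of y_k from y_{k-1}
        scales[k] = prev @ Y[k] / (prev @ prev + 1e-12)
        residuals[k] = Y[k] - scales[k] * prev
    return scales, residuals
```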
For the ultraspectral sounder data which features thousands of channels at each observation location, lossless
compression is desirable to save storage space and transmission time without losing precision in retrieval of geophysical
parameters. Predictive partitioned vector quantization (PPVQ) has been proven to be an effective lossless compression
scheme for ultraspectral sounder data. It consists of linear prediction, bit-depth partitioning, vector quantization, and
entropy coding. In our previous work, the two most time consuming stages of linear prediction and vector quantization
were identified for GPU implementation. For GIFTS data, using a spectral division strategy for sharing the compression
workload among four GPUs, a speedup of ~42x was achieved. To further enhance the speedup, this work will explore a
spatial division strategy for sharing the workload in processing the six parts of a GIFTS datacube. As a result, the total processing time of a GIFTS datacube on four GPUs can be reduced to less than 13 seconds, which is equivalent to a speedup of ~72x. The use of multiple GPUs for PPVQ compression is thus promising as a low-cost and effective compression
solution for ultraspectral sounder data for rebroadcast use.
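For readers unfamiliar with the two workload-sharing options, the sketch below contrasts spectral and spatial division of a datacube among four workers; the array shape is a placeholder and the actual PPVQ prediction and vector-quantization kernels are not reproduced.

```python
import numpy as np

def spectral_partitions(cube, num_workers=4):
    """Spectral division: each worker gets all pixels but a slice of channels."""
    return np.array_split(cube, num_workers, axis=1)

def spatial_partitions(cube, num_workers=4):
    """Spatial division: each worker gets all channels for a slice of pixels,
    so every worker can run the full prediction + VQ stages on its own share."""
    return np.array_split(cube, num_workers, axis=0)

# Example with a placeholder cube: 128x128 pixels, 1024 channels, 4 workers.
cube = np.random.rand(128 * 128, 1024)
print([p.shape for p in spectral_partitions(cube)])
print([p.shape for p in spatial_partitions(cube)])
```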
Nowadays, providing reliable broadcast and multicast transmission over wireless networks is still a challenging problem,
due to the erratic and time-varying nature of the wireless channel. An efficient retransmission strategy is therefore critical to transmission reliability and to the bandwidth utilization of the wireless network. In this paper, an opportunistic network
coding retransmission algorithm based on packet loss pattern is proposed to improve the transmission efficiency of
broadcast and multicast over wireless networks. The theoretical analysis reveals the feasibility and effectiveness of the
proposed algorithm. The simulation results show that the algorithm can effectively reduce the number of retransmissions and
increase the transmission efficiency over wireless networks.
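The packet-level details of the proposed algorithm are not given in the abstract; as a generic illustration of opportunistic network-coded retransmission, the sketch below greedily groups lost packets so that each XOR-coded retransmission is decodable by every receiver that needs it (no receiver may be missing more than one packet of a group).

```python
def greedy_coding_groups(loss_pattern):
    """Group lost packets for XOR-coded retransmission.

    loss_pattern : dict {receiver: set of lost packet ids}.
    A group may contain at most one lost packet per receiver, so every receiver
    can decode its missing packet from the XOR of the group (it already holds
    the others).  Returns a list of packet-id groups; len(groups) is the number
    of coded retransmissions instead of one retransmission per lost packet.
    """
    lost = sorted({p for losses in loss_pattern.values() for p in losses})
    groups = []
    for pkt in lost:
        for group in groups:
            # receivers that lost `pkt` must not have lost anything else in the group
            if all(not (group & losses) for losses in loss_pattern.values()
                   if pkt in losses):
                group.add(pkt)
                break
        else:
            groups.append({pkt})
    return groups

# Example: 3 receivers, 3 distinct lost packets -> a single coded retransmission.
print(greedy_coding_groups({"r1": {1}, "r2": {2}, "r3": {3}}))
```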
This paper proposes a new method to extract the endmembers of a hyperspectral datacube using the geometry of the
datacube. The criterion used to find the endmembers in this method is the volume of the simplex. Unlike the widely used endmember extraction method "N-FINDR", which calculates the volume of a simplex as many times as the number of vertices of the simplex for each pixel of the datacube while searching for replacements for the vertices, the proposed method calculates the volume only once for each pixel of the datacube by taking into account the geometry of the hyperspectral datacube at hand. For each pixel, the proposed method finds the vertex of the simplex closest to that pixel; the closest vertex is then replaced with the pixel to update the simplex. The computational complexity of the proposed method is one order of magnitude lower than that of N-FINDR. As the proposed method uses the same criterion as N-FINDR, we refer to it as fast N-FINDR (FN-FINDR). The performance of the proposed method was compared with that of N-FINDR using an AVIRIS datacube and a HYDICE datacube, and was evaluated using three different distance measures. The comparison was also made using two different dimensionality reduction
methods. It is observed that the FN-FINDR with a modified Euclidean distance works as well as N-FINDR.
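A minimal sketch of the core FN-FINDR step, assuming the data have already been reduced to p-1 dimensions for a p-vertex simplex; plain Euclidean distance is used here, whereas the paper also studies modified distance measures.

```python
import numpy as np

def simplex_volume(vertices):
    """Volume (up to a constant factor) of the simplex spanned by p vertices in a
    (p-1)-dimensional reduced space; vertices is a (p, p-1) array."""
    diffs = vertices[1:] - vertices[0]
    return abs(np.linalg.det(diffs))   # the 1/(p-1)! factor is irrelevant for comparisons

def fn_findr_step(vertices, pixel):
    """One FN-FINDR-style update: replace only the vertex closest to `pixel`
    (one volume evaluation per pixel) and keep the replacement if the simplex
    volume increases."""
    k = np.argmin(np.linalg.norm(vertices - pixel, axis=1))
    candidate = vertices.copy()
    candidate[k] = pixel
    if simplex_volume(candidate) > simplex_volume(vertices):
        return candidate
    return vertices
```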
Anomaly detection is considered a very important task for hyperspectral data exploitation. It is now routinely
applied in many application domains, including defence and intelligence, public safety, precision agriculture, geology, and forestry. Many of these applications require timely responses for swift decisions, which depend
upon high computing performance of algorithm analysis. However, with the recent explosion in the amount
and dimensionality of hyperspectral imagery, this problem calls for the incorporation of parallel computing
techniques. In the past, clusters of computers have offered an attractive solution for fast anomaly detection
in hyperspectral data sets already transmitted to Earth. However, these systems are expensive and difficult
to adapt to on-board data processing scenarios, in which low-weight and low-power integrated components are
essential to reduce mission payload and obtain analysis results in (near) real-time, i.e., at the same time as
the data is collected by the sensor. An exciting new development in the field of commodity computing is the
emergence of commodity graphics processing units (GPUs), which can now bridge the gap towards on-board
processing of remotely sensed hyperspectral data. In this paper, we develop a new morphological algorithm
for anomaly detection in hyperspectral images along with an efficient GPU implementation of the algorithm.
The algorithm is implemented on latest-generation GPU architectures and evaluated against other
anomaly detection algorithms using hyperspectral data collected by NASA's Airborne Visible Infra-Red Imaging
Spectrometer (AVIRIS) over the World Trade Center (WTC) in New York, five days after the terrorist attacks
that collapsed the two main towers in the WTC complex. The proposed GPU implementation achieves real-time
performance in the considered case study.
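The morphological operators themselves are not described in the abstract, so no attempt is made to reproduce them here; the classical global RX detector below is shown only as the kind of baseline such anomaly detection algorithms are compared against.

```python
import numpy as np

def rx_scores(cube):
    """Classical (global) RX anomaly detector, shown only as a common baseline;
    this is not the morphological algorithm proposed in the paper.

    cube : (rows, cols, bands) hyperspectral image.
    Returns per-pixel Mahalanobis distances to the global background statistics;
    large scores flag spectrally anomalous pixels.
    """
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(bands)   # regularized covariance
    inv_cov = np.linalg.inv(cov)
    diff = X - mu
    scores = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)
    return scores.reshape(rows, cols)
```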
In this paper, we propose a new algorithm for color/mono classification of scanned images. During the scanning process,
various artifacts are produced by the scanner sensors. These artifacts make it difficult to design a classifier for color/mono classification. The proposed algorithm utilizes a pixel color index that reflects pixel colorfulness. For each pixel in the scanned image, its neighboring block is extracted and the pixel color index is computed from the neighboring block in the RGB space. To compute the pixel color index, we determine whether the center pixel has homogeneous neighbors or not. If the center pixel has homogeneous neighbors, the pixel color index is calculated by averaging the achromatic distances of the homogeneous neighbors. If the maximum value of the pixel color indexes in an image is larger than a given threshold, the image is classified as a color document.
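The abstract does not define the homogeneity test or the achromatic distance numerically, so the toy sketch below substitutes common choices (channel spread as a colorfulness measure, a fixed per-channel tolerance as the homogeneity test); all thresholds and definitions are placeholders.

```python
import numpy as np

def pixel_color_index(block, center, homog_thresh=20):
    """Toy pixel color index for the center pixel of an RGB neighborhood block.

    block  : (h, w, 3) uint8 neighborhood, center : (3,) center pixel.
    The achromatic distance of a pixel is taken here as max(R,G,B) - min(R,G,B);
    neighbors whose channels all lie within `homog_thresh` of the center are
    treated as homogeneous (both choices are illustrative assumptions).
    """
    block = block.astype(np.int32)
    center = center.astype(np.int32)
    homogeneous = np.all(np.abs(block - center) <= homog_thresh, axis=2)
    if not homogeneous.any():
        return 0.0
    achromatic = block.max(axis=2) - block.min(axis=2)
    return float(achromatic[homogeneous].mean())

def is_color_document(image, threshold=30, radius=2):
    """Classify an RGB scan as color if any pixel color index exceeds threshold."""
    h, w, _ = image.shape
    for y in range(radius, h - radius):
        for x in range(radius, w - radius):
            block = image[y - radius:y + radius + 1, x - radius:x + radius + 1]
            if pixel_color_index(block, image[y, x]) > threshold:
                return True
    return False
```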
N-FINDR has been widely used for endmember extraction in hyperspectral imagery. Due to its high computational complexity, developing fast implementations of N-FINDR has become of interest. One approach is to design a field programmable gate array (FPGA) architecture for N-FINDR to reduce computing time. However, two major issues still need to be addressed. One is that the number of endmembers must be fixed in advance, regardless of the application. The other is the computation of simplex volumes. This paper investigates a progressive version of N-FINDR, previously proposed as the simplex growing algorithm (SGA), for FPGA implementation, which essentially resolves these two issues.
Heavy development on cliffs places a strong emphasis on maintaining a healthy natural environment. The ability to explore, conceptualize, and correlate spatial and temporal changes of topographical records is required for the development of new analytical models that capture the mechanisms contributing to cliff erosion. This paper presents a visualization-based approach using large displays in a digital immersive environment. Visual analytics are performed for cliff erosion assessment from terrestrial LIDAR (LIght Detection And Ranging) data, including visualization techniques for the delineation, segmentation, and classification of features, change detection, and annotation. Research findings are described in the context of a cliff failure observed at Solana Beach in California. The visualization system presented in this paper demonstrates the insights that can be
gained by observing the temporal change of a failure mass using frequent site monitoring.
Hyperspectral unmixing is a very important task for remotely sensed hyperspectral data exploitation. It amounts
to estimating the abundance of pure spectral signatures (called endmembers) in each mixed pixel of the original
hyperspectral image, where mixed pixels arise due to insufficient spatial resolution and other phenomena. The
full spectral unmixing chain comprises three main steps: 1) dimensionality reduction, in which the original
hyperspectral data is brought to an adequate subspace; 2) endmember extraction, in which endmembers are
automatically identified from the image data; and 3) abundance estimation, in which the fractional coverage of
each endmember is estimated for each pixel of the hyperspectral scene. The hyperspectral unmixing process can
be time-consuming, particularly for high-dimensional hyperspectral images. Parallel computing architectures
have offered an attractive solution for fast unmixing of hyperspectral data sets, but these systems are expensive
and difficult to adapt to on-board data processing scenarios, in which low-weight and low-power integrated
components are essential to reduce mission payload and obtain analysis results in (near) real-time. In this paper,
we develop a real-time implementation of a full unmixing chain for hyperspectral data on graphics processing
units (GPUs). These hardware accelerators can bridge the gap towards on-board processing of this kind of data.
The considered chain comprises principal component analysis (PCA) for dimensionality reduction, extraction
of endmembers using the N-FINDR algorithm, and unconstrained linear spectral unmixing. The proposed GPU
implementation is shown to perform strictly in real-time for hyperspectral data sets collected by NASA's
Airborne Visible Infra-Red Imaging Spectrometer (AVIRIS).
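The final stage of the chain, unconstrained linear spectral unmixing, reduces to a per-pixel least-squares problem; the minimal CPU sketch below shows that step only (the paper's contribution is the GPU implementation of the whole chain, which is not reproduced here), assuming the endmember matrix has already been extracted.

```python
import numpy as np

def unconstrained_unmixing(cube, endmembers):
    """Unconstrained linear spectral unmixing, pixel by pixel in closed form.

    cube       : (rows, cols, bands) hyperspectral image
    endmembers : (bands, p) matrix E of endmember signatures (e.g. from N-FINDR)
    Each pixel x is modeled as x ~= E @ a; the abundance estimate is the
    least-squares solution a = pinv(E) @ x, with no non-negativity or
    sum-to-one constraints (hence "unconstrained").
    """
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).T.astype(np.float64)    # (bands, pixels)
    pinv = np.linalg.pinv(endmembers)                   # (p, bands)
    abundances = pinv @ X                               # (p, pixels)
    return abundances.T.reshape(rows, cols, -1)
```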
We have developed a Graphics Processing Unit (GPU)-based high-performance radiative transfer model (RTM) for the Infrared Atmospheric Sounding Interferometer (IASI). We propose two different types of GPU RTM. The first one processes one profile at a time. The second processes more than one profile at a time in order to gain a significant speedup compared to single-profile processing. Using single-profile processing, we reached a 364x speedup for 1 GPU and a 1455x speedup for all 4 GPUs, both with respect to the original CPU-based single-threaded Fortran code. Similarly, using multi-profile processing to compute 10 IASI radiance spectra simultaneously on a GPU, we reached a 756x speedup for 1 GPU and a 3024x speedup for all 4 GPUs. The significant
3024x speedup means that the proposed GPU-based high-performance forward model is able to compute one day's
amount of 1,296,000 IASI spectra within 6 minutes.
Future space missions are based on a new generation of instruments. These missions find a serious constraint in the
telemetry system, which cannot downlink to the ground the large volumes of data generated. Hence, data compression
algorithms are often mandatory in space, despite the modest processing power usually available on-board. We present
here a compact solution implemented in hardware for such missions. FAPEC is a lossless compressor which typically
can outperform the CCSDS 121.0 recommendation on realistic data sets. With efficiencies higher than 90% of the Shannon limit in most cases, even in the presence of noise or outliers, FAPEC has been successfully validated in its software version as a robust low-complexity alternative to the recommendation. This work describes the FAPEC implementation on an FPGA, targeting the space-qualified Actel RTAX family. We prove that FAPEC is hardware-friendly and that it does not require external memory. We also assess the correct operation of the prototype for an initial throughput of 32 Mbit/s with very low power consumption (about 20 mW). Finally, we discuss further potential
applications of FAPEC, and we set the basis for the improvements that will boost FAPEC performance beyond the
100 Mbit/s level.
The study of aurora images plays an important role in both scientific research and daily life. However, aurora images raise transmission and storage problems at China's Arctic station. This paper proposes a lossless compression algorithm aimed at the long-distance transmission of aurora images under real-time requirements. The particular correlation characteristics of 3D aurora images are discussed first, and an adaptive context-based prediction algorithm is then proposed. The proposed algorithm can effectively remove inter-frame and intra-frame correlations, according to the characteristics of 3D aurora images, using the proposed prediction modeling. Compared with state-of-the-art algorithms, the proposed algorithm not only achieves better compression performance but also satisfies the complexity requirement.
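The authors' adaptive context model is not specified in the abstract; purely as an illustration of exploiting intra-frame and inter-frame correlation with a causal, decoder-reproducible choice, the sketch below switches per pixel between a spatial and a temporal predictor.

```python
import numpy as np

def adaptive_prediction_residuals(frames):
    """Toy causal prediction for an aurora-like image sequence.

    frames : (T, H, W) integer array.  For every pixel two causal predictors are
    available: the left neighbor (intra-frame) and the co-located pixel of the
    previous frame (inter-frame).  The predictor that was more accurate at the
    already-coded left neighbor is selected, so the decoder can repeat the same
    choice.  This is only a crude stand-in for the authors' context model.
    Returns the residuals that would be entropy coded; frame 0 is left raw.
    """
    f = frames.astype(np.int64)
    res = f.copy()
    T, H, W = f.shape
    for t in range(1, T):
        for y in range(H):
            for x in range(1, W):
                left, temporal = f[t, y, x - 1], f[t - 1, y, x]
                # causal context: how well did each predictor do on the left pixel?
                err_intra = abs(f[t, y, x - 1] - f[t, y, x - 2]) if x > 1 else 1 << 30
                err_inter = abs(f[t, y, x - 1] - f[t - 1, y, x - 1])
                pred = temporal if err_inter <= err_intra else left
                res[t, y, x] = f[t, y, x] - pred
            res[t, y, 0] = f[t, y, 0] - f[t - 1, y, 0]   # first column: temporal only
    return res
```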
To cope with the various error patterns of the holographic data storage (HDS) channel, including random errors, burst errors, and inhomogeneously distributed errors, a three-dimensional error correction with matched interleaving (3DEC-MI) scheme is proposed in this paper. The 3DEC-MI scheme combines the advantages of the three-dimensional error
correcting scheme and the matched interleaving scheme, makes full use of the priori knowledge of the error patterns in
the HDS channel, distributes errors more uniformly, and decodes data iteratively in three dimensions. It is able to
eliminate the influences of non-uniform distribution of errors within a page and across pages, overcome the effects of
burst errors, correct random errors, and effectively reduce the symbol error rate (SER) of the HDS channel.
Classification and spectral unmixing are two very important tasks for hyperspectral data exploitation. Although
many studies exist in both areas, the combined use of both approaches has not been widely explored in the literature.
Since hyperspectral images are generally dominated by mixed pixels, spectral unmixing can particularly
provide a useful source of information for classification purposes. In previous work, we have demonstrated that
spectral unmixing can be used as an effective approach for feature extraction prior to supervised classification
of hyperspectral data using support vector machines (SVMs). Unmixing-based features do not dramatically
improve classification accuracies compared with features provided by classic techniques such as the minimum
noise fraction (MNF), but they can provide a better characterization of small classes. Also, these features are
potentially easier to interpret due to their physical meaning (in spectral unmixing, the features represent the
abundances of real materials present in the scene). In this paper, we develop a new strategy for feature extraction
prior to supervised classification of hyperspectral images. The proposed method first performs unsupervised
multidimensional clustering on the original hyperspectral image to implicitly include spatial information in the
process. The cluster centres are then used as representative spectral signatures for a subsequent (partial) unmixing
process, and the resulting features are used as inputs to a standard (supervised) classification process.
The proposed strategy is compared to other classic and unmixing feature extraction methods presented in the
literature. Our experiments, conducted with several reference hyperspectral images widely used for classification
purposes, reveal the effectiveness of the proposed approach.
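A minimal sketch of the overall flow under simple assumptions: k-means cluster centres stand in for the representative signatures, an unconstrained least-squares unmixing provides the abundance features, and scikit-learn is used for brevity; the paper's actual clustering algorithm, number of clusters, and partial-unmixing formulation may differ.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.svm import SVC

def unmixing_features(cube, n_clusters=10, seed=0):
    """Cluster the image, then use the cluster centres as signatures for an
    unconstrained least-squares unmixing step whose abundances serve as features.
    n_clusters and the unmixing variant are illustrative choices."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(np.float64)
    centres = KMeans(n_clusters=n_clusters, random_state=seed).fit(X).cluster_centers_
    abundances = X @ np.linalg.pinv(centres)        # (pixels, n_clusters)
    return abundances.reshape(rows, cols, n_clusters)

def train_classifier(features, labels, train_mask):
    """Supervised stage: fit an SVM on the unmixing-based features of the
    pixels selected by train_mask (boolean (rows, cols) array)."""
    X = features[train_mask]
    y = labels[train_mask]
    return SVC(kernel="rbf").fit(X, y)
```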
With the rapid advances of the internet and other multimedia technologies, the digital document market has been
growing steadily. Since most digital images use halftone technologies, quality degradation occurs when one tries to scan
and reprint them. Therefore, it is necessary to extract the halftone areas to produce high quality printing. In this paper, we
propose a low-complexity pixel-based halftone detection algorithm. For each pixel, we consider a surrounding block. If the block contains any flat background regions, text, thin lines, or continuous or non-homogeneous regions, the pixel is classified as a non-halftone pixel. After excluding those non-halftone pixels, the remaining pixels are considered to be halftone pixels. Finally, documents are classified as picture or photo documents by calculating the halftone pixel ratio. The proposed algorithm proves to be memory-efficient and requires low computation costs, and it is easily implemented on a GPU.
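The abstract lists the block conditions (flat background, text, thin lines, continuous or non-homogeneous regions) but not their numerical tests; the sketch below uses simple variance and transition-count surrogates, so every threshold is a placeholder rather than the paper's value.

```python
import numpy as np

def is_halftone_pixel(block, flat_var=20.0, min_transitions=0.15, max_range=200):
    """Toy test for a grayscale neighborhood block around one pixel.

    Surrogate rules (all thresholds are placeholders, not the paper's values):
    - a nearly constant block is flat background                      -> non-halftone
    - a block with very large dynamic range suggests text/thin lines  -> non-halftone
    - otherwise require a dense pattern of local transitions, which is the
      typical signature of a halftone screen.
    """
    b = block.astype(np.float64)
    if b.var() < flat_var:
        return False                       # flat background region
    if b.max() - b.min() > max_range:
        return False                       # hard edges: likely text or thin lines
    binary = b > (b.max() + b.min()) / 2.0
    transitions = np.count_nonzero(binary[:, 1:] != binary[:, :-1]) \
                + np.count_nonzero(binary[1:, :] != binary[:-1, :])
    return transitions / binary.size >= min_transitions

def halftone_ratio(gray, radius=4):
    """Fraction of halftone pixels, used to label the document picture/photo."""
    h, w = gray.shape
    hits = sum(is_halftone_pixel(gray[y - radius:y + radius + 1,
                                      x - radius:x + radius + 1])
               for y in range(radius, h - radius)
               for x in range(radius, w - radius))
    return hits / float((h - 2 * radius) * (w - 2 * radius))
```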
The SMOS mission is a European Space Agency (ESA) project aimed at global monitoring of surface Soil Moisture and
Ocean Salinity from radiometric L-band observations. The single payload of the mission is MIRAS, the very first
Microwave Imaging Radiometer using Aperture Synthesis ever launched into space. This work is concerned with the
contamination of the data collected by MIRAS by radio frequency interferences (RFI) which degrade the performance of
the mission. RFI events are evidenced and it is explained why well-known standard RFI detection methods cannot be
used. Accounting for specificities of MIRAS, an early detection method tailored to SMOS measurements is presented
and illustrated with data acquired with the reference radiometers during the first year of the mission. The aim of this
method is not to localize or quantify the RFI sources, but only to detect, quantify, and possibly mitigate the
corresponding RFI effects in the signals measured by these radiometers. This is done as soon as possible in the
processing pipeline so that the propagation of such undesirable effects is known and under control from measurements to
final products.
The need for High Data Rate (HDR) communications and Near Real Time (NRT) meteorological information for the Canadian North led the Canadian Space Agency (CSA) to propose the Polar Communication and Weather (PCW) mission, which would facilitate sovereignty operations in the Canadian North by providing reliable communications and increasing the ability to model and predict environmental changes occurring in the northern regions. Rapid coverage of the full Earth disk
from the highly elliptical PCW orbit requires that the scanning pattern of the Meteorological Payload be well understood.
To that end, we carried out a study to simulate and then analyze the scan mirror geometry and error sources. Multiple scan patterns and mirror geometries (gimbaled, two mirrors) have been investigated to guide the system design towards minimizing mirror displacements (duty cycle) and image distortions due to viewing geometry and Earth curvature. Results from simulations and comparative evaluations of both mirror geometries and scanning patterns (gimbaled, two mirrors) are provided with
interpretations and conclusions.
Hyperspectral unmixing is a very important task for remotely sensed hyperspectral data exploitation. It amounts
to estimating the abundance of pure spectral signatures (called endmembers) in each mixed pixel of the original
hyperspectral image, where mixed pixels arise due to insufficient spatial resolution and other phenomena. A
challenging problem in spectral unmixing is how to automatically derive endmembers from hyperspectral images,
particularly due to the presence of mixed pixels which generally prevents the localization of pure spectral
signatures in transition areas between different land-cover classes. A possible strategy to address this problem
is to guide the endmember extraction process to spatially homogeneous areas. For this purpose, several preprocessing
methods (intended to be applied prior to the endmember extraction stage) have been developed in
the literature. However, most of these methods only include spatial information during the preprocessing and
disregard spectral information until the subsequent endmember extraction stage. In this paper, we develop a
new joint spatial and spectral preprocessing method which can be combined with any endmember extraction
algorithm from hyperspectral images. The proposed method is intended to retain spectrally pure pixels which
belong to spatially homogeneous areas. Our assumption is that spectrally pure signatures are more likely to be
found in spatially homogeneous areas rather than in transition areas between different land-cover classes, which
are expected to be dominated by mixed pixels. Our experimental results, conducted with a variety of hyperspectral
images, reveal the robustness of the proposed method when compared to other similar preprocessing
strategies.
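The joint spatial-spectral criterion itself is not given in the abstract; a minimal sketch of the general idea, under an assumed window size and retention fraction, is to score each pixel by the spectral-angle spread of its neighbourhood and keep only the most homogeneous pixels as candidates for endmember extraction.

```python
import numpy as np

def homogeneity_mask(cube, window=3, keep_fraction=0.2):
    """Retain pixels lying in spatially homogeneous areas (illustrative only).

    cube : (rows, cols, bands).  Each pixel is scored by the mean spectral angle
    between its neighbours and the neighbourhood mean spectrum; low scores mean
    a homogeneous area.  The `keep_fraction` most homogeneous pixels are kept
    as candidates for the subsequent endmember extraction.
    """
    rows, cols, bands = cube.shape
    r = window // 2
    scores = np.full((rows, cols), np.inf)
    for y in range(r, rows - r):
        for x in range(r, cols - r):
            patch = cube[y - r:y + r + 1, x - r:x + r + 1].reshape(-1, bands)
            mean = patch.mean(axis=0)
            cosines = patch @ mean / (np.linalg.norm(patch, axis=1)
                                      * np.linalg.norm(mean) + 1e-12)
            scores[y, x] = np.mean(np.arccos(np.clip(cosines, -1.0, 1.0)))
    threshold = np.quantile(scores[np.isfinite(scores)], keep_fraction)
    return scores <= threshold
```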
Due to the constrained resources available on board, compression methods with low complexity are desirable for hyperspectral images. A low-complexity scalar coset coding based distributed compression method (s-DSC) has been proposed for hyperspectral images. However, much redundancy still remains, since the bitrate of the block to be encoded is determined by its maximum prediction error. In this paper, a classified coset coding based lossless compression method is proposed to further reduce the bitrate. The current block is classified so that pixels with similar spectral correlation are clustered together, and each class of pixels is then coset coded separately. The experimental results show that the classification reduces the bitrate effectively.
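To make the argument concrete, the toy example below compares the coset-coding cost when a single bit depth is derived from the whole block's maximum prediction error with the cost when each class gets its own depth; the residuals and class labels are hypothetical and the side information for the classification is ignored.

```python
import numpy as np

def coset_bits(residuals):
    """Bits per sample needed to coset-code residuals whose magnitude is bounded
    by the block maximum (signed values need 2*max+1 states)."""
    max_err = int(np.max(np.abs(residuals)))
    return int(np.ceil(np.log2(2 * max_err + 1))) if max_err else 0

def classified_cost(residuals, labels):
    """Total bits when each class of pixels is coset coded with its own depth,
    versus a single depth for the whole block (class side information ignored)."""
    whole_block = coset_bits(residuals) * residuals.size
    per_class = sum(coset_bits(residuals[labels == c]) *
                    np.count_nonzero(labels == c)
                    for c in np.unique(labels))
    return whole_block, per_class

# Hypothetical block: a few large residuals inflate the single-depth cost.
res = np.array([1, -2, 1, 0, 120, -110, 2, 1])
labels = np.array([0, 0, 0, 0, 1, 1, 0, 0])
print(classified_cost(res, labels))   # per-class total (34 bits) < whole-block (64 bits)
```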
Hyperspectral remote sensing has been widely utilized, notably in high-resolution climate observation, environment monitoring, and resource mapping. However, it brings undesirable difficulties for transmission and storage due to the huge amount of data involved. Compression of the data cube has been demonstrated to be an efficient strategy to solve these problems. Moreover, the data exhibit strong similarities between disjoint spectral regions affected by the same types of absorbing gases. That is why a pre-processing scheme based on a similarity measurement and a reordering strategy can enhance the compression ratio.
In this work, we first propose a review of similarity measurements and reordering strategies, and we give the field of
application of each of them. In particular, we propose a pre-selection of these measurements and re-ordering strategies
with respect to the expected performance, the complexity and the robustness to an on-board implementation. In a second
part, we quantify the performance gap between a high-performance, complex approach and an approach suitable for on-board (space) implementation, for two compression schemes: a 3D transform and a 3D predictive algorithm. Finally, we present the capability to implement the reordering in a semi-optimal, semi-fixed, or fixed manner, and thereby characterize the performance in a space-borne system.
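As one simple instance of the measurements and strategies reviewed (not necessarily the combination the authors retain), the sketch below reorders bands greedily by Pearson correlation so that an inter-band predictor or transform sees more similar neighbours.

```python
import numpy as np

def greedy_band_reordering(cube):
    """Greedy band reordering by spectral similarity (illustrative choice only).

    cube : (rows, cols, bands).  Bands are flattened to vectors, their pairwise
    Pearson correlation is computed, and the ordering starts from band 0 and
    repeatedly appends the unused band most correlated with the last one.
    """
    bands = cube.shape[2]
    X = cube.reshape(-1, bands).T.astype(np.float64)   # (bands, pixels)
    corr = np.abs(np.corrcoef(X))
    order = [0]
    remaining = set(range(1, bands))
    while remaining:
        last = order[-1]
        nxt = max(remaining, key=lambda b: corr[last, b])
        order.append(nxt)
        remaining.remove(nxt)
    return order
```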
In EDCA-based wireless networks, all video packets are identically mapped, without differentiation, into one of the four access categories for transmission, so that delivery performance is restricted. Even though some studies have remapped video packets by differentiating their significance according to packet type, they forwent further gains since they adopted a fixed significance model and mapping scheme. In this paper, a new model of video packet significance is built, and a dynamic mapping algorithm based on this packet significance model is then proposed to improve the performance of video delivery over EDCA-based wireless networks. The proposed algorithm periodically detects the available resources of each AC and makes full use of all ACs to transmit video packets. Simulation results demonstrate that the proposed algorithm improves the performance of video delivery and increases image quality.
Recently, high quality video services have become widely available. To transmit or store these HD video programs,
compression is required and various lossy compression schemes have been developed. On the other hand, there are some
applications which require lossless compression. However, most conventional lossless coding methods have high
complexity and require a long processing time. In this paper, a parallel lossless compression algorithm with low
complexity is proposed. The proposed compression algorithm reduces HD video sequences to about half their original size. Furthermore, the processing time is significantly reduced when using a GPGPU. The algorithm can be implemented in real time for
HD video sequences.
The discrete wavelet transform (DWT)-based Set Partitioning in Hierarchical Trees (SPIHT) algorithm is widely
used in many image compression systems. In order to perform real-time Reed-Solomon channel decoding and
SPIHT+DWT source decoding on a massive bit stream of compressed images continuously down-linked from the
satellite, we propose a novel graphic processing unit (GPU)-accelerated decoding system. In this system the GPU
is used to compute the time-consuming inverse DWT, while multiple CPU threads are run in parallel for the
remaining part of the system. Both CPU and GPU parts were carefully designed to have approximately the same
processing speed to obtain the maximum throughput via a novel pipeline structure for processing continuous
satellite images. Through the pipelined CPU and GPU heterogeneous computing, the entire decoding system
approaches a speedup of 84x as compared to its single-threaded CPU counterpart.
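The decoding kernels are not reproduced here; the sketch below shows only the pipeline skeleton, with several CPU worker threads feeding a bounded queue consumed by a single GPU-side stage, so that both sides stay busy when their throughputs are balanced. The stage callables are placeholders for the actual channel/SPIHT decoders and the inverse DWT.

```python
import queue
import threading

def run_pipeline(images, cpu_stage, gpu_stage, num_cpu_threads=4, depth=8):
    """Pipeline skeleton: several CPU threads feed one GPU-side consumer.

    cpu_stage : callable doing the CPU part (e.g. channel decoding + SPIHT)
    gpu_stage : callable doing the GPU part (e.g. the inverse DWT)
    Both callables are placeholders; results are returned in completion order.
    """
    todo, done = queue.Queue(), queue.Queue(maxsize=depth)
    for item in images:
        todo.put(item)

    def cpu_worker():
        while True:
            try:
                item = todo.get_nowait()
            except queue.Empty:
                return
            done.put(cpu_stage(item))          # blocks if the GPU side lags behind

    workers = [threading.Thread(target=cpu_worker) for _ in range(num_cpu_threads)]
    for w in workers:
        w.start()
    results = []
    for _ in range(len(images)):
        results.append(gpu_stage(done.get()))  # single consumer: the GPU stage
    for w in workers:
        w.join()
    return results
```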