Proceedings Volume Mathematics of Data/Image Coding, Compression, and Encryption VIII, with Applications, 591501 (2005) https://doi.org/10.1117/12.619896
Advances in network communications have necessitated secure local storage and transmission of multimedia content. In particular, military networks need to securely store sensitive imagery which may later be transmitted over bandwidth-constrained wireless networks. This work investigates the compression efficiency of the JPEG and JPEG 2000 standards for encrypted images. An encryption technique proposed by Kuo et al. in [4] is employed. The technique scrambles the phase spectrum of an image by adding the phase of an all-pass pre-filter. The post-filter inverts the encryption process, provided the correct pseudo-random filter coefficients are available at the receiver. Additional benefits of pre/post-filter encryption include the prevention of blocking effects and better robustness to channel noise [4]. Since both JPEG and JPEG 2000 exploit spatial and perceptual redundancies for compression, pre/post-filtered (encrypted) images are susceptible to compression inefficiencies. The PSNR difference between the unencrypted and pre/post-filtered images after decompression is determined for various compression rates. Compression efficiency decreases as the compression rate increases. For JPEG and JPEG 2000 compression rates between 0.5 and 2.5 bpp, the difference in PSNR is negligible. Partial encryption is proposed, wherein a subset of the image phase coefficients is scrambled. Owing to the phase sensitivity of images, even partial scrambling of the phase information results in unintelligible data. The effect of compression on partially encrypted images is observed for various bit rates. When 25% of the image phase coefficients are scrambled, the JPEG and JPEG 2000 compression performance of encrypted images is nearly identical to that of unencrypted images for compression rates in the 0.5 to 3.5 bpp range.
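To make the pre/post-filter idea concrete, the following is a minimal numerical sketch (not the exact filter design of Kuo et al. [4]): a pseudo-random all-pass transfer function with conjugate-symmetric phase is generated from a key, its phase is added to the image phase spectrum at the sender and removed at the receiver. Partial encryption would simply leave the transfer function equal to one outside a selected subset of frequencies. The function names and the way the key seeds the generator are illustrative assumptions.

```python
import numpy as np

def make_allpass(shape, key):
    """Pseudo-random all-pass transfer function (|H| = 1). Deriving it from a
    real random field gives a conjugate-symmetric phase, so filtering a real
    image returns a real image that JPEG / JPEG 2000 can compress."""
    rng = np.random.default_rng(key)
    R = np.fft.fft2(rng.standard_normal(shape))
    return R / np.abs(R)

def pre_filter(img, key):
    """Encryption: add the pseudo-random phase to the image phase spectrum."""
    H = make_allpass(img.shape, key)
    return np.real(np.fft.ifft2(np.fft.fft2(img.astype(float)) * H))

def post_filter(enc, key):
    """Decryption: regenerate H from the key and remove its phase (1/H = conj(H))."""
    H = make_allpass(enc.shape, key)
    return np.real(np.fft.ifft2(np.fft.fft2(enc) * np.conj(H)))
```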
Proceedings Volume Mathematics of Data/Image Coding, Compression, and Encryption VIII, with Applications, 591503 (2005) https://doi.org/10.1117/12.617867
A number of methods have recently been proposed in the literature for the encryption of 2-D information using linear optical systems. In particular, the double random phase encoding system has received widespread attention. This system uses two Random Phase Keys (RPKs) positioned in the input spatial domain and the spatial frequency domain; if these random phases are described by statistically independent white noises, the encrypted image can be shown to be a white noise. Decryption only requires knowledge of the RPK in the frequency domain. The RPKs may be implemented using Spatial Light Modulators (SLMs). In this paper we propose and investigate the use of SLMs for secure optical multiplexing. We show that in this case it is possible to encrypt multiple images in parallel and multiplex them for transmission or storage. The signal energy is effectively spread in the spatial frequency domain. As expected, the number of images that can be multiplexed together and recovered without loss is proportional to the ratio between the input image resolution and the SLM resolution. Many more images may be multiplexed with some loss in recovery. Furthermore, each individual encryption is more robust than traditional double random phase encoding, since decryption requires knowledge of both the RPK and a lowpass filter in order to despread the spectrum and decrypt the image. Numerical simulations are presented and discussed.
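As background, a numerical model of the basic double random phase encoding step (the building block this paper multiplexes) can be written in a few lines; the multiplexing of several encrypted images and the SLM resolution constraints discussed above are not reproduced in this sketch, and the key handling shown is an assumption.

```python
import numpy as np

def drpe_encrypt(img, key_space, key_freq):
    """Double random phase encoding: one RPK in the input plane, one in the
    Fourier plane; with independent uniform phases the output is white-noise-like."""
    p1 = np.exp(2j * np.pi * np.random.default_rng(key_space).random(img.shape))
    p2 = np.exp(2j * np.pi * np.random.default_rng(key_freq).random(img.shape))
    return np.fft.ifft2(np.fft.fft2(img * p1) * p2)

def drpe_decrypt(cipher, key_freq):
    """Only the Fourier-plane RPK is needed: undo it, transform back, and take
    the modulus to strip the (unknown) input-plane phase."""
    p2 = np.exp(2j * np.pi * np.random.default_rng(key_freq).random(cipher.shape))
    return np.abs(np.fft.ifft2(np.fft.fft2(cipher) * np.conj(p2)))
```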
Proceedings Volume Mathematics of Data/Image Coding, Compression, and Encryption VIII, with Applications, 591504 (2005) https://doi.org/10.1117/12.620955
In this paper an attack strategy on the Data Encryption Standard (DES) that differs from existing ones is developed. The attack strategy is based on a theorem proved by the author, called the LR theorem. The attack can be carried out by means of a personal computer, i.e. a Pentium IV based machine. Obviously, more information than a sample of plaintext and ciphertext is required; however, this additional requirement is reduced to a minimum of just 24 bits. The LR theorem uncovers 16 trapdoors, of which 2 are of importance. With this knowledge, an attack on Triple-DES can be performed using the aforementioned personal computer. A simple solution to this kind of attack is proposed, preserving the complexity of Triple-DES, 2^112.
Proceedings Volume Mathematics of Data/Image Coding, Compression, and Encryption VIII, with Applications, 591505 (2005) https://doi.org/10.1117/12.618077
In digital security and authentication, watermarking has emerged as a solution to unauthorized digital copies, monitoring of broadcasts, information embedding, and end-user and transaction authentication. In the field of watermarking, the discrete cosine transform (DCT) domain, as well as other transform domains, has been shown to be advantageous over most spatial-domain techniques because of its increased robustness to image processing operations and possible distortions. In this research an adaptive watermarking scheme for images and its implementation are investigated, in which the watermarks are embedded in the discrete cosine transform domain. The adaptive scheme has the advantage that the watermark strength can be adjusted according to image characteristics. In addition, the degradation of the watermarked image is analyzed. Finally, the system's resistance to attacks is demonstrated.
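A minimal sketch of adaptive DCT-domain embedding is given below; the block size, the choice of mid-frequency coefficient, and the way the strength scales with local activity are illustrative assumptions, not the scheme evaluated in the paper.

```python
import numpy as np
from scipy.fft import dctn, idctn

def embed_adaptive(img, wm_bits, key, alpha=2.0):
    """Hypothetical adaptive DCT watermark: one bit per 8x8 block, added to a
    mid-frequency coefficient with strength scaled by local block activity."""
    rng = np.random.default_rng(key)
    out = img.astype(float).copy()
    h, w = img.shape
    blocks = [(r, c) for r in range(0, h - 7, 8) for c in range(0, w - 7, 8)]
    rng.shuffle(blocks)                                   # key-dependent block order
    for bit, (r, c) in zip(wm_bits, blocks):
        B = dctn(out[r:r+8, c:c+8], norm='ortho')
        strength = alpha * (1.0 + out[r:r+8, c:c+8].std() / 32.0)  # adaptive strength
        B[3, 4] += strength if bit else -strength                   # mid-frequency bin
        out[r:r+8, c:c+8] = idctn(B, norm='ortho')
    return out
```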
Proceedings Volume Mathematics of Data/Image Coding, Compression, and Encryption VIII, with Applications, 591506 (2005) https://doi.org/10.1117/12.614054
Authentication watermarking approaches can be classified into two kinds: fragile and semi-fragile. In contrast to the latter, fragile watermarking does not tolerate modification of even a single bit of the watermarked data. Since the transmission of digital data often requires lossy compression, an authentication system should accept non-malicious modifications such as JPEG compression. Semi-fragile techniques aim to discriminate malicious manipulations from admissible ones. In our approach, we extract image-content-dependent information, which is then hashed and encrypted using secure methods known from classical cryptography. The image data is partitioned into non-overlapping 4x4 pixel blocks in the spatial domain. The mean values of these blocks form n-dimensional vectors, which are quantized to their nearest lattice points. Based on the changed vector values, a hash is calculated and asymmetrically encrypted, resulting in a digital signature. Traditional dual-subspace approaches divide the signal space into a region for signature generation and a region for signature embedding. To ensure the security of the whole image, we join the two subspaces. The vectors in which the bits are embedded using quantization-based data hiding techniques are predistorted and also used for the signature generation. Our scheme applies error correction coding to make the embedded signature robust to non-malicious distortions. A second quantization run finally embeds the signature.
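The signature-generation side of such a scheme can be sketched as follows, with the simplest possible lattice (a scaled integer lattice on the block means) standing in for the paper's vector quantization; the predistortion, error correction coding, embedding pass, and asymmetric signing are omitted, and the hash choice and step size are assumptions.

```python
import hashlib
import numpy as np

def content_hash(img, step=16):
    """Sketch of the content-dependent part: 4x4 block means are quantized to
    the nearest point of a (scaled integer) lattice and then hashed. The real
    scheme uses vector lattices, predistortion, and an asymmetric signature."""
    h, w = img.shape
    means = img[:h - h % 4, :w - w % 4].reshape(h // 4, 4, w // 4, 4).mean(axis=(1, 3))
    quantized = np.round(means / step).astype(np.int32)   # nearest lattice neighbour (Z^n case)
    return hashlib.sha256(quantized.tobytes()).hexdigest()
```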
Proceedings Volume Mathematics of Data/Image Coding, Compression, and Encryption VIII, with Applications, 591507 (2005) https://doi.org/10.1117/12.617711
One of the challenges in digital watermarking is to develop a statistically powerful blind detector and decoder for multi-bit watermarks. In this paper, we propose a general statistical procedure for blind multi-bit watermark detection and decoding based on the total efficient scores vector and obtain the asymptotic distribution of the proposed detectors. The main idea and mathematics of the general method are illustrated by two widely used parametric distributions: the generalized Gaussian distribution and the Weibull distribution. Some optimality properties of the procedure are discussed and probability plots are proposed to graphically assess the finite-sample performance. The robustness and survivability of the proposed watermarking method for digital images are evaluated and tested by embedding the watermarks in a band of arcs from low to middle frequencies in the DFT domain.
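For the generalized Gaussian case, a hedged single-pattern sketch of score-based blind detection looks like the following; it illustrates the idea of building the statistic from the distribution's score function, but it is not the paper's total efficient scores vector or its multi-bit decoder, and the shape parameter and threshold are assumptions.

```python
import numpy as np

def ggd_score_detect(coeffs, pattern, shape_c=0.8, threshold=3.0):
    """Score-based blind detection sketch for an additive watermark in
    generalized-Gaussian host coefficients. The GGD score function is
    correlated with the candidate pattern and normalized; under the
    no-watermark hypothesis the statistic is approximately standard normal."""
    score = np.sign(coeffs) * (np.abs(coeffs) + 1e-9) ** (shape_c - 1.0)
    stat = float(np.dot(score, pattern))
    stat /= np.sqrt(np.sum((score * pattern) ** 2)) + 1e-12
    return stat, stat > threshold
```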
Proceedings Volume Mathematics of Data/Image Coding, Compression, and Encryption VIII, with Applications, 59150A (2005) https://doi.org/10.1117/12.615602
With the advent of digital cinema and digital broadcasting, copyright protection of video data has become one of the most important issues. We present a novel method of watermarking video data based on hardware and digital wavelet transform techniques and call it "traceable watermarking", because the watermark is embedded before the transmission process and traced after the content has been received by an authorized user. In our method, we embed the watermark into the lowest-frequency part of each image frame of the decoded video using a hardware LSI. Digital cinema is an important application for traceable watermarking, since a digital cinema system makes use of watermarking technology during content encoding, encryption, transmission, decoding, and all intermediate processing steps. The watermark is embedded into randomly selected movie frames using hash functions. The embedded watermark information can be extracted from the decoded video data without any need to access the original movie data. Our experimental results show that the proposed traceable watermarking method for digital cinema systems outperforms conventional watermarking techniques in terms of robustness, image quality, speed, simplicity, and structural robustness.
Proceedings Volume Mathematics of Data/Image Coding, Compression, and Encryption VIII, with Applications, 59150C (2005) https://doi.org/10.1117/12.613451
In this paper, we study the impact of the design freedom brought about by oversampling in perfect-reconstruction oversampled filter banks on the choice of synthesis filters that maximize the coding gain. In particular, based on the expression of the coding gain for the oversampled case, we show the dependency of the coding gain on both analysis and synthesis filters. We explore how the choice of synthesis filters can affect the coding gain, given a fixed set of analysis filters. We show that the para-pseudo-inverse is a good choice for maximizing the coding gain when the corresponding analysis filters have been optimized for coding gain under critical sampling, or exhibit good frequency selectivity.
Proceedings Volume Mathematics of Data/Image Coding, Compression, and Encryption VIII, with Applications, 59150F (2005) https://doi.org/10.1117/12.613204
Image compression via transform coding applied to small rectangular regions or encoding blocks appears to be approaching asymptotic rate-distortion performance. However, an emerging compression technology called object-based compression (OBC) promises significantly improved performance, with compression ratios ranging from 200:1 to as high as 2,500:1. OBC involves segmentation of image regions, followed by efficient encoding of each region's content and boundary. During decompression, such regions can be approximated by objects from a codebook, yielding a reconstructed image that is semantically equivalent to the corresponding source image but has pixel- and featural-level differences. Semantic equivalence between the source and decompressed image facilitates fast decompression through efficient substitutions, albeit at the cost of a codebook search in the compression step. Given small codebooks, OBC holds promise for information-push technologies where approximate context is sufficient, for example, transmission of surveillance images that provide the gist of a scene. However, OBC is not necessarily useful for applications requiring high accuracy, such as medical image processing, because substitution of source content can be inaccurate at small spatial scales. The cost of segmentation is a significant disadvantage in current OBC implementations. Several innovative techniques have been developed for region segmentation, as discussed in a previous paper [4]. Additionally, tradeoffs among representational fidelity, computational cost, and storage requirements occur, as with the vast majority of lossy compression algorithms. This paper analyzes the computational (time) and storage (space) complexities of several recent OBC algorithms applied to single-frame imagery. A time complexity model is proposed, which can be associated theoretically with a space complexity model that we have previously published [2]. The result, when combined with measurements of representational accuracy described in a companion paper [5], supports estimation of a time-space-error bandwidth product that could facilitate dynamic optimization of OBC algorithms. In practice, this would support efficient compression with visually acceptable reconstruction for a wide variety of military and domestic applications.
Proceedings Volume Mathematics of Data/Image Coding, Compression, and Encryption VIII, with Applications, 59150H (2005) https://doi.org/10.1117/12.617930
An adaptive discrete cosine transform (DCT) algorithm was presented by Hontsch et al. under a perceptual metric and provided good bit rates. However, this method requires carefully calibrated viewing conditions. In this paper, we present a new adaptive discrete cosine transform scheme in which the quantization steps are adapted for each DCT coefficient using the new universal quality index. This quality index does not require carefully calibrated viewing conditions and is based on a combination of luminance and contrast distortions and loss of correlation. The results obtained show that the procedure converges and provides a good bit rate.
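For reference, the universal quality index of Wang and Bovik combines loss of correlation, luminance distortion, and contrast distortion in a single closed form; a global (non-windowed) version is sketched below. How the paper maps this index to per-coefficient quantization steps is not reproduced here.

```python
import numpy as np

def universal_quality_index(x, y):
    """Wang-Bovik universal quality index:
    Q = 4*cov(x,y)*mx*my / ((var(x)+var(y)) * (mx**2 + my**2)),
    combining loss of correlation, luminance distortion, and contrast distortion."""
    x = x.astype(float).ravel()
    y = y.astype(float).ravel()
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cxy = ((x - mx) * (y - my)).mean()
    return 4 * cxy * mx * my / ((vx + vy) * (mx**2 + my**2) + 1e-12)
```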
Proceedings Volume Mathematics of Data/Image Coding, Compression, and Encryption VIII, with Applications, 59150I (2005) https://doi.org/10.1117/12.613206
Image compression based on transform coding appears to be approaching a bit-rate limit for visually acceptable distortion levels. Although an emerging compression technology called object-based compression (OBC) promises significantly improved bit rate and computational efficiency, OBC is epistemologically distinct in a way that renders existing image quality measures (IQMs) for compression transform optimization less suitable for OBC. In particular, OBC segments source image regions, then efficiently encodes each region's content and boundary. During decompression, region contents are often replaced by similar-appearing objects from a codebook, thus producing a reconstructed image that corresponds semantically to the source image, but has pixel-, featural-, and object-level differences that are apparent visually. OBC thus gains the advantage of fast decompression via efficient codebook-based substitutions, albeit at the cost of a codebook search in the compression step and significant pixel- or region-level errors in decompression. Existing IQMs are pixel- and region-oriented, and thus tend to indicate high error due to OBC's lack of pixel-level correlation between source and reconstructed imagery. Thus, current IQMs do not necessarily measure the semantic correspondence that OBC is designed to produce. This paper presents image quality measures for estimating semantic correspondence between a source image and a corresponding OBC-decompressed image. In particular, we examine the semantic assumptions and models that underlie various approaches to OBC, especially those based on textural as well as high-level name and spatial similarities. We propose several measures that are designed to quantify this type of high-level similarity and can be combined with existing IQMs for assessing compression transform performance. The discussion also highlights how these novel IQMs can be combined with time and space complexity measures for compression transform optimization.
Proceedings Volume Mathematics of Data/Image Coding, Compression, and Encryption VIII, with Applications, 59150J (2005) https://doi.org/10.1117/12.618262
A new parameterization approach for the construction of biorthogonal wavelets has revealed the possibility of deriving optimized biorthogonal filterbanks. This design methodology incorporates various optimization criteria and the regularity requirement into a single procedure while maintaining the usual perfect reconstruction principle. Because of the parameterization, the additional design procedure becomes fundamentally an unconstrained optimization problem. The method provides examples of new biorthogonal dual filters of slightly shorter length with performance characteristics comparable to those of traditional biorthogonal dual filters of longer length. Image coding using such new filterbanks becomes more efficient while maintaining similar quality. Preliminary studies in image coding will be reported in the presentation.
Proceedings Volume Mathematics of Data/Image Coding, Compression, and Encryption VIII, with Applications, 59150L (2005) https://doi.org/10.1117/12.616485
The goal of this work is to study the feasibility of a low-complexity encoder for lossless compression of hyperspectral images. Since on-board bandwidth and power resources are limited for remote sensing systems, we adopted the distributed source coding (DSC) paradigm as a starting point for moving the computational complexity from the encoder to the decoder. The advantages of locating a simple encoder on the aerial platform far outweigh the increased cost of a more complex decoder at the ground station. Two lossless compression algorithms have been developed: the former performs a scalar encoding of the syndromes transmitted for each band of the hyperspectral image, while the latter implements a vectorial approach and yields a slightly better compression ratio than the scalar one. No information about the spatial correlation is taken into account, while spectral correlation is explicitly exploited only at the decoder side. Experimental results confirm the asymmetrical distribution of computational complexity between encoder and decoder, with a strong increase in decoding time, although the recorded encoding times are still higher than those achieved by JPEG-LS. As to the compression rate, our codecs perform very well compared to JPEG-LS or CALIC 2D, and worse than CALIC 3D, which also carries out inter-band decorrelation and thus requires a rather long processing time.
Proceedings Volume Mathematics of Data/Image Coding, Compression, and Encryption VIII, with Applications, 59150M (2005) https://doi.org/10.1117/12.618643
This paper presents a novel scheme for lossless/near-lossless hyperspectral image compression that exploits a classified spectral prediction. MMSE spectral predictors are calculated for small spatial blocks of each band and are classified (clustered) to yield a user-defined number of prototype predictors for each wavelength, capable of matching the spatial features of different classes of pixel spectra. Unlike most of the literature, the proposed method employs a purely spectral prediction, which is suitable for compressing the data in band-interleaved-by-line (BIL) format, as they are available at the output of the on-board spectrometer. In that case, the training phase, i.e., the clustering of predictors for each wavelength, may be moved off-line. Prediction will thus be slightly less fitting, but the overhead of predictors calculated on-line is saved. Although prediction is purely spectral, hence 1D, spatial correlation is removed by the training phase of the predictors, which aims to find statistically homogeneous spatial classes matching the set of prototype spectral predictors. Experimental results on AVIRIS data show improvements over the most advanced methods in the literature, with a computational complexity far lower than that of analogous methods by other authors.
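A simplified view of classified spectral prediction is sketched below: each pixel of a band is predicted from the previous bands by a least-squares (MMSE) affine predictor fitted per class. The class map is assumed to be given, and the prediction order is an assumption; the paper's clustering of block predictors into per-wavelength prototypes and its entropy coding of residuals are not shown.

```python
import numpy as np

def fit_classified_predictors(cube, band, labels, order=2):
    """Fit one least-squares (MMSE) affine spectral predictor per class:
    band `band` is predicted pixel-by-pixel from the `order` previous bands."""
    X = cube[band - order:band].reshape(order, -1).T      # regressors: previous bands
    y = cube[band].ravel()
    predictors = {}
    for c in np.unique(labels):
        m = labels.ravel() == c
        A = np.column_stack([X[m], np.ones(m.sum())])      # affine predictor per class
        predictors[c], *_ = np.linalg.lstsq(A, y[m], rcond=None)
    return predictors

def predict_band(cube, band, labels, predictors, order=2):
    """Apply the per-class predictors; residuals would then be entropy coded."""
    X = cube[band - order:band].reshape(order, -1).T
    out = np.empty(X.shape[0])
    for c, w in predictors.items():
        m = labels.ravel() == c
        out[m] = np.column_stack([X[m], np.ones(m.sum())]) @ w
    return out.reshape(cube.shape[1:])
```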
Proceedings Volume Mathematics of Data/Image Coding, Compression, and Encryption VIII, with Applications, 59150N (2005) https://doi.org/10.1117/12.613327
Feature extraction from hyperspectral imagery is computationally demanding and consumes large amounts of memory. To address these issues, previous work has concentrated on algorithms that combine a fast integer-based hyperspectral discrete wavelet transform (HSDWT), using a specialized implementation of the Haar basis, with improved implementations of linear spectral unmixing. Extensions of that previous work are presented here that modify and extend these algorithms to investigate feature extraction over arbitrarily shaped spatial regions and to incorporate more general biorthogonal bases for processing of spectral signatures. Finally, these wavelet transform implementations have also been used to simulate linear spectral unmixing techniques on spatially unresolved objects such as binary stars and globular star clusters.
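The reversible integer Haar (S-transform) lifting step that a fast integer-based HSDWT builds on can be written as follows; applying it along the spectral axis of each pixel, and recursively on the low-pass output, is the assumed usage here.

```python
import numpy as np

def haar_lifting_1d(x):
    """Integer-to-integer Haar transform via lifting (S-transform):
    d = odd - even (detail), s = even + floor(d/2) (integer average)."""
    x = np.asarray(x, dtype=np.int64)
    even, odd = x[0::2], x[1::2]
    d = odd - even
    s = even + (d >> 1)          # arithmetic shift = floor division by 2
    return s, d

def haar_lifting_1d_inverse(s, d):
    """Exact inverse: the transform is reversible, so the coding is lossless."""
    even = s - (d >> 1)
    odd = d + even
    x = np.empty(even.size + odd.size, dtype=np.int64)
    x[0::2], x[1::2] = even, odd
    return x
```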
Proceedings Volume Mathematics of Data/Image Coding, Compression, and Encryption VIII, with Applications, 59150O (2005) https://doi.org/10.1117/12.616823
This paper describes an image compression strategy potentially capable of preserving the scientific quality of astrophysical data while simultaneously allowing a considerable bandwidth reduction to be achieved. Unlike strictly lossless techniques, with which only moderate compression ratios are attainable, and conventional lossy techniques, in which the mean squared error of the decoded data is globally controlled by the user, near-lossless methods are capable of locally constraining the maximum absolute error according to the user's requirements. An advanced lossless/near-lossless differential pulse code modulation (DPCM) scheme, recently introduced by the authors and relying on a causal spatial prediction, is adjusted to the specific characteristics of astrophysical image data (high radiometric resolution, generally low noise, etc.). The background noise is preliminarily estimated to drive the quantization stage for high quality, which is the primary concern in most astrophysical applications. Extensive experimental results of lossless, near-lossless, and lossy compression of astrophysical images acquired by the Hubble Space Telescope show the advantages of the proposed method compared to standard techniques like JPEG-LS and JPEG 2000. Eventually, the rationale of virtually lossless compression, that is, a noise-adjusted lossless/near-lossless compression, is highlighted and found to be in accordance with concepts well established within the astronomical community.
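The key property of near-lossless DPCM, a per-pixel bound on the absolute reconstruction error, is easy to show in a sketch: residuals against a causal prediction are uniformly quantized with step 2*delta + 1, so the decoder's error never exceeds delta. The simple left/upper-mean predictor used here is an illustrative stand-in for the authors' adaptive predictor, and the noise-driven choice of delta is not modeled.

```python
import numpy as np

def near_lossless_dpcm(img, delta=0):
    """Causal DPCM with near-lossless residual quantization.
    Reconstruction error is bounded by `delta` per pixel (delta=0 is lossless)."""
    step = 2 * delta + 1
    rec = np.zeros_like(img, dtype=np.int64)       # decoder-side reconstruction
    labels = np.zeros_like(img, dtype=np.int64)    # quantizer indices (to be entropy coded)
    h, w = img.shape
    for r in range(h):
        for c in range(w):
            left = rec[r, c - 1] if c else 0
            up = rec[r - 1, c] if r else 0
            pred = (left + up) // 2 if (r and c) else (left + up)
            e = int(img[r, c]) - pred
            q = (e + delta) // step                # uniform quantizer index
            labels[r, c] = q
            rec[r, c] = pred + q * step            # |img - rec| <= delta everywhere
    return labels, rec
```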
Proceedings Volume Mathematics of Data/Image Coding, Compression, and Encryption VIII, with Applications, 59150P (2005) https://doi.org/10.1117/12.616926
Remote sensing and Geographic Information Systems (GIS) applications are becoming increasingly important in research projects, territorial management, and many other fields. These applications have special needs and requirements, since they use high-resolution and hyperspectral images. The huge size of these images demands substantial computational resources for their processing and storage and, in some cases, high-bandwidth channels for their transmission. These disadvantages can be offset by the use of compression techniques capable of greatly reducing this amount of information. Recently, some image compression schemes have been used to develop novel standards and proprietary formats. ECW, MrSID, and JPEG2000 are some of them, presenting advanced features and capabilities that Geographic Information Systems can use to extend their functionality. This work addresses two main topics: first, a review of the most common formats used in remote sensing and Geographic Information Systems environments is provided. Secondly, the JPEG2000 standard is briefly explained and J2K is presented. J2K is a novel JPEG2000 implementation that allows an easy extension and modification of some coding parameters of the standard, so that an improvement in compression performance may be achieved for some particular images in GIS scenarios.
Proceedings Volume Mathematics of Data/Image Coding, Compression, and Encryption VIII, with Applications, 59150Q (2005) https://doi.org/10.1117/12.616007
We present an interpolation scheme for deformable block motion compensation based on a pseudo-perspective mapping model. The motion compensation method based on this model is targeted at coding aerial surveillance video sequences. In aerial video sequences, distortion occurs as the perspective of the surveillance camera relative to the scene changes. The traditional motion compensation method based on a translational model is inadequate to compensate for this perspective distortion. One solution to this problem is to use a deformable block motion compensation method with a more sophisticated model. Polynomial approximations of the perspective motion model, such as the affine and bilinear models, have been reported in the literature. Nevertheless, the affine model lacks the degrees of freedom required to capture the effect of perspective distortion between frames. Though the bilinear model has the correct degrees of freedom, it still cannot accurately model the perspective distortion, especially near the boundaries of an image block.
In this paper we propose to apply an eight-parameter pseudo-perspective mapping model for deformable block motion compensation. We developed an efficient interpolation scheme for this model based on finite element shape functions and tested it on aerial video frames in the NTSC format (720 x 480 pixels). The proposed interpolation scheme, motion-compensation algorithm, and simulation results are described. The motion compensation scheme based on the proposed mapping model yields an improvement of 1.5 dB to 2.3 dB in peak signal-to-noise ratio compared to a bilinear-model-based scheme.
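One common form of the eight-parameter pseudo-perspective model (a second-order polynomial approximation of the projective mapping) is sketched below; whether it matches the exact parameterization and finite-element interpolation used in the paper is not guaranteed.

```python
import numpy as np

def pseudo_perspective(coords, a):
    """Eight-parameter pseudo-perspective warp (one common form):
        x' = a0 + a1*x + a2*y + a6*x**2 + a7*x*y
        y' = a3 + a4*x + a5*y + a6*x*y + a7*y**2
    `coords` is an (N, 2) array of (x, y) positions inside a block."""
    x, y = coords[:, 0], coords[:, 1]
    xp = a[0] + a[1] * x + a[2] * y + a[6] * x**2 + a[7] * x * y
    yp = a[3] + a[4] * x + a[5] * y + a[6] * x * y + a[7] * y**2
    return np.stack([xp, yp], axis=1)
```

Motion compensation would sample the reference frame at the warped coordinates (for example with bilinear interpolation) to predict the current block.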
Proceedings Volume Mathematics of Data/Image Coding, Compression, and Encryption VIII, with Applications, 59150S (2005) https://doi.org/10.1117/12.615824
We present the optimum approximation of an FIR filter bank that simultaneously minimizes various measures of approximation error. The presented approximation is quite flexible in the choice of sub-band bandwidths, sample points, and analysis filters. A kind of reciprocal relation holds for this approximation. Based on this reciprocal relation, in many examples we can iteratively obtain a perfect-reconstruction filter bank.
Proceedings Volume Mathematics of Data/Image Coding, Compression, and Encryption VIII, with Applications, 59150T (2005) https://doi.org/10.1117/12.615271
Steganography is the study of hiding information within a covert channel in order to transmit a secret message. Any public medium, such as image data, audio data, or even file packets, can be used as a covert channel. This paper presents an embedding algorithm that hides a message in an image using a technique based on a nonlinear matrix transform called the minimax eigenvector decomposition (MED). The MED is a minimax-algebra version of the well-known singular value decomposition (SVD). Minimax algebra is a matrix algebra based on the algebraic operations of maximum and addition, developed initially for use in operations research and later extended to represent a class of nonlinear image processing operations. The discrete mathematical morphology operations of dilation and erosion, for example, are contained within minimax algebra. The MED is much quicker to compute than the SVD and avoids the numerical issues of the SVD because the operations involve only integer addition, subtraction, and comparison. We present the algorithm to embed data using the MED, show examples applied to image data, and discuss limitations and advantages compared with another similar algorithm.
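The flavor of minimax (max-plus) algebra is easy to illustrate: the morphological matrix product below needs only additions and comparisons, which is the source of the MED's speed advantage over the SVD. The MED itself is not reproduced here.

```python
import numpy as np

def maxplus_matmul(A, B):
    """Max-plus matrix product from minimax algebra (a dilation):
    C[i, j] = max_k (A[i, k] + B[k, j]). Only addition and comparison are used."""
    # A: (m, n), B: (n, p) -> broadcast to (m, n, p), then reduce over k
    return (A[:, :, None] + B[None, :, :]).max(axis=1)
```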
Proceedings Volume Mathematics of Data/Image Coding, Compression, and Encryption VIII, with Applications, 59150W (2005) https://doi.org/10.1117/12.614023
In this paper, a selective weighting method is used for data embedding to achieve blind watermark detection. In the proposed system, block polarity and activity index modulation are used for the selective weighting. The block polarity is determined from the number of coefficients that are larger than the median value. The block activity index is the pseudo-quantized block activity, represented by the sum of absolute differences (SAD) of each coefficient from the median value. The block activity index modulation is performed based on the XOR of the randomized watermark and the randomized wavelet block polarities. In the block activity index modulation, if a coefficient lies very close to the median, it is vulnerable to attacks because its polarity can easily be changed. In such cases, the coefficient is shifted by the just-noticeable-difference (JND) amount toward the positive or negative end to enhance robustness. The watermark embedding is actually performed by the activity index modulation, which modifies each coefficient value by a small amount to force the activity to be quantized into a specific region. Simulation results show that the proposed method performs extremely well under the Checkmark benchmark with non-geometric attacks, such as linear filtering, remodulation, denoising, and compression. The proposed scheme is also robust against image cropping, downsampling, rotation, and column-removal attacks.
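The two block features described above can be computed as in the sketch below; the majority-vote rule used for the polarity and the choice of wavelet coefficients are assumptions, since the abstract does not fix them.

```python
import numpy as np

def block_polarity_and_activity(block):
    """Polarity from the count of coefficients above the block median (majority
    vote, an assumed decision rule) and activity index as the SAD to the median."""
    med = np.median(block)
    polarity = int(np.count_nonzero(block > med) >= block.size / 2)
    activity = np.abs(block - med).sum()
    return polarity, activity
```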