Digital watermarking methods for still images fall into two categories, depending on whether they cast the watermark in the spatial domain or in the transform domain. Transform-domain watermarking schemes have been found to be typically much more robust to image manipulation than spatial-domain schemes. In this paper, we present a watermarking method based on the wavelet transform that does not require the original image for watermark detection. In our method, the original image is decomposed using a multi-stage discrete wavelet transform. The watermark, generated by a pseudo-random sequence, is added to all high-frequency coefficients that are above a given threshold. The watermarked image is then obtained using the inverse discrete wavelet transform. For watermark detection, we calculate the correlation between the wavelet coefficients of a possibly corrupted watermarked image and the watermark; by comparing this correlation with a predefined threshold, the embedded watermark can be detected. Experimental results illustrate the robustness against smoothing, cropping, and JPEG compression.
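As a concrete illustration, the following sketch embeds and detects such a watermark with PyWavelets. It is a minimal reading of the scheme, not the authors' code; the wavelet, the embedding threshold T, and the strength alpha are illustrative assumptions.

    import numpy as np
    import pywt

    def _watermark(shapes, key):
        # Regenerate the same pseudo-random watermark from a secret key.
        rng = np.random.default_rng(key)
        return [tuple(rng.standard_normal(s) for s in lvl) for lvl in shapes]

    def embed(image, key, wavelet="db4", level=3, T=10.0, alpha=1.5):
        coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
        wm = _watermark([tuple(b.shape for b in lvl) for lvl in coeffs[1:]], key)
        for i in range(1, len(coeffs)):             # skip the approximation band
            coeffs[i] = tuple(b + alpha * w * (np.abs(b) > T)   # mark significant coeffs only
                              for b, w in zip(coeffs[i], wm[i - 1]))
        return pywt.waverec2(coeffs, wavelet)

    def detect(image, key, wavelet="db4", level=3, T=10.0):
        # Correlate the test image's detail coefficients with the watermark;
        # the returned score is compared against a predefined detection threshold.
        coeffs = pywt.wavedec2(image.astype(float), wavelet, level=level)
        wm = _watermark([tuple(b.shape for b in lvl) for lvl in coeffs[1:]], key)
        num, n = 0.0, 0
        for lvl, wlvl in zip(coeffs[1:], wm):
            for b, w in zip(lvl, wlvl):
                mask = np.abs(b) > T
                num += float((b[mask] * w[mask]).sum())
                n += int(mask.sum())
        return num / max(n, 1)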
In this paper, we propose a new watermarking method using vector sets of wavelet coefficients. In our proposal, unlawful attacks can be detected by finding the invalid vector sets they cause, and the original watermark information is not required for extracting the embedded data or detecting the attacks. A watermarking method using a pair of wavelet coefficients has already been proposed by H.S. Kim et al., but in their method the original image is needed to extract the embedded data and unlawful attacks can never be detected. Consequently, an author cannot strongly claim copyright over his content when the embedded information is lost to an attack, because even after an attack the watermark information is extracted without contradiction, rightly or wrongly. Our scheme, based on vector sets of wavelet coefficients, does not require the original image to extract the embedded data, and can detect unlawful attacks by detecting the invalid vector sets caused by them. Simulation results show that the proposed scheme yields good picture quality in the watermarked image and robustness to several types of image-processing attacks, including JPEG compression.
Pen-tablet devices capable of capturing the dynamics of writing record temporal and pressure information as well as the spatial pattern. This paper explores biometric verification based upon the dynamics of writing, where writers are distinguished not on the basis of what they write (i.e., the signature), but how they write. We collected samples of dynamic writing from 38 Chinese writers; each writer was asked to provide 10 copies of a paragraph of text and the same number of signature samples. From the data we extracted stroke-based primitives from the sentence data, using pen-up/down information and heuristic rules about the shape of the characters. The x, y and pressure values of each primitive were interpolated onto an even temporal grid based upon a 20 ms sampling rate. We applied the Daubechies-1 wavelet transform to the x, y and pressure signals, using the coefficients as inputs to a multi-layer perceptron trained with back-propagation on the sentence data. We found a sensitivity of 0.977 and a specificity of 0.990 when recognizing writers from test primitives extracted from sentence data, and corresponding measures of 0.916 and 0.961 for test primitives extracted from signature data.
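For concreteness, a sketch of the per-primitive feature extraction is given below, assuming PyWavelets and SciPy; the 64-point resampling length and the decomposition level are assumptions, not values from the paper.

    import numpy as np
    import pywt
    from scipy.interpolate import interp1d

    def stroke_features(t, x, y, p, n=64, level=3):
        # Resample one pen-down primitive onto an even temporal grid,
        # then apply the Daubechies-1 (Haar) transform to each channel.
        grid = np.linspace(t[0], t[-1], n)
        feats = []
        for sig in (x, y, p):
            resampled = interp1d(t, sig)(grid)
            feats.append(np.concatenate(pywt.wavedec(resampled, "db1", level=level)))
        return np.concatenate(feats)    # input vector for the multi-layer perceptron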
The main aim of this work is moving-object recognition in outdoor environments, in the context of video surveillance for intruder detection in archeological sites. Our goal is to propose an example-based learning technique to detect vehicles and people in dynamic scenes. The classification is based purely on the object's shape, not on its image content. First, motion information is used to detect the objects of interest; Haar wavelets are then used to represent the images, and finally a supervised three-layer neural network classifies the patterns.
We extend the traditional analysis of network vulnerability by searching for sequences of exploited vulnerabilities distributed throughout a network. While vulnerabilities considered in isolation may seem innocuous, when considered in combination they may lead to serious security breaches. Our approach establishes encoding rules to reason about interdependent vulnerabilities and exploits. It then reasons about the rules to perform critical failure analysis for a given network. We have developed a prototype software tool for automating the analysis, which can be integrated with existing network security tools such as vulnerability databases and network discovery tools. We demonstrate our approach through an example application. We also perform a scaling experiment to show the performance of our approach for larger networks.
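The abstract does not state the encoding rules themselves, so the sketch below is only an illustration of the underlying idea: exploits are modeled as precondition/postcondition pairs and chained by fixed-point reachability. All names are hypothetical.

    def reachable(initial_facts, exploits):
        # exploits: iterable of (preconditions, postconditions) frozensets.
        facts, fired = set(initial_facts), True
        while fired:                        # iterate to a fixed point
            fired = False
            for pre, post in exploits:
                if pre <= facts and not post <= facts:
                    facts |= post           # a new exploit becomes applicable
                    fired = True
        return facts

    # Hypothetical two-step chain: web access leads to database root.
    exploits = [
        (frozenset({"access(web)"}), frozenset({"user(web)"})),
        (frozenset({"user(web)", "trust(web,db)"}), frozenset({"root(db)"})),
    ]
    print("root(db)" in reachable({"access(web)", "trust(web,db)"}, exploits))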
Technological developments in fiber-optic communications introduce new technologies with their own set of vulnerabilities. Vulnerability issues at the optical layer are explored to the degree necessary to lay a foundation for the development of security measures to protect the communications infrastructure and communications signal. Several means of infusing security into the development of new optical communication devices and optical fiber technology are presented.
Wavelet and wavelet packet transforms are presently used for image compression and denoising. There has been recent progress on three fronts: implementing multiplication operations in wavelet bases, estimating compressibility by wavelet packet transform coding, and designing wavelet packets to control frequency spreading and pointwise convergence. Some open problems are mentioned.
The empirical mode decomposition (EMD) is an adaptive decomposition of the data, as is the wavelet-packet best-basis decomposition. This work presents a first attempt at examining the use of EMD for image compression. The intrinsic mode functions (IMFs) and their Hilbert spectra are compared to the wavelet basis and to wavelet-packet decompositions expanded in their best bases on the same data. By decomposing the signal into basis functions, the waveforms in the signal are represented by the basis and a set of decorrelated discrete values in a vector. A coding scheme is presented in which the signal is decomposed into its IMFs and only the local maximum and minimum values of each IMF are transmitted; the decoder reconstructs each IMF by spline interpolation. The two-dimensional EMD thus gives an adaptive image decomposition free of the limitations imposed by filter kernels or cost functions. In the two-dimensional case, the IMFs can be seen as spatial-frequency subbands whose center frequency and bandwidth vary across the image.
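The extrema-only coding step can be sketched as follows, assuming SciPy for the spline reconstruction; the EMD itself, which produces the IMFs, is outside the sketch.

    import numpy as np
    from scipy.interpolate import CubicSpline

    def encode_imf(imf):
        # Keep only the local maxima/minima (plus the endpoints) of one IMF.
        d = np.diff(imf)
        idx = np.where(d[:-1] * d[1:] < 0)[0] + 1   # slope sign changes
        idx = np.concatenate(([0], idx, [len(imf) - 1]))
        return idx, imf[idx]

    def decode_imf(idx, values, n):
        return CubicSpline(idx, values)(np.arange(n))   # spline reconstruction

    t = np.linspace(0.0, 1.0, 256)
    imf = np.sin(2 * np.pi * 8 * t) * (0.5 + t)     # a mock IMF
    idx, vals = encode_imf(imf)
    rec = decode_imf(idx, vals, imf.size)
    print(idx.size, "of", imf.size, "samples kept")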
This work investigates the wavelet packet transform and its ability to represent images efficiently, with a focus on image compression. The wavelet-packet basis-selection algorithm finds spatial-frequency resonance in the image. The different decomposition trees that represent the optimal basis for a given triplet of image, cost function, and filter give an impression of chaos, but for compression applications it does not matter that there is no typical tree for a particular image, nor any strong trend toward a certain type of tree in combination with a fixed filter or fixed cost function. The most important measure in image coding applications is believed to be the cost of coding the transform coefficients; it matters even more than the cost of choosing the optimal basis. When measuring the cost of coding the coefficient matrix, we are therefore free to choose a cost function that gives a convenient decomposition tree together with a good filter. We simulate the coding cost by estimating the entropy of the coefficient matrix. Results are presented from tests in which images from the Brodatz texture set were decomposed with different filters and different cost functions, and we also present calculations of the decision rule for whether or not to split a subband. With knowledge of the mean and variance of the input signal, the typical decomposition tree for the signal can be calculated using different image models.
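As one simple variant of the split decision, the following sketch compares a Shannon-entropy coding cost of a subband against that of its four children; the cost function and wavelet are placeholders for the ones varied in the experiments.

    import numpy as np
    import pywt

    def shannon_cost(c, eps=1e-12):
        # Entropy of the normalized coefficient energies, a proxy for coding cost.
        p = c.ravel() ** 2
        p = p / (p.sum() + eps)
        return float(-(p * np.log2(p + eps)).sum())

    def should_split(band, wavelet="db4"):
        # Split the subband only if its children are cheaper to code.
        cA, (cH, cV, cD) = pywt.dwt2(band, wavelet)
        children = sum(shannon_cost(b) for b in (cA, cH, cV, cD))
        return children < shannon_cost(band)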
Detecting man-made objects concealed in dense foliage is an ongoing challenge in military imaging. There is a need to detect, recognize and classify objects and background. Wavelet multiscale feature detection will be applied to Landsat images for the purpose of delineating man-made objects from foliage (or camouflage) which may be used to disguise them.
The bidirectional reflectance distribution function (BRDF) is a complex function characterizing the reflection of light from a surface. It depends on five variables: four angles (the lighting direction and the observer direction) and the wavelength. A complete measurement campaign generates a large data set that is difficult to model. One way to proceed is to fit an analytical model to this data set: a numerical optimization technique, such as the simplex method, retrieves the best parameters of the model by minimizing the error with respect to the measurements. Most analytical models obtain poor results for specular surfaces, and no wavelength-dependent model currently exists. These reasons led us to choose a numerical approach, and in particular wavelets. This paper shows how wavelets can be used to provide an efficient BRDF model. Modeling results are presented over a large collection of measured data sets. At a fixed wavelength, the wavelet model performs well, comparably to the best analytical models for diffuse surfaces and much better for specular surfaces; the global relative error is lower than 5% with a compression ratio better than 90%. For spectral data sets, the wavelet model also performs very well, with compression ratios greater than 95% and errors lower than 2%.
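A minimal sketch of the underlying compression mechanism, assuming PyWavelets on a 2-D tabulated BRDF slice at a fixed wavelength; the keep-fraction and wavelet are illustrative, not the paper's values.

    import numpy as np
    import pywt

    def compress_slice(brdf2d, keep=0.05, wavelet="db4", level=3):
        coeffs = pywt.wavedec2(brdf2d, wavelet, level=level)
        arr, slices = pywt.coeffs_to_array(coeffs)
        thr = np.quantile(np.abs(arr), 1.0 - keep)  # keep largest coefficients
        arr = np.where(np.abs(arr) >= thr, arr, 0.0)
        rec = pywt.waverec2(pywt.array_to_coeffs(arr, slices, output_format="wavedec2"),
                            wavelet)[:brdf2d.shape[0], :brdf2d.shape[1]]
        rel_err = np.linalg.norm(rec - brdf2d) / np.linalg.norm(brdf2d)
        return rec, rel_err        # rel_err is what the 5% / 2% figures refer to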
In image registration, we determine the most accurate match between two images, which may have been taken at the same or different times by identical or different sensors. Correlation and mutual information have previously been used as similarity measures for determining the best match for remote sensing images. Mutual information, or relative entropy, is a concept from information theory that measures the statistical dependence between two random variables, or equivalently the amount of information that one variable contains about another. It has been successfully applied to the automatic registration of remote sensing images, based on the assumption that the mutual information of the image intensity pairs is maximized when the images are geometrically aligned. The transformation that maximizes a given similarity measure has previously been determined by exhaustive search, which is inefficient and computationally expensive. In this paper we apply a simple yet powerful technique based on the stochastic gradient to the maximization of both similarity measures on remote sensing images, and compare its performance to that of exhaustive search. We initially consider images misaligned by a rotation and/or translation only, and compare the accuracy and efficiency of an optimization-based registration scheme on this data. In addition, the effect of wavelet pre-processing on the efficiency of a multi-resolution registration scheme is determined using Daubechies wavelets. Finally, we evaluate this optimization scheme for the registration of satellite images obtained at different times and from different sensors. Once a correct optimization result is obtained at one of the coarser levels of the multi-resolution scheme, the registration process reaches subpixel accuracy much faster and is more robust than single-level optimization. Mutual information was generally found to optimize in about one third of the time required by correlation.
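For reference, the mutual-information measure itself reduces to a joint-histogram computation; a minimal sketch follows (the bin count is an assumption). The registration loop then adjusts the rotation/translation parameters, e.g. by stochastic-gradient steps, to maximize this score.

    import numpy as np

    def mutual_information(a, b, bins=64):
        # Joint histogram of corresponding intensities in the overlap region.
        hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
        pxy = hist / hist.sum()
        px = pxy.sum(axis=1, keepdims=True)
        py = pxy.sum(axis=0, keepdims=True)
        nz = pxy > 0
        return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())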
The greatest advantage gained from hyperspectral imagery is that narrow spectral features can be used to give more information about materials than was previously possible with broad-band multispectral imagery. For many applications, however, the larger data volumes from such hyperspectral sensors present a challenge for traditional processing techniques. For example, the identification of each ground-surface pixel by its reflected spectral signature is still one of the most difficult challenges in the exploitation of this technology, because of the immense volume of data collected. Conventional classification methods therefore require a dimension-reduction preprocessing step to overcome the so-called curse of dimensionality. Spectral data reduction using wavelet decomposition is useful here, as it not only reduces the data volume but also preserves the distinctions between spectral signatures. This characteristic follows from the intrinsic property of wavelet transforms of preserving high- and low-frequency features during signal decomposition, and therefore the peaks and valleys found in typical spectra. Compared with the most widespread dimension-reduction technique, principal component analysis (PCA), at the same compression rate, we show that wavelet reduction yields better classification accuracy for hyperspectral data processed with a conventional supervised classifier such as the maximum-likelihood method.
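A sketch of the reduction step, assuming PyWavelets: each pixel's spectrum is decomposed and only the coarse approximation is kept as the reduced feature vector. The wavelet, level, and band count are illustrative.

    import numpy as np
    import pywt

    def reduce_cube(cube, wavelet="db3", level=3):
        # cube: (rows, cols, bands) hyperspectral image.
        rows, cols, bands = cube.shape
        spectra = cube.reshape(-1, bands)
        reduced = np.array([pywt.wavedec(s, wavelet, level=level)[0] for s in spectra])
        return reduced.reshape(rows, cols, -1)      # far fewer "bands", peaks preserved

    print(reduce_cube(np.random.rand(4, 4, 192)).shape)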
The theory of modeling linear communication channels by wavelets is reviewed. It is shown that wavelet-based channel identification can converge fast and be a flexible tool for dealing with many different signal propagation situations. Moreover, wavelet-based identification can provide robustness against narrow band noise and/or impulse noise. Possibilities for sparse channel representations are emphasized and illustrated by real life examples. A wavelet packet decomposition that maximizes the convergence speed of the LMS algorithm used in channel identification is discussed. Comparisons with the DFT, DCT, and other transforms are given.
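A minimal sketch of transform-domain LMS identification follows; the wavelet, regressor length, and power-normalized step sizes are common choices and assumptions here, meant only to illustrate why a decorrelating transform speeds up convergence.

    import numpy as np
    import pywt

    def transform_lms(u, d, n_taps=8, mu=0.5, eps=1e-6):
        # Identify a channel from input u and observed output d; the weight
        # vector w represents the channel in the wavelet basis.
        w = np.zeros(n_taps)
        p = np.full(n_taps, eps)                    # running per-coefficient power
        for k in range(n_taps, len(u)):
            z = np.concatenate(pywt.wavedec(u[k - n_taps:k][::-1], "haar", level=2))
            e = d[k] - w @ z
            p = 0.99 * p + 0.01 * z * z
            w += mu * e * z / (p + eps)             # normalized step per channel
        return w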
It has been shown that variable bit rate (VBR) video exhibits long-range dependence. Several models have been proposed to synthesize traces whose autocorrelation matches that of video traffic traces. Given the relationship between self-similarity and long-range dependence, we investigate the application of the discrete-time linear scale-invariant (DLSI) systems of Zhao and Rao to modeling VBR video traffic. This formulation was derived using a continuous dilation operator in discrete time as a direct analog of the continuous-time linear scale-invariant system formulation of Wornell, and differs from other approaches such as wavelet-based constructions. While simulations had shown that DLSI systems are capable of generating self-similar data such as network traffic, questions about their long-range behavior remained; those questions are addressed in this paper. We present an alternative technique for modeling VBR video traffic using a DLSI system. The proposed model has a fractional pole-zero structure and provides a good model for long-range dependent and self-similar time series. The behavior of the fractional pole-zero filter over a range of the Hurst parameter is also shown, and the time-domain characteristics of the filter are analyzed using power-series approximations. The output autocorrelation function (ACF) is compared to the ACF of the video traces. We conclude that some samples of video data are indeed better modeled by the LSI models than by conventional models such as Markovian and long-range dependent models.
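As a simple illustration of long-range dependent synthesis (not the paper's DLSI construction), a single fractional pole (1 - z^{-1})^{-d} driven by white noise already produces a trace with Hurst parameter H = d + 0.5; its impulse response follows from a power-series expansion of the kind mentioned above.

    import numpy as np

    def fractional_pole_coeffs(d, n):
        # psi_k = psi_{k-1} * (k - 1 + d) / k, the power series of (1 - z)^(-d).
        psi = np.empty(n)
        psi[0] = 1.0
        for k in range(1, n):
            psi[k] = psi[k - 1] * (k - 1 + d) / k
        return psi

    H = 0.8                                         # illustrative Hurst value
    psi = fractional_pole_coeffs(H - 0.5, 4096)
    trace = np.convolve(np.random.standard_normal(8192), psi)[:8192]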
The processing of colored documents with document management systems (DMS) is possible with modern document scanning systems today. Because of the enormous amount of image data generated by scanning a typical A4 document at 300 dpi resolution, image compression is used, and the JPEG compression scheme is widely used for such image data. The loss of image quality caused by the necessary lossy compression can significantly reduce the recognition quality of a subsequent optical character recognition (OCR) process, which is essential to any DMS. The new standard JPEG2000 (Part 6), a high-performance system for compressing and archiving scanned documents, particularly those containing both text and images, closes the gap between high compression and legibility of documents managed inside DMS systems. The use of JPEG2000 (Part 6) results in substantially higher image quality than standard compression techniques. This high quality is achieved by combining automatic text detection with bitonal compression of text and color/grayscale wavelet compression of images. Since the innovative JPEG2000 (Part 6) compression scheme is a complex image processing system demanding considerable computational performance, a scalable software system has been designed to meet the throughput of high-performance document scanners.
This paper addresses the problem of designing the low-pass filter of a two-channel IIR orthogonal perfect-reconstruction filter bank to match a specified frequency response. The derived solution treats the numerator and denominator of the rational transfer function independently. Based on the proposed solution, a low-pass filter is matched to a desired frequency response, with the response constrained to have multiple zeros at π. The end result is a low-pass filter with a smoother response than FIR-based solutions for a given number of filter coefficients, while remaining optimal in the sense of closeness to the desired function. The paper concludes by demonstrating the solution by matching the low-pass filter of a Meyer wavelet.
Wavelet filter banks are potentially useful tools for analyzing and extracting information from frequency modulated (FM) signals in noise. Chief among the advantages of such filter banks is the tendency of wavelet transforms to concentrate signal energy while simultaneously dispersing noise energy over the time-frequency plane, thus raising the effective signal to noise ratio of filtered signals. Over the past decade, much effort has gone into devising new algorithms to extract the relevant information from transformed signals while identifying and discarding the transformed noise. Therefore, estimates of the ultimate performance bounds on such algorithms would serve as valuable benchmarks in the process of choosing optimal algorithms for given signal classes. Discussed here is the specific case of FM signals analyzed by Morlet wavelet filter banks. By making use of the stationary phase approximation of the Morlet transform, and assuming that the measured signals are well resolved digitally, the asymptotic form of the Fisher Information Matrix is derived. From this, Cramer-Rao bounds are analytically derived for simple cases.
This paper examines the application of joint time-frequency analysis (JTFA) to the detection of weak targets that are close to the powerful ocean clutter in signals received by over-the-horizon radar (OTHR). Issues of interest are the detection of targets near time-varying clutter and the use of JTFA to model the clutter for the purpose of excision. Comparisons are made between the spectrogram and the particular JTFA algorithm applied here, namely, the Smoothed Pseudo Wigner-Ville Distribution (SPWVD). Results are shown for real and synthetic signals. The SPWVD of the real signal has been successfully modeled by a synthetic signal, which increases our understanding of the behavior of this transform when applied to OTHR clutter data.
In over-the-horizon radar (OTHR) moving-target detection, the signal-to-clutter ratio (SCR) is low. One method to detect a moving target is to first reject the clutter and improve the SCR before detection, for instance with the adaptive Fourier transform developed by Root for uniformly moving targets. When a target does not move uniformly, Fourier-based detection techniques, including super-resolution techniques, may not work well. In this paper, we replace the Fourier transform with the adaptive chirplet transform in the OTHR Doppler processing when a target moves non-uniformly.
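The idea can be sketched as a dechirping search: multiply the slow-time signal by candidate quadratic-phase references and keep the chirp rate that best concentrates the spectrum. This is a simplified stand-in for the adaptive chirplet transform, with illustrative parameters.

    import numpy as np

    def dechirp_search(x, fs, rates):
        # x: slow-time samples of one range cell; rates: candidate chirp rates (Hz/s).
        t = np.arange(len(x)) / fs
        def peak(a):
            return np.abs(np.fft.fft(x * np.exp(-1j * np.pi * a * t ** 2))).max()
        return max(rates, key=peak)                 # best-matching acceleration term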
In many applications, signals usually consist of multiple linear frequency modulated signal components. To analyze these signals in the joint time-frequency domain, rotational time-frequency transforms are very useful. In this paper, we introduce the basic concept of rotational time-frequency transforms, discuss some properties of the fractional Fourier transform, describe a bilinear generalized-marginal time-frequency transform, and present the Radon transform for line feature detection and estimation. Finally, we apply the rotational time-frequency transform to multiple moving target detection in synthetic aperture radar (SAR) imaging.
The use of the Radon transform and the Wigner-Radon power spectrum for ISAR motion compensation is described. It is shown that the local Radon power spectrum is closely related to Cohen's class of quadratic time-frequency representations, in a way similar to how the Radon and Fourier transforms are related. The peak of the local Radon transform is used as a measure of a moving target's velocity towards the radar; the velocity estimate can be used to align the range profiles and correct the target's radial motion. Another application of the Radon transform is correction for the time variation of the signal's Doppler frequency during the coherent processing interval. The Radon transform of the cross-range time-frequency representation of the signal is used to focus an ISAR image that has been blurred by non-uniform target rotation.
Starting with a recollection of the work done by Pierre and Jacqueline Bertrand, who introduced the use of phase-space representations in radar imaging, we will try to show the strong connection between generalized wavelets, time-frequency distributions, and various areas of radar imaging (especially synthetic aperture radar imaging).
Time-frequency analysis is useful for inverse synthetic aperture radar (ISAR) imaging of aircraft with general motion. For applications of ISAR images, quantifiable performance is preferable. Such performance can be described in terms of the effective resolution and dynamic range of an image, quantities that can be obtained from impulse responses. The problem with this approach is that time-frequency ISAR impulse responses are signal dependent. Based on two representative test signals, four Cohen's-class time-frequency methods are compared. The comparison uses the resolution and dynamic range estimated from selected azimuth impulse responses. The adaptive optimal kernel estimator is found to be the best of the four tested: it is less signal dependent than the other methods and has a reasonable dynamic range for radar imaging.
ISAR images of moving targets are susceptible to distortion, for example blurring due to small random fluctuations in the targets' motions. Small perturbations in the pitch, roll or yaw motion of an in-flight aircraft can introduce significant distortion into its ISAR images. In this paper, experimental results on the distortion produced by small perturbed yaw motion of a target are discussed. Numerical simulations of the distorted motions are compared with experimental results. This comparative analysis allows one to characterize the distortion mechanism and gain some insight into the distortion process in radar imaging. It is found that even a very small perturbation of the motion can be sufficient to introduce sizeable distortion into the radar images. The distortion can be attributed to the temporal phase-modulation effect in the imaging process.
In this paper, we present the results of target recognition research based on the moment functions of various radar signatures, such as time-frequency signatures, range profiles, and scattering centers. The proposed approach utilizes geometrical moments or central moments of the obtained radar signatures. In particular, we derived exact, closed-form expressions for the geometrical moments of the adaptive Gaussian representation (AGR), one of the adaptive joint time-frequency techniques, and also computed the central moments of range profiles and of one-dimensional (1-D) scattering centers on a target, which are obtained by various super-resolution techniques. The obtained moment functions are further processed to provide low-dimensional, redundancy-free feature vectors, and classified via a neural network approach or a Bayes classifier. The performance of the proposed techniques is demonstrated using a simulated radar cross section (RCS) data set, or a measured RCS data set of various scaled aircraft models, obtained at the Pohang University of Science and Technology (POSTECH) compact-range facility. Results show that the techniques in this paper not only provide reliable classification accuracy but also save computational resources.
In this paper we propose an integration of speckle reduction and edge detection in synthetic aperture radar (SAR) images using an overcomplete wavelet decomposition. The input image is decomposed into multiple levels without downsampling, as resolution needs to be preserved. For each subband, a threshold value is estimated from the noise variance and used for soft-thresholding to reduce speckle. Points of sharp variation (edges) induce modulus maxima in the highpass subbands, and these local maxima are detected to produce single-pixel edges. The level of decomposition can be selected according to the amount of detail desired in the edges. The method is successfully applied to JERS-1/SAR images, and experimental results are given.
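A sketch of the despeckling half of the scheme, assuming PyWavelets' stationary (undecimated) 2-D transform; the log transform and the variance-based universal threshold are common choices and assumptions here. The image sides must be divisible by 2**level.

    import numpy as np
    import pywt

    def despeckle(img, wavelet="haar", level=2):
        # Log transform turns multiplicative speckle into additive noise.
        coeffs = pywt.swt2(np.log1p(img.astype(float)), wavelet, level=level)
        out = []
        for cA, (cH, cV, cD) in coeffs:
            sigma = np.median(np.abs(cD)) / 0.6745      # robust noise estimate
            T = sigma * np.sqrt(2.0 * np.log(cD.size))  # variance-based threshold
            out.append((cA, tuple(pywt.threshold(b, T, mode="soft")
                                  for b in (cH, cV, cD))))
        return np.expm1(pywt.iswt2(out, wavelet))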
The binary-tree best-basis search method developed by Coifman and his colleagues is well known and widely used in wavelet packet applications. However, the requirement that the basis vectors be chosen from either a parent or its direct children in the binary-tree structure is a limitation, because the search does not cover all possible orthogonal bases and therefore may not give an optimal result. We have recently found that the set of all possible orthogonal bases in a wavelet packet is much larger than the set searched by the binary-tree method. Based on this observation, we have developed a new way to search for the best basis among the larger set of orthogonal bases. Preliminary results on 1D signals show that the new method achieves considerable improvement in signal compression and feature extraction. The method can also be readily extended to 2D images for improved image compression, feature extraction, and object classification.
Color image processing is investigated in this paper using an algebraic approach based on triplet numbers. In the algebraic approach, each image element is considered not as a 3D vector, but as a triplet number. The main goal of the paper is to show that triplet algebra can be used to solve color image processing problems in a natural and effective manner. In this work we propose novel methods for implementing wavelet transforms in the color triplet-valued space.
There is currently considerable interest in methods of invariant 3D image recognition. Indeed, information about 3D objects can very often be obtained by computed tomographic reconstruction, 3D magnetic resonance imaging, passive 3D sensors or active range finders. Consequently, algorithms for the systematic derivation of 3D moment invariants should be developed for 3D color object recognition. In this work we propose a theory that describes many such invariants. Our theory is based on the theory of triplet numbers and quaternions. We propose Hurwitzon-valued invariants, which are related to the description of objects as the zero sets of implicit polynomials. These are global invariants which show great promise for the recognition of complicated objects, and they have good discriminating power for the computer recognition of 3D color objects using statistical pattern recognition methods. For fast computation of the Hurwitzon-valued invariants we use the modular arithmetic of Galois fields and rings, which maps the calculation of invariants to a fast number-theoretic Fourier-Galois-Hamilton transform.
FIR filters are routinely used in the implementation of modern digital signal processing systems such as the discrete wavelet transform. Their efficient implementation in commercially available VLSI technology is a subject of continuous study and development. This paper presents the implementation, on modern Altera APEX20K field-programmable logic (FPL) devices, of reduced-complexity, high-performance FIR filters by means of the residue number system (RNS). Index arithmetic over Galois fields and the quadratic residue number system (QRNS), together with the selection of a small-wordwidth modulus set, are the keys to attaining low complexity and high throughput in real and complex FIR filters. RNS-FPL merged FIR filters were about 65% faster than two's-complement (2C) designs and required fewer logic elements (LEs) in most cases. An index-arithmetic QRNS-based complex FIR filter yielded even better results: it was up to 60% faster than the three-multiplier-per-tap filter and required fewer LEs for filters with more than 8 taps. In particular, a 32-tap filter needed 24% fewer LEs than the classical design.
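A small sketch of the arithmetic that makes this work: each multiply-accumulate runs independently in narrow modulus channels with no carries between them, and the wide result is recovered with the Chinese remainder theorem. The modulus set below is illustrative.

    from math import prod

    MODULI = (251, 253, 255, 256)                   # pairwise coprime, 8-bit channels

    def to_rns(x):
        return tuple(x % m for m in MODULI)

    def rns_mac(acc, a, b):
        # acc + a*b, channel by channel; each channel fits narrow hardware.
        return tuple((r + p * q) % m for r, p, q, m in zip(acc, a, b, MODULI))

    def from_rns(res):
        # Chinese remainder theorem reconstruction of the wide result.
        M = prod(MODULI)
        return sum(r * (M // m) * pow(M // m, -1, m) for r, m in zip(res, MODULI)) % M

    acc = to_rns(0)
    for c, s in [(3, 7), (5, 11)]:                  # two filter taps
        acc = rns_mac(acc, to_rns(c), to_rns(s))
    print(from_rns(acc))                            # 3*7 + 5*11 = 76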
Canonical polynomial forms taken from Thom's cuspoid family of catastrophes are studied as anharmonic oscillator potentials via time-frequency representations. Polynomial coefficients are control parameters, allowed to vary with time, inducing sudden changes in instantaneous equilibrium positions as well as characteristic oscillations about those equilibria that exhibit slowly varying spectra. Higher order potentials organize lower order potentials, which are contained in the higher order potentials, resulting in similar time-frequency signatures of the Newtonian dynamics ensuing as the potential changes with time.
During the last decade, the optics community has shown interest in building bridges between mathematical wavelets and optical phenomena. We first review some of the previous work on the subject, namely the optical implementation of the transform and its use in optical pattern matching. A short discussion of works that unfortunately fall short of explaining scalar diffraction in terms of a wavelet transform is also presented. At this point, we introduce the physical wavelet Ψ. After portraying the mathematical properties of Ψ, we describe its contributions to the optical world. This wavelet being a solution of Maxwell's equations, we derive interesting optical properties from its mathematical behavior. For instance, looking more closely at the scalar projection of this wavelet, we demonstrate the equivalence between Huygens' diffraction principle and the wavelet transform with Ψ as the transformation kernel. Another application involves a closely related form of this wavelet that can be used to generate limited-diffraction beams.
In this paper lifting is used for similarity analysis and classification of sets of similar medical images. The lifting scheme is an invertible wavelet transform that maps integers to integers; it provides efficient in-place calculation of the transform coefficients and is widely used for the analysis of similar image sets. Images of a similar set show high degrees of correlation with one another, and this inter-set redundancy can be exploited for prediction, compression, feature extraction, and classification. This research intends to show that there is a higher degree of correlation between images of a similar set in the lifting domain than in the pixel domain; such a high correlation results in more accurate classification and prediction of images in a similar set. Several lifting schemes from the Calderbank-Daubechies-Feauveau family were used in this research. The research shows that some of these lifting schemes decorrelate the images of similar sets more effectively than others, and presents a statistical analysis of the data in scatter plots and regression models.
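For concreteness, one member of that family, the CDF 5/3 integer-to-integer lifting step, can be sketched as a predict pass followed by an update pass; the symmetric edge handling below is a simplification.

    import numpy as np

    def cdf53_forward(x):
        # One lifting level; len(x) must be even. Returns (approx, detail).
        x = np.asarray(x, dtype=np.int64)
        even, odd = x[0::2].copy(), x[1::2].copy()
        right = np.append(even[1:], even[-1])
        d = odd - ((even + right) >> 1)             # predict odd from even neighbors
        left_d = np.insert(d[:-1], 0, d[0])
        s = even + ((left_d + d + 2) >> 2)          # update even from the details
        return s, d

    def cdf53_inverse(s, d):
        # Undo the lifting steps in reverse order; exact for integers.
        left_d = np.insert(d[:-1], 0, d[0])
        even = s - ((left_d + d + 2) >> 2)
        right = np.append(even[1:], even[-1])
        odd = d + ((even + right) >> 1)
        x = np.empty(even.size + odd.size, dtype=np.int64)
        x[0::2], x[1::2] = even, odd
        return x

    x = np.random.randint(0, 256, 16)
    s, d = cdf53_forward(x)
    assert (cdf53_inverse(s, d) == x).all()         # exact integer reconstruction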
A wavelet-lifting scheme that maps integers to integers, performs all calculations in-place, and is computationally efficient is used in this paper. It processes 2-D medical images row by row producing an equivalent 1-D signal. The interdependency of pixels in 2-D medical images is known to vary in different regions. Thus, some lifting schemes decorrelate the resultant signal more efficiently than others do. The effect of different scanning approaches on the performance of several lifting schemes is presented.
Here we present the application of wavelet functions to compress the electroencephalogram (EEG) in a multichannel system. The system consists of electronic components such as bio-amplifiers, analog filters, a multi-channel ADC, a microprocessor, PCMCIA memory, etc. The compression algorithms for the EEG signal were implemented in C/C++. The coefficients of the digital FIR filter in the compression algorithm were chosen as the coefficients of Daubechies wavelets. The experiments presented in this paper demonstrate the possibility of compressing the EEG in real time, and the fidelity of the compressed EEG signal has been verified. Timing results for the compression are presented for a DSP and an AMD586 processor.
A transform-model based approach for suppressing noise in frequency modulated (FM) signals is presented. This approach is based on a model of the expected output of a wavelet filter bank in response to a noise-free FM signal. Resolving the discrepancy between the filter bank output in response to a noisy FM signal and the expected output given by the model provides the mechanism for noise suppression. Specifically, a stationary phase approximation to the Morlet wavelet transform is used to form the model. The approach is shown to perform favorably on a numerical example when compared to both simple lowpass filtering (linear) and wavelet thresholding (non-linear) denoising techniques.
We present an approach to complex signal identification that uses a non-linear transformation into a 2-D (image) domain as a fundamental first step. Motivating this approach is the observation that many complex signals of interest have characteristic complex-plane behaviors when viewed under certain invariance rules, e.g., rotation and/or scaling in the complex plane. Orthonormal bases in 2D that exhibit special properties may be employed to some advantage for 1D classification. Specifically, we use the Zernike transform to yield rotationally invariant features of complex 1D signals. These features may be further projected into a low-dimensional subspace via a standard Fisher analysis in the context of a specific data set. Using a small data set consisting of six different sources, the method is shown to perform well and exhibit a high level of noise robustness. The resulting feature vector is of low dimensionality and has reasonable computational cost.
Rocket noise is an inherent behavior of rocket exhaust. A static test of a rocket engine with synthetic propellant was conducted in a rocket-engine static test laboratory to study the properties of the noise. The near-field rocket noise has high intensity and is rich in shocks that are perceived as crackle. Wavelet transforms were used to analyze the near-field solid-rocket noise measured by piezo-resistive pressure transducers in the experiment. The Mexican hat wavelet was used to detect the sharp edges of shock waveforms in the near-field noise; shock waveforms of large magnitude and short duration were detected. The discrete Daubechies-10 wavelet is used to decompose the noise data, and features of the near-field noise are analyzed at each scale. The concept of a scale energy spectrum based on wavelet coefficients is also introduced to study the noise data, and the energy distribution of the near-field noise in the scale domain is illustrated in this paper. Scale concentration of the near-field energy is identified by the scale energy spectrum. The distinct difference between near-field and far-field rocket noise is that the energy of near-field noise concentrates in a much higher frequency band than that of far-field noise. From the analysis, a tendency can be perceived that the energy peak moves slightly from small scales (high frequency) to large scales (low frequency) as the test position moves away from the exit plane. Wavelet transforms prove useful in the analysis of such noise signals, which are rich in shock waveforms.
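The scale energy spectrum reduces to a per-level energy fraction of the wavelet coefficients; a minimal sketch follows, using a Daubechies-10 filter as in the paper (the level count is an assumption).

    import numpy as np
    import pywt

    def scale_energy_spectrum(x, wavelet="db10", level=8):
        coeffs = pywt.wavedec(x, wavelet, level=level)
        energy = np.array([float((c ** 2).sum()) for c in coeffs[1:]])  # detail bands
        return energy / energy.sum()                # fraction of energy per scale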
Airborne optical sensors include a laser rangefinder, an infrared search and track (IRST) system or forward-looking infrared (FLIR) system, as well as a visible TV system and an image intensifier. These optical systems are deployed in widely varying environments and must be capable of withstanding the harsh land, air and sea conditions associated with military operations, such as impact from airborne sand, grit, insects and raindrops. To address these problems, a wide-spectral-band dome is often used to protect the optical system. This paper describes a hemispherical airborne dome and analyzes the effect of the dome's material and shape on imaging performance. An improved design is discussed at the end.
Wavelet transforms via the lifting scheme are called second-generation wavelet transforms. However, in some lifting schemes the coefficients are derived mathematically from first-generation wavelets, so the choice of high-performance filters for lifting is limited. The spatial structures of lifting schemes are also simple: the classical lifting scheme, predict-update, is two-stage, and most researchers simply adopt this structure. In addition, in most published designs the lifting filters are not only hard to obtain but also fixed. In our earlier work we presented a new three-stage lifting scheme, predict-update-adapt, whose filter designs are no longer fixed. In this paper we continue to study the spatial model of the lifting scheme. A group of general multi-stage lifting schemes is derived and designed: all lifting filters are designed in the spatial domain with suitable mathematical methods, and the resulting coefficients are flexible and can be adjusted to different data. We give the mathematical design details in this paper. Finally, all of the designed lifting models are used in image compression, and satisfactory results are achieved.
Powerline communication has become interesting as a new choice of communication medium, using OFDM for Internet access in Europe and TDMA for power-meter reading in Japan, mimicking the function of phone-line DSL. However, this raw copper medium, which lacks the hefty infrastructure investment of telephone twisted-pair DSL, poses many challenges: it was designed to carry electrical current over a power grid isolated by transformers, it has nothing inherently suitable for conveying data, yet it is everywhere in the last mile of connecting households. To make matters worse, the powerline is easily disturbed by unpredictable impulsive noise, colored background noise, and severe signal attenuation. Our goal is to use the US city-grid powerline as a supplement for single-user, multiple-sensor broadcasting applications for security. To communicate successfully through the powerline for household/stadium/subway/traffic-light security applications, e.g. separate video feeds finding their way to a single owner's PC through the common powerline, we develop and test appropriate sparse coding, compression, and error correction to accomplish the task within a limited bandwidth. In this paper we do not apply our human-sensory-preserving compression code; instead we concentrate on sparse-coding blind source separation with less bandwidth-demanding audio signals. We describe a detailed model of powerline topologies based on realistic powerline data, and quantify the error rates of data transmitted on the powerline when noise is present in the channel.
In this paper, an efficient video compression mechanism (compression ratio approximately 1000:1) is presented that preserves not only visual acuity but also perceptual fidelity. This is possible because we make novel use of the human visual system (HVS). It is well known that our eyes attend not to image pixels but to intensity changes; thus, traditional compression based on reducing the dynamic range at every pixel can be overkill. We furthermore label each intensity discontinuity with the jump in the derivative across the boundary as the fingerprint of the edge, and that label, together with its actual gray-scale value, forms the singularity map (SM), which enables us to uniquely identify corresponding control points among neighboring frames without the usual exhaustive search over all image pixels. The proposed video compression consists of two parallel operations: (1) sending a lossy compression of the full image using HVS wavelets, without the block artifacts of the DCT in JPEG or MPEG, and (2) inserting the original gray-scale edges, preserved by the SM, before the lossy compression operation. Although the HVS can extract the SM without convolution broadening, in effect, if we separate the address from the actual value, we can derive the address, while accounting for the usual convolution broadening, using the Sobel, Canny, or any other edge operator. Once the SM address is sharpened, by inverse diffusion or other means, we can straightforwardly look up the actual gray-scale edge value in the original image. Since the correspondence between the singularity maps of neighboring frames is tracked in time, we need only send the change of address, without sending the next frame, which further compresses the temporal redundancy from kilobytes per frame to kilobits on average. This is the essential principle that enables us to transmit live video over the SINCGARS radio at a narrow bandwidth of 16 kbps.
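The following minimal sketch shows the address half of a singularity map, pairing edge locations found by a Sobel operator with the original gray levels at those locations. It is only an illustration of the idea: a plain Sobel gradient does suffer the convolution broadening that the paper's HVS extraction avoids, and the threshold value is an assumption.

```python
import numpy as np
from scipy import ndimage

def singularity_map(frame, grad_thresh=50.0):
    """Record the addresses of strong intensity discontinuities and the
    exact gray-scale values there (a toy stand-in for the paper's SM)."""
    gx = ndimage.sobel(frame.astype(float), axis=1)
    gy = ndimage.sobel(frame.astype(float), axis=0)
    magnitude = np.hypot(gx, gy)               # edge strength ("jump" proxy)
    rows, cols = np.nonzero(magnitude > grad_thresh)
    values = frame[rows, cols]                 # actual gray levels at the edge
    return np.stack([rows, cols], axis=1), values

frame = np.zeros((64, 64), dtype=np.uint8)
frame[16:48, 16:48] = 200                      # bright square with sharp edges
addresses, values = singularity_map(frame)
print(addresses.shape[0], "edge pixels recorded")
```

Tracking only frame-to-frame changes in `addresses`, rather than retransmitting frames, is what yields the temporal compression from kilobytes to kilobits described above.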
Image and video indexing has become important with the recent growth of digital image collections. The majority of indexing techniques proposed in the literature are based on features extracted from the entire image. In this paper, images are segmented by the K-means clustering algorithm in order to allow searching and retrieval at the region level. Further, the regions are classified into object and non-object classes, so that regions of each type may be retrieved using features tailored to that type: the object class contains clumped regions, while the non-object class contains regions scattered across the entire image scene, as determined by a χ² statistic. The retrieval performance of four region-indexing techniques, one histogram-based and three wavelet-based, is evaluated. All four indexing techniques are suitable for general-domain image collections. The evaluation shows that, although the three wavelet-based region-indexing techniques deliver comparable performance on non-object region queries, the histogram-based technique outperforms them on object region queries. The histogram technique is thus more suitable for indexing object regions; on the other hand, the wavelet-based techniques are more suitable for non-object region indexing.
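A sketch of the first two steps of such a region pipeline appears below: K-means segmentation of pixels followed by a per-region histogram index. The paper's actual features and parameters are not specified here, so the use of raw RGB values, k=4, and a 16-bin gray-level histogram are all assumptions.

```python
import numpy as np
from scipy.cluster.vq import kmeans2

def segment_regions(image, k=4):
    """Segment an image into k regions by K-means clustering of
    per-pixel color features (plain RGB stands in for real features)."""
    h, w, c = image.shape
    pixels = image.reshape(-1, c).astype(float)
    _, labels = kmeans2(pixels, k, minit='++')
    return labels.reshape(h, w)                # region label for every pixel

def region_histogram(image, labels, region, bins=16):
    """Index one region by a normalized gray-level histogram."""
    gray = image.mean(axis=2)
    values = gray[labels == region]
    hist, _ = np.histogram(values, bins=bins, range=(0.0, 255.0))
    return hist / max(hist.sum(), 1)

img = np.random.default_rng(0).integers(0, 256, size=(64, 64, 3))
labels = segment_regions(img)
print(region_histogram(img, labels, region=0))
```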
We have shown that remote sensing optical imaging for detailed sub-pixel decomposition is a unique application of blind source separation (BSS): the mixing is truly linear for faraway weak signals, instantaneous at the speed of light without delay, and along the line of sight without multiple paths. In earlier papers, we presented a direct application of a statistical-mechanics de-mixing method called the Lagrange Constraint Neural Network (LCNN). While the BSAO algorithm (using an a posteriori MaxEnt ANN and neighborhood pixel averaging) is not acceptable for remote sensing, the mirror-symmetric LCNN approach is, because it assumes a priori MaxEnt for the unknown sources, averaged over the source statistics (not over neighborhood pixel data) in a pixel-by-pixel independent fashion. LCNN reduces the computational complexity, saves a great number of memory devices, and cuts the cost of implementation. The Landsat system is designed to measure radiation in order to deduce surface conditions and materials; for any given material, the amount of emitted and reflected radiation varies with wavelength. In practice, a single pixel of a Landsat image has seven channels receiving 0.1 to 12 microns of radiation from the ground within a 20 x 20 meter footprint containing a variety of radiating materials. The a priori LCNN algorithm captures the spatial-temporal variation of the mixture, which is hardly de-mixable by other a posteriori BSS or ICA methods. We have already compared the two methods on Landsat remote sensing at WCCI 2002 in Hawaii. Unfortunately, an absolute benchmark is not possible for lack of ground truth, so we arbitrarily mix two incoherent sampled images to serve as the ground truth. However, since a constant total probability of co-located sources within the pixel footprint is required by the remote sensing constraint (on a clear day, the total reflected energy is constant across neighboring receiving pixel sensors), we must also normalize the two images pixel by pixel. The result is then indeed as expected.
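A minimal sketch of that controlled ground-truth setup follows: two incoherent images are first normalized pixel by pixel so the co-located source fractions sum to one (the constant-total-energy constraint), then mixed by a spatially varying 2x2 matrix. The per-pixel mixing-angle model is an assumption made for illustration.

```python
import numpy as np

def make_ground_truth_mixture(img_a, img_b, seed=0):
    """Normalize two images pixel by pixel (s1 + s2 = 1 per pixel), then
    mix them with a different 2x2 matrix A(x) at every pixel."""
    rng = np.random.default_rng(seed)
    a = img_a.astype(float).ravel()
    b = img_b.astype(float).ravel()
    total = a + b + 1e-12
    s = np.stack([a / total, b / total])            # ground-truth sources
    theta = rng.uniform(0.1, np.pi / 2 - 0.1, a.size)  # per-pixel mixing angle
    x1 = np.cos(theta) * s[0] + np.sin(theta) * s[1]
    x2 = np.sin(theta) * s[0] + np.cos(theta) * s[1]
    return np.stack([x1, x2]), s

rng = np.random.default_rng(1)
img_a = rng.integers(0, 256, (32, 32))
img_b = rng.integers(0, 256, (32, 32))
X, S = make_ground_truth_mixture(img_a, img_b)
print(X.shape, S.shape)   # (2, 1024) mixtures and ground-truth sources
```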
Since in remote sensing each pixel may have its own unique radiation sources s, including man-made objects, associated with a different spectral reflectance matrix A, we cannot average over neighborhood pixels. Instead, we solve a pixel-by-pixel independent classes analysis (ica), without pixel averaging, by Lagrange constraint of the data measurement model and Gibbs' equal a priori probability assumption, based on the Shannon entropy H(s) with a probability normalization condition, for an arbitrary number M of classes bounded by the number N of spectral data components. We formulate the Fast Lagrangian method to maximize the Shannon entropy under the equality constraints at O(N) numerical complexity, in contrast to the O(N²) complexity of the inverse-problem solution required by the classical Lagrangian formulation. The trivial equal-probability solution, a uniformly distributed class vector s, is avoided by introducing an additional set of inequality constraints. The unknown spectral reflectance matrix A is estimated blindly in non-parameterized form by minimizing an LMS energy function. We apply the Riemannian metric to the gradient learning, reproducing the biological Hebbian rule in terms of a full-rank vector outer-product formula, and demonstrate faster convergence than the standard Euclidean gradient. Since the proposed Fast Lagrangian method has O(N) numerical complexity, we achieve a real-time hyperspectral remote sensing capability as the platform moves, samples, and processes. An FPGA firmware implementation of the massively pixel-parallel algorithm has been filed for patent.
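To show the per-pixel objective concretely, the sketch below maximizes the Shannon entropy of the class vector s subject to the measurement constraint A s = x and the normalization constraint sum(s) = 1. It uses a generic SLSQP solver rather than the paper's O(N) Fast Lagrangian recursion, and the matrix A and abundances are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize

def maxent_abundances(A, x):
    """Per-pixel maximum-entropy unmixing: maximize H(s) = -sum(s log s)
    subject to A s = x and sum(s) = 1, with s >= 0."""
    m = A.shape[1]
    neg_entropy = lambda s: np.sum(s * np.log(s + 1e-12))
    cons = [{'type': 'eq', 'fun': lambda s: A @ s - x},
            {'type': 'eq', 'fun': lambda s: s.sum() - 1.0}]
    res = minimize(neg_entropy, np.full(m, 1.0 / m),
                   bounds=[(0, 1)] * m, constraints=cons, method='SLSQP')
    return res.x

A = np.array([[0.9, 0.2, 0.1],
              [0.1, 0.7, 0.3],
              [0.0, 0.1, 0.6]])      # hypothetical spectral signatures (columns)
s_true = np.array([0.5, 0.3, 0.2])
print(maxent_abundances(A, A @ s_true))   # ~ recovers s_true
```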
Linear unmixing approaches are used to estimate the abundance fractions of the endmembers present in each pixel. Generally, two constraints are applied. First, the abundance fraction of each endmember should be nonnegative; this is the nonnegativity constraint. The second, the sum-to-one constraint, requires that all abundance fractions sum to one. A great challenge is to honor the nonnegativity constraint while solving the linear mixture model. In this paper, we propose a Lagrange constraint neural network (LCNN) approach that linearly unmixes the spectrum under both the sum-to-one and nonnegativity constraints.
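One simple way to see how both constraints can be built in, rather than bolted on, is a softmax parametrization, which guarantees nonnegativity and sum-to-one by construction while the data term is minimized by gradient descent. This sketch illustrates the constraint handling only, not the paper's LCNN dynamics; the endmember matrix is hypothetical.

```python
import numpy as np

def unmix_constrained(A, x, iters=5000, lr=0.01):
    """Unmix one pixel: minimize ||A s - x||^2 over s >= 0, sum(s) = 1,
    enforced by writing s as a softmax of unconstrained parameters z."""
    m = A.shape[1]
    z = np.zeros(m)
    for _ in range(iters):
        s = np.exp(z) / np.exp(z).sum()        # s >= 0 and sum(s) = 1, always
        r = A @ s - x                          # residual of the mixture model
        grad_s = 2.0 * A.T @ r
        # chain rule through the softmax Jacobian: ds_i/dz_j = s_i(d_ij - s_j)
        grad_z = s * (grad_s - np.dot(s, grad_s))
        z -= lr * grad_z
    return np.exp(z) / np.exp(z).sum()

A = np.array([[0.9, 0.2], [0.1, 0.8]])         # two hypothetical endmembers
s_true = np.array([0.3, 0.7])
print(unmix_constrained(A, A @ s_true))        # ~ [0.3, 0.7]
```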
The Lagrange Constraint Neural Network (LCNN) is a statistical-mechanics ab initio model: it assumes no artificial neural network (ANN) model at all, but derives one from the first principles of the Hamilton and Lagrange methodology, H(S,A) = f(S) − C(S, A(x,t)). Here the measurement constraint C(S, A(x,t)) = λᵀ([A]S − X) + (λ₀ − 1)(Σᵢ sᵢ − 1) is enforced through the vector Lagrange multiplier λ, and the a priori Shannon entropy f(S) = −Σᵢ sᵢ log sᵢ serves as the contrast function of an unknown number of independent sources sᵢ. Szu et al. first solved, in 1997, the general blind source separation (BSS) problem for a spatial-temporally varying mixing matrix in real-world remote sensing, where a large pixel footprint implies that the mixing matrix [A(x,t)] is necessarily filled with diurnal and seasonal variations. Because ground truth is difficult to ascertain in remote sensing, in this paper we illustrate each step of the LCNN algorithm on simulated spatial-temporally varying BSS of speech and music audio mixing. We review and compare LCNN with the other popular a posteriori maximum entropy methodology, defined by the ANN weight matrix [W] with sigmoid σ post-processing, H(Y) with Y = σ([W]X), of Bell-Sejnowski, Amari, and Oja (BSAO), called independent component analysis (ICA). The two are mirror-symmetric MaxEnt methodologies, and both work for a constant unknown mixing matrix [A]; the major difference is whether the ensemble average is taken over the neighborhood pixel data X (in BSAO) or over the a priori source variables S (in LCNN), and this dictates which method can handle a spatial-temporally varying [A(x,t)], which does not permit neighborhood pixel averaging. We verify the expected sharper de-mixing by the LCNN method in a controlled ground-truth experiment simulating a varying mixture of two pieces of music of similar kurtosis (15 seconds composed of Saint-Saëns' Swan and Rachmaninoff's cello concerto).
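A sketch of the audio side of such a controlled experiment appears below: two sources mixed sample by sample through a slowly rotating (spatial-temporally varying) 2x2 matrix A(t). The sinusoidal rotation model and the synthetic tone/square-wave stand-ins for the two music clips are assumptions for illustration.

```python
import numpy as np

def mix_time_varying(s1, s2, f=0.25, sr=8000):
    """Mix two audio sources with a 2x2 rotation matrix A(t) whose angle
    oscillates slowly in time, so no single constant [A] fits the data."""
    n = min(len(s1), len(s2))
    t = np.arange(n) / sr
    theta = 0.25 * np.pi * (1 + np.sin(2 * np.pi * f * t))  # varying angle
    x1 = np.cos(theta) * s1[:n] + np.sin(theta) * s2[:n]
    x2 = -np.sin(theta) * s1[:n] + np.cos(theta) * s2[:n]
    return np.stack([x1, x2])

sr = 8000
t = np.arange(sr * 2) / sr
s1 = np.sin(2 * np.pi * 440 * t)               # stand-ins for the two clips
s2 = np.sign(np.sin(2 * np.pi * 220 * t))
X = mix_time_varying(s1, s2, sr=sr)
print(X.shape)   # (2, 16000): two mixtures under a per-sample A(t)
```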
We present a novel wavelet compression algorithm for both audio and images, validated by acceptable human perception tests. It is well known that the discrete wavelet transform (DWT) provides a global multi-resolution decomposition, which is the significant feature for audio and image compression. Experimental simulations show that the proposed audio and image model satisfies current industrial communication requirements in terms of processing time and compression fidelity. The development of the wavelet-based compression algorithm takes the trade-offs of hardware implementation into account. As a result, this high-performance video codec enables compact, low-power, high-speed, portable, cost-effective, and lightweight video compression for multimedia and home-security applications.
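The core DWT compression idea can be sketched in a few lines using PyWavelets: decompose, keep only the largest coefficients, and reconstruct. This is a generic illustration, not the paper's algorithm; the wavelet, level, and keep-fraction are assumptions.

```python
import numpy as np
import pywt

def dwt_compress(signal, wavelet='db4', level=4, keep=0.05):
    """Lossy DWT compression: zero all but the largest `keep` fraction of
    coefficients (by magnitude), then reconstruct."""
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    flat = np.concatenate([c.ravel() for c in coeffs])
    thresh = np.quantile(np.abs(flat), 1.0 - keep)   # magnitude cutoff
    kept = [pywt.threshold(c, thresh, mode='hard') for c in coeffs]
    return pywt.waverec(kept, wavelet)

t = np.linspace(0, 1, 1024)
x = np.sin(2 * np.pi * 5 * t) + 0.1 * np.random.default_rng(0).standard_normal(1024)
x_hat = dwt_compress(x)
print("RMSE:", np.sqrt(np.mean((x - x_hat[:len(x)]) ** 2)))
```

Because the multi-resolution decomposition concentrates most signal energy in few coefficients, discarding 95% of them here changes the reconstruction only slightly, which is the property the compression fidelity claims above rest on.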
Recently, the Goodrich Corp. Fuels and Utility Systems Division launched a research effort to investigate various sensing and computing techniques for gauging fuel in commercial and military aircraft. Ultrasonics was among the techniques tested for this purpose. Generally, in ultrasonic fuel measurement systems, a transducer is excited to emit sonic bursts into the fuel and measure the echo reflected off the fuel surface. A fixed target or pin is usually placed at a fixed, predetermined distance from the ultrasonic transducer within the tank so that the speed of sound through the fuel can be computed. Knowing the speed of sound in the fuel and measuring the round-trip time from when an ultrasonic burst is generated until its reflection off the fuel surface returns, the fuel height may be calculated; with a priori knowledge of the tank geometry, the fuel quantity can then be estimated. This measurement process seems straightforward. The problem, however, is being able to discriminate between echoes reflected off various objects within the tank; in particular, it is crucial to distinguish echoes reflected off the fuel surface from echoes reflected off other objects within the tank, such as the fixed target or the tank walls. This paper presents a wavelet-based discrimination method that assigns the various ultrasonic echoes to their appropriate reflective surfaces.
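The time-of-flight arithmetic described above is simple once the echoes have been correctly assigned, which is why the discrimination step matters. The following sketch works through it with hypothetical numbers; the wavelet-based echo assignment itself is not reproduced here.

```python
def fuel_height(t_target, t_surface, d_target):
    """The fixed target at known distance d_target calibrates the speed
    of sound in this particular fuel; the surface echo's round-trip time
    then gives the one-way distance to the fuel surface."""
    c_fuel = 2.0 * d_target / t_target        # speed of sound in the fuel
    return c_fuel * t_surface / 2.0           # fuel height

# Hypothetical numbers: target 0.10 m away, echoes at 150 us and 600 us.
print(fuel_height(t_target=150e-6, t_surface=600e-6, d_target=0.10))  # 0.4 m
```

Misassigning the target echo to the surface (or vice versa) corrupts both the calibration and the height in one step, which motivates the wavelet discrimination method the paper proposes.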