Bright light can temporarily reduce the visual capacity of the human eye. Especially in traffic or when dealing with lasers, disability glare can considerably reduce visual faculty and lead to accidents. Even after the glare light has disappeared from the field of vision, visual capacity can remain reduced for seconds or even minutes. It is therefore indispensable to investigate the recovery time of the human eye after glare exposure and to determine limiting exposures in order to avoid accidents or possible eye damage. We report on measurements of the recovery time in the range from 475 nm to 650 nm performed with thermal sources. Experiments are reported on the influence of varying intensities of test and glare light applied in the line of sight or extrafoveally. These studies allow us to give an empirical formula for the recovery time. In the present contribution we also report on measurements as a function of the duration of glare exposure. The range from 20 µs to 1 s has been covered with an argon laser at 488 nm as the glare source. The resulting recovery times in different regions of this very large time scale are discussed.
This paper describes an approach that will enable high-speed processing of images to locate and track objects within the scene. Using a preprocessing chip directly attached to the detector array, the nearest neighbors of each element in the array can be analyzed in order to locate edges within the frame. These edges can be used to define objects. Because only information about the object is handed off to the main processor, both the data throughput and processing requirements are greatly reduced. This paper discusses the basic program, modeling, and preliminary chip design, as well as future plans for a 512 × 512 pixel preprocessor device that can operate at framing rates in excess of 1000 frames per second.
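The abstract does not specify the edge operator; a minimal sketch of nearest-neighbor edge analysis, assuming a simple comparison of each pixel against the mean of its four neighbors (the threshold value is illustrative, not from the paper):

```python
import numpy as np

def nearest_neighbor_edges(frame: np.ndarray, threshold: float = 30.0) -> np.ndarray:
    """Flag pixels whose value differs strongly from their 4-neighborhood mean.

    A simplified software stand-in for the on-chip nearest-neighbor analysis
    described in the abstract; the real preprocessor operates per detector
    element in parallel.
    """
    f = frame.astype(np.float64)
    # 4-neighbor mean via shifted copies (borders padded by replication)
    padded = np.pad(f, 1, mode="edge")
    neighbors = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
                 padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
    return np.abs(f - neighbors) > threshold

# Usage: a bright square on a dark background yields edges on its border;
# only these edge locations would be handed off to the main processor.
img = np.zeros((64, 64)); img[20:40, 20:40] = 255
print(nearest_neighbor_edges(img).sum(), "edge pixels")
```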
The assembly of the International Space Station will require the robotic manipulation of elements with a degree of complexity and restricted visibility unprecedented in the history of manned space operations. Space-based robotic operators will maneuver multi-ton space station elements into tight capture envelopes with limited, non-orthogonal camera views and little to no direct viewing through windows. A camera-based system known as the Canadian Space Vision System will provide astronauts with the precise positioning cues necessary to perform these element berthings. An experimental version of this vision system was utilized on the space shuttle Columbia flight in November 1996 to demonstrate element berthings and to evaluate several space station assembly operations.
This paper focuses on the characteristics and performance of a laser range scanner (LARS) with short- and medium-range 3D sensing capabilities for space applications. This versatile laser range scanner is a precision measurement tool intended to complement the current Canadian Space Vision System (CSVS). Together, these vision systems are intended to be used during the construction of the International Space Station (ISS). Integration of the LARS with the CSVS will allow 3D surveying of a robotic work-site, identification of known objects from registered range and intensity images, and object detection and tracking relative to the orbiter and ISS. The data supplied by the improved CSVS will be invaluable in Orbiter rendezvous and in assisting the Orbiter/ISS Remote Manipulator System operators. The major advantages of the LARS over conventional video-based imaging are its ability to operate with sunlight shining directly into the scanner and its immunity to the spurious reflections and shadows that occur frequently in space. Because the LARS is equipped with two high-speed galvanometers to steer the laser beam, any spatial location within the field of view of the camera can be addressed. This level of versatility enables the LARS to operate in two basic scan pattern modes: (1) variable scan resolution mode and (2) raster scan mode. In the variable resolution mode, the LARS can search and track targets and geometrical features on objects located within a field of view of 30 degrees × 30 degrees and at ranges from about 0.5 m to 2000 m. This flexibility allows implementations of practical search and track strategies based on the use of Lissajous patterns for multiple targets. The tracking mode can reach a refresh rate of up to 137 Hz. The raster mode is used primarily for the measurement of registered range and intensity information of large stationary objects. It allows, among other things, target-based measurements, feature-based measurements, and image-based measurements such as differential inspection in 3D space and surface reflectance monitoring. The digitizing and modeling of human subjects, cargo payloads, and environments are also possible with the LARS. A number of examples illustrating the many capabilities of the LARS are presented in this paper.
As the industry moves towards open architecture standards, current image processing systems need to exploit the high computing throughput of commercially available parallel processing architectures in order to implement increasingly complex algorithms and systems. Adapting the inputs from a variety of visible and IR sensors to the unique requirements of multi-processing systems is essential for the development of high-performance real-time image processing systems. This paper describes the architecture of the Hughes Video Input Card (VIC), which provides a programmable hardware interface between imaging sensors and a high-speed parallel processor interconnect bus. In addition to providing the sensor electrical interface, the VIC supports important video processing functions by pre-conditioning the video data through image windowing, floating-point data conversion, and pixel decimation. These programmable features, combined with the VIC's interface to the high-speed RACEway, far exceed the capabilities of any front panel data port input.
In this paper we demonstrate that micro-scanning, used together with image composition and digital restoration, can accurately restore pixel-scale features that could not be accurately restored otherwise. The performance of this technique is analyzed by model-based simulations using a comprehensive digital image processing systems model that accounts for blurring, noise, sampling, image composition, restoration filtering, and reconstruction.
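A minimal sketch of the image composition step, assuming a 2×2 micro-scan pattern in which each low-resolution frame is captured with the sensor shifted by one sub-pixel step (the scan order and factor are assumptions for illustration):

```python
import numpy as np

def compose_microscan(frames, factor=2):
    """Interleave sub-pixel-shifted low-resolution frames onto a finer grid.

    frames[k] is assumed to be the low-res image captured with the sensor
    shifted by (dy, dx) = (k // factor, k % factor) sub-pixel steps.
    """
    h, w = frames[0].shape
    hi = np.zeros((h * factor, w * factor), dtype=np.float64)
    for k, frame in enumerate(frames):
        dy, dx = k // factor, k % factor
        hi[dy::factor, dx::factor] = frame
    return hi

# Usage: four 32x32 micro-scanned frames compose a 64x64 image, which
# would then be passed to the restoration filter described in the paper.
frames = [np.random.rand(32, 32) for _ in range(4)]
print(compose_microscan(frames).shape)  # (64, 64)
```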
Vector quantization (VQ) has recently received significant attention as a powerful technique for data compression. VQ is theoretically attractive due to results from rate-distortion theory that show that VQ is asymptotically optimal for the coding of stationary data sources. However, the nonstationary nature of the sources common in practical applications has prompted a search for more general VQ algorithms that are capable of adapting to changing source statistics as the coding progresses. Such algorithms are commonly referred to as adaptive vector quantization (AVQ). We describe a new AVQ algorithm called generalized threshold replenishment (GTR) which differs from prior AVQ algorithms in that it features an explicit, online consideration of both rate and distortion. Rate-distortion cost criteria are used in both the determination of nearest-neighbor codewords and the decision to update the codebook. Results presented indicate that, for the coding of an image sequence, (1) most AVQ algorithms achieve distortion much lower than that of nonadaptive VQ for the same rate, and (2) the GTR algorithm achieves rate-distortion performance substantially superior to that of the prior AVQ algorithms for low-rate coding, being the only algorithm to achieve a rate below 1.0 bits/pixel for our image-sequence testing data.
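The paper's exact cost function is not reproduced here; a generic sketch of rate-distortion-based nearest-neighbor codeword selection in the spirit of GTR, using the standard Lagrangian form J = D + λR (the lambda value and code lengths below are illustrative):

```python
import numpy as np

def rd_nearest_codeword(x, codebook, code_lengths, lam=0.5):
    """Pick the codeword minimizing a rate-distortion cost J = D + lambda*R.

    D is the squared error to the codeword; R is the codeword's
    entropy-code length in bits. This is a generic rule, not GTR's
    exact criterion.
    """
    dists = np.sum((codebook - x) ** 2, axis=1)      # D per codeword
    costs = dists + lam * np.asarray(code_lengths)   # J = D + lambda * R
    return int(np.argmin(costs))

# Usage: a frequently used codeword (short code) can win despite slightly
# higher distortion, trading distortion for rate.
codebook = np.array([[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]])
lengths = [1.0, 3.0, 5.0]   # shorter codes for more probable codewords
print(rd_nearest_codeword(np.array([0.9, 0.9]), codebook, lengths))
```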
One method for image segmentation involves fitting a mixture model to features extracted from an image, then using this statistical model to segment the image according to the posterior probabilities associated with each component. This procedure has the disadvantage that it can produce a noisy and disconnected segmentation. Using the posterior probabilities from the mixture, a Maximum A Posteriori (MAP) estimator can be produced which smooths the segmentation. This in turn can be used to improve the original mixture estimates via the expectation maximization (EM) algorithm for mixture models. This has the dual benefit of incorporating spatial information into the estimation of the mixture parameters and producing improved segmentation. The algorithm is described and applied to synthetic and real images. The results on the synthetic images show both improved segmentation and improved estimation of the mixture parameters.
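A minimal sketch of the two ingredients, assuming a one-feature Gaussian mixture and a simple neighborhood-based spatial regularization of the posteriors (the paper's MAP estimator may differ in form):

```python
import numpy as np

def e_step(img, means, variances, priors):
    """Posterior probability of each mixture component at every pixel."""
    resp = np.stack([
        p * np.exp(-(img - m) ** 2 / (2 * v)) / np.sqrt(2 * np.pi * v)
        for m, v, p in zip(means, variances, priors)
    ])
    return resp / resp.sum(axis=0, keepdims=True)

def smooth_posteriors(post, beta=1.0):
    """Bias each pixel's posterior toward its 4-neighborhood average, a
    MAP-like spatial smoothing of the raw mixture posteriors."""
    padded = np.pad(post, ((0, 0), (1, 1), (1, 1)), mode="edge")
    nbr = (padded[:, :-2, 1:-1] + padded[:, 2:, 1:-1] +
           padded[:, 1:-1, :-2] + padded[:, 1:-1, 2:]) / 4.0
    mixed = post * np.exp(beta * nbr)
    return mixed / mixed.sum(axis=0, keepdims=True)

# Usage: two-class segmentation of a noisy synthetic image; in the full
# algorithm the smoothed posteriors feed back into the EM M-step.
img = np.where(np.arange(64)[None, :] < 32, 50.0, 150.0) \
      + np.random.randn(64, 64) * 20
post = smooth_posteriors(e_step(img, [50, 150], [400, 400], [0.5, 0.5]))
labels = post.argmax(axis=0)
```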
This paper describes a hierarchical algorithm for multi-resolution texture-based image enhancement aimed at improving segmentation performance. The proposed algorithm is a modification of the steerable pyramid algorithm that has been successfully applied in research on texture synthesis. We investigate the application of such a multiscale image-analysis approach to the problem of image enhancement for segmentation. We present the application of the proposed algorithm to two medical images as a first exploration of the potential benefits of the approach. While a quantitative measure ultimately needs to be developed for performance assessment, subjective evaluation at this early stage of the research can provide early feedback on the magnitude of the potential benefit. While the early results presented in this paper are encouraging, it remains to be demonstrated that the proposed algorithm, or an optimized version of it, yields increased performance compared to other techniques.
A generalized algorithm for compression of monochrome and color images with the fast Haar wavelet transform is investigated in this paper. The algorithms were evaluated on the existing image processing system available in the Laboratory of Image Processing at NIU. Experimental results show that the algorithms presented are successful for high-ratio image compression: satisfactory reconstructed images are obtained at high compression ratios, and the computational complexity is greatly reduced.
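For reference, one level of the standard 2D Haar transform the abstract builds on (orthonormal averages and differences along rows, then columns); compression then keeps the LL subband and the large detail coefficients:

```python
import numpy as np

def haar_step_2d(img):
    """One level of the 2D Haar transform, yielding LL, LH, HL, HH subbands."""
    def rows(a):
        s = (a[:, 0::2] + a[:, 1::2]) / np.sqrt(2)   # smooth (low-pass)
        d = (a[:, 0::2] - a[:, 1::2]) / np.sqrt(2)   # detail (high-pass)
        return s, d
    L, H = rows(img.astype(np.float64))
    LL, LH = rows(L.T)
    HL, HH = rows(H.T)
    return LL.T, LH.T, HL.T, HH.T

# Recursing on LL gives the multi-level transform used for high-ratio
# coding; small detail coefficients are discarded or coarsely quantized.
img = np.random.rand(8, 8)
LL, LH, HL, HH = haar_step_2d(img)
print(LL.shape)  # (4, 4)
```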
The application of principal components analysis (PCA) to multispectral satellite images is a routine way to present the data in false-color composite images. These composite images include a very high percentage of the available information and have no correlation between the displayed colors. PCA routines are included in commercial GIS software, and custom algorithms are in wide use. This paper describes an early application of a new, genetic-algorithm-based PCA routine. Landsat data for an Idaho farm were evaluated for temporal changes using this new algorithm, and the eigenvalues consistently converged with excellent results.
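For context, a sketch of the routine PCA false-color composite the paper improves upon, using standard eigen-decomposition rather than the paper's genetic-algorithm variant:

```python
import numpy as np

def pca_false_color(bands):
    """Project multispectral bands onto their top three principal
    components and stretch them to [0, 1] for an RGB composite."""
    h, w, n = bands.shape
    X = bands.reshape(-1, n).astype(np.float64)
    X -= X.mean(axis=0)
    cov = np.cov(X, rowvar=False)
    eigvals, eigvecs = np.linalg.eigh(cov)   # ascending eigenvalue order
    top3 = eigvecs[:, ::-1][:, :3]           # three largest components
    pcs = (X @ top3).reshape(h, w, 3)
    lo, hi = pcs.min(axis=(0, 1)), pcs.max(axis=(0, 1))
    return (pcs - lo) / (hi - lo)

# Usage with a synthetic 7-band Landsat-like cube; the displayed colors
# are uncorrelated by construction of the principal components.
cube = np.random.rand(64, 64, 7)
rgb = pca_false_color(cube)
```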
The first Clouds and Earth Radiant Energy System (CERES) scanning radiometer is scheduled to be launched on the joint US/Japanese Tropical Rainfall Measuring Mission in late 1997. The use of data from the CERES with those from higher-resolution imagers requires a detailed knowledge of the CERES point spread function (PSF), which describes the response of the radiometer to a point source at a given location in the field of view. The PSF is determined by the field of view of the instrument, its optical design, and the time response of the thermistor-bolometer and the associated signal-conditioning electronics. The field of view is limited by an elongated hexagonal aperture in the field stop. The PSF has been measured in the laboratory; however, the finite solid angle of the beam used for measurement of the PSF complicates the interpretation of these measurements. This paper discusses the estimation of the PSF of the CERES instruments based on the effects of the time response, the finite solid angle of the beam used in the laboratory calibration, and the experimental output of the instrument. The paper compares the actual instrument output with the predicted results based on a finite-solid-angle uniform source.
This paper presents a technique based on subband decomposition using wavelets for the restoration of distorted, degraded, and corrupted advanced very high resolution radiometer (AVHRR) images. The principal focus is on restoring the effects of degradation and corruption generated by the resampling process in changes of format, and by additive noise. A Landsat Multispectral Scanner image is used to simulate distortions similar to those obtained from AVHRR image acquisition systems. The images are degraded by sampling followed by the addition of white noise. The restoration technique uses subband decomposition, and the paper studies the selection of a suitable wavelet for this decomposition. An appropriate wavelet and level of decomposition are chosen to decompose the image into a certain number of channels. Finally, a multichannel restoration filter based on Donoho and Johnstone thresholding is applied in the subband domain to smooth the selected detail coefficients. Examples of restoration of simulated NOAA-AVHRR images using this technique are presented in the paper.
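The Donoho-Johnstone step reduces to soft thresholding of detail coefficients with the universal threshold λ = σ√(2 ln N); a minimal sketch (how the paper selects σ per subband is not reproduced here):

```python
import numpy as np

def soft_threshold(coeffs, sigma):
    """Donoho-Johnstone soft thresholding with the universal threshold
    lambda = sigma * sqrt(2 ln N), applied to a subband of detail
    coefficients."""
    lam = sigma * np.sqrt(2.0 * np.log(coeffs.size))
    return np.sign(coeffs) * np.maximum(np.abs(coeffs) - lam, 0.0)

# In a pipeline like the paper's, this is applied per subband after the
# wavelet decomposition; sigma is commonly estimated from the finest
# subband as median(|d|) / 0.6745.
d = np.random.randn(256) * 0.1
d[10] = 2.0                      # a genuine feature survives thresholding
print(soft_threshold(d, sigma=0.1)[10])
```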
An optimal imaging system must fulfill its specifications without expensive and unnecessary excess quality. However, the translation of high-level human requirements into low-level system demands is not easy. As human specifications deal only with the objects or scenes to be seen, knowledge of these objects and of their properties relevant to the information transfer through the imaging system is critical. Since many imaging system quality criteria are based on knowledge of the second-order statistical properties of the scenes or objects to be imaged, the goal of this paper is to show that it is possible to extract these properties from high-level mission requirements.
This paper presents a comparative study of the performance of the spatial gray level dependence method (SGLDM) and the wavelet transform (WT) method using three prevalent classifiers: maximum-likelihood estimation, the minimum-distance classifier, and the k-nearest-neighbor classifier. The features have been extracted using a tree-structured wavelet transform, with Daubechies filters used for the decomposition. For the SGLDM, experiments were performed to arrive at an optimum combination of distance and angle for computing features. The criterion chosen for comparison is the classification accuracy under the constraints of the same sample size, the same number of training and test samples, and the same number of features. The results indicate that the maximum-likelihood classifier and the minimum-distance classifier gave comparable results for the wavelet transform method. The k-nearest-neighbor classifier gave the highest classification accuracy for the wavelet transform method but performed poorly for the SGLDM. The maximum-likelihood classifier performed better for the wavelet transform method than for the SGLDM, and the minimum-distance classifier did not prove to be powerful for the SGLDM.
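For reference, the core of the SGLDM is the gray-level co-occurrence matrix for a given distance and angle, from which features such as contrast are computed; a minimal sketch (the paper's specific distance/angle combinations and feature set are not reproduced):

```python
import numpy as np

def glcm(img, dy, dx, levels):
    """Gray-level co-occurrence matrix for displacement (dy, dx);
    distance and angle are encoded by the displacement vector."""
    P = np.zeros((levels, levels))
    h, w = img.shape
    for y in range(max(0, -dy), min(h, h - dy)):
        for x in range(max(0, -dx), min(w, w - dx)):
            P[img[y, x], img[y + dy, x + dx]] += 1
    return P / P.sum()

def contrast(P):
    """One classical SGLDM feature; energy, entropy, correlation, etc.
    are computed from the same matrix."""
    i, j = np.indices(P.shape)
    return np.sum((i - j) ** 2 * P)

# Usage: distance 1 at angle 0 degrees corresponds to (dy, dx) = (0, 1).
img = np.random.randint(0, 8, size=(32, 32))
print(contrast(glcm(img, 0, 1, levels=8)))
```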
The ability of wavelets to extract the high-frequency component of an image has made them useful for edge detection. The high-frequency details are analyzed and processed to obtain the edges. This work is primarily concerned with the comparison of Daubechies, Coiflet, and Symlet wavelets for the purpose of edge detection. A discrete wavelet frame has been used to detect edges in this work. Different wavelet filters of varied lengths have been used to find the best wavelet for edge detection, with the same threshold selection as the criterion for comparison. The results of the experimentation suggest that the Haar wavelet, which is the simplest of the Daubechies wavelets, is the best wavelet with the methodology followed in this paper. The results also indicate that the performance of the wavelet deteriorates as the filter length increases.
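A minimal sketch of the Haar case under an undecimated (wavelet-frame) decomposition, where the detail coefficients are simply differences of adjacent pixels, thresholded to give the edge map (the threshold is illustrative):

```python
import numpy as np

def haar_frame_edges(img, threshold):
    """Edge map from undecimated first-level Haar detail coefficients:
    differences of adjacent pixels in each direction, thresholded."""
    f = img.astype(np.float64)
    dh = np.abs(np.diff(f, axis=1))   # horizontal detail, shape (h, w-1)
    dv = np.abs(np.diff(f, axis=0))   # vertical detail, shape (h-1, w)
    edges = np.zeros(f.shape, dtype=bool)
    edges[:, :-1] |= dh > threshold
    edges[:-1, :] |= dv > threshold
    return edges

# Usage: a vertical step between columns 30 and 31 is flagged at column 30.
img = np.zeros((64, 64)); img[:, 31:] = 200
print(haar_frame_edges(img, threshold=50)[0].argmax())
```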
Human hand recognition is a practical problem in pattern recognition for restricted-access environments. The methodology of shape analysis, a conventional method for object identification, is studied in this paper for human hand recognition. The idea behind the chosen approach is to consider the human hand as an object of definite rigid shape and to extract its boundary and shape parameters as recognition features. Shape analysis requires the extraction of object features that are often normalized and invariant to various geometric changes such as translation, rotation, and scale. In the human hand recognition system, three different views of the same hand at different orientations were taken with a PULNIX CCD camera. The digital image of the hand has been analyzed using Fourier descriptors and various moment invariants.
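A minimal sketch of the standard Fourier-descriptor construction, assuming the boundary has been extracted as a sequence of complex points x + iy; the normalizations shown give translation, rotation, start-point, and scale invariance:

```python
import numpy as np

def fourier_descriptors(boundary, n_keep=16):
    """Invariant Fourier descriptors of a closed boundary."""
    z = np.asarray(boundary, dtype=complex)
    F = np.fft.fft(z)
    F[0] = 0.0                 # drop DC: translation invariance
    mags = np.abs(F)           # discard phase: rotation/start-point invariance
    mags /= mags[1]            # normalize by |F1|: scale invariance
    return mags[1:n_keep + 1]

# Usage: descriptors of a circle are unchanged by shifting, rotating, or
# scaling the boundary, which is what makes them usable as hand-shape
# features alongside moment invariants.
t = np.linspace(0, 2 * np.pi, 128, endpoint=False)
circle = np.cos(t) + 1j * np.sin(t)
print(fourier_descriptors(3 * circle + (5 + 2j))[:3])
```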
In this paper, we present some fundamental theoretical results pertaining to the question of how many randomly selected labelled example points it takes to reconstruct a set in Euclidean space. Drawing on results and concepts from mathematical morphology and learnability theory, we pursue a set-theoretic approach and demonstrate some provable performance guarantees pertaining to Euclidean-set reconstruction from stochastic samples. In particular, we demonstrate a stochastic version of the Nyquist Sampling Theorem: under weak assumptions on the situation under consideration, the number of randomly drawn example points needed to reconstruct the target set is at most polynomial in the performance parameters and in the complexity of the target set as loosely captured by its size, dimension, and surface area. Utilizing only rigorous techniques, we can similarly establish many significant attributes — such as those relating to robustness, cumulativeness, and ease of implementation — pertaining to smoothing over labelled example points. In this paper, we formulate and demonstrate a certain fundamental well-behaving aspect of smoothing.
A point pattern relaxation matching method that is invariant to rotation and scale changes is proposed. Experimental results with large simulated images demonstrate the effectiveness and feasibility of the method.
One of the major obstacles facing developers of parallel image processing applications is the lack of efficient programming environments. In this paper, we describe the environment currently under development for supporting image algebra operations on a fine-grained, massively parallel processor, the PAL. A graphical design tool is described, as are some issues that arise in retargeting a C++ library for image algebra to the PAL.
Even existing standard television systems carry a large surplus of video information, since the human mind receives less than 1 percent of this information. In high-resolution TV systems this so-called physiological video-information surplus increases further, which is why such TV systems turn out to be too complicated and too expensive. An acceptable alternative to high-resolution television may be the use, within existing television, of analog video-signal restoration, by means of which considerable improvement of picture quality can be achieved.
Restoration of video information may be provided by digital means, using noise-robust coding and decoding of the initial information with some redundancy introduced into it. However, initial television information, in the form of analog video signals, already possesses considerable redundancy, since the human brain receives less than 1 percent of the information reproduced on the TV screen. This so-called physiological video-information redundancy is at present not exploited in television, but it is used in the video-signal restoration method described here, which considerably improves the horizontal sharpness of the image while increasing noise suppression.
We present a new stereo algorithm which combines features of area-based and edge-based approaches while eliminating their main drawbacks. In addition to the usual pre- and post-processing steps, the algorithm includes a new solution to the correspondence problem. For each left-image edge point within some support area, we first search for the possible corresponding points in the right image and then select only the single candidate having the maximum support sum. To find the support sum for each disparity candidate, we use a disparity histogram, taking the magnitude of the nearest histogram peak as the support sum. To reject false matches, each support area is classified according to the fraction F of edge points connected with the major histogram peak. If F exceeds some threshold, say 80 percent, all points not connected with the major peak are simply rejected and the remaining points are marked as true matches. Otherwise, a more complicated procedure is used which takes the edge-contour information into account by testing it against some global consistency constraints. The algorithm has been successfully tested using a specially developed computer program operating on different natural objects, including some hard features.
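A minimal sketch of the histogram-support idea, assuming simple local-maximum peak detection over a unit-width binning (the paper's peak definition and connectivity test are not reproduced):

```python
import numpy as np

def histogram_support(candidates, area_disparities, bin_width=1.0):
    """Support sum for each candidate disparity: the magnitude of the
    nearest peak in the disparity histogram over the support area."""
    lo = np.floor(area_disparities.min())
    bins = np.arange(lo, area_disparities.max() + 2 * bin_width, bin_width)
    hist, edges = np.histogram(area_disparities, bins=bins)
    padded = np.pad(hist, 1)
    # A non-empty bin is a peak if it is >= both of its neighbors.
    peaks = np.where((hist > 0) & (hist >= padded[:-2]) & (hist >= padded[2:]))[0]
    centers = (edges[:-1] + edges[1:]) / 2.0
    support = [hist[peaks[np.argmin(np.abs(centers[peaks] - d))]]
               for d in candidates]
    return np.array(support)

# Usage: the candidate nearest the dominant histogram peak gets the
# largest support and is kept as the single match.
area = np.concatenate([np.random.normal(10, 0.5, 80),
                       np.random.normal(25, 0.5, 10)])
print(histogram_support(np.array([10.0, 25.0]), area))
```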
In pattern recognition, thinning algorithms are often a useful tool to represent a digital pattern by means of a skeletonized image, consisting of a set of one-pixel-width lines that highlight its significant features. There is interest in applying thinning directly to gray-scale images, motivated by the desire to process images characterized by meaningful information distributed over different levels of gray intensity. In this paper, a new algorithm is presented which can skeletonize both black-and-white and gray-scale pictures. The algorithm is based on the gray-distance transformation; it can process any gray-scale picture whose gray levels are not uniformly distributed, and it preserves the topology of the original picture. The process includes a preliminary investigation of the 'hollows' in the gray-scale image; these hollows are treated as topological constraints for the skeleton structure or not, depending on whether their depth is statistically significant. The algorithm can also be executed on a parallel machine, since all operations are local. Some examples are discussed to illustrate the algorithm.
This paper analyzes the jagged effect that occurs in raster image display devices. Our main goal is to identify the true causes of the jagged effect; the initial hypothesis is that, between sampling and reconstruction errors, the latter is the only cause. The methodology developed to investigate the true causes of the jagged effect is based on the creation of four test situations in which we control the occurrence of the two possible causes to be tested. In each test situation we investigate the occurrence of the jagged effect, the sampling error, and the reconstruction error in the space and frequency domains. The final conclusion based on the results of these analyses is that, between the reconstruction and sampling errors, the reconstruction error is the only one that produces the jagged effect.
This paper analyzes an image reconstruction technique, called the 2D normalized sampled finite sinc reconstructor (NSFSR 2D), that differs from the currently used image reconstruction technique modeled as a 2D zero-order reconstructor (ZOR 2D). The main goal of this work is to eliminate or drastically reduce the jagged effect caused by the reconstruction error. The proposed reconstructor behaves more like the ideal reconstructor of the original sampling theorem. Tests were performed in the space and frequency domains comparing this reconstructor to the one currently used, and we conclude that the NSFSR 2D is much better than the ZOR 2D. Although some improvements remain to be made, the results so far are significant and promising.
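A minimal 1D sketch of the core idea as the name suggests it: a finite (truncated) sinc kernel whose sampled weights are renormalized to sum to one (the paper works in 2D; the window width below is an assumption):

```python
import numpy as np

def nsfsr_1d(samples, x, half_width=8):
    """Reconstruct a value at continuous position x (in sample units) with
    a finite sinc kernel whose weights are renormalized to sum to one."""
    n0 = int(np.floor(x))
    idx = np.arange(max(0, n0 - half_width + 1),
                    min(len(samples), n0 + half_width + 1))
    w = np.sinc(x - idx)          # truncated sinc weights
    w /= w.sum()                  # the normalization step
    return np.dot(w, samples[idx])

# Usage: resampling a band-limited signal between sample points; the
# normalization removes the DC ripple a plain truncated sinc introduces,
# unlike the zero-order (pixel replication) reconstructor.
t = np.arange(64)
signal = np.sin(2 * np.pi * t / 16)
print(nsfsr_1d(signal, 10.5), np.sin(2 * np.pi * 10.5 / 16))
```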
This paper presents an architectural proposal for an application-specific integrated circuit (ASIC) designed to perform image reconstruction in real time. The architecture implements in hardware the reconstruction technique called the 2D normalized sampled finite sinc reconstructor (NSFSR 2D), which was previously proposed and implemented in software. We develop an ASIC that implements the NSFSR 2D technique as a dedicated static pipeline architecture, and we model and simulate this architecture using the VHDL hardware description language. Based on analysis of the validation results, we conclude that the proposed architecture implements the NSFSR 2D correctly and offers much higher performance than the software-based implementation.
A general adaptive protocol to operate modems with minimum latency while extracting the maximum throughput is presented. Modern modems have large internal buffers and complex error correction and data compression protocols. The internal buffering in modems generally leads to large latencies and jitter in latency, and buffers in the communication drivers and the operating system add further latency to the path. These shortcomings can be overcome using a special protocol, so that a tighter bound on latency and jitter can be specified. This can benefit real-time communication over modems.
A novel image compression method based on fractal prediction is proposed in this paper. The original image is decimated into a smaller image, which is encoded with a fractal method, exploiting the fact that the smaller the image, the shorter the encoding time. A fractal prediction is obtained by decoding an image, at the same size as the original, from the fractal codes of the decimated image. The prediction image is subtracted from the original image to produce a difference image, which is encoded with a DCT for error correction. Experimental results show that this algorithm is faster than typical fractal image coding methods, and the reconstructed images have good fidelity to the original at relatively high compression ratios. This method combines the advantages of fractal coding and JPEG coding.
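A sketch of the prediction-plus-DCT-residual structure only; plain decimation and pixel replication stand in for the fractal encode/decode of the decimated image, which is too involved to reproduce here, and the quantization step q is illustrative:

```python
import numpy as np
from scipy.fft import dctn, idctn

def predict_residual_codec(img, q=10.0):
    """Prediction + quantized-DCT-residual pipeline, with decimation and
    replication standing in for the paper's fractal codec."""
    small = img[::4, ::4]                                 # decimated image
    prediction = np.repeat(np.repeat(small, 4, 0), 4, 1)  # decoded prediction
    residual = img - prediction                           # difference image
    coeffs = np.round(dctn(residual, norm="ortho") / q)   # quantized DCT
    return prediction + idctn(coeffs * q, norm="ortho")   # reconstruction

# Usage: the residual carries only what the prediction misses, which is
# what makes the combined scheme fast and reasonably faithful.
img = np.random.rand(64, 64) * 255
print(np.abs(img - predict_residual_codec(img)).mean())
```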
This paper describes homogeneous computing structures, which resemble systolic arrays and can be used for designing efficient special-purpose architectures and programmable specialized hardware accelerators for signal and image processing. The process of data execution in homogeneous computing structures is considered, and a program for simulating homogeneous computing structures and the IMPL programming system are described.
In stereo vision, depth information is one of the important parameters for understanding the real world. One method for extracting such depth information is based on the geometry of stereo vision using two cameras displaced from each other by a baseline distance. In this paper, we present an improved triangulation method based on stereo vision angles. We set up a stereo vision system which extracts the distance to an object by detecting moving objects using a difference image and by obtaining depth information using the improved triangulation method. The system has been implemented on a TMS320C30 DSP board. In experiments, the proposed vision system achieved an accuracy of 0.2 mm at a range of 400 mm.
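For reference, the classical parallel-camera triangulation that the paper's angle-based method refines: depth is Z = fB/d, where f is the focal length in pixels, B the baseline, and d the disparity. The numbers in the sketch are illustrative, not from the paper:

```python
def stereo_depth(xl: float, xr: float, focal_px: float, baseline_mm: float) -> float:
    """Classical stereo triangulation: Z = f * B / d, with disparity
    d = xl - xr measured in pixels between the two image planes."""
    disparity = xl - xr
    return focal_px * baseline_mm / disparity

# Usage: with an assumed 1000-pixel focal length and 150 mm baseline,
# a 375-pixel disparity corresponds to a 400 mm range.
print(stereo_depth(xl=500.0, xr=125.0, focal_px=1000.0, baseline_mm=150.0))
```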
This paper presents an iterative fuzzy vector quantization approach to codebook design and its neural-network counterpart, the fuzzy self-organizing feature map (FSOFM) algorithm, which is a development of the self-organizing feature map (SOFM) algorithm and the fuzzy vector quantization (FVQ) algorithms. The FVQ algorithm allows each training vector to be assigned to multiple codewords in the early stages of codebook design. Although the FVQ algorithm reduces the dependence of the resulting codebook on the initial codebook, its codewords are calculated in batch mode. The iterative fuzzy vector quantization approach is based on gradient descent, with the concept of fuzziness introduced into it. The FSOFM algorithm considers the winning output node and its neighborhood as a fuzzy set of the input node. As a result, the feature vector of each output node in the fuzzy set of the input sample can be updated using the membership function and the training vector after each single iteration. In this paper, the LBG, FVQ, SOFM, and FSOFM algorithms are applied to image compression based on vector quantization, and the computing efficiency of these algorithms and the quality of the resulting codebooks are compared.
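A sketch of one FSOFM-style per-sample update, assuming a 1D output lattice and a Gaussian membership function over the winner's neighborhood (the paper's exact membership function and learning schedule may differ):

```python
import numpy as np

def fsofm_update(weights, x, winner, eta=0.1, radius=1.5):
    """Move each codeword toward the training vector x in proportion to
    its fuzzy membership in the winner's neighborhood."""
    idx = np.arange(len(weights))
    membership = np.exp(-((idx - winner) ** 2) / (2 * radius ** 2))
    weights += eta * membership[:, None] * (x - weights)
    return weights

# Usage: unlike batch-mode FVQ, the codebook changes after every training
# vector; repeated presentations drive codewords toward the data.
rng = np.random.default_rng(0)
W = rng.random((8, 2))
x = np.array([0.2, 0.8])
winner = int(np.argmin(np.sum((W - x) ** 2, axis=1)))
W = fsofm_update(W, x, winner)
```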
This paper presents the development of parallel processing subsystems as part of multi-wavelength imaging pyrometers (M-WIPs) for real-time non-contact temperature measurement. The parallel processing subsystems are based on the V-C40 Hydra or PC-Hydra, which consists of four TMS320C40 floating-point digital signal processors. This paper describes the single-program-multiple-data implementation of the least-squares fitting algorithm used in the M-WIP temperature measurement and compares its performance with the multi-program-multiple-data approach.