This PDF file contains the front matter associated with SPIE Proceedings Volume 9675, including the Title Page, Copyright information, Table of Contents, Invited Panel Discussion, and Conference Committee listing.
Elevation data obtained by the laser altimeter describe the real lunar topography. Digital elevation modeling and data preprocessing were carried out based on source data published by the National Astronomical Observatories of China. A Feature Points Dynamic Supply Method (FPDSM) is proposed: craters across the whole lunar surface are identified and their characteristic parameters calculated, with rim details resolved. The method reduces the impact of light noise in the image data and the difficulty of identifying the irregular boundaries of lunar craters. The number of craters in designated areas is counted and their statistical distribution derived. The FPDSM is verified in three ways: the relationship between crater density and geological age, NASA's diameter-frequency statistics, and published data for several well-known craters. The maximum measurement errors of crater center position and diameter are 15.63% and 4.39%, respectively. The angle between the major axis of the ellipse fitted to a crater rim and the local north direction is the same or similar for neighboring craters.
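The FPDSM itself is not reproduced in the abstract, but the rim-fitting step it relies on is standard. As a minimal sketch (assuming NumPy, and a circle rather than a full ellipse for brevity), a least-squares fit to noisy rim points recovers the crater center and diameter:

```python
import numpy as np

def fit_circle(x, y):
    """Algebraic (Kasa) least-squares circle fit to rim points.

    Solves a*x + b*y + c = -(x^2 + y^2) for a, b, c, then recovers
    the center (cx, cy) = (-a/2, -b/2) and the radius from c.
    """
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a_, b_, c_), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a_ / 2.0, -b_ / 2.0
    r = np.sqrt(cx**2 + cy**2 - c_)
    return cx, cy, r

# Synthetic crater rim: circle of radius 5 centred at (10, -3) plus noise
rng = np.random.default_rng(0)
t = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
x = 10.0 + 5.0 * np.cos(t) + rng.normal(0.0, 0.05, t.size)
y = -3.0 + 5.0 * np.sin(t) + rng.normal(0.0, 0.05, t.size)
cx, cy, r = fit_circle(x, y)
```

A full ellipse fit (to obtain the major-axis orientation used in the paper) follows the same least-squares pattern with the general conic equation.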
In this paper, we design a novel near-infrared imaging device for deep-sea mining observation systems. Its spectral sensitivity peaks lie just beyond the visible red, in the near-infrared range from 750 nm to 900 nm. In addition, we propose a novel underwater imaging model that compensates for the attenuation discrepancy along the propagation path and fully accounts for the effects of absorption, scattering, and refraction. We also develop locally adaptive Laplacian filtering to enhance the underwater transmission map after underwater dark channel prior estimation. Furthermore, we propose a spectral-characteristic-based color correction algorithm to recover the distorted colors. In water tank experiments, we created a linear scale of eight turbidity steps ranging from clean to heavily scattering by adding deep-sea soil to the seawater (from 500 to 2000 mg/L), and compared underwater scenes at different turbidities, illuminated alternately with near-infrared and white light. Experiments demonstrate that the enhanced NIR images have a reasonable noise level after illumination compensation in the dark regions and show improved global contrast, by which the finest details and edges are significantly enhanced. We also show that the effective distance of the designed imaging system is about 1.5 meters, which meets the requirement of micro-terrain observation around deep-sea mining systems. Experiments on a Remotely Operated Underwater Vehicle (ROV) also confirmed the effectiveness of the proposed method.
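The underwater dark channel prior step mentioned above follows the classic dark-channel construction: take the per-pixel minimum over color channels and a local patch, then estimate transmission as t = 1 - omega * dark(I/A). A minimal NumPy sketch, where the patch size, omega, and the airlight A are illustrative assumptions rather than values from the paper:

```python
import numpy as np

def dark_channel(img, patch=15):
    """Per-pixel minimum over colour channels and a local square patch."""
    h, w, _ = img.shape
    mins = img.min(axis=2)
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.empty_like(mins)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out

def estimate_transmission(img, airlight, omega=0.95, patch=15):
    """Dark-channel-prior transmission estimate t = 1 - omega * dark(I/A)."""
    return 1.0 - omega * dark_channel(img / airlight, patch)

# Toy scene: uniform haze over a uniform image gives a uniform transmission map
img = np.full((20, 20, 3), 0.4)
A = np.array([1.0, 1.0, 1.0])
t = estimate_transmission(img, A)
```

The paper's locally adaptive Laplacian filtering would then refine this raw transmission map before restoration.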
A multiple-frequency broadband planar monopole antenna is proposed in this paper. The antenna is simulated and numerically optimized with HFSS 13.0 (High Frequency Structure Simulator). Its size is 39 mm × 22 mm × 1.7 mm, and it resonates at multiple frequencies. The criterion S11 ≤ -6 dB shows that the proposed antenna matches well with its feed line and covers many useful operating bands, including 2G (DCS1800 and PCS1900), 3G (UMTS), 4G (LTE2300 and LTE2500), ISM, and WLAN. It is quite appropriate for present ultra-thin smartphones.
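The S11 ≤ -6 dB matching criterion comes from the reflection coefficient at the feed: S11 = 20·log10|Γ| with Γ = (Z_L - Z_0)/(Z_L + Z_0). A small sketch, assuming a 50 Ω reference and purely illustrative load impedances (not the antenna's):

```python
import math

def s11_db(z_load, z0=50.0):
    """Return |S11| in dB for a load impedance z_load against reference z0."""
    gamma = (z_load - z0) / (z_load + z0)   # reflection coefficient
    return 20.0 * math.log10(abs(gamma))

matched = s11_db(60.0)     # close to 50 ohm: well matched, far below -6 dB
mismatch = s11_db(150.0)   # strong mismatch: |Gamma| = 0.5, about -6.02 dB
```

In-band frequencies of the antenna are precisely those where the simulated S11 stays at or below the -6 dB line.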
LED-based digital holography, a form of low-coherence digital holography, is confined to the in-line geometry because interference fringes can be observed only when the angle between the object and reference waves is small enough. Phase-shifting techniques are therefore usually employed, but they are unsuitable for dynamic analysis because they require more than one hologram. A numerical reconstruction method based on compressive sensing theory is proposed for a single LED-based digital hologram to enable dynamic analysis. With this method, the out-of-focus twin image and the coherent noise can be suppressed to some extent. The theory is presented in detail, and an experiment on LED-based digital holography with a USAF resolution target demonstrates the feasibility and validity of the method.
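The compressive sensing reconstruction is not specified beyond the abstract, but the standard workhorse for such problems is l1-regularized least squares solved by iterative shrinkage-thresholding (ISTA). A toy NumPy sketch on a synthetic sparse signal; the sensing matrix, sparsity level, and regularization weight are illustrative assumptions, not the paper's holographic model:

```python
import numpy as np

def soft(x, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, y, lam=0.01, n_iter=1000):
    """ISTA for min_x ||Ax - y||^2 / 2 + lam * ||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft(x - A.T @ (A @ x - y) / L, lam / L)
    return x

# Toy compressive measurement: 40 samples of a 100-dim 3-sparse signal
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[5, 37, 80]] = [1.0, -1.5, 2.0]
y = A @ x_true
x_hat = ista(A, y)
```

In the holographic setting, A would model free-space propagation from object to sensor, and sparsity would be enforced in an appropriate transform domain.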
Gain-modulation imaging is one of the prominent schemes for scannerless lidar. By controlling the gate width, backscatter noise is easily suppressed and the image is made more accurate. The imaging range and accuracy of gain-modulation laser imaging are a current research focus. From the imaging principle, the signal energy and noise energy reaching the imager can be derived, and from them the signal-to-noise ratio. Previous theoretical models consider only the linear-gain condition and largely neglect the influence of the laser pulse width and other factors, so they deviate from the actual system. By simulating the nonlinear gain while accounting for the laser pulse width and Lambertian spherical radiation, a more accurate SNR model of gain-modulation laser imaging is obtained. The established SNR model can then be used to estimate the range over which good imaging is achieved, providing a theoretical basis for parameter selection and image processing in subsequent experimental systems.
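The qualitative shape of such an SNR model can be sketched from the laser range equation for an extended Lambertian target under shot-noise-limited detection. The parameter values below are purely illustrative; the paper's model additionally includes the nonlinear gain and pulse-width effects:

```python
import math

def lidar_snr(e_tx, r, rho=0.3, a_rx=1e-3, sigma=1e-4,
              e_bg=1e-12, e_dark=1e-13):
    """Shot-noise-limited SNR sketch for an extended Lambertian target.

    Received signal: E_s = E_t * rho * A_rx / (pi * R^2) * exp(-2*sigma*R)
    SNR           : E_s / sqrt(E_s + E_bg + E_dark)
    (quantities treated as photon-count equivalents for illustration)
    """
    e_s = e_tx * rho * a_rx / (math.pi * r**2) * math.exp(-2.0 * sigma * r)
    return e_s / math.sqrt(e_s + e_bg + e_dark)

snr_near = lidar_snr(1e-3, 100.0)    # SNR at 100 m
snr_far = lidar_snr(1e-3, 1000.0)    # SNR at 1 km: much lower
```

The R^-2 geometric term and the exp(-2*sigma*R) two-way atmospheric term together drive the steep SNR falloff with range that bounds the usable imaging distance.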
Onboard image processing is one of the key technologies for rapidly extracting intelligence from remote sensing satellites. As a typical target, onboard aircraft detection has been receiving growing attention. In this paper, we propose an efficient aircraft detection method for onboard processing on remote sensing satellites. According to how aircraft appear in remote sensing imagery, the detection algorithm consists of two steps. First, Salient Object Detection (SOD) is employed to reduce the amount of computation on large remote sensing images: SOD applies Gabor filtering and a simple binary test between pixels of the filtered image; white points are connected into regions, and aircraft candidate regions are screened from these regions by area, length, and width. Next, a new algorithm, the Circumferential Information Matching method, detects aircraft within the candidate regions. Tests show that the circumference curve around an aircraft's center has a stable shape, so candidate regions can be accurately verified with this feature. For rotation invariance, a circular matched filter is used to detect the target, and the discrete fast Fourier transform (DFFT) is used to accelerate and reduce the computation. Experiments show a detection accuracy of 90% with less than 0.5 s of processing time, and quantitative analysis shows that the computational cost of the proposed method is very small. Experimental results and theoretical analysis confirm that the proposed method is sound and highly efficient.
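The FFT-accelerated, rotation-invariant circular matched filter can be sketched as circular cross-correlation with a ring template: a ring is symmetric under rotation, so the correlation peak marks a candidate center regardless of aircraft heading. A NumPy sketch with illustrative template shape and sizes (not the paper's):

```python
import numpy as np

def ring_template(size, radius, width=1.5):
    """Circularly symmetric ring template (rotation invariant by construction)."""
    c = (size - 1) / 2.0
    yy, xx = np.mgrid[0:size, 0:size]
    d = np.hypot(yy - c, xx - c)
    return np.exp(-((d - radius) ** 2) / (2.0 * width**2))

def correlate_fft(image, template):
    """Circular cross-correlation via FFT; the peak marks the best placement."""
    th, tw = template.shape
    f_img = np.fft.fft2(image)
    f_tpl = np.fft.fft2(template, s=image.shape)
    corr = np.real(np.fft.ifft2(f_img * np.conj(f_tpl)))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # The peak gives the template's top-left corner; shift to its centre.
    return ((peak[0] + th // 2) % image.shape[0],
            (peak[1] + tw // 2) % image.shape[1])

# Hide a ring at (40, 25) in a noisy image and recover its position
rng = np.random.default_rng(2)
img = rng.normal(0.0, 0.1, (64, 64))
tpl = ring_template(21, 7.0)
img[40 - 10:40 + 11, 25 - 10:25 + 11] += tpl
row, col = correlate_fft(img, tpl)
```

Computing the correlation in the frequency domain replaces an O(N^2 M^2) spatial sweep with two FFTs and a product, which is what makes this feasible for onboard processing.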
Calculating the transmission of infrared radiation is a key step in studying the target-detection capability of an infrared detector system. Water vapor and carbon dioxide are the main causes of infrared attenuation in the atmosphere. Based on existing experimental data, the effect of H2O and CO2 on infrared transmission along a horizontal path under standard meteorological conditions is analyzed, and the transmittance in different bands is simulated by establishing a reasonable mathematical model and applying curve fitting. A high-accuracy algorithm is put forward, its accuracy is assessed, and it is compared with the traditional formula for infrared transmission. The proposed calculation achieves higher accuracy and is of great significance for infrared detector systems that require high-precision estimates of target-detection capability.
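A band-model transmittance of the form tau = exp(-k * w**beta) per absorber, with species combining multiplicatively in the Beer-Lambert sense, is one common way to fit such curves. The sketch below uses hypothetical coefficients purely for illustration; real values must come from the tabulated measurements for each spectral band:

```python
import math

# Hypothetical band-model coefficients (illustration only; real values are
# fitted from tabulated absorption measurements for each spectral band).
K_H2O, B_H2O = 0.1, 0.5
K_CO2, B_CO2 = 0.05, 0.4

def transmittance(w_h2o, w_co2):
    """Empirical band-model transmittance per species, combined
    multiplicatively: tau = exp(-k_i * w_i**beta_i) for each absorber."""
    t_h2o = math.exp(-K_H2O * w_h2o**B_H2O)
    t_co2 = math.exp(-K_CO2 * w_co2**B_CO2)
    return t_h2o * t_co2

tau_short = transmittance(1.0, 0.5)     # short path: little absorber mass
tau_long = transmittance(100.0, 50.0)   # long path: much more absorber mass
```

Curve fitting in the paper's sense amounts to choosing k and beta per band so that this model matches the measured transmittance over the range of absorber amounts.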
Shipborne optoelectronic imaging systems generally have several imaging channels, covering visible light (daylight and low-light), medium-wave infrared, and long-wave infrared; in systems with only two channels, the visible-light channel is retained. In this paper, to meet the requirement of real-time target detection and classification against a sea-sky background, image data from the different channels are processed independently using the Harris features of targets and the texture features of the background. The per-channel results are then fused and compared to remove false targets and background interference, reducing the false alarm rate and improving detection localization precision. Target types are determined from the positional relationship between targets and the background type (sky or sea). To satisfy the system's strict real-time requirement, multithreading and large-neighborhood processing are applied for parallel data processing, keeping the processing time below one frame period. Finally, experiments on a two-channel system confirm that the method improves the overall search performance of the optoelectronic imaging system.
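The Harris feature used for target detection scores each pixel from the structure tensor of local gradients: R = det(M) - k * trace(M)^2, which is large and positive at corners, negative on edges, and near zero on flat background. A NumPy sketch; the box window and k = 0.04 are conventional choices, not necessarily the paper's:

```python
import numpy as np

def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k * trace(M)^2 per pixel."""
    iy, ix = np.gradient(img.astype(float))
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy

    def box(a):
        # 3x3 box window for the structure tensor (a Gaussian is also common)
        p = np.pad(a, 1, mode='edge')
        return sum(p[i:i + a.shape[0], j:j + a.shape[1]]
                   for i in range(3) for j in range(3)) / 9.0

    sxx, syy, sxy = box(ixx), box(iyy), box(ixy)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace**2

# A bright square on dark background: corners respond, flat areas do not
img = np.zeros((32, 32))
img[8:24, 8:24] = 1.0
R = harris_response(img)
```

The sign structure of R is what lets the pipeline keep compact corner-like target responses while rejecting the long edge responses typical of the sea-sky horizon.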
A non-uniformity correction algorithm is proposed and implemented on a Field-Programmable Gate Array (FPGA) hardware platform to solve the pixel response non-uniformity (PRNU) problem of a multiple Time Delay and Integration Charge-Coupled Device (TDI CCD) camera. The sources of non-uniformity are introduced and a combined correction algorithm is presented, in which the two-point correction method is applied within a single channel, gain-averaging correction among the channels, and scene-adaptive correction among the multiple TDI CCDs; the correction coefficients are then generated. Finally, the FPGA's fixed-point processing capability is analyzed, and the correction algorithm is optimized and implemented on the FPGA. Test results indicate that non-uniformity in the images of a three-TDI-CCD camera can be decreased from 8.27% to 0.51% with the proposed algorithm, demonstrating high real-time performance, good engineering practicality, and satisfaction of the system requirements.
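The single-channel two-point correction mentioned above computes a per-pixel gain and offset from two uniform reference exposures, so that every pixel maps the references to common target values. A NumPy sketch with a simulated detector; the reference radiances and error magnitudes are illustrative assumptions:

```python
import numpy as np

def two_point_coeffs(low, high, target_low, target_high):
    """Per-pixel gain/offset from two uniform reference exposures."""
    gain = (target_high - target_low) / (high - low)
    offset = target_low - gain * low
    return gain, offset

def non_uniformity(frame):
    """RMS non-uniformity: standard deviation over mean."""
    return frame.std() / frame.mean()

# Simulate a detector with per-pixel gain and offset errors
rng = np.random.default_rng(3)
g_true = 1.0 + rng.normal(0.0, 0.05, (64, 64))
o_true = rng.normal(0.0, 2.0, (64, 64))

def sense(radiance):
    return g_true * radiance + o_true

low, high = sense(20.0), sense(200.0)            # two uniform references
gain, offset = two_point_coeffs(low, high, low.mean(), high.mean())
raw = sense(100.0)                               # uncorrected scene frame
fixed = gain * raw + offset                      # corrected frame
```

Because the simulated response is exactly linear, the corrected frame is uniform up to floating-point error; on real TDI CCD data, residual nonlinearity is what the paper's scene-adaptive stage addresses.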
We propose an image deconvolution algorithm for data contaminated by Poisson noise. The SURE-LET method, which minimizes Stein's unbiased risk estimate (SURE), was originally proposed to deal with Gaussian noise. Our key contribution is to demonstrate that the SURE-LET approach is also applicable to Poisson-noisy images, and to propose an efficient algorithm for this case.
The formulation of SURE requires knowledge of the Gaussian noise variance. We experimentally found a simple and direct link between the noise variance estimated by the median absolute deviation (MAD) method and the optimal value that leads to the best deconvolution performance in terms of mean squared error (MSE). Extensive experiments show that this optimal noise variance works satisfactorily for a wide range of natural images.
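The MAD estimator referred to above is commonly computed on a fine-scale detail signal, where image content is weak and the Gaussian MAD-to-sigma factor 0.6745 applies. A NumPy sketch, using simple horizontal half-differences in place of a full wavelet transform:

```python
import numpy as np

def mad_sigma(img):
    """Robust noise estimate: MAD of a finest-scale detail signal divided by
    0.6745 (the MAD-to-sigma conversion factor for a Gaussian)."""
    # Simplest Haar-like detail: half-differences of horizontal neighbours;
    # dividing by sqrt(2) keeps the noise standard deviation unchanged.
    d = (img[:, 1::2] - img[:, ::2]) / np.sqrt(2.0)
    return np.median(np.abs(d - np.median(d))) / 0.6745

rng = np.random.default_rng(4)
clean = np.outer(np.linspace(0, 1, 256), np.linspace(0, 1, 256))  # smooth ramp
noisy = clean + rng.normal(0.0, 0.1, clean.shape)
sigma_hat = mad_sigma(noisy)   # close to the true sigma of 0.1
```

The median makes the estimate robust: the few large detail coefficients produced by image structure barely move it, unlike a sample standard deviation.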
We propose a variant of the unbiased predictive risk estimate (UPRE) as a novel criterion for estimating a point spread function (PSF) from the degraded image alone. Compared with traditional unbiased estimates (e.g. UPRE and SURE), the key advantage of this variant is that it does not require knowledge of the noise variance. The PSF is obtained by minimizing this new objective functional over a family of smoothing filters. Based on the estimated PSF, we then perform deconvolution using our recently developed SURE-LET algorithm. The novel criterion is exemplified with a number of parametric PSFs. The experimental results demonstrate that the UPRE-variant minimization yields highly accurate estimates of the PSF parameters, which also result in a negligible loss of visual quality compared with that obtained with the exact PSF. These highly competitive results outline the great potential of developing more powerful blind deconvolution algorithms based on this criterion.
We propose a generalized version of Akaike's information criterion (AIC) as a novel criterion for estimating a point spread function (PSF) from the degraded image alone. We first show that the generalized AIC (G-AIC) is equivalent, up to a constant, to the quadratic prediction loss, and prove that with exact smoothing filtering, minimizing the prediction loss yields the exact PSF. The PSF is obtained by minimizing the G-AIC over a family of approximate smoothing filters. Based on the estimated blur kernel, we then perform non-blind deconvolution using our recently proposed SURE-LET algorithm. The proposed framework is exemplified with a number of parametric PSFs. The experimental results demonstrate that minimizing this criterion yields highly accurate estimates of the PSF parameters, which also result in a negligible loss of visual quality compared with that obtained with the exact PSF. These highly competitive results show the great potential of developing more powerful blind deconvolution algorithms based on this criterion.
In today's information age, people depend more and more on computers to obtain and use information, yet there is a large gap between the data volume of digitized multimedia and the storage capacity and network bandwidth that current hardware can provide; storing and transmitting large numbers of images is a typical example. Image compression is useful when images must be transmitted across networks at lower cost, reducing data volume and transmission time. This paper studies the effect of image compression on a face recognition system. For compression we adopt the JPEG, JPEG 2000, and JPEG XR coding standards; the face recognition algorithm studied is SIFT. Experimental results show that a high recognition rate is maintained even at high compression ratios, and that JPEG XR is superior to the other two standards in terms of performance and complexity.
Boosted decision trees are fast at test time, but training is too slow for applications that require real-time learning. To overcome this drawback, we propose a fast decision tree training method that prunes ineffective features in advance, and on this basis design a fast boosted decision tree training algorithm. First, we analyze the structure of each decision tree node and derive a bound on the classification error at each node. Then, by using this error bound to prune ineffective features at an early stage, we greatly accelerate decision tree training without affecting the training result at all. Finally, the accelerated training method is integrated into the general boosting process, forming a fast boosted decision tree training algorithm. This algorithm is not a new variant of boosting; on the contrary, it should be used in conjunction with existing boosting algorithms to achieve further training acceleration. To test its speedup alone and in combination with other acceleration algorithms, the original AdaBoost and two typical acceleration algorithms, LazyBoost and StochasticBoost, were each combined with our algorithm into three fast versions, and their classification performance was tested on the LSIS face database of 12,788 images. Experimental results show that the fast algorithm achieves more than a twofold training speedup without affecting the trained classifier, and that it can be combined with other acceleration algorithms. Keywords: boosting algorithm, decision trees, classifier training, preliminary classification error, face detection
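The paper's exact error bound is not given in the abstract. The NumPy sketch below illustrates the general idea with a simple, provably conservative bound for decision stumps: evaluate a coarse grid of thresholds, lower-bound the best achievable error by the coarse minimum minus the largest weight mass between adjacent grid points (moving a threshold within a gap can flip at most that much weight), and skip the feature whenever the bound cannot beat the best error found so far. Because the bound is valid, pruning leaves the selected stump unchanged:

```python
import numpy as np

def stump_error(x, y, w, thr):
    """Weighted error of the stump h = +1 if x >= thr else -1 (best polarity)."""
    err = w[np.where(x >= thr, 1, -1) != y].sum()
    return min(err, 1.0 - err)

def best_error_full(X, y, w):
    """Exhaustive stump search over every feature and sample threshold."""
    return min(stump_error(X[:, f], y, w, t)
               for f in range(X.shape[1]) for t in X[:, f])

def best_error_pruned(X, y, w, n_coarse=8):
    """Stump search with early feature pruning via a coarse lower bound."""
    best = np.inf
    for f in range(X.shape[1]):
        x = X[:, f]
        xs = np.sort(x)
        thrs = xs[np.linspace(0, len(xs) - 1, n_coarse, dtype=int)]
        coarse = min(stump_error(x, y, w, t) for t in thrs)
        # Within each gap the error changes by at most the weight mass there,
        # so coarse - max(gap mass) lower-bounds the feature's best error.
        gaps = [w[(x >= thrs[i]) & (x < thrs[i + 1])].sum()
                for i in range(len(thrs) - 1)]
        if coarse - max(gaps) >= best:
            continue                       # prune: this feature cannot win
        best = min(best, min(stump_error(x, y, w, t) for t in x))
    return best

rng = np.random.default_rng(5)
X = rng.normal(size=(200, 20))
y = np.where(X[:, 3] + 0.1 * rng.normal(size=200) > 0, 1, -1)
w = np.full(200, 1.0 / 200)
```

The same skip-if-bounded logic drops into any boosting round, which is why this style of acceleration composes with AdaBoost, LazyBoost, or StochasticBoost rather than replacing them.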
Because of the low resolution of infrared face images, local texture features are well suited to infrared face feature extraction. To extract rich facial texture features, infrared face recognition based on the local binary pattern (LBP) and the center-symmetric local derivative pattern (CS-LDP) is proposed. First, LBP is used to extract first-order texture from the original infrared face image; second, second-order features are extracted with CS-LDP; finally, an adaptive weighted fusion algorithm based on a separability discriminant criterion produces the final recognition features. Experimental results on our infrared face database demonstrate that separability-oriented fusion of LBP and CS-LDP contributes complementary discriminative ability, which improves infrared face recognition performance.
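The first-order LBP texture is computed by thresholding each pixel's eight neighbours against the centre and packing the comparison bits into a byte; histograms of these codes serve as the texture feature. A minimal NumPy sketch, where the neighbour ordering is a convention, not specified by the paper:

```python
import numpy as np

def lbp_code(img, i, j):
    """8-neighbour local binary pattern code of pixel (i, j)."""
    c = img[i, j]
    # Neighbours in clockwise order starting from the top-left
    nbrs = [img[i - 1, j - 1], img[i - 1, j], img[i - 1, j + 1],
            img[i, j + 1], img[i + 1, j + 1], img[i + 1, j],
            img[i + 1, j - 1], img[i, j - 1]]
    return sum((1 << k) for k, v in enumerate(nbrs) if v >= c)

def lbp_histogram(img, bins=256):
    """Normalized histogram of LBP codes over the image interior."""
    h = np.zeros(bins)
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            h[lbp_code(img, i, j)] += 1
    return h / h.sum()

flat = np.full((8, 8), 5.0)    # uniform patch: every neighbour ties the centre
hist = lbp_histogram(flat)     # so every interior code is 255
```

CS-LDP follows the same encode-then-histogram pattern, but compares derivatives of centre-symmetric neighbour pairs instead of raw intensities, which is what gives the second-order texture the paper fuses with LBP.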
A novel face recognition algorithm based on multiple maximum scatter difference discriminant dictionary learning is proposed. The dictionary used for sparse coding plays a key role in sparse representation classification. In this paper, a multiple maximum scatter difference discriminant criterion is used for dictionary learning: during learning, the criterion computes its discriminant vectors from both the range of the between-class scatter matrix and the null space of the within-class scatter matrix. The proposed algorithm is theoretically elegant and easy to compute. Extensive experiments on the AR database and the Extended Yale B database, in comparison with basic sparse representation and other classification methods, show that its performance is slightly better than that of the original sparse representation methods, at lower complexity.
Image blind deconvolution is a practically important inverse problem in modern imaging sciences, including consumer photography, astronomical imaging, medical imaging, and microscopy. Among recent blind deconvolution algorithms, total variation based methods are advantageous for large blur kernels, but their computational cost is heavy and they do not handle kernel estimation error properly. Moreover, estimating the blur kernel from the whole image is inaccurate, because regions with insufficient edge information harm the accuracy of the estimate. Here we propose a robust multi-frame blind deconvolution algorithm for this complicated imaging model and apply it to engineering practice. In the proposed method, a patch and kernel selection scheme chooses effective patches for kernel estimation instead of using the whole image; a total variation based algorithm then estimates the kernel; after estimation, a kernel refinement scheme refines the pre-estimated kernels across the frames; finally, a robust non-blind deconvolution method recovers the latent sharp image with the refined blur kernel. Experiments on both synthetic and real images confirm the efficiency and robustness of the algorithm and show that it not only converges rapidly but also effectively recovers a high-quality latent image from multiple blurry frames.
In real applications such as consumer digital imaging, it is very common to record weakly blurred but strongly noisy images. Recently, a state-of-the-art algorithm named geometric locally adaptive sharpening (GLAS) was proposed; by capturing local image structure, it effectively combines denoising and sharpening. However, two problems remain in practice. On one hand, two hard thresholds must be adjusted for each image to avoid over-sharpening artifacts. On the other hand, the smoothing parameter must be set precisely by hand, or it will seriously magnify the noise. These parameters have to be chosen in advance and entirely empirically, which is difficult in practical applications, so the method is neither easy to use nor sufficiently automatic. To improve restoration in this situation, an improved GLAS (IGLAS) algorithm that introduces the local phase coherence sharpening index (LPCSI) metric is proposed in this paper. With the LPCSI metric, the two hard thresholds can be fixed at constant values for all images; unlike in the original method, they no longer need to change from image to image. Based on the proposed IGLAS, an automatic version is also developed to remove the need for manual intervention. Simulated and real experiments show that the proposed algorithm not only outperforms the original method but is also very easy to apply.
This paper presents a real-time infrared image acquisition and display system based on a mercury cadmium telluride (HgCdTe) short-wave infrared staring camera. The camera, the core of the system, is built around an HgCdTe infrared photodetector with an FPGA+DSP hardware architecture, and offers functions such as manual focus, adjustable integration time, and a switchable attenuator. A Camera Link cable connects the camera to the PC, whose acquisition card uses PCI Express x4. Together, these components allow the system to acquire and display images in real time, rapidly and stably. In the experiments, an acquisition-and-display interface was used to control the infrared camera and adjust the acquired images in real time, while a playback interface was used to examine the stored images and perform post-processing. The results show that the system performs reliably and that its real-time acquisition, display, and storage functions are complete and accurate. The system is easy to realize and to apply in engineering projects.
Remote sensing is broadly recognized as a convenient and efficient way to map vegetation, particularly in high-altitude and inaccessible areas where in-situ observations are lacking. In this study, Landsat Thematic Mapper (TM) images and Chinese environmental mitigation satellite CCD (HJ-1 CCD) images, both at 30 m spatial resolution, were employed to identify and monitor vegetation types in the Qinghai Lake Watershed (QHLW) of western China. A decision classification tree (DCT) algorithm using multiple characteristics, including a seasonal TM/HJ-1 CCD time series combined with digital elevation model (DEM) data, and a supervised maximum likelihood classification (MLC) algorithm using a single-date TM image were applied to vegetation classification, and the accuracy of the two algorithms was assessed with field observations. The resulting vegetation maps show that the DCT using multi-season data and geomorphologic parameters was superior to the MLC using a single-date image, improving overall accuracy by 11.86% at the second class level and significantly reducing the "salt and pepper" noise. The DCT applied to the TM/HJ-1 CCD time series with geomorphologic parameters proved a valuable and reliable tool for monitoring vegetation at the first class level (5 vegetation classes) and the second class level (8 vegetation subclasses), and may provide a theoretical basis and general approach for automatically extracting vegetation types from remote sensing imagery over plateau areas.
Image phase congruency (PC) remains invariant to changes in brightness and contrast. A new remote sensing image registration method is proposed. PC maps are first extracted from the reference image and the input image. To improve registration efficiency, in the first step the PC maps are down-sampled with a Gaussian pyramid and coarse translation parameters are computed by phase correlation. In the second step, Harris corners are detected in the two images, a normalized cross-correlation (NCC) function computed on the PC maps finds the corresponding corner matches, and the parameters of an alignment transform model are then obtained. Experiments demonstrate that this coarse-to-fine method can be successfully applied to multi-source image registration.
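The coarse translation step via phase correlation finds the shift as the peak of the inverse FFT of the normalized cross-power spectrum. A NumPy sketch for integer shifts; the paper applies this to down-sampled phase congruency maps rather than raw intensities:

```python
import numpy as np

def phase_correlation(ref, mov):
    """Estimate the integer translation of `mov` relative to `ref` from the
    peak of the normalized cross-power spectrum."""
    f_ref, f_mov = np.fft.fft2(ref), np.fft.fft2(mov)
    cross = f_mov * np.conj(f_ref)
    cross /= np.abs(cross) + 1e-12          # keep phase, discard magnitude
    corr = np.real(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # Map peaks in the upper half of the spectrum back to negative shifts
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx

rng = np.random.default_rng(6)
ref = rng.normal(size=(64, 64))
mov = np.roll(ref, shift=(5, -3), axis=(0, 1))   # known translation
dy, dx = phase_correlation(ref, mov)
```

Discarding the spectrum magnitude is what makes the peak insensitive to brightness and contrast differences between multi-source images, matching the motivation for using PC maps.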
A frame of a simulated star map needs various types of background noise superimposed on it, among which residual refracted stray sunlight escaping the sun baffle is one of the most important noise sources. For real-time star map simulation, the preferred scheme is to generate sun stray light noise from a mathematical model rather than loading pre-generated noise frames. First, the formation mechanism of sun stray light noise escaping the star tracker baffle is introduced and its modeling method is given: the sun direction vector at the imaging time is converted to a unit vector in the star tracker body frame through a series of attitude transfer matrices and then projected onto the extended imaging plane via the optics model; the gray value of each pixel is assigned based on the distance between the sun projection point and that pixel. Then, based on sun-simulator imaging data acquired at different angles during the performance test of a particular baffle, the model coefficients are estimated by fitting. Finally, the simulated sun stray light noise is superimposed on the basic pure simulated star map, further improving the fidelity of the star maps output by the electronic star simulator used at a certain institution. Scenarios simulated by this method, which depict the star tracker boresight pointing close to the sun vector, are convenient tools for robustness testing of star image centroiding algorithms and for on-orbit real-time flight simulation involving starlight attitude determination.
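The "gray value as a function of distance to the sun projection point" idea can be sketched with a simple radial decay model. This is only an illustrative stand-in: the exponential form and the coefficients `a` and `k` are assumptions here, playing the role of the coefficients the authors fit from sun-simulator data.

```python
import numpy as np

def stray_light_frame(shape, sun_xy, a=180.0, k=0.02):
    """Toy stray-light noise frame: gray value decays with distance
    from the projected sun point (which may lie outside the frame).
    a (peak amplitude) and k (decay rate) are assumed values, not
    the fitted coefficients from the paper."""
    rows, cols = np.indices(shape)
    d = np.hypot(rows - sun_xy[0], cols - sun_xy[1])
    return a * np.exp(-k * d)

# Sun projection point just above the top edge of a 256x256 frame:
frame = stray_light_frame((256, 256), sun_xy=(-40.0, 128.0))
print(frame.max() <= 180.0, frame[0, 128] > frame[255, 128])  # True True
```

Such a frame would then be added to the clean simulated star map before centroiding tests.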
MODIS NDVI time-series data can indicate vegetation status in each season and have been widely used for land cover classification and for vegetation and land degradation monitoring. In the 30 m global land cover mapping project aiming at a high-quality product, bare land and vegetation were misclassified in GlobeLand30 data in regions surrounding deserts because the dates of some source images fall outside the growing season. In this paper, we propose a method to check the 2010 GlobeLand30 data in these areas. The maximum value of the MODIS NDVI time series is chosen to represent vegetation growing conditions. The vegetation fraction (VF) calculated from this maximum NDVI is then split into bare land and vegetation based on the definition that the VF of bare land is lower than 10%. The resulting dimidiate VF maps are used to check GlobeLand30 with the help of high-resolution images and other references. Finally, the errors found in the steps above are corrected using the VF maps and segmentation objects from 30 m resolution images. In total, 149 map sheets of GlobeLand30 were checked and 105 of them were modified. 13,409 samples in 10 map sheets were selected to assess the approach. The results show that the accuracy of GlobeLand30 after modification was higher than before.
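The vegetation-fraction split can be sketched with the standard dimidiate pixel model; the soil and full-vegetation NDVI endmembers below are assumed for illustration, not taken from the paper:

```python
import numpy as np

def vegetation_fraction(ndvi_max, ndvi_soil=0.05, ndvi_veg=0.85):
    """Dimidiate pixel model: fraction of each pixel covered by
    vegetation, computed from the seasonal-maximum NDVI. The
    endmember values ndvi_soil/ndvi_veg are illustrative."""
    vf = (ndvi_max - ndvi_soil) / (ndvi_veg - ndvi_soil)
    return np.clip(vf, 0.0, 1.0)

ndvi = np.array([0.02, 0.10, 0.60])      # three pixels' max NDVI
vf = vegetation_fraction(ndvi)
is_bareland = vf < 0.10                  # bare land definition: VF < 10%
print(is_bareland)                       # [ True  True False]
```

Using the seasonal maximum rather than a single date is what makes the check robust to source images acquired outside the growing season.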
In recent years, multi-spectral data have been widely used to prospect for the source rock or source region of porphyry copper deposits. In this paper, we present a system to extract alteration mineral information from multi-spectral data. The system is developed on the Visual C# 2010 and IDL 8.0 platforms and processes ASTER multi-spectral data supplied as a layer-stacked multiband file. Based on principal component analysis, the ASTER data are processed automatically by the system. The result comprises two components: the classified information of the argillic- and phyllic-altered mineral assemblage and of the propylitization mineral assemblage. Compared with traditional workflows, the system not only lowers the threshold for applying remote sensing technology but also improves work efficiency significantly.
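The principal component analysis step at the core of such a system can be sketched in a few lines on a band stack. This is a generic PCA sketch (eigen-decomposition of the inter-band covariance matrix) on random stand-in data, not the system's IDL implementation:

```python
import numpy as np

def principal_components(bands):
    """PCA on a (n_bands, rows, cols) stack: returns component
    images ordered by decreasing variance, the kind of transform
    used to isolate alteration-related band responses."""
    n, r, c = bands.shape
    X = bands.reshape(n, -1).astype(float)
    X -= X.mean(axis=1, keepdims=True)        # center each band
    cov = np.cov(X)                           # n x n band covariance
    eigval, eigvec = np.linalg.eigh(cov)
    order = np.argsort(eigval)[::-1]          # largest variance first
    pcs = eigvec[:, order].T @ X
    return pcs.reshape(n, r, c)

rng = np.random.default_rng(0)
stack = rng.random((4, 32, 32))               # stand-in for 4 ASTER bands
pcs = principal_components(stack)
print(pcs.shape)                              # (4, 32, 32)
```

In alteration mapping, the thresholded low-order components (whose eigenvector loadings match the target mineral's absorption bands) yield the classified assemblage maps.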
This paper investigates the key and difficult issues in stereo measurement, including camera calibration, feature extraction, stereo matching, and depth computation, and puts forward a novel matching method that combines seed region growing with SIFT feature matching. It first uses SIFT descriptors as the matching criterion for feature points, and then takes the matched feature points as seeds for region growing to obtain better depth information. Experiments using standard matching image pairs validate the efficiency of the proposed method, which is then applied to dimensional measurement of mechanical parts. The results show that the measurement error is less than 0.5 mm for medium-sized mechanical parts, which meets the demands of precision measurement.
CFAR (constant false alarm rate) processing is a key technology in infrared dim-small target detection systems. The traditional CFAR detection algorithm estimates a probability density distribution from the pixel intensities of each area of the whole image and calculates a target segmentation threshold for each area from the CFAR formula; this makes the probability statistics difficult to gather and leads to a large amount of computation and long delays. To solve these problems, a CFAR formula based on the distribution of target coordinates is presented. First, this paper improves the traditional CFAR formula, which is based on single-pixel grayscale distributions, by introducing the statistical distribution of detected targets. The false alarm rate can then be controlled more accurately according to the target distribution information, solving the problem of high false alarms caused by complex backgrounds in local areas, such as cloud reflections and ground clutter interference. At the same time, to reduce computation and improve real-time performance, the constant false alarm statistical area is divided adaptively according to the two-dimensional probability density distribution of target counts, unlike the usual fixed partitioning methods. Finally, the target segmentation threshold for the next frame is computed iteratively from the target distribution probability density over the image sequence, so that the false alarm rate is driven down to its upper limit.
The experimental results show that the proposed method significantly improves the running time and meets real-time requirements while maintaining target detection performance.
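For reference, the classical grayscale-based scheme that the paper improves on can be sketched as cell-averaging CFAR, where the threshold scale is chosen so that exponentially distributed clutter exceeds it with a fixed probability. This is the textbook CA-CFAR formula, shown only to illustrate the baseline idea, not the paper's coordinate-distribution formula:

```python
import numpy as np

def ca_cfar_threshold(background, pfa):
    """Cell-averaging CFAR for exponentially distributed clutter:
    threshold = alpha * mean(background cells), with alpha chosen
    so the per-cell false alarm probability equals pfa."""
    n = background.size
    alpha = n * (pfa ** (-1.0 / n) - 1.0)     # classical CA-CFAR scale
    return alpha * background.mean()

rng = np.random.default_rng(1)
clutter = rng.exponential(scale=2.0, size=32)  # local background cells
t = ca_cfar_threshold(clutter, pfa=1e-4)
print(t > clutter.mean())                      # True: threshold above mean
```

Estimating such a threshold per area over the whole image is what drives the computation cost that the proposed target-coordinate statistics avoid.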
A fast randomized Hough transform track initiation algorithm based on multi-scale clustering is proposed to overcome two problems of the two-dimensional, bearing-only track association used in traditional infrared search and track (IRST) systems: it cannot provide movement information for the initial target, and it cannot select the correlation threshold automatically. All targets are assumed to move with uniform rectilinear motion throughout the new algorithm. Concepts of random spatial sampling, a dynamic linked table in parameter space, and convergent mapping from image space to parameter space are developed on the basis of the fast randomized Hough transform. Because threshold-based peak detection suffers from peak clustering, accuracy can only be ensured when the parameter space has an obvious peak; a multi-scale idea is therefore added to the algorithm. First, a primary association selects several candidate tracks using a low threshold. The candidate tracks are then processed by multi-scale clustering, which determines the number of tracks and their parameters automatically by varying the scale parameter. The first three frames are processed by this algorithm to obtain the first three points of each track, from which two slightly different gate radii are computed; their mean is used as the global correlation threshold. Moreover, a new curvilinear equation correction model is applied to the track initiation algorithm to address the shape distortion that occurs when a three-dimensional space curve is mapped into the two-dimensional bearing-only space.
Using sideways flight, launch, and landing as example scenarios for modeling and simulation, the application of the proposed approach proves its effectiveness, accuracy, and the adaptivity of its correlation threshold selection.
Solving the obstacle avoidance problem with the K-means clustering method is discussed in this paper. First, MATLAB is used to read the digital elevation data of the lunar surface acquired a few kilometers above it, yielding the elevation matrix of all points. Then the points are classified by height into four gray levels, corresponding to different shades, using K-means clustering. Finally, a safe landing area (coordinates, radius, and velocity increment) is determined with a clockwise spiral search starting from the landing center.
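The height-clustering step can be sketched with a minimal one-dimensional K-means (the paper uses MATLAB; this is an equivalent Python illustration on synthetic elevations):

```python
import numpy as np

def kmeans_1d(values, k=4, iters=20, seed=0):
    """Minimal K-means on scalar elevations: groups DEM heights into
    k classes (here, four gray shades for the landing-site map)."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = values[labels == j].mean()
    return labels, centers

# Synthetic DEM heights drawn around four terrain levels:
rng = np.random.default_rng(2)
heights = np.concatenate([h + rng.normal(0.0, 1.0, 50) for h in (0, 30, 60, 90)])
labels, centers = kmeans_1d(heights)
print(labels.shape)  # (200,)
```

Mapping each label to a gray shade then produces the four-level terrain map over which the spiral search for a flat, safe landing area runs.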
Because the resonant frequency of existing photoelastic modulators (PEMs) is high (tens to hundreds of kHz), the interference signal frequency reaches hundreds of MHz or even several GHz. The signal frequency is so high that it cannot be detected effectively by a charge-coupled device (CCD). This paper reports a spectroscopic measurement method using two PEMs operating at different frequencies. The difference frequency of the dual-PEM system is two to three orders of magnitude lower than the modulation frequency of either PEM. Operating the PEMs at slightly different resonant frequencies f1 and f2, the dual-PEM system generates a difference-frequency modulation signal; the interference signal therefore contains low-frequency modulation components that carry the information of the incident light. These components consist of a series of harmonics whose fundamental frequency equals (f1-f2)/2 and can be detected by an ordinary CCD. Through a Fourier transform of the CCD signal, the amplitudes of the harmonics are obtained, and the incident light spectrum is then recovered by the corresponding matrix operations. Furthermore, combined with a CCD, the method enables imaging spectral measurement of objects. The peak retardation amplitudes of the two PEMs need not be exactly equal, and the modulation frequencies f1 and f2 need not be equal either, which reduces the difficulty of PEM fabrication. Moreover, the method gives the traditional PEM both imaging and spectroscopic measurement functions. The basic principle is introduced, the basic equations are derived, and the feasibility is verified through numerical simulation and experiment.
To test the working state of an adaptive optics system, it is necessary to design an online sweep-frequency circuit module that measures the frequency response of the system. A sweep-frequency signal generator based on direct digital frequency synthesis (DDS) is one of its core components. However, classic DDS technology has some drawbacks: phase truncation error, amplitude truncation error (caused by the finite word length of the memory), and high ROM occupancy. These are the optimization targets of this paper. This paper presents an FPGA-based DDS sweep-frequency signal generator suitable for adaptive optics. It has low ROM occupancy, and in this low-ROM case the noise generated by the phase and amplitude truncation errors of the DDS sweep-frequency signal generator is reduced by linear interpolation. The results show that, with a 100 MHz reference frequency, the frequency resolution can be as fine as 0.025 Hz. The optimized scheme occupies only 0.5 KB of ROM, a ROM compression ratio of 64:1, and achieves higher precision than the unoptimized scheme thanks to linear interpolation, meeting engineering needs. Compared with other schemes, this one improves signal accuracy while reducing phase truncation error, amplitude truncation error, and ROM occupancy, at the cost of only an additional multiplication and division circuit, making it a practical solution.
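The core DDS mechanism (phase accumulator, small sine ROM, and linear interpolation between ROM entries) can be sketched in software. The word sizes below are illustrative, not those of the paper's FPGA design; note that a 32-bit accumulator at 100 MHz gives a frequency resolution of 100 MHz / 2^32 ≈ 0.023 Hz, the same order as the 0.025 Hz quoted above.

```python
import numpy as np

# Illustrative DDS: 32-bit phase accumulator, 256-entry sine ROM,
# with linear interpolation recovering amplitude between ROM entries
# (reducing the truncation error a coarse table alone would cause).
ACC_BITS, ROM_BITS = 32, 8
rom = np.sin(2 * np.pi * np.arange(2 ** ROM_BITS) / 2 ** ROM_BITS)

def dds_samples(freq_hz, fclk_hz, n):
    step = round(freq_hz * 2 ** ACC_BITS / fclk_hz)   # frequency tuning word
    phase = (step * np.arange(n)) % 2 ** ACC_BITS     # phase accumulator
    idx_f = phase * 2 ** ROM_BITS / 2 ** ACC_BITS     # fractional ROM address
    i0 = idx_f.astype(int)
    frac = idx_f - i0
    i1 = (i0 + 1) % 2 ** ROM_BITS
    return (1 - frac) * rom[i0] + frac * rom[i1]      # linear interpolation

out = dds_samples(1_000.0, 100e6, 1000)
print(abs(out).max() <= 1.0)  # True
```

Sweeping the tuning word `step` over time turns this fixed-frequency generator into the sweep-frequency source used for the response test.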
To test the working status of adaptive optics systems, it is necessary to design a disturbance signal module. A disturbance signal module based on DDS (direct digital frequency synthesis) is used to generate a single-frequency disturbance signal for testing the working conditions of the deformable mirror and the adaptive optics system. However, DDS produces a periodic sampling sequence, which inevitably introduces periodic noise that creates spurious components around the single-frequency signal. This paper uses two methods to reduce these spurs. The first is ROM table compression: with the same ROM capacity, a compressed 256-point ROM table is effectively extended to a 1024-point table, and oversampling is introduced in this process to improve spectral purity. The second method is random phase jitter: an m-sequence generator serves as the random phase jitter unit at the DDS sampling output, adding a random number to the low bits of the phase accumulator. As a result, the output sample no longer deviates from the ideal one in a fixed pattern but is randomly advanced or delayed, breaking the periodicity. This turns the original uniform table lookup into a random non-uniform lookup, whitening the DDS output spectrum. It also improves the spectral purity of the DDS output, thereby reducing the spurious components of the single-frequency signal generated by DDS.
A color image super-resolution (SR) reconstruction method based on an improved projection onto convex sets (POCS) in YCbCr space is proposed. Compared with other methods, POCS is intuitive and generally simple to implement. However, the conventional POCS algorithm is sensitive to the accuracy of motion estimation and is not conducive to recovering image edges and details. To address these two problems, on the one hand we improve the LoG operator to detect edges along the directions 0°, 45°, 90°, and 135° in order to inhibit edge degradation; using this edge information, we propose a self-adaptive edge-directed interpolation to construct the reference image and a modified direction-adaptive PSF to reduce edge oscillation when revising it. On the other hand, instead of block matching, the speeded-up robust features (SURF) matching algorithm, which accurately extracts feature points invariant to affine transformation, rotation, scale, and illumination changes, is utilized to improve the robustness and real-time performance of motion estimation. The performance of the proposed approach has been tested on several images, and the results demonstrate that it is competitive with, or better than, traditional POCS in quality and efficiency.
Image restoration plays a crucial role in several important application domains. As algorithms become more complex and their computational requirements grow, there has been a significant rise in the need for accelerated implementations. In this paper, we focus on an efficient real-time image processing system for blind iterative deconvolution using the Richardson-Lucy (R-L) algorithm. We study the characteristics of the algorithm and present an image restoration processing system based on a coordinated framework of FPGA and DSP (CoFD). Single-precision floating-point processing units with small-scale cascading and dedicated FFT/IFFT processing modules are adopted to guarantee processing accuracy. Finally, comparative experiments are performed: the system can process a blurred image of 128×128 pixels within 32 milliseconds, three to four times faster than traditional multi-DSP systems.
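The R-L iteration that the CoFD hardware accelerates is compact enough to sketch with FFT-based circular convolution (a minimal non-blind version with a known PSF, for illustration; the paper's system handles the blind case):

```python
import numpy as np

def richardson_lucy(blurred, psf, iters=30):
    """FFT-based Richardson-Lucy update:
    est <- est * correlate(psf, blurred / convolve(psf, est))."""
    psf_f = np.fft.fft2(psf, s=blurred.shape)
    est = np.full_like(blurred, 0.5)                  # flat initial guess
    for _ in range(iters):
        conv = np.fft.ifft2(np.fft.fft2(est) * psf_f).real
        ratio = blurred / np.maximum(conv, 1e-12)
        est *= np.fft.ifft2(np.fft.fft2(ratio) * np.conj(psf_f)).real
    return est

# Blur a simple scene with a normalized 5x5 box PSF, then restore it.
truth = np.zeros((64, 64)); truth[28:36, 28:36] = 1.0
psf = np.zeros((64, 64)); psf[:5, :5] = 1.0 / 25.0
blurred = np.maximum(np.fft.ifft2(np.fft.fft2(truth) * np.fft.fft2(psf)).real, 0.0)
restored = richardson_lucy(blurred, psf)
err_before = np.abs(blurred - truth).mean()
err_after = np.abs(restored - truth).mean()
print(err_after < err_before)  # True: the estimate sharpens toward the truth
```

Each iteration is dominated by two forward and two inverse FFTs, which is exactly why dedicated FFT/IFFT modules and FPGA/DSP partitioning pay off.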
Mean shift is a traditional moving target tracking algorithm with some deficiencies: the tracking window must be initialized manually in the first frame; the window size cannot adapt to the moving object during tracking; and if the target is occluded, it may be lost from the tracking window. To solve these problems, a method combining a Kalman filter with scale and orientation adaptive mean shift tracking (SOAMST) is proposed. First, a Kalman filter is used to locate the moving target at the beginning. Then the ratio of the first-order moment to the zeroth-order moment is used to estimate the target center, and the second-order central moment is used to estimate its size and orientation. Meanwhile, whether the target is occluded is determined by the Bhattacharyya coefficient between the target model and a candidate model: the closer the coefficient is to 1, the more similar the candidate is to the target and the more reliable the estimate; conversely, as the coefficient approaches 0 the target is being lost to total occlusion. If the target is only partially occluded or not occluded, SOAMST is used directly to track it; if it is totally occluded, the Kalman filter estimates the target location in the next frame before SOAMST is applied. Experiments show that the proposed algorithm can track a moving target automatically from the initial frame without prior knowledge, and can accurately track a completely occluded target through Kalman-filter-based location estimation.
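The occlusion test reduces to a one-line similarity measure between normalized histograms. A minimal sketch (the four-bin histograms are toy data, not from the paper):

```python
import numpy as np

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized histograms:
    1 for identical models, near 0 for disjoint ones."""
    return float(np.sum(np.sqrt(p * q)))

target   = np.array([0.5, 0.3, 0.2, 0.0])   # target color model
similar  = np.array([0.45, 0.35, 0.2, 0.0]) # unoccluded candidate
occluded = np.array([0.0, 0.0, 0.1, 0.9])   # occluder's colors dominate
print(round(bhattacharyya(target, target), 3))                           # 1.0
print(bhattacharyya(target, similar) > bhattacharyya(target, occluded))  # True
```

Thresholding this coefficient is what switches the tracker between direct SOAMST updates and Kalman-only prediction during total occlusion.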
To enhance the observability of multispectral images and improve the accuracy of camouflage effect evaluation based on multispectral photography, a new method for multispectral image processing is put forward. In the visible band, several spectral images are chosen and fused by wavelet transformation, enhancing the image information. The visible and near-infrared band images are then fused and assigned to the red, green, and blue channels to synthesize a true-color image. While the detail of the visible light image is enhanced, the near-infrared information, which is of particular interest in camouflage evaluation, is preserved. Finally, the fused image is processed by histogram stretching and a correlation method, enhancing the color and luminance differences between image regions, which benefits target recognition and camouflage effect evaluation. Experimental results prove that the method is effective.
With the development of information technology, the digitization of scientific and engineering drawings has received more and more attention. In hydrology, meteorology, medicine, and the mining industry, grid drawing sheets are commonly used to record observations from sensors. However, these paper drawings may be damaged and contaminated through improper preservation or overuse. Furthermore, manually transcribing the data into a computer is laborious and error-prone. Digitizing these drawings and establishing the corresponding database therefore ensures the integrity of the data and provides invaluable information for further research. This paper presents an automatic system for digitizing hydrological record images, which consists of three key techniques: image segmentation, intersection point localization, and distortion rectification.
First, a novel approach to binarizing the curves and grids in water level sheet images is proposed, based on the adaptive fusion of gradient and color information. Second, a fast search strategy for cross-point localization is devised that avoids point-by-point processing with the help of grid distribution information. Finally, we put forward a local rectification method that analyzes the central portions of the image and utilizes domain knowledge of hydrology. The processing speed is accelerated while the accuracy remains satisfactory. Experiments on several real water level records show that the proposed techniques are effective and capable of recovering the hydrological observations accurately.
To realize fast target detection in complex image scenes, a novel method is proposed based on a supervised saliency map and efficient subwindow search. Supervised saliency map generation mainly includes three steps: (1) the original image is segmented with different parameters to obtain multiple segmentation results; (2) regional features are mapped to saliency values by a random forest regressor; (3) the saliency map is obtained by fusing the multi-level segmentation results. The efficient subwindow search is implemented by recasting salient target detection as maximum saliency density and using a branch-and-bound algorithm to localize the maximum saliency density globally. The experimental results show that the new method can not only detect the salient region but also, to some extent, recognize it.
With the development of remote sensing (RS) and related technologies, the resolution of RS images keeps increasing. Compared with moderate- or low-resolution images, high-resolution ones provide more detailed ground information. However, terrain has a complex spatial distribution, and the different objects in high-resolution images exhibit a variety of features whose effectiveness varies, though some are complementary. Considering these characteristics, a new method is proposed to classify RS images based on hierarchical fusion of multiple features. First, RS images are pre-classified into two categories according to whether their feature points are uniformly or non-uniformly distributed. Then, the color histogram and Gabor texture features are extracted from the uniformly distributed category, and the linear spatial pyramid matching using sparse coding (ScSPM) feature is extracted from the non-uniformly distributed category. Finally, classification is performed by two support vector machine classifiers. Experimental results on a large RS image database of 2,100 images show that the overall classification accuracy is boosted by 10.1% compared with the best single-feature classification method. Compared with other multi-feature fusion methods, the proposed method achieves the highest classification accuracy on this dataset, reaching 90.1%, while the time complexity of the algorithm is also greatly reduced.
Obtaining basic information about high-orbit satellites, such as geometry and material characteristics, is an important goal in the field of space situational awareness. In this paper, we calculate the satellite's magnitude by comparing the CCD output values of known fixed stars and the satellite. We select certain reference stars and calculate the luminance of the object on the acquired image using a background-removal method. We perform a time-domain analysis of the measurement data and obtain statistical results. With knowledge of the target's theoretical brightness, we estimate its geometric characteristics. We acquired a series of images of a particular satellite with a large telescope. The experimental results demonstrate that the accuracy of the measured magnitude is better than 0.12 Mv, and the estimation error of the target's reflecting surface size is less than 15%.
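The comparison against reference stars is differential photometry via Pogson's relation; a minimal sketch (the count values are made-up inputs standing in for background-subtracted CCD sums):

```python
import math

def apparent_magnitude(target_counts, ref_counts, ref_magnitude):
    """Differential photometry: the target's magnitude from the ratio
    of its background-subtracted CCD counts to a reference star's,
    via Pogson's relation m = m_ref - 2.5 * log10(F / F_ref)."""
    return ref_magnitude - 2.5 * math.log10(target_counts / ref_counts)

# A target producing 1/100 the counts of a 5th-magnitude reference star:
m = apparent_magnitude(120.0, 12000.0, 5.0)
print(round(m, 6))  # 10.0
```

Averaging this estimate over many frames and reference stars is what drives the quoted 0.12 Mv measurement accuracy, and comparing the measured brightness with the theoretical value yields the reflecting-surface size estimate.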
Blurred face recognition remains a challenging task, yet has wide applications; image blur can largely degrade recognition performance. Local phase quantization (LPQ) was proposed to extract blur-invariant texture information, and it achieved good performance in blurred face recognition. However, LPQ captures only phase-based blur-invariant texture information, which is not sufficient, and it is extracted holistically, so it cannot fully exploit local spatial properties. In this paper, we propose a novel method for blurred face recognition in which texture and structure blur-invariant features are extracted and fused to give a more complete description of the blurred image. For the texture blur-invariant feature, LPQ is extracted in a densely sampled way and the vector of locally aggregated descriptors (VLAD) is employed to enhance its performance. For the structure blur-invariant feature, the histogram of oriented gradients (HOG) is used. To further enhance its blur invariance, we improve HOG by eliminating weak gradient magnitudes, which are more sensitive to image blur than strong ones, and fuse the improved HOG with the original HOG by canonical correlation analysis (CCA). Finally, we fuse the texture and structure features by CCA to form the blur-invariant representation of the face image. Experiments on three face datasets demonstrate that both our improvements and the overall method perform well in blurred face recognition.
Inclement weather such as clouds, fog and rain sharply reduces the light intensity on illuminated objects, making captured scenes unclear, with poor visual quality and low contrast. To improve the overall quality of such images, especially badly illuminated ones, this paper proposes a new color image enhancement algorithm based on multi-scale Retinex with color restoration (MSRCR) and the human visual system (HVS). It effectively solves the color-balance problem of digital images by removing the influence of illumination to obtain component images reflecting the reflectance of the object surface, while reducing the impact of non-artificial factors and suppressing ringing effects and human interference. Experimental comparisons against traditional enhancement methods such as histogram equalization and adaptive histogram equalization, evaluated with parameters including variance, average gradient and sharpness, show that the MSRCR algorithm is effective in terms of image contrast, detail enhancement and color fidelity.
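The Retinex core of MSRCR estimates illumination with Gaussian blurs at several scales and keeps the log-domain residual as reflectance. A numpy-only sketch (the color restoration factor is omitted, borders are zero-padded, and the scales are illustrative):

```python
import numpy as np

def _gaussian_blur(img, sigma):
    """Separable Gaussian blur (zero-padded borders; adequate for a sketch)."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1, dtype=float)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    rows = np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 1, img)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode='same'), 0, rows)

def multi_scale_retinex(img, sigmas=(1, 2, 4)):
    """Average of single-scale Retinex outputs:
    log(image) - log(estimated illumination), one term per scale."""
    img = np.asarray(img, dtype=float) + 1.0          # avoid log(0)
    return sum(np.log(img) - np.log(_gaussian_blur(img, s) + 1e-6)
               for s in sigmas) / len(sigmas)
```

On a uniformly lit region the illumination estimate equals the signal, so the reflectance output is near zero, which is the expected Retinex behaviour.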
Centroid computation of a Gaussian spot is often conducted to obtain the exact position of a target or to measure wave-front slopes in the fields of target tracking and wave-front sensing. Center of gravity (CoG) is the most traditional method of centroid computation, known for its low algorithmic complexity. However, both electronic noise from the detector and photon noise from the environment reduce its accuracy. To improve the accuracy, thresholding before centroid computation is unavoidable, and an optimum threshold needs to be selected. In this paper, a model of the Gaussian spot is established to analyze the performance of the optimum threshold under different signal-to-noise ratio (SNR) conditions. Two optimum threshold selection methods are introduced: TmCoG, which uses m% of the maximum spot intensity as the threshold, and TkCoG, which uses μn + kσn, where μn and σn are the mean and standard deviation of the background noise. First, their impact on the detection error under various SNR conditions is simulated to determine how to choose the value of m or k. Then the two methods are compared. According to the simulation results, TmCoG is superior to TkCoG in the accuracy of the selected threshold and also yields a lower detection error.
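Both thresholding rules feed the same weighted-centroid formula. A sketch covering TmCoG and TkCoG (one common variant, used here, subtracts the threshold from the intensities before weighting; the paper may use a hard cutoff instead):

```python
import numpy as np

def thresholded_cog(spot, m=None, k=None, mu_n=0.0, sigma_n=0.0):
    """Center of gravity after thresholding.
    TmCoG: threshold = m% of the peak intensity (pass m).
    TkCoG: threshold = mu_n + k * sigma_n from background noise stats (pass k)."""
    spot = np.asarray(spot, dtype=float)
    thr = (m / 100.0) * spot.max() if m is not None else mu_n + k * sigma_n
    w = np.clip(spot - thr, 0.0, None)      # pixels below threshold contribute 0
    ys, xs = np.indices(spot.shape)
    return (w * xs).sum() / w.sum(), (w * ys).sum() / w.sum()
```

For a symmetric noise-free Gaussian spot, any threshold choice returns the true center, so the methods differ only in how they behave under noise.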
The color filter array (CFA) is one of the key elements enabling single-sensor digital cameras to produce color images, and the Bayer CFA is the most commonly used pattern. In this array structure, the sampling frequency of green is twice that of red or blue, consistent with the sensitivity of human eyes to colors. However, each sensor pixel samples only one of the three primary color values. To render a full-color image, an interpolation process, commonly referred to as CFA demosaicing, is required to estimate the two missing color values at each pixel. In this paper, we explore an adaptive progressive interpolation algorithm based on edge type. The proposed demosaicing method consists of two successive steps: an interpolation step that estimates missing color values according to the local edge type, and a post-processing step based on iterative interpolation.
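The edge-type test at the heart of such methods compares gradients in the two interpolation directions and averages along, not across, the edge. A sketch for the green channel at a red/blue Bayer site (interior pixels only; the paper's full edge classification is richer than this two-way test):

```python
def green_at_nongreen(cfa, y, x):
    """Edge-directed estimate of the missing green value at a red/blue Bayer
    site: interpolate along the direction with the smaller gradient."""
    gh = abs(float(cfa[y][x - 1]) - float(cfa[y][x + 1]))   # horizontal gradient
    gv = abs(float(cfa[y - 1][x]) - float(cfa[y + 1][x]))   # vertical gradient
    if gh < gv:                                # horizontal edge: use row neighbors
        return (cfa[y][x - 1] + cfa[y][x + 1]) / 2.0
    if gv < gh:                                # vertical edge: use column neighbors
        return (cfa[y - 1][x] + cfa[y + 1][x]) / 2.0
    return (cfa[y][x - 1] + cfa[y][x + 1] + cfa[y - 1][x] + cfa[y + 1][x]) / 4.0
```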
Images are blurred by relative motion between the camera and the object of interest. In this paper, we analyze the motion-blurring process and demonstrate a restoration method based on the Lucy-Richardson algorithm. The blur extent and angle are estimated by the Radon transform and the auto-correlation function, respectively, from which the point spread function (PSF) of the motion-blurred image is obtained. With the estimated PSF, the Lucy-Richardson restoration algorithm is applied to motion-blurred images with different blur extents, spatial resolutions and signal-to-noise ratios (SNRs), and its effectiveness is evaluated by the structural similarity index (SSIM). The studies show that, first, for an image with a spatial frequency of 0.2 per pixel, the modulation transfer function (MTF) of the restored images remains above 0.7 when the blur extent is no larger than 13 pixels; that is, the method compensates low-frequency information of the image while attenuating high-frequency information. Second, we found the method to be more effective when the product of the blur extent and the spatial frequency is smaller than 3.75. Finally, by calculating the MTF of the restored image, the Lucy-Richardson algorithm is found to be insensitive to Gaussian noise with variance no larger than 0.1.
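The pipeline of building a linear-motion PSF and iterating the Lucy-Richardson update can be sketched as follows (the blur angle is fixed at zero for brevity; a rotated line would use the angle estimated by the Radon transform, and the initial estimate and iteration count are our own choices):

```python
import numpy as np

def motion_psf(extent, size):
    """Horizontal linear-motion PSF with the given blur extent in pixels."""
    psf = np.zeros((size, size))
    c = size // 2
    psf[c, c - extent // 2 : c - extent // 2 + extent] = 1.0 / extent
    return psf

def _cconv(img, psf):
    """Circular convolution with the PSF recentred at the origin (no shift)."""
    pad = np.zeros_like(img, dtype=float)
    pad[:psf.shape[0], :psf.shape[1]] = psf
    pad = np.roll(pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)), axis=(0, 1))
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(pad)))

def lucy_richardson(blurred, psf, iters=20):
    """Classic multiplicative Lucy-Richardson update:
    est <- est * ( (blurred / (est * psf)) * psf_mirror )."""
    blurred = np.asarray(blurred, dtype=float)
    est = np.full(blurred.shape, 0.5)
    psf_mirror = psf[::-1, ::-1]
    for _ in range(iters):
        ratio = blurred / (_cconv(est, psf) + 1e-12)
        est = est * _cconv(ratio, psf_mirror)
    return est
```

With a delta PSF (blur extent 1) the update recovers the input in a single iteration, which is a useful sanity check on the implementation.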
The Gaussian mixture model (GMM) widely used in vehicle detection is inefficient at detecting the foreground during the modeling phase, because it needs a long time to blend shadows into the background. To overcome this problem, an improved method is proposed in this paper. First, each frame is divided into several areas (A, B, C and D), determined by the frequency and scale of vehicle access. For each area, a different learning rate for the weight, mean and variance is applied to accelerate the elimination of shadows. At the same time, the number of Gaussian distributions is adapted per area to decrease the total number of distributions and save memory effectively; different threshold values and numbers of Gaussian distributions are thus adopted for different areas. The results show that both the learning speed and the accuracy of the model built with the proposed algorithm surpass those of the traditional GMM. By about the 50th frame, interference from vehicles has essentially been eliminated, the number of model components is only 35% to 43% of the standard GMM, and the per-frame processing speed is approximately 20% higher. The proposed algorithm performs well in shadow elimination and processing speed for vehicle detection, can promote the development of intelligent transportation, and is instructive for other background modeling methods.
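The per-area learning rates plug into the standard per-component GMM update. A sketch of one component's update (this is the textbook rule with the common simplification rho ≈ alpha; the paper's exact area-specific rates are not given in the abstract):

```python
def update_component(mean, var, weight, pixel, matched, alpha):
    """Online update of one GMM component with an area-specific learning
    rate alpha; a higher alpha in busy areas absorbs shadows faster."""
    m = 1.0 if matched else 0.0
    weight = weight + alpha * (m - weight)
    if matched:
        rho = alpha            # simplification of rho = alpha * N(pixel|mean,var)
        d = pixel - mean
        mean = mean + rho * d
        var = var + rho * (d * d - var)
    return mean, var, weight
```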
Image segmentation is an important application in computer vision. Image segmentation of infrared images has not gained as much attention as that of visible-light images, but it is very useful. For example, searching for and tracking targets with an infrared search and track (IRST) system has been widely adopted owing to its passive mode of operation, so it can serve as a supplement to radar. Infrared image segmentation can help computers identify the background of an image and automatically adjust the relevant parameters for subsequent tasks such as target recognition or target detection.
We propose a new image segmentation method for infrared images using the histogram of oriented gradients (HOG) feature and the kernel extreme learning machine (kernel ELM). HOG is a feature descriptor used in computer vision and image processing for object detection. In this paper, we extract the HOG feature of the infrared image and use it as the basis for classification. The kernel extreme learning machine, which has shown many excellent characteristics in classification, is then used to perform the segmentation. Tests of the proposed algorithm demonstrate that it is effective and feasible.
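Kernel ELM has a closed-form training solution, which is much of its appeal as a classifier. A sketch with an RBF kernel (the kernel choice and hyperparameters are our assumptions; the features fed in would be the per-pixel or per-patch HOG descriptors):

```python
import numpy as np

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian RBF kernel matrix between two sets of feature vectors."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def kernel_elm_train(X, T, C=100.0, gamma=1.0):
    """Closed-form kernel ELM solution: solve (K + I/C) beta = T,
    where T holds one-hot class targets and C is the regularization."""
    X, T = np.asarray(X, float), np.asarray(T, float)
    K = rbf_kernel(X, X, gamma)
    return np.linalg.solve(K + np.eye(len(X)) / C, T)

def kernel_elm_predict(Xq, X, beta, gamma=1.0):
    """Class scores for queries Xq; segment label = argmax over columns."""
    return rbf_kernel(Xq, X, gamma) @ beta
```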
Infrared small target detection is a crucial and still difficult issue in aeronautic and astronautic applications. Sparse representation is an important mathematical tool that has been used extensively in image processing in recent years. In this paper, joint sparse representation is applied to dual-band infrared dim target detection. First, according to the characteristics of dim targets in dual-band infrared images, a two-dimensional Gaussian intensity model is used to construct the target dictionary, which is then classified into sub-classes according to the position of the Gaussian center point within the image block. Exploiting the fact that dual-band small target detection can use the same dictionary, and that the sparsity lies not at the atom level but at the sub-class level, the detection of targets in dual-band infrared images is converted into a joint dynamic sparse representation problem, with dynamic active sets describing the sparsity constraint on the coefficients. Two modified sparsity concentration index (SCI) criteria are proposed to evaluate whether targets exist in the images. Experiments show that the proposed algorithm achieves better detection performance and that dual-band detection is much more robust to noise than single-band detection. Moreover, the proposed method can be extended to multi-spectral small target detection.
Faced with complex backgrounds on an aerial moving platform, infrared point target detection applies a background suppression policy to greatly improve detection efficiency. However, relative motion of the background presents challenges for target detection. From remote observation in the air, background movement can be approximately regarded as planar rigid motion, the sum of a translation and a rotation. Existing algorithms that compare adjacent frames of infrared imagery perform well for translational motion but poorly for rotational motion. We propose a rigid motion estimation algorithm based on an infrared background feature point set (IRMBE). First, by analyzing the statistical motion characteristics of the feature point set of the infrared background image, the algorithm estimates the translational motion vector. Second, it uses a Monte Carlo method on the background feature point set to estimate the rotation-axis vector and the angular velocity. Experiments show that the algorithm provides a good estimate of complex rigid background motion in space-based infrared observation applications.
A temporal method for detecting infrared small targets using recursive sparse recovery is presented. The method provides three advantages: detection of multiscale targets, removal of false alarms caused by complex clutter, and lower computational complexity. First, an initial estimate of the background subspace is trained. Second, the target and the background are recovered by low-rank and sparse decomposition based on the trained background subspace. Third, the background subspace is updated using the background estimate. Finally, the small target is easily extracted by threshold segmentation. Experimental results indicate that the proposed method is robust and efficient against complex backgrounds.
A multi-spectral integration target consists of a collimator, lighting, and a target plate with a cross-line and a four-bar target. The high-brightness image of the four-bar target and stray light seriously affect the automatic extraction of the cross-line center when performing on-line optical axis parallelism tests with the multi-spectral target. To solve this problem, a fast and accurate extraction algorithm for the cross-line center based on two-window scanning is proposed. First, thresholding removes the low-brightness stray light. Then, the two-window scanning method removes the high-brightness image of the four-bar target and locates the cross-line area. Finally, in the corresponding area of the gray-scale map of the original image, the energy accumulation method finds the energy extremum; its x and y coordinates give the position of the cross-line center. The results show that the algorithm is fast and accurate, and can extract the cross-line center under the influence of the high-brightness four-bar target and stray light.
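The thresholding and energy-accumulation steps can be sketched as follows (the intermediate two-window scanning step, which removes the four-bar target, is omitted; here the projection peaks alone locate the cross-line center):

```python
def cross_center(img, thr):
    """Threshold away dim stray light, then locate the cross-line center by
    energy accumulation: the column and row projection peaks give x and y."""
    h, w = len(img), len(img[0])
    masked = [[v if v >= thr else 0.0 for v in row] for row in img]
    col_energy = [sum(masked[y][x] for y in range(h)) for x in range(w)]
    row_energy = [sum(masked[y][x] for x in range(w)) for y in range(h)]
    return col_energy.index(max(col_energy)), row_energy.index(max(row_energy))
```

The vertical bar of the cross dominates one column's energy and the horizontal bar dominates one row's, so the two peaks intersect at the center.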
Warship detection against a smoke-screen interference background belongs to the field of object extraction from images with low contrast and low signal-to-noise ratio. Aimed at this complex background, a novel warship detection algorithm based on regions of interest with CMAC prediction is proposed. The regions of interest (ROI) must be predicted during target tracking in IR imagery to increase the capture probability, and the CMAC estimator effectively resolves the conflict between operation count and prediction precision. The local fractal dimension is used to distinguish the warship within the ROI. The experimental results show that CMAC accurately estimates the ROI, with comparable performance in low-noise environments and the superiority of the fractal operators evident in high noise; the algorithm is effective against smoke-screen interference and easy to implement on parallel processing hardware.
The development of infrared imaging guidance technology has promoted research on infrared imaging simulation, the key to which is the generation of IR imagery, which is valuable both militarily and economically. To address the credibility and economy of infrared scene generation, a method based on measured images is proposed. Drawing on research into the optical properties of ship targets and sea backgrounds, ship-target images in various attitudes are extracted from recorded imagery using digital image processing. The ship-target image is scaled to simulate the relative motion between the viewpoint and the target according to the field of view and the target-to-sensor distance. The gray scale of the ship-target image is adjusted to simulate the change in target radiation according to the viewpoint-to-target distance and the atmospheric transmission. Frames of recorded infrared images without targets are interpolated to simulate the high frame rate of a missile. The processed ship-target images and sea-background infrared images are synthesized into infrared scenes for different viewpoints. Experiments show that this method is flexible and applicable, and that the fidelity and reliability of the synthesized infrared images can be guaranteed.
To solve the difficult problem that there is no effective way to measure the abnormal holes located on the blade erection loop of an aero-engine case, an image measurement system based on a high-precision air-bearing turntable is established in this paper. The limitation that monocular vision cannot measure curved surfaces is overcome by using the high-precision turntable to ensure high positioning accuracy of the abnormal surface holes, together with a high-resolution microscope lens to image local tiny features. In addition, an algorithm for determining the boundary points of a trailing edge on the contour of an abnormal hole is proposed to achieve rapid and accurate fitting. Experiments and analysis show that the system can measure local tiny features on curved surfaces validly and efficiently.
Criminal fingerprints, those extracted from crime scenes, play an important role in police investigation and case solving, but they typically exhibit blur, incompleteness, and low ridge contrast. Traditional fingerprint enhancement and identification methods have limitations, and current automated fingerprint identification systems (AFIS) have not been applied extensively in police investigation. Because the Gabor filter suffers from drawbacks such as poor efficiency and low precision of the extracted ridge orientation parameters, enhancement of low-contrast fingerprint images cannot achieve the desired effect. Therefore, an improved Gabor enhancement for low-quality fingerprints is proposed in this paper. First, orientation image templates at different scales are used to distinguish the orientation images in the fingerprint area, and the ridge orientation parameters are calculated. Second, mean ridge frequencies are extracted from local windows along the ridge orientation, and the mean frequency parameters are calculated. Third, the size and orientation of the Gabor filter are self-adjusted according to the local ridge orientation and mean frequency. Finally, the poor-quality fingerprint images are enhanced. In the experiments, the improved Gabor filter performs better on low-quality fingerprint images than traditional filtering methods.
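The self-adjusting filter bank is built from a Gabor kernel parameterized by the locally estimated ridge orientation and frequency. A sketch of the even-symmetric form commonly used for ridge enhancement (equal x/y spreads are assumed for brevity):

```python
import math

def gabor_kernel(size, theta, freq, sigma):
    """Even-symmetric Gabor filter tuned to the local ridge orientation theta
    (radians) and the mean ridge frequency freq (cycles per pixel); size and
    sigma are adjusted per block from the local estimates."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            xr = x * math.cos(theta) + y * math.sin(theta)    # rotate into ridge frame
            yr = -x * math.sin(theta) + y * math.cos(theta)
            g = math.exp(-(xr * xr + yr * yr) / (2 * sigma * sigma))
            row.append(g * math.cos(2 * math.pi * freq * xr))
        kernel.append(row)
    return kernel
```

Convolving each block with its own kernel amplifies ridges at the estimated orientation and frequency while suppressing noise at other orientations.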
Target recognition is widely used in the national economy, space technology, national defense and other fields, and the difficulty of target recognition differs greatly from that of target extraction. Image complexity evaluates the difficulty of extracting a target from its background and can serve as a prior index of a target recognition algorithm's effectiveness. From the perspective of measuring target and background characteristics, this paper describes image complexity metric parameters with quantitative, precise mathematical relationships. To address collinearity among the measurement parameters, the image complexity metric parameters are clustered with the gray correlation method. This enables extraction and selection of the metric parameters, improves the reliability and validity of the image complexity description and representation, and optimizes the image complexity assessment model. Experimental results demonstrate that when gray system theory is applied to image complexity analysis, the image complexity of target characteristics can be measured more accurately and effectively.
The classic mean-shift tracking algorithm has achieved success in computer vision because of its speed and efficiency. However, it fails to track under some complicated conditions, for example when parts of the target are occluded, when there is little color difference between the target and the background, or when the illumination changes suddenly. To solve these problems, an improved algorithm is proposed based on mean-shift tracking and adaptive feature fusion. Color, edges and corners of the target are used to describe it in feature space, and a method for measuring the discriminative power of each feature is presented to make feature selection adaptive. The improved mean-shift tracking algorithm is then introduced based on the fusion of these features. To address the vulnerability of single-color-feature mean-shift tracking to sudden illumination changes, we fuse an affine illumination model with the color feature space, which ensures correct and stable tracking under such conditions. Tests on a group of videos show that the tracking correctness and stability of the proposed algorithm are better than those of mean-shift tracking with a single feature space; furthermore, the proposed algorithm is more robust than the classic algorithm under occlusion, target-background similarity, and illumination change.
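Mean-shift scores each candidate region by histogram similarity, and adaptive fusion reduces to weighting the per-feature scores. A sketch of both pieces (the normalization-based weighting rule is our own illustration; the paper's discrimination measure is not specified in the abstract):

```python
import math

def bhattacharyya(p, q):
    """Similarity between two normalized histograms (target model vs.
    candidate); mean-shift derives its per-pixel weights from this."""
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

def feature_weights(discriminations):
    """Normalize per-feature discrimination scores (color, edge, corner)
    into fusion weights that sum to one."""
    total = sum(discriminations)
    return [d / total for d in discriminations]
```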
To enhance the contrast of continuous-wave terahertz (CW-THz) scanning images and denoise them, a method based on the wavelet transform and Retinex theory is proposed. First, the factors affecting the quality of CW-THz images are analyzed. Second, contrast is enhanced by combining the discrete wavelet transform (DWT) with a designed nonlinear function in the wavelet domain. The Retinex algorithm is then applied for further contrast enhancement. To evaluate the effectiveness of the proposed method qualitatively and quantitatively, it is compared with adaptive histogram equalization, homomorphic filtering, and single-scale Retinex (SSR). Experimental results demonstrate that the presented algorithm can effectively enhance the contrast of CW-THz images and achieve better visual quality.
As the performance of commercial camera sensors and the imaging quality of lenses improve, their application to space target observation becomes possible. Multiple cameras can further improve detection ability through image fusion. This paper studies the registration problem in multi-camera image fusion, taking the imaging characteristics of a commercial camera into account, and puts forward an applicable method of star image registration. Experiments show that the registration accuracy can reach the sub-pixel level.
CMOS sensors have advantages over traditional CCD sensors, and imaging systems based on CMOS have become a research and development hotspot. To achieve real-time data acquisition and high-speed transmission, we design a high-speed CMOS imaging system based on an FPGA. The core control chip of the system is the XC6SL75T, and we use a Camera Link interface with the AM41V4 CMOS image sensor to acquire and transmit image data. The AM41V4 is a 4-megapixel, 500-frames-per-second CMOS image sensor with a global shutter and a 4/3" optical format; it uses column-parallel A/D converters to digitize the images. The Camera Link interface uses the DS90CR287, which converts 28 bits of LVCMOS/LVTTL data into four LVDS data streams. Light reflected from objects is captured by the CMOS detector, converted to electronic signals, and sent to the FPGA, which processes the data and transmits it through the Camera Link interface, configured in full mode, to an acquisition card in the host computer, where the images are stored, visualized and processed. This paper explains the structure and principle of the system and introduces its hardware and software design: the FPGA provides the drive clock for the CMOS sensor, and the CMOS data are converted to LVDS signals and transmitted to the acquisition card. After simulation, the paper presents the row-transfer timing sequence of the CMOS sensor. The system achieves real-time image acquisition and external control.
Fake targets have an essential effect on target tracking and guidance information extraction. Countermeasures based on an extended Kalman filter (EKF) are recommended in this paper. A model of an imaging seeker is established first, followed by an interference model in which the imaging seeker is misguided by fake targets; the process by which the fake targets misguide the seeker is then analyzed. An extended Kalman filter is established in spherical coordinates, which helps enhance the estimation quality, and the seeker measurements are transformed into suitable inputs to drive the filter. The strategy consists of four stages. Before the fake targets appear, the EKF quickly estimates the motion information and the trend of the target. When the fake targets appear, the motion of the seeker is controlled using the information predicted by the EKF, and the seeker's measurements are not used to drive the filter until the true target is identified. After the true target is chosen, the seeker turns back to the real target, and the line-of-sight angle measured by the seeker is used to ensure the stability of the EKF. Once the seeker's transition from its previous direction to the target is finished, the line-of-sight rate is used to make the EKF converge quickly. Theoretical analysis and simulation results show that the method is reasonable and efficient.
In the domain of target recognition, the image complexity of the target and its background describes how difficult it is to extract and recognize a target against a complex background. It has important guiding significance and wide application prospects in many domains, such as biomedicine, information encryption, image compression, meteorological analysis, and automatic target recognition. This paper takes into consideration both the innate characteristics of the target and the characteristics of its local background, which affect the performance of target extraction and recognition algorithms, and generalizes three classes of evaluation methods: methods based on target characteristics, including the target shape, the gray-level standard deviation of the target pixels, and the target-to-local-background entropy difference; methods based on target similarity, including the similarity of edge profile and structural characteristics between the target and decoys; and methods based on background characteristics, including texture characteristics and edge ratio. On this basis, we study the relationship between structural features and evaluation parameters, and analyze the foundation and properties of each method by comparison. Thoughts and prospects for this field are given at the end of the paper.
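One of the target-characteristic measures named above, the target-to-local-background entropy difference, can be sketched as follows (the pixel patches are invented for illustration; a real implementation would slide windows over the image):

```python
import math
from collections import Counter

# Shannon entropy of a pixel patch, and the entropy difference between a
# target window and its local background: one simple complexity measure.
def entropy(pixels):
    n = len(pixels)
    return -sum(c / n * math.log2(c / n) for c in Counter(pixels).values())

target = [10, 10, 12, 12]                # nearly uniform target patch
background = [3, 7, 11, 15]              # varied local background
diff = abs(entropy(target) - entropy(background))
print(round(diff, 3))
```

A small entropy difference suggests the target blends into its surroundings, i.e., a harder extraction task.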
High-accuracy star map identification is the basis of astronomical positioning. The traditional triangle star identification algorithm has high redundancy and poor robustness to noise. Considering the specific requirements of star map identification for an astronomical camera, and to address these defects, a modified triangle algorithm is presented that proceeds from the selection of guide stars, the construction of the guide star catalogue, and the realization of the matching algorithm. In the proposed algorithm, the guide stars are selected from an astronomical durchmusterung. To speed up guide-star indexing, the guide star catalogue is built after dividing the sky with the overlapping-rectangle method. The guide-star sub-catalogue is constructed from the circumcircle radius of the guide triangle and the triangle's two sides. The characteristic radius is used for indexing and is sorted in ascending order to improve search efficiency during star map identification. Matching on the characteristic radius narrows the matching scope of the angular distance and improves the angular-distance matching rate. If redundancy remains, a normalized magnitude is used to eliminate it. Within the observed area of the real sky, 1050 consecutive star maps were processed. The simulation results show that the identification rate of the algorithm is greater than 97.83% with position-error noise of two pixels, and the average identification time is about 25.07 ms. Compared with the traditional triangle algorithm, the modified algorithm has several advantages, including a smaller guide star catalogue, better robustness to position and magnitude errors, a higher rate of correct star map identification, and lower redundancy.
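The benefit of sorting the characteristic radius in ascending order is that a binary search can shrink the candidate set before any angular-distance matching. A minimal sketch (the radii and tolerance are invented, not catalogue values):

```python
import bisect

# A catalogue indexed by characteristic (circumcircle) radius, sorted in
# ascending order: binary search returns only the radii within tolerance,
# narrowing the matching scope of the subsequent angular-distance test.
catalogue = sorted([0.52, 1.10, 1.31, 2.05, 2.07, 3.40])   # radii (deg)

def candidates(radius, tol):
    lo = bisect.bisect_left(catalogue, radius - tol)
    hi = bisect.bisect_right(catalogue, radius + tol)
    return catalogue[lo:hi]

print(candidates(2.0, 0.1))   # only radii within [1.9, 2.1] are examined
```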
Airport detection in optical remote sensing images has attracted great interest in applications such as military reconnaissance and traffic control. However, most popular techniques for airport detection in optical remote sensing images have three weaknesses: 1) due to the characteristics of optical imagery, detection results are often affected by imaging conditions such as weather and imaging distortion; 2) optical images contain comprehensive target information, which makes it difficult to extract robust features (e.g., intensity and textural information) to represent the airport area; and 3) high resolution results in a large data volume, which limits real-time processing. Most previous work focuses on solving only one of these problems, and thus cannot balance performance against complexity. In this paper, we propose a novel coarse-to-fine airport detection framework that addresses all three issues using wavelet coefficients. The framework consists of two stages: 1) efficient wavelet-based feature extraction is adopted for multi-scale textural feature representation, and a support vector machine (SVM) is exploited to classify and coarsely select airport candidate regions; and then 2) refined line-segment detection is used to obtain the runway and landing field. Finally, airport recognition is achieved by applying fine runway positioning to the candidate regions. Experimental results show that the proposed approach outperforms existing algorithms in terms of detection accuracy and processing efficiency.
Recently, image processing has received much attention in fields such as photogrammetry and medical imaging. Matching two or more images of the same scene taken at different times, by different cameras, or from different viewpoints is a popular and important problem, and feature extraction plays an important part in image matching. Traditional SIFT detectors reject unstable points by eliminating low-contrast and edge-response points; the disadvantage is that the thresholds must be set manually. The main idea of this paper is to obtain stable extrema using a machine learning algorithm. First, we use the ASIFT approach, combined with illumination changes and blur, to generate simulated multi-view images, which make up the simulated image set of the original image. Because of the way the simulated images are generated, the affine transformation of each generated image is also known; compared with the traditional matching process, which relies on the unstable RANSAC method to estimate the affine transformation, this approach is more stable and accurate. Second, we calculate a stability value for each feature point from the image set and its known affine transformations, and we extract the feature properties of each point, such as the DoG response, scale, and edge point density. These form the training set, with the stability value as the dependent variable and the feature properties as the independent variables. Finally, training with Rank-SVM yields a weight vector. In use, from the feature properties of each point and the trained weight vector, we compute a ranking score for each feature point that reflects its stability, and sort the feature points accordingly. For comparison, we applied both our algorithm and the original SIFT detector to the tests; under different viewpoint changes, blurs, and illuminations, the experimental results show that our algorithm is more effective.
A laser one-dimensional range profile, that is, the scattered power from a pulsed laser illuminating a target, is a radar imaging technique; a laser two-dimensional range profile is the corresponding two-dimensional scattering image of the target. Together these are called laser range profiles (LRP). The laser range profile reflects the shape of the target and the characteristics of its surface material. These techniques were motivated by applications of laser radar to target discrimination in ballistic missile defense. This paper gives the radar equation for a pulsed laser and develops an analytical model of the laser range profile of a cylinder based on it. Simulated laser one-dimensional range profiles of several cylinders are presented, including profiles of a cylinder whose surface material has diffuse Lambertian reflectance, and profiles for different pulse widths. The influences of geometric parameters, pulse width, and attitude on the range profiles are analyzed.
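The idea of a one-dimensional range profile can be sketched numerically: discretize the visible surface into range bins, accumulate a diffuse return per bin, and convolve with the pulse shape. This toy model (the cos² backscatter law, geometry, and pulse width are simplifying assumptions, not the paper's analytical model):

```python
import math

# A toy 1-D range profile of a cylinder viewed broadside: the visible
# half-cylinder is cut into angular strips, each strip's diffuse return
# (approximated as cos^2 of the local incidence angle) is accumulated
# into a range bin, and the profile is convolved with a Gaussian pulse
# to model finite pulse width. All numbers are invented.
R, nbins, dr = 1.0, 60, 0.05            # radius (m), range bins, bin size (m)
profile = [0.0] * nbins
for i in range(400):
    theta = (i + 0.5) / 400 * math.pi - math.pi / 2
    depth = R * (1.0 - math.cos(theta))  # range behind the leading edge
    profile[min(int(depth / dr), nbins - 1)] += math.cos(theta) ** 2

pulse = [math.exp(-((t - 5.0) ** 2) / 4.0) for t in range(11)]
received = [sum(profile[k - j] * pulse[j] for j in range(11)
                if 0 <= k - j < nbins)
            for k in range(nbins + 10)]
print(received.index(max(received)))    # the leading edge dominates the return
```

Widening the pulse in this model smears the sharp leading-edge return, which is the qualitative pulse-width effect the paper analyzes.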
A space-based infrared camera was launched to collect atmospheric radiation data. In order to investigate its performance quantitatively under both pre-launch and post-launch conditions, a practical model for estimating radiometric calibration precision is proposed that depends only on image data measured during ground and on-orbit blackbody-based calibration tests. The model treats the calibration error as the consequence of two independent factors: one introduced by using the calibration equation to represent the relationship between the apparent radiance of the object and the digital response of the camera, and the other the measurement uncertainty when imaging a target with known, constant emission. Error distribution maps for the focal plane array were constructed by estimating the calibration error pixel by pixel. The results show that the camera's performance after launch is slightly worse than before launch. Pixels with calibration errors greater than 10% account for only about 5% of this camera's pixels, and they are generally located at the edge of the focal plane. The maps will be helpful for weighing the validity of sampled data at the pixel level.
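One plausible way to build such a map from the two independent factors is to combine them per pixel in quadrature and flag pixels above the 10% threshold (the quadrature combination is an assumption based on the stated independence, and the tiny "focal plane" numbers are invented):

```python
import math

# Per-pixel combination of two independent error factors (calibration-
# equation fitting error and measurement uncertainty), combined in
# quadrature, then thresholded at 10%. The 2x3 arrays are invented.
fit_err  = [[0.04, 0.05, 0.12], [0.03, 0.06, 0.09]]   # fractional errors
meas_err = [[0.03, 0.02, 0.05], [0.02, 0.03, 0.08]]

total = [[math.hypot(f, m) for f, m in zip(fr, mr)]
         for fr, mr in zip(fit_err, meas_err)]
flagged = sum(e > 0.10 for row in total for e in row)
print(flagged)   # count of pixels whose combined error exceeds 10%
```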
With the tendency toward an aging society, countries all over the world are dealing with demographic change, and falls have been proven to have the highest fatality rate among the elderly. To realize fall detection for the elderly, the proposed algorithm uses a hybrid feature: based on the rate of change of the centroid, it adopts the VEI to provide a posture feature, thereby combining the motion feature with the posture feature. The algorithm also takes advantage of the SIFT descriptor of the VEI (V-SIFT) to capture more details of behaviors under occlusion. An improved motion detection method is proposed to improve the accuracy of front-view motion detection. Experimental results on the CASIA database and a self-built database show that the proposed approach has high efficiency and strong robustness, and effectively improves the accuracy of fall detection.
The Wide Field of View (WFV) camera is one of the key instruments of China's high-resolution Earth observing system, operating on the Gao Fen-1 (GF-1) satellite launched on April 26, 2013. The WFV has four typical reflective solar bands from 0.45 to 0.89 μm with 16 m nadir spatial resolution. Because GF-1 lacks onboard calibrators, on-orbit radiometric calibration mainly relies on site calibration. The reflectance-based method of site calibration was used for the absolute radiometric calibration of the GF-1 WFV on June 26, 2013 and August 3, 2014. The reflectance-based method relies on ground-based measurements of surface reflectance and atmospheric conditions at the Dunhuang test site, nearly coincident with the imaging of the site by the WFV. Site calibration is time-consuming and laborious, and it is also difficult for it to reveal or adequately correct deficiencies in an instrument's own calibration subsystem. Cross-calibration can be a powerful method to remedy the insufficiency of site calibration. The Terra Moderate Resolution Imaging Spectroradiometer (MODIS), with its high-accuracy onboard calibration system, was selected as the reference sensor. Cross-calibration between GF-1/WFV and Terra/MODIS is based on near-simultaneous, cloud-free image pairs over the Dunhuang test site acquired on the same days as the site calibrations. Spectral band adjustment factors (SBAF) for the cross-calibration are determined from the spectral response functions (SRF), the solar zenith and azimuth angles, the sensors' zenith and azimuth angles, and a typical reflectance spectrum of the Dunhuang test site obtained by in-situ measurements. The percentage difference between the site calibrations is within 5%, which indicates that GF-1/WFV has been stable since launch and that the reflectance-based method itself is reliable. The comparison of site calibration and cross-calibration shows that site calibration has higher accuracy, the cross-calibration being affected by the SBAF.
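The spectral part of an SBAF can be sketched as the ratio of the site spectrum averaged over the two sensors' SRFs (a simplified sketch: the wavelength grid, spectrum, and SRFs below are invented, and a full SBAF also depends on the viewing and solar geometry mentioned above):

```python
# A sketch of a spectral band adjustment factor (SBAF): the ratio of the
# test-site reflectance spectrum band-averaged with each sensor's
# spectral response function (SRF). All arrays are invented.
wl      = [450, 500, 550, 600, 650]       # nm (shared wavelength grid)
rho     = [0.20, 0.22, 0.25, 0.27, 0.30]  # Dunhuang-like reflectance spectrum
srf_wfv = [0.0, 0.5, 1.0, 0.5, 0.0]       # SRF of the sensor being calibrated
srf_ref = [0.2, 0.8, 1.0, 0.8, 0.2]       # SRF of the reference sensor

def band_avg(srf):
    return sum(r * s for r, s in zip(rho, srf)) / sum(srf)

sbaf = band_avg(srf_wfv) / band_avg(srf_ref)
print(round(sbaf, 4))
```

An SBAF near 1 means the two bands sample the site spectrum similarly; the further it departs from 1, the more the cross-calibration result depends on the accuracy of the assumed spectrum.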
An approach to optical target recognition based on compressive sensing theory is proposed. Its features are broad applicability and robustness to noise, and it avoids the complex feature-extraction operations used in common recognition algorithms. It can classify the target accurately when the observed data contain sufficient information and the test sample admits a sparse representation; under the sparse-representation recognition framework, the disturbance error caused by noise can even be eliminated. Simulations verify the validity of the proposed method.
In the along-track scanning mode, the same place along the ground track can be detected by the Advanced Multi-angular Polarized Radiometer (AMPR) at several different scanning angles from -55 to 55 degrees, which makes multi-angular detection possible for some nearby pixels. However, because of the ground sample spacing and the spatial footprint of each detection, the differing footprint sizes cannot guarantee spatial matching of partly overlapping pixels, which becomes a bottleneck for the effective use of AMPR's multi-angular measurements in studying aerosol and surface polarized properties. Based on our definition and calculation of the pixel coincidence rate for multi-angular detection, an effective pixel-matching method for multi-angle observations is presented to solve the spatial matching problem for the airborne AMPR. The shape of each AMPR pixel is assumed to be an ellipse whose major and minor axes depend on the flight attitude and the scanning angle. By defining a coordinate system and origin, latitude and longitude are transformed into Euclidean distances, and the pixel coincidence rate of two nearby ellipses can be calculated. By traversing every ground pixel, pixels with a high coincidence rate can be selected and merged; with further quality control of the observation data, a ground-pixel dataset with multi-angular detections can be obtained and analyzed, supporting the multi-angular polarized retrieval algorithm research in our next study.
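A coincidence rate for two elliptical footprints can be estimated by sampling a grid over one ellipse and counting how much of it falls inside the other (a sketch under simplifying assumptions: the centers and axes are invented and the ellipses are axis-aligned, whereas real footprints also rotate with the scan angle; the paper's own definition of the rate may differ):

```python
# Pixel coincidence rate of two elliptical footprints: overlap area
# divided by the first ellipse's area, estimated on a regular grid.
def inside(x, y, cx, cy, a, b):
    return ((x - cx) / a) ** 2 + ((y - cy) / b) ** 2 <= 1.0

def coincidence_rate(e1, e2, step=0.01):
    cx, cy, a, b = e1
    hits = both = 0
    for i in range(int(2 * a / step)):        # sample e1's bounding box
        for j in range(int(2 * b / step)):
            x = cx - a + (i + 0.5) * step
            y = cy - b + (j + 0.5) * step
            if inside(x, y, *e1):
                hits += 1
                if inside(x, y, *e2):
                    both += 1
    return both / hits

rate = coincidence_rate((0.0, 0.0, 1.0, 0.5), (0.0, 0.0, 2.0, 1.0))
print(round(rate, 2))   # the small ellipse lies entirely inside the big one
```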
In this paper, we present a novel image stitching method to handle parallax in practical applications. For images with a significant amount of parallax, a more effective approach is to align the overlapping regions roughly and globally and then apply a seam-cutting method to composite a naturally stitched image. It is well known that, under non-ideal imaging conditions, images can be modeled by multiple planes arising from the projective parallax. The dominant-plane homography has the important advantages of warping an image globally and avoiding some local distortions. The proposed method addresses the large-parallax problem in two steps: (1) selecting matching point pairs located on the dominant plane, by clustering matching correspondences and then measuring the cost of each cluster; and (2) to obtain a plausible seam, incorporating edge maps of the overlapped area to modify the standard seam-cutting method. Our approach is demonstrated to handle parallax reliably through extensive experimental comparisons with state-of-the-art methods.
Using polarimetric information from camouflaged target surfaces to identify camouflage has been a hot research area in camouflage detection. The main method is to use the difference in the degree of polarization (DOP) between the background and the target to increase the contrast between them. Measuring the DOP places requirements on the intensity of the reflected radiation: when the reflected intensity is low, the difference in DOP between materials is not distinguishable. In addition, the linear degree of polarization is strongly affected by the detection angle and surface roughness, so it is hard to differentiate targets by DOP when targets with similar surface roughness are detected at the same detection angle. By analyzing the factors affecting the amplitude and phase of the electromagnetic radiation reflected from the camouflaged target surface, this article studies the polarization character of the reflected radiation and proposes a method for camouflaged target recognition by imaging the angle of ellipsometry (AOE) under linearly polarized illumination. A functional model relating the angle of incidence, the complex refractive index, and the AOE was built; the model was simulated in MATLAB, and the results show that it can describe the distribution properties of the AOE. This offers a new way of identifying camouflaged targets by detecting polarimetric information, with deep theoretical and practical significance for camouflaged target recognition.
It is common for low-light night-vision helmets to be equipped with a binocular viewer using image intensifiers. Such equipment provides not only night-vision capability but also stereo vision, allowing better perception and understanding of the visual field. However, since image intensifiers are direct-view devices, it is difficult to apply modern image processing technology to them; developing digital video technology for night vision is therefore of great significance. In this paper, we design a low-light night-vision helmet with a digital imaging device. It consists of three parts: a pair of low-illumination CMOS cameras, a binocular OLED microdisplay, and an image processing PCB. Stereopsis is achieved through the binocular OLED microdisplay. We choose the Speeded-Up Robust Features (SURF) algorithm for image registration; based on the matching information and the cameras' calibration parameters, disparity can be calculated in real time. We then derive in detail the constraints of binocular stereo display. The sense of stereo vision is obtained by dynamically adjusting the content of the binocular OLED microdisplay. There is sufficient room for extending the system's functions: the performance of this low-light night-vision helmet can be further enhanced in combination with HDR technology, image fusion technology, and so on.
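Once a SURF match gives the disparity of a feature between the two rectified cameras, depth follows from the standard rectified-stereo relation Z = f·B/d. A minimal sketch (the focal length, baseline, and disparity are illustrative values, not the helmet's calibration):

```python
# Rectified-stereo depth from disparity: Z = f * B / d, with the focal
# length in pixels, baseline in metres, and disparity in pixels.
# The numbers below are illustrative only.
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    return focal_px * baseline_m / disparity_px

z = depth_from_disparity(focal_px=800.0, baseline_m=0.065, disparity_px=13.0)
print(z)   # metres
```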
Ships are important military targets and means of water transportation, so their detection has great significance. In the military field, automatic ship detection can be used to monitor ship activity in an adversary's harbors and maritime areas and thereby analyze its naval power; in the civilian field, it can be used to monitor harbor traffic and illegal behavior such as illegal fishing, smuggling, and piracy. In recent years, research on ship detection has concentrated on three categories: forward-looking infrared images, downward-looking SAR images, and optical remote sensing images with a sea background. Little research has addressed ship detection in optical remote sensing images with a harbor background, because in high-resolution optical imagery the gray-scale and texture features of ships are similar to those of the coast. In this paper, we put forward an effective method for detecting ship targets in harbors. First, to overcome the shortcoming of the traditional difference method in obtaining the histogram valley as the segmentation threshold, we propose an iterative histogram-valley segmentation method that separates the harbor and ships from the water quite well. Second, since docked ships in optical remote sensing images usually cause discontinuous harbor edges, we use the Hough transform to extract harbor edges: lines are detected by the Hough transform, and lines with similar slopes are connected into new lines, yielding continuous harbor edges. A secondary segmentation of the land-sea separation result then isolates the ships. Finally, we compute the aspect ratio of the regions of interest (ROIs) and thereby remove targets that are not ships. Experimental results show that our method is robust and can tolerate a certain degree of noise and occlusion.
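The edge-repair step, connecting detected line segments with similar slopes into one longer line, can be sketched as follows (the segments and slope tolerance are invented; real Hough output would also need a collinearity check on intercepts, omitted here for brevity):

```python
# Merge line segments with similar slopes into longer lines so that gaps
# left by docked ships do not break the harbor edge. Segments are
# (x1, y1, x2, y2) tuples; vertical segments are not handled in this sketch.
def slope(seg):
    x1, y1, x2, y2 = seg
    return (y2 - y1) / (x2 - x1)

def merge_similar(segments, tol=0.05):
    groups = []
    for seg in segments:
        for g in groups:
            if abs(slope(g[0]) - slope(seg)) < tol:
                g.append(seg)
                break
        else:
            groups.append([seg])
    merged = []
    for g in groups:                      # span each group end to end
        pts = [p for s in g for p in ((s[0], s[1]), (s[2], s[3]))]
        pts.sort()                        # leftmost and rightmost endpoints
        merged.append(pts[0] + pts[-1])
    return merged

segs = [(0, 0, 10, 1), (14, 1.4, 30, 3.0), (0, 20, 10, 12)]
print(merge_similar(segs))
```

Here the first two segments (slope 0.1 each, separated by a gap) are joined into one edge, while the third, with a different slope, is kept separate.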
As a new type of solid-state image sensor, the extreme-low-light CMOS has been widely applied in the field of night vision. But when the illumination in a scene changes drastically or is too strong, an extreme-low-light CMOS cannot clearly present both the highlight and the low-light regions. To address this partial-saturation problem in night vision, an HDR image fusion algorithm based on the Laplacian pyramid was investigated. The overall gray level and contrast of a low-light image are very low, so for the top layer of the long-exposure and short-exposure images, which carries rich brightness and textural features, we choose a fusion strategy based on the regional average gradient; the remaining layers, which represent the edge features of the target, are fused with a strategy based on regional energy. In reconstructing the source image from the Laplacian pyramid, we compare the fusion results with four kinds of base images. The algorithm was tested in MATLAB and compared with different fusion strategies, using information entropy, average gradient, and standard deviation as three objective evaluation parameters for further analysis of the fusion results. Experiments in different low-illumination environments show that the algorithm can rapidly achieve a wide dynamic range while maintaining high entropy, and verification of the algorithm's features suggests further application prospects for the optimized algorithm. Keywords: high dynamic range imaging, image fusion, multi-exposure image, weight coefficient, information fusion, Laplacian pyramid transform.
With progress in CAD (computer-aided design) systems, many mechanical components can be designed efficiently with high precision, but such systems are unsuited to some organic shapes, for example, a toy. In this paper, an easy way of dealing with such shapes is presented, combining visual perception with tangible interaction. The method is divided into three phases: two tangible-interaction phases and one visual reconstruction. In the first tangible phase, a clay model represents the raw shape, and the designer can change the shape intuitively with his hands. The raw shape is then scanned into a digital volume model through a low-cost vision system. In the last tangible phase, a desktop haptic device from SensAble is used to refine the scanned volume model and convert it into a surface model. A physical clay model and a virtual clay model thus handle the main shape and the details respectively, with the vision system bridging the two tangible phases. The visual reconstruction system consists only of a camera, acquiring the raw shape with a shape-from-silhouettes method. All of the systems are installed on a single desktop, which is convenient for designers. The details of the vision system and a design example are presented in the paper.
A small-target detection method for infrared images based on image layering and confidence analysis is proposed. First, a Huffman tree is used to refine the histogram curve, and the valleys of the refined curve are detected automatically. The gray values of the detected valleys are then used as segmentation thresholds to stratify the original infrared image. After detecting small abnormal regions in each layer and taking them as candidate targets, the candidate target set is composed of the candidates from all layers. Finally, the abnormality-based confidence of each candidate target is calculated and sorted, and the candidate with the maximum confidence is taken as the real target. Experiments show that the proposed method performs robustly and efficiently.
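The layering step can be sketched simply: take the valleys of an (already refined) histogram as thresholds and assign each gray value to a layer between consecutive thresholds (the 16-bin histogram is invented, and the Huffman-tree refinement itself is omitted; a plain local-minimum test stands in for valley detection):

```python
# Valleys of a smoothed histogram become the segmentation thresholds,
# and each gray value is assigned to the layer it falls in.
hist = [9, 12, 7, 3, 8, 14, 10, 4, 2, 6, 11, 9, 5, 1, 3, 7]

# interior bins strictly lower than both neighbours are valleys
valleys = [i for i in range(1, len(hist) - 1)
           if hist[i] < hist[i - 1] and hist[i] < hist[i + 1]]

def layer(gray):
    return sum(gray > v for v in valleys)   # index of gray's layer

print(valleys, layer(10))
```

Small abnormal regions would then be sought within each layer separately, which is what lets dim targets stand out that a single global threshold would miss.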
The traditional color fusion method based on the color-transfer algorithm has the problem that the colors of the target and the background are similar. An infrared polarization image color fusion method based on color contrast enhancement is therefore proposed. First, the infrared radiation intensity image and the polarization image are color-fused, and color-transfer technology is applied between a color reference image and the initial fused image in the YCbCr color space. Second, the Otsu segmentation method is used to extract the target-area image from the infrared polarization image. Finally, using the target area, the H, S, and I components of the color-transferred fusion image are adjusted in HSI space to obtain the final fused image. Experimental results show that the fused result obtained by the proposed method is rich in detail and makes the contrast between target and background more prominent, thereby improving target detection and identification.
A new method is proposed to compare and evaluate the accuracy and robustness of Perspective-n-Point (PnP) algorithms, based on the principle that the position and pose of a camera in the world coordinate system are unique when the camera captures an image of a target. First, a world coordinate system, a camera coordinate system, and several target coordinate systems are established, with the transformations from the world coordinate system to each target coordinate system known. Second, the transformations from each target coordinate system to the camera coordinate system are calculated in theoretical simulation and in experimental tests, and from these, the transformations from the world coordinate system to the camera coordinate system are obtained. Third, the average and variance of the resulting transformations are calculated; the variance can be used to evaluate the accuracy and robustness of the algorithms. Finally, the EPnP and LHM algorithms are compared by the proposed method. The comparison shows that LHM is more accurate and more robust than EPnP, consistent with the previous comparison method using rotation and translation errors.
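The evaluation principle can be sketched in miniature: if the camera pose is unique, the world-to-camera translations recovered via different targets should agree, so their spread scores the algorithm (the translation vectors below are invented stand-ins for PnP outputs, and only the translation part of the transformation is shown):

```python
import statistics

# Score a PnP algorithm by the spread of the camera positions it recovers
# through different targets: summed per-axis variance of the translations.
def pose_spread(translations):
    return sum(statistics.pvariance(axis) for axis in zip(*translations))

alg_a = [(1.00, 2.00, 5.00), (1.02, 1.99, 5.01), (0.99, 2.01, 4.99)]
alg_b = [(1.00, 2.00, 5.00), (1.20, 1.85, 5.30), (0.80, 2.15, 4.70)]
print(pose_spread(alg_a) < pose_spread(alg_b))   # the tighter cluster wins
```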
The spatially modulated imaging polarimeter is a compact, static, simultaneous polarization imaging system that collects interferometric data in real time. By processing the interferometric data, the complete polarization information of the incident light can be demodulated to realize polarization detection. Based on the device principle of the spatially modulated imaging polarimeter using Savart plates, the interference pattern processing method in the frequency domain was analyzed. Then the physical mechanism of the modulation was explained and the corresponding processing method in the spatial domain was analyzed. Finally, to evaluate and compare the two methods, simulations were performed and a prototype polarimeter was developed for a field experiment. The experimental results show that the spatial-domain processing method can effectively extract the polarization information from the interference pattern, and it identifies artificial targets against a natural background more clearly than the frequency-domain processing method.
In real-time image processing, as the resolution and frame rate of camera imaging improve, both the required processing capacity and the need for process optimization grow. For an FPGA + DSP image processing architecture, there are three common ways to meet this challenge. The first is to use a higher-performance DSP, for example one with a higher core frequency or more cores. The second is to optimize the processing method so that the algorithm achieves the same result in less time. Last but not least, pre-processing in the FPGA can make the image processing more efficient. A method of multi-resolution pre-processing in the FPGA, based on the FPGA + DSP architecture, is proposed here. It uses built-in first-in first-out (FIFO) buffers and external synchronous dynamic random access memory (SDRAM) to buffer the images coming from the image detector, and flexibly and efficiently provides down-sampled or cut-down images to the DSP according to the request parameters the DSP sends. The DSP thus processes a reduced image instead of the whole image, greatly shortening processing and transmission time. The method alleviates the DSP's image processing burden and overcomes the limitation that a single image-reduction method cannot meet the requirements of the DSP's processing tasks.
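The FPGA pre-processing stage itself would be written in an HDL; as a software model of its behaviour, the two request modes (down-sample or crop) can be sketched in numpy (function name and request format are assumptions for illustration):

```python
import numpy as np

def preprocess(frame, mode, factor=2, roi=None):
    """Software model of the FPGA pre-processing stage: return either a
    block-averaged down-sampled frame or a cut-down (cropped) frame,
    according to the request parameters sent by the DSP."""
    if mode == "downsample":
        h, w = frame.shape
        h, w = h - h % factor, w - w % factor   # drop ragged edge rows/cols
        blocks = frame[:h, :w].reshape(h // factor, factor, w // factor, factor)
        return blocks.mean(axis=(1, 3))
    if mode == "crop":
        r0, r1, c0, c1 = roi
        return frame[r0:r1, c0:c1]
    raise ValueError(mode)

frame = np.arange(16, dtype=float).reshape(4, 4)
small = preprocess(frame, "downsample")          # 2x2 block means
patch = preprocess(frame, "crop", roi=(1, 3, 1, 3))
```

In hardware the block averaging would be pipelined through the FIFO as lines stream in, with SDRAM holding full frames for the crop case.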
Tracking a ballistic re-entry target is a typical nonlinear filtering problem. To track such a target in a nonlinear, non-Gaussian complex environment, a novel chaos map particle filter (CMPF) is used to estimate the target state. The CMPF performs well in estimating the state and parameters of nonlinear, non-Gaussian systems. Monte Carlo simulation results show that this method effectively mitigates the particle degeneracy and particle impoverishment problems by improving the efficiency of particle sampling, obtaining better particles for the estimate. Meanwhile, the CMPF improves state estimation precision and convergence speed compared with the EKF, the UKF and the ordinary particle filter.
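For context, a baseline bootstrap (SIR) particle filter on a toy 1-D problem can be sketched as follows; a chaos map variant would replace the pseudo-random propagation/resampling draws with chaotic sequences, which this sketch does not implement (the random-walk model and noise levels are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(obs, n=2000, q=0.1, r=0.5):
    """Bootstrap (SIR) particle filter for a 1-D random-walk state with
    Gaussian process noise q and observation noise r."""
    x = rng.normal(0.0, 1.0, n)          # initial particle cloud
    est = []
    for z in obs:
        x = x + rng.normal(0.0, q, n)    # propagate particles
        w = np.exp(-0.5 * ((z - x) / r) ** 2)
        w /= w.sum()                     # normalized likelihood weights
        est.append(np.sum(w * x))        # weighted state estimate
        x = rng.choice(x, size=n, p=w)   # multinomial resampling
    return np.array(est)

true = np.cumsum(rng.normal(0, 0.1, 50))     # hidden random walk
obs = true + rng.normal(0, 0.5, 50)          # noisy measurements
est = particle_filter(obs)
```

The resampling line is where degeneracy and impoverishment arise; the CMPF's contribution is to generate better-spread samples at exactly that step.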
Camera calibration, an important part of binocular stereo vision research, is the essential foundation of 3D reconstruction of spatial objects. In this paper, a camera calibration method based on OpenCV (the open source computer vision library) is presented to improve the process, yielding higher precision and efficiency. First, the camera model in OpenCV and a camera calibration algorithm are presented, with particular attention to the influence of radial and decentering lens distortion. Then a calibration procedure is designed to compute the camera parameters and calculate the calibration errors; a high-accuracy profile extraction algorithm and a checkerboard with 48 corners are also used in this step. Finally, the results of the calibration program are presented, demonstrating the high efficiency and accuracy of the proposed approach. The results meet the requirements of robot binocular stereo vision.
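The radial and decentering terms in OpenCV's camera model follow the Brown-Conrady formulation; applied to normalized image coordinates it can be sketched as:

```python
import numpy as np

def distort(x, y, k1, k2, p1, p2):
    """Apply radial (k1, k2) and decentering/tangential (p1, p2)
    distortion, as in OpenCV's camera model, to normalized coordinates."""
    r2 = x * x + y * y
    radial = 1 + k1 * r2 + k2 * r2 ** 2      # radial polynomial in r^2
    xd = x * radial + 2 * p1 * x * y + p2 * (r2 + 2 * x * x)
    yd = y * radial + p1 * (r2 + 2 * y * y) + 2 * p2 * x * y
    return xd, yd

# With all coefficients zero the point is unchanged:
xd0, yd0 = distort(0.3, -0.2, 0, 0, 0, 0)
# A small positive k1 pushes the point outward (barrel -> pincushion sign
# conventions depend on the calibration):
xd1, yd1 = distort(0.3, -0.2, 0.1, 0, 0, 0)
```

Calibration estimates `k1, k2, p1, p2` (plus the intrinsic matrix) by minimizing reprojection error of the detected checkerboard corners under this model, e.g. via `cv2.calibrateCamera`.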
Tone mapping compresses the dynamic range of image data so that it fits within the range of the reproduction medium and human vision. The original infrared images captured with infrared focal plane arrays (IFPA) have a high dynamic range, so tone mapping is an important component of infrared imaging systems and has become an active topic in recent years. In this paper, we present a tone mapping framework using multi-scale retinex. First, a Conditional Gaussian Filter (CGF) is designed to suppress the "halo" effect. Second, the original infrared image is decomposed into a set of images representing its mean at different spatial resolutions by applying the CGF at different scales, and a set of images representing the multi-scale details of the original image is produced by dividing the original image pointwise by the decomposed images. Third, the final detail image is reconstructed as a weighted sum of the multi-scale detail images. Finally, histogram scaling and clipping are applied to remove outliers and scale the detail image; 0.1% of the pixels are clipped at each extremity of the histogram. Experimental results show that the proposed algorithm efficiently increases local contrast while preventing the "halo" effect and provides a good visual rendition.
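The decomposition and pointwise-division steps can be sketched with a plain Gaussian standing in for the paper's Conditional Gaussian Filter (the scales and weights below are illustrative assumptions):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def multiscale_detail(img, sigmas=(2, 8, 32), weights=(0.5, 0.3, 0.2)):
    """Multi-scale retinex style detail extraction: divide the image
    pointwise by blurred versions of itself (the mean image at each
    scale) and combine the per-scale detail ratios by weighted sum."""
    img = img.astype(float) + 1.0            # offset avoids division by zero
    detail = np.zeros_like(img)
    for s, w in zip(sigmas, weights):
        base = gaussian_filter(img, s)       # mean image at this scale
        detail += w * (img / base)           # pointwise detail ratio
    return detail

flat = multiscale_detail(np.full((32, 32), 100.0))   # featureless input
```

A featureless input yields a constant detail image equal to the sum of the weights, confirming that only deviations from the local mean are amplified; the CGF's role is to make `base` edge-aware so halos do not form at strong edges.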
A high dynamic range infrared image of a sea surface scene includes sea clutter and mirror reflections from wave facets, which decrease the visibility of ship targets and their details. This paper provides an efficient adaptive enhancement technique for ship targets based on bilateral filtering and visual saliency detection. The 14-bit raw image is separated into a detail layer and a base layer by an adaptive bilateral filter; the two layers are processed separately and then recombined. Visual saliency detection then yields a gain matrix that improves the contrast of the ship targets. Finally, the contrast-enhanced image is quantized to the display range. The strength of the proposed method lies in its ability to suppress sea clutter and adapt to different sea surface scenes, and it shows better performance in the contrast of ship targets and the visibility of their details.
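The base/detail separation can be sketched with a brute-force bilateral filter in numpy (suitable only for small images; the step-edge input stands in for a ship/sea boundary and is an illustrative assumption):

```python
import numpy as np

def bilateral(img, radius=2, sigma_s=2.0, sigma_r=20.0):
    """Brute-force bilateral filter: each output pixel is a spatial- and
    range-weighted mean of its neighbourhood, so strong edges survive."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img, dtype=float)
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys ** 2 + xs ** 2) / (2 * sigma_s ** 2))
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng_w = np.exp(-((patch - img[i, j]) ** 2) / (2 * sigma_r ** 2))
            wgt = spatial * rng_w
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

raw = np.zeros((16, 16))
raw[:, 8:] = 100.0          # a step edge, like a bright target on dark sea
base = bilateral(raw)       # base layer keeps the edge sharp
detail = raw - base         # detail layer, to receive a separate gain
```

Because the range kernel suppresses contributions across the 100-level step, the edge stays in the base layer and the detail layer stays near zero there, which is what allows the two layers to be gained independently without halos.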
With the rapid development of high-speed railways, automated fault inspection is necessary to ensure safe train operation, and visual technology is receiving increasing attention in fault detection and maintenance. For a linear CCD camera, image alignment is the first step in fault detection. To increase the speed of image processing, an improved scale invariant feature transform (SIFT) method is presented. The image is divided into multiple levels of different resolution, and features are extracted level by level, from the lowest resolution upward, until sufficient SIFT key points are obtained; at that level the image is registered and aligned quickly. In the inspection stage, we focus on faults of the brake shoe, one of the key components of the brake system on electric multiple unit (EMU) trains, for which early warning of wear limits is very important. In this paper, we propose an automatic inspection approach to detect brake shoe faults. First, we use multi-resolution pyramid template matching to quickly locate the brake shoe. Then, we employ the Hough transform to detect the circles of the bolts in the brake region. Owing to the rigid structure of the assembly, we can then identify whether the brake shoe has a fault. The experiments demonstrate that the proposed approach performs well and can meet the needs of practical applications.
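The coarse-to-fine localization step can be sketched as pyramid template matching with a sum-of-squared-differences score (the random test image, SSD score and ±2-pixel refinement window are illustrative assumptions, not the paper's exact design):

```python
import numpy as np

def downsample(img):
    h, w = img.shape
    return img[:h - h % 2, :w - w % 2].reshape(h // 2, 2, w // 2, 2).mean((1, 3))

def match(img, tpl):
    """Best top-left position of tpl in img by exhaustive SSD search."""
    H, W = img.shape
    h, w = tpl.shape
    best, pos = np.inf, (0, 0)
    for i in range(H - h + 1):
        for j in range(W - w + 1):
            d = ((img[i:i + h, j:j + w] - tpl) ** 2).sum()
            if d < best:
                best, pos = d, (i, j)
    return pos

def pyramid_match(img, tpl, levels=2):
    """Locate the template at the coarsest level, then refine at each
    finer level in a small window around the up-projected estimate."""
    imgs, tpls = [img], [tpl]
    for _ in range(levels):
        imgs.append(downsample(imgs[-1]))
        tpls.append(downsample(tpls[-1]))
    i, j = match(imgs[-1], tpls[-1])
    for k in range(levels - 1, -1, -1):
        i, j = 2 * i, 2 * j                  # project position up one level
        h, w = tpls[k].shape
        best, pos = np.inf, (i, j)
        for di in range(-2, 3):              # +/-2 pixel refinement window
            for dj in range(-2, 3):
                y, x = i + di, j + dj
                if 0 <= y <= imgs[k].shape[0] - h and 0 <= x <= imgs[k].shape[1] - w:
                    d = ((imgs[k][y:y + h, x:x + w] - tpls[k]) ** 2).sum()
                    if d < best:
                        best, pos = d, (y, x)
        i, j = pos
    return i, j

rng = np.random.default_rng(1)
img = rng.normal(size=(32, 32))
tpl = img[12:20, 8:16].copy()                # template cut from the image
loc = pyramid_match(img, tpl)
```

Most of the search happens on the tiny coarsest level, which is why the pyramid scheme is fast enough to locate the brake shoe in real time.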
Real-time inspection of key components is necessary to ensure the safe operation of freight cars, but traditional inspection depends on trained human inspectors and is time-consuming and inefficient. With the development of machine vision, vision-based inspection methods are seeing more on-site railway applications. The cross rod end fastening bolts are important components on both sides of the car body that fix the locking plates to the freight car's main structure. In our experiment, we acquire images containing the fastening bolt components and accurately locate the locking plate using a linear Support Vector Machine (SVM) locating model trained on Histograms of Oriented Gradients (HOG) features. We then extract straight line segments with the Line Segment Detector (LSD) and encode those within a given range into a straight line segment dataset. Finally, we determine the working state of the locking plate from the linear pattern. The experimental results show a localization accuracy above 99%, a fault detection rate above 95%, and a module processing rate of 2 frames/s. The overall performance fully meets practical railway safety assurance applications.
The article presents a new method to detect small moving targets in space surveillance. Image sequences are processed to detect and track targets under the assumption that the data samples are spatially registered. Maximum value projection and normalization are performed to reduce the data samples and eliminate the background clutter. Targets are then detected through connected component analysis. The velocities of the targets are estimated by centroid localization and least squares regression. The estimated velocities are utilized to track the targets. A sliding neighborhood operation is performed prior to target detection to significantly reduce the computation while preserving as much target information as possible. Actual data samples are acquired to test the proposed method. Experimental results show that the method can efficiently detect small moving targets and track their traces accurately. The centroid locating precision and tracking accuracy of the method are within a pixel.
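The projection, connected-component and velocity-estimation steps can be sketched on a synthetic registered sequence (the frame sizes, target path and thresholds are illustrative assumptions):

```python
import numpy as np
from scipy import ndimage

# Synthetic registered sequence: a unit-intensity target moving one
# pixel per frame along a row.
frames = np.zeros((5, 32, 32))
for k in range(5):
    frames[k, 10, 5 + k] = 1.0

proj = frames.max(axis=0)                  # maximum-value projection
mask = proj > 0.5                          # simple normalization/threshold
labels, n = ndimage.label(mask)            # connected-component analysis
centroids = ndimage.center_of_mass(mask, labels, range(1, n + 1))

# Per-frame centroid localization and least-squares velocity estimate:
cents = np.array([ndimage.center_of_mass(f) for f in frames])
t = np.arange(5)
vx = np.polyfit(t, cents[:, 1], 1)[0]      # column velocity, pixels/frame
```

The fitted velocity is then used to predict and gate the target position frame to frame, which is the tracking step described above.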
Space targets in astronomical images, such as spacecraft and space debris, are typically of low brightness and occupy only a few pixels, making them difficult to distinguish from fixed stars. Because extracting space target information is difficult while dynamic object monitoring plays an important role in the military, aerospace and other fields, track extraction of moving targets in short-exposure astronomical images is of great significance. First, stars of interest are captured by a region growing method in the sequence of short-exposure images, and their barycenters are extracted by a gray-weighted method. Second, an adaptive threshold method removes mismatched points and the sequence of astronomical images is registered. Third, the registered images are fused by an NCST-PCNN image fusion algorithm to preserve the energy of the stars. Fourth, the fused star image and the final star image are differenced by subtracting their brightness values, and candidate moving targets are captured by an energy accumulation method. Finally, the track of the moving target is extracted by verifying candidates through track association and excluding false moving targets. The proposed algorithm effectively extracts moving targets added artificially to sets of three or four images, verifying its effectiveness.
Detection and tracking of infrared small targets are important parts of automatic target recognition. When the platform carrying an infrared camera moves, the target's position change in the imaging plane is the composite of the motions of the target and of the camera platform. Traditional detection and tracking algorithms may lose the small target, causing follow-up detection and tracking to fail, because they do not account for the platform's movement. Moreover, when small targets with different motion features appear in the camera's view, some algorithms cannot distinguish them by motion feature because no trajectories exist in a unified coordinate system, which may leave true small targets undetected or detected incorrectly. To solve these problems, we present a method for the moving-platform condition. First, the platform's motion is obtained from inertial measurement values, and the platform's own motion is removed by coordinate transformation. Next, the trajectories of small targets with different motion features are estimated from their position changes in the same imaging plane coordinate system. Finally, different small targets are preliminarily recognized from their different trajectories. Experimental results show that this method improves the small target detection probability. Furthermore, when the platform fails to track the small target, the target's position in the next frame can be predicted from the fitted motion equation, enabling sustained and stable tracking.
Moving object detection is a major research direction in video surveillance systems. This paper proposes a novel approach for moving object detection that fuses information from a laser scanner and an infrared camera. First, in accordance with the characteristics of laser scanner data, we apply robust principal component analysis (RPCA) to moving object detection. Then the depth and angle information of moving objects is mapped to the infrared image pixels to obtain regions of interest (ROI). Finally, moving objects are recognized by examining the ROI. Experimental results show that this method has good real-time performance and accuracy.
Due to the limited depth of focus of the optical lenses in an imaging camera, it is impossible to acquire an image with all parts of the scene in focus. To make up for this limitation, fusing images taken at different focus settings into one image is a practical approach, and many fusion methods have been developed. However, the existing methods can hardly deal with the problem of blurred image detail. In this paper, a novel multiscale geometrical analysis called the directional spectral graph wavelet transform (DSGWT) is proposed, which integrates a nonsubsampled directional filter bank with the traditional spectral graph wavelet transform. By combining the spectral graph wavelet transform's efficiency in representing images containing regular or irregular regions with the directional filter bank's ability to capture directional information, the DSGWT can better represent the structure of images. Given this property, the DSGWT is applied to multi-focus image fusion to overcome the above disadvantage. On the one hand, the high-frequency subbands of the source images obtained by the DSGWT represent the source images efficiently. On the other hand, a morphological filter is applied to the sparse feature matrix obtained by the sum-modified-Laplacian focus measure, and the fused subbands are generated by morphological filtering. Comparison experiments on different image sets demonstrate that the proposed method significantly improves fusion performance compared with existing fusion methods.
Camera calibration is one of the indispensable steps for obtaining 3D depth information from 2D images in computer vision. Camera self-calibration is more convenient and flexible, especially for applications with large depth of field, wide field of view and scene conversion, as well as in other situations such as zooming. In this paper, a self-calibration method based on two vanishing points is proposed: the geometric properties of the vanishing points formed by two groups of orthogonal parallel lines are applied to camera self-calibration. Using the orthogonality of the vectors connecting the optical center to the vanishing points, constraint equations on the camera's intrinsic parameters are established. With this method, four intrinsic parameters of the camera can be solved from only four images taken from different viewpoints of a scene. Compared with the two other self-calibration methods, using the absolute quadric and a calibration plate, the two-vanishing-point method requires no calibration object, no camera movement and no information on the size or location of the parallel lines, needs no strict experimental equipment, and has a convenient calibration process and a simple algorithm. Comparison with the experimental results of the calibration plate method and of self-calibration using the machine vision software Halcon verifies the practicability and effectiveness of the proposed method.
To address the poor visual quality of infrared images, which typically have low contrast and weak detail, an adaptive detail enhancement method for infrared images based on the bilateral filter is proposed in this paper. First, using the bilateral filter, which has good filtering performance, the original infrared image is decomposed into a smoothed component and a detail component, the detail component being the difference between the original image and the smoothed component. The major merit of the bilateral filter is that the abundant, subtle detail content of the original infrared image, including many edges and textures, can be obtained by flexibly adjusting its parameters. The detail component then yields an adaptive detail enhancement weight, generated by normalizing the detail component; the weight lies in the range [0, 1] and its magnitude can be regarded as the local intensity of the image details, making it adaptive and effective for the original infrared image. Finally, a linear weighting strategy combines the original image and the adaptive weight to achieve image sharpening. Experimental results show that the proposed method outperforms conventional methods in visual effect and quantitative evaluation, providing a new approach to infrared image detail enhancement.
In this paper, color image encryption combined with ptychography is investigated. Ptychographic imaging has the remarkable advantage of a simple optical architecture, and the complex amplitude of an object can be reconstructed from a series of diffraction intensity patterns acquired via aperture movement. The traditional technique of three-primary-color synthesis is applied to encrypt the color image. To reduce physical limitations, the encryption algorithm operates in the Fresnel transform domain. It is shown that the proposed optical encryption scheme recovers the encrypted color plaintext well and enhances security thanks to the introduction of ptychography, since the light probe, as a key factor, enlarges the key space. Finally, the encryption's immunity to noise and the effect of lateral probe offset on reconstruction are investigated.
We propose a novel optical encryption system for three-dimensional imaging based on three-dimensional ptychography. With the proposed cryptosystem, a 3D object can be encrypted and decrypted successfully. Compared with conventional three-dimensional cryptosystems, not only can a pure-amplitude 3D object be encrypted, but the encryption of a complex-amplitude 3D object is also achievable. Since mutually overlapping probes are the crucial factor in ptychography, their complex amplitude functions can serve as secret keys, enlarging the key space and enhancing system security. Various simulation results demonstrate the feasibility and robustness of the cryptosystem. Furthermore, the proposed system could also be used for other potential applications, such as three-dimensional information hiding and multiple-image encryption.
Visual cryptography (VC) is a cryptographic scheme for images. In encryption, an image carrying a message is encoded into N sub-images, and any K of the sub-images can decode the message under special rules (N >= 2, 2 <= K <= N). When any K of the N sub-images are printed on transparencies and stacked exactly, the message of the original image is decrypted by the human visual system, while any K-1 of them reveal no information about it. This scheme decodes concealed images without any cryptographic computation and offers high security, but it lacks concealment because of the obvious features of the sub-images. In this paper, we introduce the indirect visual cryptography scheme (IVCS), which, building on visual cryptographic encoding, encodes the sub-images into pure phase images with no visible structure; the pure phase images are the final ciphertexts. The indirect scheme not only inherits the merits of visual cryptography but also adds indirection, concealment and security. Meanwhile, accurate alignment is no longer required, giving the scheme strong anti-interference capacity and robustness. The decryption system can be highly integrated and conveniently operated, and its decryption process is dynamic and fast, all of which indicates good potential in practice.
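The underlying VC encoding for the simplest K = N = 2 case can be sketched as follows (the 2x2 subpixel patterns are the classic construction; the 8x8 secret image is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(0)

def vc_shares(secret):
    """Classic (2,2) visual cryptography: each secret pixel expands to a
    2x2 block (1 = black subpixel).  White pixels get identical blocks
    in both shares; black pixels get complementary blocks.  Stacking
    (OR of black subpixels) turns black pixels fully black and leaves
    white pixels half black, so the eye sees the message."""
    pats = np.array([[[1, 0], [0, 1]], [[0, 1], [1, 0]]])
    h, w = secret.shape
    s1 = np.zeros((2 * h, 2 * w), dtype=int)
    s2 = np.zeros_like(s1)
    for i in range(h):
        for j in range(w):
            k = rng.integers(2)            # random pattern choice per pixel
            s1[2*i:2*i+2, 2*j:2*j+2] = pats[k]
            s2[2*i:2*i+2, 2*j:2*j+2] = pats[1 - k] if secret[i, j] else pats[k]
    return s1, s2

secret = np.zeros((8, 8), dtype=int)
secret[2:6, 2:6] = 1                       # black square message
a, b = vc_shares(secret)
stacked = a | b                            # physical stacking of transparencies
```

Each share alone is uniformly half-black (every 2x2 block has exactly two black subpixels), so it carries no information about the message; that per-share uniformity is exactly the "obvious feature" problem the IVCS addresses by pushing the shares into pure phase images.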
A novel dim small target detection algorithm based on the nonsubsampled contourlet transform (NSCT) and singular value decomposition (SVD) is proposed in this paper to improve detection performance under complex sky-cloud backgrounds. First, the original infrared image is decomposed by SVD, and reconstructions from different numbers of singular values are examined to analyze the application of SVD to the image. The complex cloud background is predicted by reconstructing the image from a chosen number of singular values, and this prediction is subtracted from the original image to suppress the background and enhance the target signal. Second, to exploit the scale and direction information of the image, the residual image is decomposed by the NSCT into several high-pass directional subbands and a low-pass subband. Third, SVD filtering is applied again to the directional subbands to eliminate noise and residual background, and the low-pass subband is modified by the local mean removal method. Finally, the refined subbands are reconstructed by the inverse NSCT to accomplish the dim small target detection. The experimental results demonstrate that the proposed algorithm achieves better subjective visual quality and objective numerical indicators, and acquires better target detection performance.
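The background prediction and subtraction step can be sketched with plain numpy SVD (the rank-1 synthetic "cloud" field and target position are illustrative assumptions):

```python
import numpy as np

def svd_background_suppress(img, k=1):
    """Predict the smooth background as a rank-k SVD reconstruction and
    subtract it, leaving the small target in the residual."""
    U, s, Vt = np.linalg.svd(img, full_matrices=False)
    background = (U[:, :k] * s[:k]) @ Vt[:k]   # truncated reconstruction
    return img - background

# Low-rank synthetic cloud background plus a dim one-pixel target:
y = np.linspace(0, 1, 64)
scene = np.outer(y, y)                     # rank-1 "cloud" field
scene[40, 25] += 0.5                       # dim small target
residual = svd_background_suppress(scene, k=1)
peak = np.unravel_index(np.abs(residual).argmax(), residual.shape)
```

Because the smooth background is well approximated by a few singular components while the point target is not, the truncated reconstruction absorbs the background and the residual peaks at the target.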
In this paper, a new temporal high-pass filter nonuniformity correction algorithm based on the guided filter is proposed, which addresses ghosting artifacts while preserving the details of the original image. In this algorithm, the original input image is separated into two parts: a high spatial-frequency part containing most of the nonuniformity, and a low spatial-frequency part with well preserved details. The fixed pattern noise is then estimated from the high spatial-frequency part and subtracted from the original image, achieving the nonuniformity correction. The performance of the presented algorithm is tested with two infrared image sequences, and the experimental results show that the proposed algorithm significantly reduces ghosting artifacts and achieves better nonuniformity correction performance.
For the automatic recognition of pointer instruments, a method based on an improved Hough transform is proposed in this paper. Automatic recognition must work under all kinds of lighting conditions, but binarization accuracy suffers when the light is too strong or too weak. An improved Otsu method is therefore proposed to adaptively threshold the dial image under varying illumination. Based on the characteristics of the dial image, the Otsu method first yields the maximum between-class variance and an initial threshold; the variance value is then analyzed to judge whether the image is over- or under-exposed. When the image is too bright or too dark, the pixels at the extreme end of the histogram are discarded and the threshold is recomputed iteratively until the best binarized image is obtained. The pointer line in the binarized image is then transformed into Hough parameter space by the improved Hough transform, and its position is determined by searching for the maximum of the accumulator array at each angle. Finally, the pointer reading is obtained by the angle method, using the linear relationship between the initial scale and the pointer angle. Results show that the improved Otsu method produces accurate binarized images even when the light is too bright or too dark, improving the adaptability of automatic pointer-instrument recognition to different lighting conditions. For pressure gauges with a range of 60 MPa, the relative identification error reaches 0.005 with the improved Hough transform algorithm.
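The iterative thresholding loop can be sketched as follows; the between-class-variance acceptance level and the fraction of dark pixels discarded per round are illustrative assumptions, since the abstract gives no numeric settings:

```python
import numpy as np

def otsu(pixels, bins=256):
    """Return (threshold, max between-class variance) for 8-bit data."""
    hist, _ = np.histogram(pixels, bins=bins, range=(0, bins))
    p = hist / max(hist.sum(), 1)
    levels = np.arange(bins)
    w0 = np.cumsum(p)                 # class-0 probability up to each level
    m = np.cumsum(p * levels)         # cumulative mean
    best_t, best_var = 0, -1.0
    for t in range(1, bins):
        w1 = 1.0 - w0[t - 1]
        if w0[t - 1] == 0.0 or w1 == 0.0:
            continue
        m0 = m[t - 1] / w0[t - 1]
        m1 = (m[-1] - m[t - 1]) / w1
        var = w0[t - 1] * w1 * (m0 - m1) ** 2
        if var > best_var:
            best_t, best_var = t, var
    return best_t, best_var

def adaptive_otsu(img, min_var=1000.0, drop_frac=0.1, max_iter=5):
    """Re-run Otsu after discarding the darkest pixels while the classes
    remain poorly separated (over- or under-exposed images)."""
    pixels = np.sort(img.ravel().astype(float))
    t = 0
    for _ in range(max_iter):
        t, v = otsu(pixels)
        if v >= min_var or pixels.size < 10:
            break
        pixels = pixels[int(drop_frac * pixels.size):]  # give up darkest pixels
    return t
```

On a cleanly bimodal dial-like image the first pass already separates the classes, so the loop returns immediately.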
Motion blur is one of the common causes of blurred images, and estimating the parameters of the point spread function (PSF) is the key prerequisite for restoring motion-blurred images. Based on the spectral and cepstral characteristics of motion-blurred images, an automatic detection algorithm combining frequency-domain and cepstrum-domain analysis is proposed in this paper, which detects the blur length and blur angle automatically so that the motion-blurred image can then be restored. Experiments show that, for noise-free blur lengths of 15 to 80 pixels and excluding a few individual length/angle combinations (e.g. 30 pixels/30°, 75 pixels/30°), the blur length estimation error is 0 to 0.2 pixels and the blur angle estimation error is almost 0. The detection range is greater than that of several other methods, and the quality of the restored images is good.
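A sketch of the frequency-domain half of the detector: a uniform horizontal blur of length L places periodic zeros in the spectrum every N/L bins, so the first spectral dip gives the blur length. The angle search and the cepstral refinement are omitted, and the dip threshold is an assumption:

```python
import numpy as np

def estimate_blur_length(img):
    """Estimate a uniform horizontal blur length from the spacing of
    spectral zeros (assumes a noise-free, horizontally blurred image)."""
    F = np.abs(np.fft.fft2(img))
    profile = F.mean(axis=0)              # magnitude per column frequency
    n = img.shape[1]
    dips = np.where(profile[1:n // 2] < 1e-6 * profile.max())[0] + 1
    return n // dips[0] if len(dips) else None  # zeros repeat every n/L bins

rng = np.random.default_rng(0)
img = rng.random((256, 256))
L = 16
kernel = np.zeros(256)
kernel[:L] = 1.0 / L
# circular horizontal blur applied row by row in the Fourier domain
blurred = np.fft.ifft(np.fft.fft(img, axis=1) * np.fft.fft(kernel), axis=1).real
```

Circular convolution keeps the spectral zeros exact, which is what makes the dip search reliable in this noise-free setting.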
A dim and small target detection method based on the surfacelet transform is proposed to improve detection performance under complex cloud backgrounds. Firstly, the original infrared image is decomposed by the surfacelet transform to extract and analyze its multi-scale and multi-directional characteristics. Then, the total variation method and the local mean removal method are applied to the high-frequency and low-frequency sub-bands respectively, refining the coefficients of the decomposed sub-bands. Finally, the refined sub-bands are reconstructed to separate the dim small target from the background clutter, so that the background is suppressed and the real target is detected effectively. Theoretical analysis and experimental results show that, compared with the wavelet transform method and the total variation method, the proposed method achieves higher ISCR and BSF values and performs better in both subjective vision and objective numerical evaluation.
In this paper we propose an efficient method to enhance contrast in digital video streams in real time by exploiting histogram variances and adaptively adjusting gamma curves. The proposed method aims to overcome the limitations of conventional histogram equalization, which often produces noisy, unrealistic effects in images. To improve visual quality, we use a gamma correction technique and choose different gamma curves according to the histogram variance of the images. With this scheme, the details of an image can be enhanced while the mean brightness level is kept. Experimental results demonstrate that our method is simple, efficient, and robust for both low and high dynamic scenes, and hence well suited for real-time, high-bit-depth video acquisition.
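The variance-driven gamma choice with mean-brightness preservation can be sketched as follows; the variance breakpoint and the gamma value are illustrative assumptions, and a real pipeline would select among several curves rather than two:

```python
import numpy as np

def adaptive_gamma(img, var_break=1000.0, gamma_low=0.6):
    """Pick a gamma curve from the histogram variance and apply it while
    keeping the mean brightness level (sketch)."""
    img = img.astype(float)
    var = img.var()                    # variance of the intensity histogram
    gamma = gamma_low if var < var_break else 1.0   # flat scene: stretch
    out = 255.0 * (img / 255.0) ** gamma
    out *= img.mean() / max(out.mean(), 1e-6)       # keep mean brightness
    return np.clip(out, 0.0, 255.0)

flat = np.tile(np.array([30.0, 60.0]), (16, 8))       # dark low-variance scene
busy = np.linspace(0.0, 255.0, 1024).reshape(32, 32)  # high-variance scene
out = adaptive_gamma(flat)
```

The low-variance scene is remapped with the stronger curve but keeps its mean; the high-variance scene falls through with gamma 1 and is left essentially unchanged.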
Spatio-temporal context learning is an online tracking algorithm based on a Bayesian framework, but it suffers from drift when applied to infrared dim targets. Starting from the principle of the spatio-temporal learning algorithm, this paper analyzes the drift problem and proposes a new robust algorithm for infrared dim target tracking. Firstly, the guided image filter is adopted to process the input image, preserving edges while eliminating noise. Secondly, the ideal spatial context model is calculated from the input image with the noise removed, the noise itself being obtained by subtracting the filtering result from the original image. Simultaneously, a new weight in the context prior model is proposed to reflect that the prior also depends on the local gray-level difference. The performance of the presented algorithm is tested with two infrared aerial image sequences, and the experimental results show that the proposed algorithm performs well in terms of efficiency, accuracy and robustness.
Multi-element scanning imaging is an imaging method conventionally used in spaceborne spectrometers. By scanning with multiple pixels simultaneously, the exposure time can be increased and the picture quality enhanced. But when this imaging method is applied to airborne remote sensing systems, a corresponding imaging model and correction algorithms must be built, because of the poor attitude stability of the airborne platform and its different characteristics and requirements. This paper builds a geometric correction model of an airborne long-linear-array multi-element scanning imaging system by decomposing the imaging process, and deduces the related correction algorithms. The sampling moment of the linear CCD can be treated as push-broom imaging, and the imaging of a single pixel over the whole whisk-broom period can be treated as whisk-broom imaging. Based on this decomposition, a collinearity-equation correction algorithm and a new tangent correction algorithm are deduced. As the simulation experiments show, combined with position and attitude data collected by the position and orientation measurement system, these algorithms can map pixel positions from image coordinates to WGS84 coordinates with high precision. In addition, some error factors and the correction accuracy are roughly analyzed.
Human eyes cannot notice low-contrast objects in an image. Image contrast enhancement can make such unnoticed objects visible, so that they can be detected and recognized. To guide the design of enhancement methods, their performance for object detection and recognition (ODR) should be evaluated. Existing evaluation methods assess image enhancement by the increment of contrast or of image information entropy. However, detecting and recognizing objects in an image is essentially an information transmission process, and image contrast enhancement can be viewed as a form of image coding. According to the properties of human vision, this paper models the transmission of ODR information and proposes a performance evaluation method based on Shannon information theory.
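For contrast, the conventional entropy-increment criterion that the paper argues is insufficient reduces to a few lines; this baseline is what the proposed information-transmission model is meant to replace (sketch):

```python
import numpy as np

def image_entropy(img, bins=256):
    """Shannon entropy (bits) of the gray-level histogram, the quantity
    whose increment conventional evaluation methods report."""
    hist, _ = np.histogram(img, bins=bins, range=(0, bins))
    p = hist / max(hist.sum(), 1)
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

flat = np.full((8, 8), 128)     # one gray level: zero entropy
bimodal = np.zeros((8, 8))
bimodal[:, 4:] = 255.0          # two equally likely levels: one bit
```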
Ship detection is important for both civil and military use. In this paper, we propose a novel ship detection method, the Adaptive Target Filter (ATF), for high-resolution optical imagery. The proposed framework comprises two stages. In the first stage, a test image is densely divided into detection windows and each window is transformed into a feature vector in its feature space; histograms of oriented gradients (HOG) are accumulated as the basic feature descriptor. In the second stage, the proposed ATF highlights all ship regions and adaptively suppresses the undesired backgrounds. Each detection window is assigned a score representing the degree to which the window belongs to a certain ship category. The ATF is obtained adaptively by weighted logistic regression (WLR) according to the distribution of backgrounds and targets in the input image. The main innovation of our method is that only positive training samples need to be collected to build the filter, while the negative training samples are generated adaptively from the input image. This differs from classification methods such as the support vector machine (SVM) and logistic regression (LR), which require both positive and negative training samples. Experimental results on 1-m high-resolution optical images show that the proposed method achieves the desired ship detection performance with higher quality and robustness than methods such as SVM and LR.
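The WLR fitting step can be sketched as a plain gradient-descent logistic regression with per-sample weights; the learning rate, iteration count, and the toy one-feature data are assumptions (the paper pairs the regression with HOG descriptors):

```python
import numpy as np

def weighted_logistic_regression(X, y, w, lr=0.1, iters=500):
    """Fit logistic-regression weights by gradient descent, with each
    sample's gradient contribution scaled by its weight w (sketch)."""
    theta = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ theta))     # predicted probabilities
        grad = X.T @ (w * (p - y)) / w.sum()     # weighted log-loss gradient
        theta -= lr * grad
    return theta

# toy separable data: bias column plus one feature (hypothetical, not HOG)
X = np.array([[1.0, -2.0], [1.0, -1.0], [1.0, 1.0], [1.0, 2.0]])
y = np.array([0.0, 0.0, 1.0, 1.0])
theta = weighted_logistic_regression(X, y, np.ones(4))
```

With uniform weights this reduces to ordinary logistic regression; the adaptive part of the ATF lies in how the negative-sample weights are derived from the input image.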
3D reconstruction of objects from laser-scanned point clouds is still a laborious task in many applications. Automating the 3D reconstruction process is an ongoing research topic that is hampered by the complex structure of the data; the main difficulty is the lack of knowledge about the structure of real-world objects. In this paper, we accumulate such structural knowledge in a probabilistic grammar learned from examples in the same category. The rules of the grammar capture compositional structures at different levels, and a feature-dependent probability function is attached to every rule. The learned grammar can be used to parse new 3D point clouds, organize segmented patches in a hierarchical way, and assign them meaningful labels. The parsed semantics can then guide reconstruction algorithms automatically. Examples are given to illustrate the method.
Optical devices are often used to digitize complex objects, capturing their shapes as point clouds. Such results carry no semantic meaning about the objects, and a tedious process is needed to segment the scanned data into meaningful parts. Humans perceive objects correctly by using knowledge, so Bayesian inference is applied toward the same goal. A probabilistic And-Or graph is used as a unified framework for representation, learning, and recognition across a large number of object categories, and a probabilistic model defined on this And-Or graph is learned from a relatively small training set per category. Given a set of 3D scanned data, Bayesian inference constructs the most probable interpretation of the object, and a semantic segmentation is obtained from the part decomposition. Examples are given to illustrate the method.
Image clarity, which reflects the sharpness at object edges, is an important quality evaluation index for optical remote sensing images, and much work has been done on its estimation. Common clarity-estimation methods for digital images include frequency-domain function methods, statistical parametric methods, gradient function methods and edge acutance methods. The frequency-domain function method is accurate, but its calculation is complicated and cannot be carried out automatically. Statistical parametric methods and gradient function methods are both sensitive to image clarity, but their results are easily affected by image complexity. The edge acutance method is effective, but it requires the edges to be picked out manually. Owing to these limits in accuracy, consistency or automation, the existing methods are not well suited to quality evaluation of optical remote sensing images. This article proposes a new clarity-evaluation method based on the principle of the edge acutance algorithm. In the new method, an edge detection algorithm and a gradient search algorithm automatically locate the object edges in the image, and the calculation of edge sharpness is improved. The new method has been tested on several groups of optical remote sensing images. Compared with existing automatic evaluation methods, it performs better in both accuracy and consistency, and is thus an effective clarity-evaluation method for optical remote sensing images.
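The automatic edge-based clarity score can be sketched as a gradient measure restricted to the strongest automatically detected edges; the Sobel operator stands in for the paper's edge detection, and the top-fraction cutoff is an assumption:

```python
import numpy as np
from scipy import ndimage

def edge_sharpness(img, top_frac=0.01):
    """Clarity score: mean gradient magnitude over the strongest edge
    responses, so flat areas do not dilute the measure (sketch)."""
    img = img.astype(float)
    mag = np.hypot(ndimage.sobel(img, axis=1), ndimage.sobel(img, axis=0))
    k = max(1, int(top_frac * mag.size))
    return float(np.sort(mag, axis=None)[-k:].mean())

step = np.zeros((64, 64))
step[:, 32:] = 200.0                       # ideal sharp edge
blurred = ndimage.uniform_filter(step, 9)  # defocused version of the same edge
```

Restricting the average to the top responses is what distinguishes this from a plain mean-gradient measure, which is nearly blur-invariant for a monotone step.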
At the National Institute of Metrology, China, a national-level standard was established for calibrating the level measuring instruments widely used in surveying, construction and engineering. As a key technology in its development, an autocollimation system was set up to compare the level under test with the water level. In this system, a charge-coupled device (CCD) camera acquires the image of a light spot reflected from the water surface and a plane mirror, and a composite image processing and analysis algorithm finds the exact position of the light spot in the image. This target localization algorithm consists of sub-algorithms for background de-noising, dimension transformation, and curve fitting. Experiments show that the algorithm achieves a resolution of 0.002″ and a repeatability of 0.01″.
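The localization chain can be sketched with a robust background removal followed by an intensity-weighted centroid; the centroid and the quantile background estimate stand in for the paper's dimension-transform and curve-fitting steps, and the synthetic spot is an assumption:

```python
import numpy as np

def spot_center(img, bg_quantile=0.5):
    """Background removal then intensity-weighted centroid of the spot."""
    clean = np.clip(img - np.quantile(img, bg_quantile), 0.0, None)
    ys, xs = np.indices(img.shape)
    total = clean.sum()
    return float((ys * clean).sum() / total), float((xs * clean).sum() / total)

# synthetic spot: Gaussian centered at (20.3, 35.7) on a constant background
y, x = np.indices((64, 64)).astype(float)
img = 10.0 + 100.0 * np.exp(-((y - 20.3) ** 2 + (x - 35.7) ** 2) / 8.0)
cy, cx = spot_center(img)
```

Because most pixels contain only background, the median is a robust estimate of the background level, and the clipped centroid recovers the subpixel center.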
An image sharpness assessment method based on the contrast sensitivity function (CSF) is proposed for assessing unfocused images. Firstly, the image undergoes a two-dimensional discrete Fourier transform (DFT), and the intermediate-frequency and high-frequency coefficients are each divided into two parts. Secondly, the four parts are transformed back by the inverse DFT (IDFT) to obtain sub-images. Thirdly, a range function evaluates the sharpness of each of the four sub-images. Finally, the overall image sharpness is obtained as the weighted sum of the sub-image sharpness values, with the weighting factors set according to the CSF so that the measure complies with its characteristics. The new algorithm and four typical evaluation algorithms (Fourier, range, variance and wavelet) are compared on six quantitative indices: the width of the steep part of the focusing curve, the sharpness ratio, the steepness, the variance of the flat part of the focusing curve, the local-extremum factor and the sensitivity. The effects of noise and image content on the algorithm are also analyzed. Experimental results show that the new algorithm offers better sensitivity and noise robustness than the four typical algorithms, and its evaluation results are consistent with human visual characteristics.
A fractal interpolation algorithm is discussed in detail, and the statistical self-similarity of the light field is analyzed in a correlation experiment. For correlation imaging under low sampling frequency, an image analysis approach based on fractal interpolation is proposed. The approach aims to improve the resolution of an original image containing few pixels and to highlight its fuzzy contour features. With this method, a new model of the light field is established. For different moments of the intensity in the receiving plane, a local field division is also established, and the iterated function system based on the experimental data set is obtained by choosing an appropriate compression ratio under a sound error estimate. On the basis of the iterated function system, an explicit fractal interpolation function is given. Simulation results show that the correlation image reconstructed by fractal interpolation approximates the original image well, and the number of pixels after interpolation increases significantly. The method effectively compensates for the shortage of image pixels and significantly improves object outlines in the image. The deviation rate is adopted as a parameter to evaluate the effect of the algorithm objectively. In summary, the proposed fractal interpolation method not only preserves the overall image but also enriches the local information of the original image.
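An explicit iterated-function-system construction can be sketched as follows; with all vertical scaling factors d_i = 0 it degenerates to piecewise-linear interpolation, which makes it easy to check. The chaos-game sampling and the affine-map coefficients are standard fractal-interpolation machinery, not the paper's specific light-field model:

```python
import numpy as np

def fractal_interpolate(xs, ys, d, n_points=2000, seed=0):
    """Chaos-game sampling of the fractal interpolation function through
    the points (xs, ys), with vertical scaling factors d (|d_i| < 1)."""
    xs, ys, d = map(np.asarray, (xs, ys, d))
    span_x, span_y = xs[-1] - xs[0], ys[-1] - ys[0]
    # standard affine-map coefficients: w_n(x, y) = (a x + e, c x + d y + f)
    a = np.diff(xs) / span_x
    e = (xs[-1] * xs[:-1] - xs[0] * xs[1:]) / span_x
    c = (np.diff(ys) - d * span_y) / span_x
    f = (xs[-1] * ys[:-1] - xs[0] * ys[1:]
         - d * (xs[-1] * ys[0] - xs[0] * ys[-1])) / span_x
    rng = np.random.default_rng(seed)
    pt = np.array([xs[0], ys[0]], float)
    out = np.empty((n_points, 2))
    for k in range(n_points):
        n = rng.integers(len(a))          # pick a random map and apply it
        pt = np.array([a[n] * pt[0] + e[n],
                       c[n] * pt[0] + d[n] * pt[1] + f[n]])
        out[k] = pt
    return out

# sanity case: data on the line y = 2x + 1 with all d_i = 0
pts = fractal_interpolate([0.0, 1.0, 2.0, 3.0], [1.0, 3.0, 5.0, 7.0],
                          [0.0, 0.0, 0.0])
```

Nonzero d_i would add self-similar vertical detail between the data points, which is how the method increases the effective pixel count of a sparse correlation image.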
In the analysis of neural cell images acquired with an optical microscope, accurate and rapid segmentation is the foundation of a nerve cell detection system. In this paper, a modified image segmentation method based on the support vector machine (SVM) is proposed to reduce the adverse impact of low contrast between objects and background, interference from adherent and clustered cells, and similar factors. Firstly, morphological filtering and the Otsu method are applied to preprocess the images and extract the neural cells roughly. Secondly, stellate vector, circularity and histogram of oriented gradients (HOG) features are computed to train the SVM model. Finally, an incremental-learning SVM classifier is used to classify the preprocessed images, and the initial recognition areas it identifies are added to the library as positive samples for further training. Experimental results show that the proposed algorithm achieves much better segmentation results than classic segmentation algorithms.
In binocular stereo matching, mistakes appear relatively easily in low-texture regions because of the weak detail information. To eliminate matching ambiguity while maintaining the matching rate, this paper proposes a stereo matching algorithm based on image segmentation. In most low-texture regions, traditional cost functions are used, and algorithms can only be improved through measures such as well-chosen support windows or dynamic programming; the results of these algorithms smooth the whole image and lose many details. The matching cost function in our algorithm is based on the assumption that pixels within a homogeneous area are similar, and it reduces the use of multiplication so as to obtain better visual effects at lower computational complexity. First, segmentation maps of the stereo images are formed as guidance. Next, the aggregated matching cost is calculated in the horizontal and vertical directions successively with reference to the segmentation maps. Finally, the disparity map is obtained with an optimization algorithm using the winner-takes-all (WTA) principle. The computational complexity of this algorithm is independent of the window size and suits windows of different sizes and shapes. Experimental results show that the algorithm obtains better matching precision on colorful low-texture stereo image pairs with little increase in computational complexity, and it can to some extent improve the matching quality in regions with repeated texture.
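A plain box-aggregated winner-takes-all baseline, the kind of cost/aggregation scheme the proposed segmentation-guided method builds on; the window radius and the use of absolute differences are assumptions of the sketch:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def wta_disparity(left, right, max_disp, radius=2):
    """Absolute-difference cost, box-window aggregation, winner-takes-all."""
    h, w = left.shape
    cost = np.full((max_disp + 1, h, w), np.inf)
    for d in range(max_disp + 1):
        diff = np.abs(left[:, d:] - right[:, :w - d])   # per-pixel cost
        cost[d, :, d:] = uniform_filter(diff, 2 * radius + 1)  # aggregation
    return np.argmin(cost, axis=0)                      # WTA over disparities

# synthetic pair: the left view is the right view shifted by 3 pixels
rng = np.random.default_rng(1)
right = rng.random((32, 64))
left = np.empty_like(right)
left[:, 3:] = right[:, :-3]
left[:, :3] = right[:, :3]
disp = wta_disparity(left, right, max_disp=8)
```

On a richly textured synthetic pair this baseline recovers the shift exactly; the segmentation guidance in the paper targets the low-texture regions where it does not.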
Near real-time stereo based on effective cost aggregation is a local matching algorithm that achieves good accuracy while meeting near real-time requirements. However, the supplementary window term added to its cost function makes object edges lose some precision. This paper proposes a novel stereo matching algorithm whose aggregation strategy relies on segmentation, improving both matching accuracy and computational efficiency. First, the left image is divided into segments of homogeneous color under the assumption that disparity varies smoothly inside each segment, and the segmented image block is used as the matching window to search for similar regions in the right image. Second, the existing matching cost function combines a segmentation window term, which tends to assign the same disparity to all points in a segment, with a supplementary window term, which reduces the dependence of the center pixel on the segmentation block in highly textured areas and relies more on neighborhood pixels; it is this supplementary term that blurs object edges. We therefore propose an adaptive supplementary window cost function that uses the segmentation block size as an adaptive parameter for the added window; it not only resolves mismatches in highly textured regions but also improves precision at object edges. The experimental results demonstrate that our method improves the accuracy of fast local methods and offers an interesting trade-off between accuracy and speed, especially in regions of high texture and close to boundaries.
The locally adaptive regression kernel model can describe edge shapes accurately and capture the overall graphic trend of an image, but it does not consider color information, even though color is an important element of an image. We therefore present a novel target recognition method based on a 3-D color-space locally adaptive regression kernel model. Unlike approaches that append color information separately, this method directly calculates local similarity features from the 3-D data of the color image. The proposed method uses a few examples of an object as a query to detect generic objects with incompact, complex and changeable shapes. It involves three phases. First, novel color-space descriptors, which measure the likeness of a voxel to its surroundings, are calculated from the RGB color space of the query image; salient features containing both spatial and color information are extracted from these descriptors and simplified by principal component analysis (PCA) to construct a distinctive local structure feature set for the object class. Second, the salient features are compared with analogous features from the target image using a matrix generalization of the cosine similarity measure, and similar structures in the target image are obtained by statistical matching of local similarity structures. Finally, non-maxima suppression on the similarity image extracts the object position and marks the object in the test image. Experimental results demonstrate that our approach is effective and accurate in improving the ability to identify targets.
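The matrix generalization of cosine similarity used to compare the salient feature sets reduces to the Frobenius inner product divided by the product of Frobenius norms (sketch):

```python
import numpy as np

def matrix_cosine_similarity(A, B):
    """Frobenius inner product of two feature matrices over the product
    of their Frobenius norms: 1 for aligned features, 0 for orthogonal."""
    return float((A * B).sum() / (np.linalg.norm(A) * np.linalg.norm(B)))

A = np.array([[1.0, 0.0], [0.0, 1.0]])
B = np.array([[0.0, 1.0], [-1.0, 0.0]])   # Frobenius-orthogonal to A
```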