Relative radiometric normalization (RRN) of remotely sensed images is often a preprocessing step in time series analysis and change detection. Conventional RRN methods may lessen the radiation difference of changed pixels during the RRN process, thus reducing the accuracy of change detection. To solve this problem, we propose a relative radiometric correction method based on the wavelet transform and iteratively reweighted multivariate alteration detection (IR-MAD). A wavelet transform is applied to separate the high- and low-frequency components of both the target image and the reference image. The high-frequency components remain unprocessed to preserve high-frequency information, while the IR-MAD algorithm normalizes the low-frequency component of the target image. An inverse wavelet transform then reconstructs the radiometrically normalized image. We compared the proposed method against traditional histogram matching, mean-variance normalization, the original IR-MAD method, and a method combining the wavelet transform with low-pass filtering, and conducted change detection to evaluate RRN quality. The experiments show that the proposed method not only effectively eliminates the overall radiation difference between images but also enables higher change detection accuracy.
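The decompose-normalize-reconstruct pipeline can be sketched as below. This is a simplified illustration, not the paper's implementation: it uses a one-level Haar transform for the frequency split and an ordinary least-squares gain/offset fit as a stand-in for IR-MAD (the real algorithm iterates, reweighting pixels by their no-change probability); all function names are ours.

```python
import numpy as np

def haar_dwt2(img):
    """One-level 2D Haar transform: returns (LL, (LH, HL, HH))."""
    a = img[0::2, 0::2]; b = img[0::2, 1::2]
    c = img[1::2, 0::2]; d = img[1::2, 1::2]
    ll = (a + b + c + d) / 4.0   # low-frequency approximation
    lh = (a + b - c - d) / 4.0   # horizontal detail
    hl = (a - b + c - d) / 4.0   # vertical detail
    hh = (a - b - c + d) / 4.0   # diagonal detail
    return ll, (lh, hl, hh)

def haar_idwt2(ll, bands):
    """Inverse of haar_dwt2 (exact reconstruction)."""
    lh, hl, hh = bands
    h, w = ll.shape
    out = np.empty((2 * h, 2 * w))
    out[0::2, 0::2] = ll + lh + hl + hh
    out[0::2, 1::2] = ll + lh - hl - hh
    out[1::2, 0::2] = ll - lh + hl - hh
    out[1::2, 1::2] = ll - lh - hl + hh
    return out

def normalize_low_freq(target, reference):
    """Normalize only the low-frequency band of `target` to `reference`,
    keeping the target's high-frequency detail untouched."""
    ll_t, hf_t = haar_dwt2(target)
    ll_r, _ = haar_dwt2(reference)
    # Linear gain/offset fit: a crude stand-in for IR-MAD's
    # invariant-pixel-weighted regression.
    A = np.vstack([ll_t.ravel(), np.ones(ll_t.size)]).T
    gain, offset = np.linalg.lstsq(A, ll_r.ravel(), rcond=None)[0]
    return haar_idwt2(gain * ll_t + offset, hf_t)
```

Because only the LL band is adjusted, any changed-pixel detail living in the high-frequency bands passes through unmodified, which is the point of the split.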
Object-based image analysis has many advantages over pixel-based methods, making it a current research hotspot. Obtaining image objects through multi-scale image segmentation is an essential prerequisite for object-based image analysis. Most popular segmentation methods follow a bottom-up principle, which is simple to implement and yields accurate object boundaries. However, the macro-statistical characteristics of image regions are difficult to take into account, and fragmented (over-segmented) results are hard to avoid. Moreover, in information extraction, target recognition, and other applications, image targets are not equally important: specific targets or target groups with particular features warrant more attention than others. To avoid over-segmentation and highlight the targets of interest, this paper proposes a multi-scale image segmentation method with visual saliency constraints. Visual saliency theory and a typical feature extraction method are adopted to obtain visual saliency information, especially the macroscopic information to be analyzed. This saliency information serves as a distribution map of homogeneity weights: each pixel is assigned a weight that acts as one of the merging constraints during multi-scale segmentation. As a result, pixels that macroscopically belong to the same object but differ locally are more likely to be assigned to the same object. In addition, the visual saliency model allows the constraint on local-macroscopic characteristics to be controlled per object during segmentation, improving the completeness of salient areas in the segmentation results while diluting the constraint over non-salient background areas.
Experiments show that this method segments texture images better than traditional multi-scale segmentation methods and gives priority control over the salient objects of interest. The method has been applied to image quality evaluation, scattered residential area extraction, sparse forest extraction, and other tasks to verify its effectiveness; all applications showed good results.
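The saliency-weighted merging constraint can be illustrated on a toy 1-D "image". This is our own simplified sketch, not the paper's algorithm: the weighting rule (scaling the intensity gap by `1 - saliency`) is a hypothetical choice that captures the stated effect, namely that high-saliency pixels which differ locally can still merge into one object while low-saliency background merges only when locally homogeneous.

```python
import numpy as np

def merge_pass(values, saliency, threshold):
    """Greedy merging of adjacent 1-D segments. Two neighbours merge when
    the saliency-weighted intensity gap falls below `threshold`.
    `saliency` holds per-pixel weights in [0, 1]; higher saliency lowers
    the effective gap, loosening the homogeneity requirement."""
    segments = [[i] for i in range(len(values))]
    merged = True
    while merged:
        merged = False
        for k in range(len(segments) - 1):
            a, b = segments[k], segments[k + 1]
            gap = abs(np.mean([values[i] for i in a]) -
                      np.mean([values[i] for i in b]))
            w = np.mean([saliency[i] for i in a + b])
            if gap * (1.0 - w) < threshold:   # saliency-weighted constraint
                segments[k:k + 2] = [a + b]
                merged = True
                break
    return segments
```

With zero saliency the rule degenerates to plain intensity-based merging; with saliency near one, locally different pixels inside a salient object are united, mirroring the behaviour described above.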
High-resolution image change detection is a key technology in remote sensing applications and is of great significance for resource surveys, environmental monitoring, precision agriculture, military mapping, and battlefield environment detection. In this paper, Random Forest (RF), Support Vector Machine (SVM), Deep Belief Network (DBN), and AdaBoost models were established for high-resolution satellite imagery to verify the applicability of different machine learning methods to change detection. To compare the detection accuracy of the four methods, we applied them to two high-resolution images. The results show that, with small sample sizes, SVM achieves higher overall accuracy than RF, AdaBoost, and DBN for both binary and from-to change detection. As the number of samples increases, RF achieves higher overall accuracy than AdaBoost, SVM, and DBN.
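The comparison skeleton might look like the sketch below, assuming scikit-learn is available. This is not the paper's experiment: the per-pixel band-difference features and labels are synthetic, DBN is omitted because scikit-learn has no standard implementation of it, and the hyperparameters are illustrative defaults.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
# Synthetic per-pixel band-difference features: unchanged pixels cluster
# near zero difference, changed pixels show a systematic shift.
unchanged = rng.normal(0.0, 0.3, size=(500, 4))
changed = rng.normal(1.5, 0.3, size=(500, 4))
X = np.vstack([unchanged, changed])
y = np.array([0] * 500 + [1] * 500)          # 0 = unchanged, 1 = changed
Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

models = {
    "RF": RandomForestClassifier(n_estimators=50, random_state=0),
    "SVM": SVC(kernel="rbf"),
    "AdaBoost": AdaBoostClassifier(random_state=0),
}
# Overall accuracy on the held-out pixels, per classifier.
scores = {name: m.fit(Xtr, ytr).score(Xte, yte) for name, m in models.items()}
```

Varying the training-set size in an outer loop would reproduce the small-sample versus large-sample comparison reported in the abstract.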
Unmanned aerial vehicle (UAV) remote sensing technology has come into wide use in recent years. The poor stability of the UAV platform, however, produces more inconsistencies in hue and illumination among UAV images than other, more stable platforms. Image dodging is a process used to reduce these inconsistencies caused by different imaging conditions. We propose an algorithm for automatic image dodging of UAV images using two-dimensional radiometric spatial attributes. We use object-level image smoothing to smooth foreground objects in the images and acquire an overall reference background image by relative radiometric correction. We apply the Contourlet transform to separate the high- and low-frequency sections of every single image, and replace the low-frequency section with the one extracted from the corresponding region in the overall reference background image. We then apply the inverse Contourlet transform to reconstruct the final dodged images. In this process, a single image must be split into blocks of reasonable size with overlaps because of the large number of pixels. Experimental mosaic results show that our proposed method reduces the uneven distribution of hue and illumination. Moreover, it effectively eliminates the dark-bright interstrip effects caused by shadows and vignetting in UAV images while maximally protecting image texture information.
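The core low-frequency-replacement step can be sketched with a separable box filter standing in for the Contourlet low-frequency section (the actual method uses the Contourlet transform, which also captures directional detail; there is no Contourlet implementation in the Python standard scientific stack). Function names and the kernel size are illustrative.

```python
import numpy as np

def box_blur(img, k):
    """Separable k-by-k box filter: a crude low-pass stand-in for the
    Contourlet low-frequency section."""
    kern = np.ones(k) / k
    sm = np.apply_along_axis(lambda r: np.convolve(r, kern, mode="same"), 1, img)
    return np.apply_along_axis(lambda c: np.convolve(c, kern, mode="same"), 0, sm)

def dodge(image, reference_background, k=15):
    """Replace the image's low-frequency illumination with that of the
    reference background while keeping its own high-frequency texture."""
    low = box_blur(image, k)
    high = image - low                       # texture / high-frequency detail
    ref_low = box_blur(reference_background, k)
    return ref_low + high                    # swap in reference illumination
```

In the full algorithm this substitution is done per block, with overlapping blocks feathered together, and on the Contourlet low-frequency band rather than a box-blurred image.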
Traditional relative radiometric normalization methods generally depend on global statistical linear parameters, do not consider the two-dimensional radiometric distribution, and do not eliminate foreground objects in an image. We therefore present a relative radiometric normalization method based on object-oriented smoothing and contourlet transforms. Object-level smoothing is applied to both the reference image and the image to be normalized, so as to reduce the influence of foreground objects on background radiation extraction. Then, the high-frequency and low-frequency sections of each image are separated by contourlet transforms to preserve the high-frequency texture information of the image to be normalized, with low-pass filtering applied to the low-frequency sections to gather the background radiation difference. Finally, inverse contourlet transforms are used to reconstruct the radiometrically normalized images. Test results show that the proposed method is effective for radiometric normalization of images with both large-scale and small-scale radiometric characteristics. The method can normalize linear and nonlinear radiation differences at the same time while maximally preserving image texture information; it improves the visual quality of normalized images and increases change detection accuracy.
Image registration plays a critically important role in many practical problems in diverse fields. A new object-oriented image matching algorithm is presented based on the convexity model (CM) and full-scale image segmentation. The core idea of this algorithm is to use image objects, rather than points or lines, as the matching units. The algorithm first converts images into image-object trees by full-scale segmentation under the convexity-model restriction. Because image objects that accord with the convexity model have rich, reliable statistical information and stable shapes, more characteristics can be used in object-based image matching than in pixel-based matching. Initial experiments show that the proposed matching algorithm is insensitive to rotation and resolution distortion and can accomplish image matching and registration automatically.
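Once each image is reduced to a set of per-object feature vectors (e.g. area fraction, mean intensity, compactness), object correspondence can be established by mutual nearest-neighbour matching, as in the sketch below. This is our own illustration of the object-as-matching-unit idea, not the paper's CM-specific procedure; the feature vectors are assumed to be precomputed and roughly rotation- and resolution-invariant.

```python
import numpy as np

def match_objects(feats_a, feats_b):
    """Mutual nearest-neighbour matching of object feature vectors.
    Returns index pairs (i, j) where objects a[i] and b[j] each pick
    the other as their closest match in Euclidean feature distance."""
    a = np.asarray(feats_a, dtype=float)
    b = np.asarray(feats_b, dtype=float)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=2)
    ab = d.argmin(axis=1)          # best b-object for each a-object
    ba = d.argmin(axis=0)          # best a-object for each b-object
    return [(i, int(j)) for i, j in enumerate(ab) if ba[j] == i]
```

The mutual-consistency check discards ambiguous objects, and the surviving object pairs could then seed a geometric transform estimate for registration.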
The key operation in airport extraction from remote sensing images is to extract the airport edges and obtain their approximating straight lines. For example, the Canny algorithm can extract image edges, and the resulting edges can be fitted to straight lines by the Hough transform or other line-fitting methods. However, the background of an airport target is so complex that the Canny algorithm extracts large numbers of useless edge pixels from the surroundings, and these cluttered edge pixels interfere with subsequent analysis. For instance, it is difficult for the Hough transform to extract the useful straight lines of airport edges from a binary image containing both airport edges and useless edge pixels, because the proportion of cluttered edges far exceeds that of airport edges. One solution is to smooth the image before edge detection. Unfortunately, most image smoothing operations cannot weaken useless edges effectively; worse, they also damage the useful ones, making it even more difficult to extract the desired straight lines. Although some edge-preserving smoothing algorithms have been proposed, the problem remains difficult because too many cluttered but robust edges are preserved along with the useful ones. In this paper, a novel edge-preserving image smoothing algorithm based on the Convexity Model is discussed together with its practical application in airport extraction. The algorithm whittles down or suppresses regions whose features accord with the Convexity Model and whose sizes are smaller than a specified threshold. Experimental results show that the algorithm is effective in removing noise and small regions while having little influence on the edges of targets of interest whose scales exceed the specified threshold. Practical applications show that this smoothing algorithm can increase the efficiency and precision of airport extraction.
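The size-thresholded suppression step can be illustrated on a binary edge map. The sketch below is a plain connected-component size filter, a simplified stand-in for the Convexity-Model criterion (which additionally tests region features, not just size); it shows why small cluttered regions vanish while a long airport-edge-like structure survives for the Hough stage.

```python
import numpy as np

def suppress_small_regions(binary, min_size):
    """Remove 4-connected foreground components smaller than `min_size`
    pixels; a simplified stand-in for Convexity-Model suppression."""
    h, w = binary.shape
    out = binary.copy()
    seen = np.zeros_like(binary, dtype=bool)
    for sy in range(h):
        for sx in range(w):
            if binary[sy, sx] and not seen[sy, sx]:
                # flood-fill one connected component
                stack, comp = [(sy, sx)], []
                seen[sy, sx] = True
                while stack:
                    y, x = stack.pop()
                    comp.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if (0 <= ny < h and 0 <= nx < w
                                and binary[ny, nx] and not seen[ny, nx]):
                            seen[ny, nx] = True
                            stack.append((ny, nx))
                if len(comp) < min_size:       # small region: suppress it
                    for y, x in comp:
                        out[y, x] = False
    return out
```

After suppression, the surviving edges dominate the Hough accumulator instead of being drowned out by background clutter.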
Spatial data mining and knowledge discovery (SDMKD) and remote sensing knowledge representation (KR) are important for realizing intelligent, automatic image processing, so the mechanisms of knowledge acquisition, utilization, representation, and storage are worth researching. In this paper, object-oriented image analysis (OOIA) is discussed and its knowledge is summarized. Based on object-oriented image analysis, a method for object-oriented knowledge representation and storage in XML is proposed, and the way to acquire and utilize knowledge in image analysis is described through a typical practice. Based on the proposed principle, the authors implemented a prototype system and obtained the expected results in extensive experiments.
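Storing object-oriented analysis knowledge in XML might look like the sketch below, using Python's standard library. The schema (a `<knowledge>` root holding `<rule>` elements that map object features to a class label) is entirely hypothetical, invented for illustration; the paper's actual schema is not specified here.

```python
import xml.etree.ElementTree as ET

# Writing knowledge: one rule classifying image objects as "water"
# when hypothetical feature thresholds are exceeded.
root = ET.Element("knowledge", domain="object-oriented-image-analysis")
rule = ET.SubElement(root, "rule", target="water")
ET.SubElement(rule, "feature", name="NDWI", op="gt").text = "0.3"
ET.SubElement(rule, "feature", name="area", op="gt").text = "100"
xml_text = ET.tostring(root, encoding="unicode")

# Reading the knowledge back for use during image analysis.
parsed = ET.fromstring(xml_text)
rules = {r.get("target"): {f.get("name"): float(f.text) for f in r}
         for r in parsed.findall("rule")}
```

Serializing rules this way decouples the knowledge base from the analysis code, so thresholds can be revised without touching the prototype system.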
The diversity of spatial scales in a landscape raises the requirement of multiscale analysis of remote sensing (RS) images. Usually the first step in analyzing remote sensing images is image segmentation, in which the multiscale effect should be taken into account to achieve satisfactory segmentation results. This paper describes an effective approach to segmenting remote sensing images at multiple scales. Based on the fact that, at a specific scale of a remote sensing image, pixels of the same object are similar, the image is first segmented at a small scale by uniting the most similar objects. A set of multiscale objects with full topological relationships can then be obtained. Based on this set of multiscale objects, the authors explore the application of the approach to object-oriented information extraction from remote sensing images.
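The unite-the-most-similar-objects step naturally produces a scale hierarchy, which the toy 1-D sketch below makes concrete. It is our own illustration under simplifying assumptions (1-D segments, mean intensity as the sole similarity measure), not the paper's segmentation algorithm; each recorded level is one entry in the multiscale object set.

```python
import numpy as np

def multiscale_merge(values, levels):
    """Bottom-up merging: at each level, unite the two adjacent segments
    with the most similar means. Returns the segmentation at every level,
    from individual pixels (level 0) upward."""
    segs = [[i] for i in range(len(values))]
    history = [[list(s) for s in segs]]
    while len(segs) > 1 and len(history) < levels:
        # similarity = absolute difference of segment means
        gaps = [abs(np.mean([values[i] for i in segs[k]]) -
                    np.mean([values[i] for i in segs[k + 1]]))
                for k in range(len(segs) - 1)]
        k = int(np.argmin(gaps))               # most similar adjacent pair
        segs[k:k + 2] = [segs[k] + segs[k + 1]]
        history.append([list(s) for s in segs])
    return history
```

Because every level is a coarsening of the previous one, the containment (topological) relationships between objects across scales come for free, which is what the object-oriented extraction stage exploits.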