1. Introduction

Texture features have shown significant advantages in image classification,1 image segmentation,2 and content-based image retrieval (CBIR),3,4 among other fields. In particular, texture features are low-level features that have been widely used in the CBIR community because they are largely independent of image color and intensity. Popular textural descriptors, such as the gray level co-occurrence matrix (GLCM),5 Gabor filters,6 the wavelet transform,7 and the local binary pattern (LBP),8 have been used extensively in CBIR. Unfortunately, these conventional texture features are extracted directly from grayscale images and leave out the discriminative information carried by the different color channels, which can serve as complementary information for distinguishing texture patterns. Many studies have sought to exploit this information to improve the retrieval of remote sensing images. Their strategies can be roughly divided into two categories: (1) combination of color and texture features and (2) texture features integrating opponent process theory. Representative works based on the former strategy include the following. Lin et al.9 proposed a smart CBIR system based on color and texture features. Chun et al.10 presented a CBIR method based on a combination of color and texture features extracted in the multiresolution wavelet domain. Liapis and Tziritas11 described an image retrieval mechanism combining texture and color features obtained from discrete wavelet frames analysis and one-dimensional histograms of CIELab chromaticity coordinates, respectively. This strategy has also been adopted as one of the main retrieval mechanisms in well-known image retrieval systems, such as query by image content (QBIC).12,13 Other similar works14–17 can be found in the literature.
Although these works take both discriminative color information and texture features into consideration, problems such as computational complexity and the definition of weight parameters for the combined features remain open questions. In 1957, Hurvich and Jameson18 proposed an opponent process theory of human color vision, and texture features integrating opponent process theory have since drawn substantial attention. Jain and Healey19 proposed a multiscale representation based on the opponent process theory for texture recognition, and the method was later applied to hyperspectral image texture recognition.20 In a recent work by Choi et al.,21 two features, color local Gabor wavelets and color LBP, were proposed for face recognition; they share similar principles and can be regarded as an extended application of the theory in Ref. 19. The opponent process theory provides complementary information among color channels and yields a simple but effective feature representation. Motivated by these applications of opponent process theory, in this study we propose a descriptor named color Gabor wavelet texture (CGWT) for remote sensing image retrieval. A color Gabor opponent texture (CGOT) descriptor based on Gabor wavelets is also presented to improve the retrieval results for image classes that have inferior precision under the CGWT representation. The rest of this study is organized as follows. Section 2 presents the framework of remote sensing image retrieval based on the proposed descriptors and details the proposed features, the parameters used, and the similarity measure defined for the CGOT descriptor. Section 3 presents comparative experimental results and discussion. Conclusions and future work constitute Sec. 4.
2. Improved Color Texture Descriptors

2.1. Framework of Remote Sensing Image Retrieval Based on the Proposed Descriptors

Generally, an image retrieval system contains an image database, a feature database, and several important functional modules, such as feature extraction, an indexing mechanism, and a similarity measure. Figure 1 illustrates the framework of the improved color texture descriptors for remote sensing image retrieval in this study, which mainly contains two parts: feature extraction and image retrieval. The feature extraction part covers the extraction procedure of the proposed features. Given an RGB remote sensing image, the three color channel images R, G, and B are obtained first. Unichrome features for each color channel are then extracted with a Gabor filter of orientation μ and scale ν. Finally, the R, G, and B unichrome features are concatenated to form the unichrome feature. For the opponent feature, two Gabor filters with orientation μ and scales ν and ν′ are applied to each pair of color channel images. As with the unichrome feature, the RG, RB, and GB opponent features are concatenated to form the opponent features. The image retrieval part illustrates a simple remote sensing image retrieval procedure. All images and features are stored in the image database and feature database, respectively, and images are associated with their features through an indexing mechanism. Given a query image, distances between the query image and the database images are calculated with a predefined similarity measure, and the top-ranked most similar images are returned in order of similarity. Feature extraction is an important and indispensable part of an image retrieval system; Sec. 2.2 details the extraction of the proposed representations.
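The retrieval loop of this framework (compute the distance from the query to every database feature, then return the top-ranked images) can be sketched as follows; the random stand-in features, the Euclidean placeholder distance, and all names here are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Hypothetical feature database: image id -> feature vector. Random vectors
# stand in for the CGWT/CGOT features described in this paper.
rng = np.random.default_rng(0)
feature_db = {f"img_{i:03d}": rng.random(8) for i in range(20)}

def retrieve(query_feat, feature_db, distance, top_k=5):
    """Rank database images by ascending distance to the query feature."""
    scored = [(img_id, distance(query_feat, feat))
              for img_id, feat in feature_db.items()]
    scored.sort(key=lambda pair: pair[1])
    return scored[:top_k]

# Placeholder metric; the paper defines its own measures for CGWT and CGOT.
def euclidean(a, b):
    return float(np.linalg.norm(a - b))

# Querying with an image already in the database returns that image first.
results = retrieve(feature_db["img_000"], feature_db, euclidean)
```

Any of the similarity measures discussed in Sec. 2.3 can be plugged in for the `distance` argument without changing the ranking loop.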
In addition, as the most important procedure of the image retrieval part, the similarity measure methods used in this study are discussed in Sec. 2.3.

2.2. Feature Extraction

In our methodology, all images are represented in RGB color space for convenience. Both the CGWT and CGOT features are based on the Gabor filter

ψ_{μ,ν}(z) = (‖k_{μ,ν}‖²/σ²) exp(−‖k_{μ,ν}‖² ‖z‖²/(2σ²)) [exp(i k_{μ,ν}·z) − exp(−σ²/2)],  (1)

where μ and ν are the orientation and scale parameters of the Gabor kernels, respectively, z = (x, y), ‖·‖ is the norm operator, and k_{μ,ν} is defined as

k_{μ,ν} = k_ν e^{iφ_μ},  (2)

where k_ν = k_max/f^ν and φ_μ = πμ/8. k_max is the maximum frequency, f is the spacing factor, and σ is the standard deviation. Note that the Gabor filter has many formulations; Eq. (1) of Ref. 22 is chosen because of its conciseness and the convenience of setting parameters, such as direction and scale, in our algorithm.

2.2.1. Extraction of the CGWT descriptor

As illustrated in Fig. 1, the CGWT representation consists of two parts, the unichrome feature and the opponent feature. The terms "unichrome feature" and "opponent feature" follow the definitions in Ref. 19, where detailed information about the two features can be found. Let I_R, I_G, and I_B be the three grayscale images of the corresponding color channels of an RGB image. The convolution results of the three grayscale images with the Gabor kernel are denoted by

E_c(x, y, μ, ν) = I_c(x, y) * ψ_{μ,ν}(x, y),  c ∈ {R, G, B},  (3)

where ψ_{μ,ν} has the same meaning as in Eq. (1) and * is the convolution operator. E_R, E_G, and E_B are the convolution results of the three grayscale images at orientation μ and scale ν. The unichrome feature representation of a color image is then

U(μ, ν) = [u_R(μ, ν), u_G(μ, ν), u_B(μ, ν)],  u_c(μ, ν) = [Σ_{x,y} E_c(x, y, μ, ν)²]^{1/2},  (4)

whose three components are the R, G, and B unichrome features, respectively. Clearly, unichrome features are values extracted from a single image band. Then, the difference of the normalized convolution results is defined by

D_{c1c2}(x, y) = Ê_{c1}(x, y, μ, ν) − Ê_{c2}(x, y, μ, ν′),  (5)

where Ê denotes the normalized convolution result, and μ and (ν, ν′) denote the orientation and the scales of the two Gabor filters used, respectively. Note that, following the Gabor kernels in Eq. (1), we choose ν and ν′ as adjacent scales, which means they must satisfy the restriction |ν − ν′| ≤ 1.
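For concreteness, the Gabor kernel bank of Eq. (1) can be sampled on a discrete grid as in the following sketch; the 39 × 39 window and the function name are our own illustrative choices, not values from the paper:

```python
import numpy as np

def gabor_kernel(mu, nu, k_max=np.pi / 2, f=np.sqrt(2), sigma=2 * np.pi, size=39):
    """Sample the Gabor kernel of Eq. (1) on a size x size grid.

    mu is the orientation index (0..7) and nu the scale index (0..4); the
    39 x 39 window is an illustrative assumption.
    """
    k = k_max / f ** nu                       # k_nu = k_max / f^nu
    phi = np.pi * mu / 8.0                    # phi_mu = pi * mu / 8
    kx, ky = k * np.cos(phi), k * np.sin(phi)
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    sq_norm = x ** 2 + y ** 2
    # (||k||^2 / sigma^2) exp(-||k||^2 ||z||^2 / (2 sigma^2))
    envelope = (k ** 2 / sigma ** 2) * np.exp(-(k ** 2) * sq_norm / (2 * sigma ** 2))
    # oscillatory part minus its DC component: exp(i k.z) - exp(-sigma^2 / 2)
    carrier = np.exp(1j * (kx * x + ky * y)) - np.exp(-sigma ** 2 / 2)
    return envelope * carrier

# a bank of 5 scales x 8 orientations = 40 complex kernels
bank = [gabor_kernel(mu, nu) for nu in range(5) for mu in range(8)]
```

Convolving each color channel with every kernel in such a bank produces the responses E_c from which the unichrome and opponent features are computed.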
Then, the opponent features can be defined by

o_{c1c2}(μ, ν, ν′) = [Σ_{x,y} D_{c1c2}(x, y)²]^{1/2},  (6)

O(μ, ν, ν′) = [o_{RG}, o_{RB}, o_{GB}],  (7)

where the three components of Eq. (7) are the RG, RB, and GB opponent features, respectively. Clearly, opponent features are values extracted from the difference of two image bands. According to Eqs. (5) and (6), we obtain the three equations grouped in Eq. (7). During feature extraction, dimension and efficiency are two factors that must be considered, whereas in the works of Jain and Healey19 and Choi et al.,21 these factors were not taken into account. In our study, we choose only the three channel pairs in Eq. (7) to constitute the opponent feature, so as to decrease the feature dimension and increase efficiency. Finally, the CGWT representation of an image, collecting the unichrome and opponent features over all orientations and scale pairs, is denoted by

CGWT = [U, O].  (8)

2.2.2. Extraction of the CGOT descriptor

The CGOT representation combines the Gabor texture6 and the opponent feature, which substantially decreases the feature dimension compared with the CGWT representation. Given a grayscale image I(x, y), the convolution of I and the Gabor kernel with orientation μ and scale ν is given by

W_{μ,ν}(x, y) = I(x, y) * ψ_{μ,ν}(x, y).  (9)

The mean and standard deviation of the transform coefficients are defined by

m_{μ,ν} = (1/MN) Σ_{x,y} |W_{μ,ν}(x, y)|,  (10)

s_{μ,ν} = [(1/MN) Σ_{x,y} (|W_{μ,ν}(x, y)| − m_{μ,ν})²]^{1/2},  (11)

where M × N is the image size. The Gabor texture feature composed of m_{μ,ν} and s_{μ,ν} is denoted by f = [m_{0,0}, s_{0,0}, …, m_{μ,ν}, s_{μ,ν}]. Then, the CGOT representation of an image is denoted by

CGOT = [f, O].  (12)

2.2.3. Extraction of comparative texture features

Some widely used traditional texture features, namely wavelet texture, LBP, and GLCM, are introduced as comparative methods to enable a quantitative analysis. Before extracting these features, the color images are converted into intensity images as a weighted combination of the R, G, and B (red, green, and blue) channels. Details of these comparative methods follow. The wavelet transform plays a major role in texture analysis. Let I be an original image; the extraction procedure is described as follows.
First, the "haar" wavelet is used to construct two decomposition filters, one low-pass and one high-pass. Then, a 2-level two-dimensional wavelet decomposition is applied to I by means of the constructed decomposition filters, and six subband images are obtained. Note that the decomposition level is an important parameter, and the size of the smallest subimage should not be too small.7 Finally, the energy of each subband image is calculated as

e_k = (1/MN) Σ_{i,j} |x_k(i, j)|,  (13)

where x_k (k = 1, …, 6) are the subband images and M × N is the size of the original image. The wavelet texture feature of image I is then defined by the mean and standard deviation of the energies of the subband images. LBP describes the local structure of image texture by calculating the differences between each image pixel and its neighboring pixels. Ojala et al.8 improved the original LBP operator and developed a generalized grayscale and rotation invariant operator that can detect "uniform" patterns, denoted by

LBP_{P,R}^{riu2} = Σ_{p=0}^{P−1} s(g_p − g_c) if U(LBP_{P,R}) ≤ 2, and P + 1 otherwise,  (14)

where s(x) = 1 if x ≥ 0 and s(x) = 0 otherwise. R is the radius of the circularly symmetric neighbor set, and P is the number of equally spaced pixels on the circle. g_c is the gray value of the center pixel of the circular neighborhood, and g_p (p = 0, …, P − 1) are the neighbor pixels on the circle. U is a uniformity measure corresponding to the number of spatial transitions in the "pattern," and riu2 stands for rotation invariant "uniform" patterns having a U value of at most 2. In our study, an 8-pixel circular neighborhood of radius 1, i.e., the LBP_{8,1} operator, is used, and a 59-bin grayscale and rotation invariant LBP histogram is adopted. GLCM is a widely used texture analysis method that considers spatial dependencies of gray levels from a statistical perspective. In the work by Haralick et al.,5 14 statistical measures extracted from the GLCM are introduced. Nevertheless, many of them are strongly correlated with each other, and there is no definitive conclusion about which features are more important and discriminative than others.
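The rotation invariant uniform LBP operator described above can be sketched in numpy; this version uses the integer-offset 8-neighborhood of radius 1 (no bilinear interpolation) and is an illustration rather than the evaluation code used in the paper:

```python
import numpy as np

def lbp_riu2(img):
    """Rotation invariant uniform LBP with P = 8, R = 1.

    Integer pixel offsets are used instead of bilinear interpolation, and
    border pixels are skipped, so this is an illustrative simplification.
    Returns the per-pixel codes over the interior and a (P + 2)-bin histogram.
    """
    P = 8
    # the 8 neighbours of radius 1, in circular order
    offsets = [(0, 1), (1, 1), (1, 0), (1, -1), (0, -1), (-1, -1), (-1, 0), (-1, 1)]
    h, w = img.shape
    center = img[1:-1, 1:-1]
    # s(g_p - g_c): 1 where the neighbour is >= the centre pixel
    bits = np.stack([(img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx] >= center).astype(int)
                     for dy, dx in offsets])
    # U: number of 0/1 transitions around the circle (wrap-around included)
    U = np.abs(bits - np.roll(bits, 1, axis=0)).sum(axis=0)
    # uniform patterns -> number of ones (0..P); non-uniform -> P + 1
    codes = np.where(U <= 2, bits.sum(axis=0), P + 1)
    hist = np.bincount(codes.ravel(), minlength=P + 2)
    return codes, hist
```

Note that the riu2 mapping with P = 8 yields P + 2 = 10 distinct codes; 59-bin histograms arise from the related u2 (uniform, not rotation normalized) mapping.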
How to choose appropriate features for texture analysis from the 14 statistical measures is still being studied. Haralick et al. selected four features, energy, entropy, correlation, and contrast, as texture features and conducted classification experiments on a satellite imagery data set, obtaining good classification results.5 Considering the good performance of these four features on remote sensing images, energy, entropy, correlation, and contrast are used in our study. They are defined by

f₁ = Σ_{i,j} p(i, j)²,
f₂ = −Σ_{i,j} p(i, j) log p(i, j),
f₃ = Σ_{i,j} (i − μ_x)(j − μ_y) p(i, j) / (σ_x σ_y),
f₄ = Σ_{i,j} (i − j)² p(i, j),  (15)

where f₁, f₂, f₃, and f₄ stand for energy, entropy, correlation, and contrast, respectively. μ_x, μ_y, σ_x, and σ_y are defined by μ_x = Σ_{i,j} i·p(i, j), μ_y = Σ_{i,j} j·p(i, j), σ_x = [Σ_{i,j} (i − μ_x)² p(i, j)]^{1/2}, and σ_y = [Σ_{i,j} (j − μ_y)² p(i, j)]^{1/2}. p(i, j) (i, j = 1, …, N_g) is the entry in the normalized GLCM for pixel distance d and direction θ, where N_g is the number of distinct gray levels in the quantized image. In our study, the pixel distance is set to 1 and four directions (0 deg, 45 deg, 90 deg, and 135 deg) are chosen. In addition, because the images used have 256 gray levels and excessive gray levels would drastically increase the workload of calculating GLCMs, we quantize the images to eight gray levels, which means each GLCM is an 8 × 8 symmetric matrix and N_g = 8. Consequently, a total of eight texture features, composed of the mean and standard deviation of the four features in Eq. (15) over the four directions, is obtained.

2.2.4. Parameter setting

How to choose optimal parameters for Gabor wavelets is still an open question, because different parameters may yield different experimental results even for the same task. For the parameters used in this study, we adopt the default parameters of Ref. 22, as follows. Gabor wavelets of five scales (ν ∈ {0, …, 4}) and eight orientations (μ ∈ {0, …, 7}), which have been used in most cases, are adopted because they extract texture features from more scales and orientations. For the remaining parameters, the empirical values k_max = π/2, f = √2, and σ = 2π are adopted. In addition, the size of the Gabor window is also an important parameter and is fixed empirically in this study.
Then, a total of 80 Gabor texture features is obtained. According to Eq. (5) and the restriction |ν − ν′| ≤ 1, we obtain 13 scale groups (ν, ν′) and eight orientations μ. Thus, the CGWT and CGOT representations are feature vectors with a total of 432 (120 + 312) and 392 (80 + 312) dimensions, respectively.

2.3. Similarity Measure

The similarity measure is an indispensable and important step in image retrieval systems, and different methods may produce very different results even for identical query images. Widely used similarity measures, such as the Minkowski distance, histogram intersection, K-L divergence, and Jeffrey divergence, tend to have their own scopes of application. Accordingly, specific similarity measures are defined for certain features in this study. Given two images Q and T with corresponding CGWT representations F^Q and F^T, the distance measure for CGWT is defined as in Ref. 19:

d(Q, T) = Σ_k [(F_k^Q − F_k^T) / σ_k]²,  (16)

where σ_k is the standard deviation of the k-th component of the CGWT representation over the entire image database. This distance measure has been used to classify textural images, and ideal classification results were achieved.19 For the CGOT representation, considering that it is the combination of the Gabor texture feature and the opponent feature, we integrate the distance measure of Eq. (16) with the distance measure for Gabor texture features in Ref. 6 and define a much simpler distance measure:

d(Q, T) = Σ_k [(G_k^Q − G_k^T) / σ]²,  (17)

where σ is the standard deviation of the CGOT representation over the entire image database and G^Q and G^T are the CGOT representations of images Q and T, respectively. Note that the similarity measure of Eq. (17) has a similar form to, but a different meaning from, that of Eq. (16). Since the Gabor texture and opponent features together constitute the CGOT representation, a distance measure that takes both into consideration is appropriate. In this similarity measure, the CGOT representation is regarded as a unitary feature, which means it is unnecessary to consider each component separately when calculating the standard deviation σ.
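A minimal numpy sketch of database-std-normalized distances in the spirit of Eqs. (16) and (17) follows; the squared-difference form, the zero-variance guard, and all names are our assumptions:

```python
import numpy as np

def normalized_distance(query, database, per_component=True):
    """Squared differences normalised by a database standard deviation.

    per_component=True uses one sigma_k per feature component (Eq. (16)
    style); per_component=False uses a single sigma over the whole
    representation, treating it as a unitary feature (Eq. (17) style).
    """
    if per_component:
        sigma = database.std(axis=0)           # sigma_k over the database
    else:
        sigma = database.std()                 # one global sigma
    sigma = np.where(sigma == 0, 1.0, sigma)   # guard against constant components
    diff = (database - query) / sigma
    return (diff ** 2).sum(axis=1)

# Toy database of four 2-d feature vectors; the query is the first row.
db = np.array([[0.0, 0.0], [1.0, 100.0], [2.0, 200.0], [3.0, 300.0]])
d = normalized_distance(db[0], db)
ranking = np.argsort(d)  # ascending distance; the query itself ranks first
```

The per-component normalization makes components with very different numeric ranges (here the second column) contribute comparably to the distance.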
3. Experiments and Discussions

3.1. Data Set

To evaluate the performance of the proposed descriptors, eight land-use/land-cover (LULC) classes from the UC Merced LULC data set are chosen as the retrieval image database. The original LULC data set is a manually constructed data set consisting of 21 image classes; the 100 images in each class are 256 × 256 pixel tiles cropped from large aerial images of various US regions with a spatial resolution of 30 cm.23 The LULC data set has been used in many similar studies24,25 and is publicly available to other researchers. Some image patches of the eight LULC classes used in our experiments are shown in Fig. 2. From left to right, they are agricultural, airplane, beach, buildings, chaparral, residential, forest, and harbor.

3.2. Performance of the Proposed Descriptors

Accurate and objective evaluation criteria have long been a topic of discussion in the CBIR community. Precision, recall, precision-recall curves, and ANMRR are widely accepted as evaluation criteria. However, due to the existence of the semantic gap, evaluation of CBIR is not effortless; moreover, different evaluation methods can yield different conclusions even on the same data set.26 To avoid such problems, precision and precision-recall curves are chosen as the evaluation methods in this study, because they can be treated as similar evaluations from different perspectives. Precision is the fraction of retrieved items that are correct, and recall is the fraction of relevant (ground truth) items that are retrieved.23 Figure 3 shows the performance of the proposed features and the conventional texture features. The last bin of the histogram, labeled "average," gives the average precision of the corresponding features. The chart indicates that, compared with wavelet texture, the CGOT and CGWT representations perform better on six classes, i.e., airplane, beach, chaparral, residential, forest, and harbor, and less well on the other two classes, i.e., agricultural and buildings.
Nevertheless, the two proposed features achieve the highest average precision over the whole set of image classes. Meanwhile, the CGOT feature further increases the precision of agricultural, airplane, beach, buildings, residential, and harbor over the CGWT feature, which is particularly evident for agricultural and harbor owing to the abundant texture information in these image classes. To further demonstrate the superiority of the proposed representations, precision-recall curves for the different features, obtained by varying the number of returned images, are presented in Fig. 4. As the number of returned images increases, the precision of the conventional texture features decreases rapidly, particularly for GLCM and LBP. Among the three remaining features, CGOT clearly yields the best performance. For the CGWT representation and wavelet texture, a recall of 0.5 can be treated as the marginal value: when recall is less than 0.5, the CGWT representation performs better, and the two perform equally when recall exceeds 0.5. The experimental results here are in accordance with those in Fig. 3, and both validate the effectiveness and good performance of the proposed color texture descriptors.

3.3. Comparisons of the Similarity Measures Used

As mentioned above, an appropriate similarity measure is necessary in CBIR. For the conventional texture features, i.e., GLCM, LBP, and wavelet texture, we choose a Minkowski-type distance as the similarity measure. For the CGWT representation, the distance measure presented in Ref. 19 is used. For the CGOT representation, the characteristics of existing distance measures for Gabor texture, unichrome, and opponent features are considered, and a simpler distance measure is defined. Table 1 compares the performance of the CGOT representation using the proposed similarity measure of Eq. (17) with several other similarity measures, including Minkowski distances, Jeffrey divergence,27 and the distance measure of Ref. 19.
Table 1 Comparisons of CGOT using different distance measures.
For each group of returned images, the proposed similarity measure achieves the highest precision, and its average performance is the best as well. Table 1 thus demonstrates that the proposed distance measure is an appropriate and effective similarity measure.

3.4. Examples of Remote Sensing Image Retrieval

Figure 5 shows a remote sensing image retrieval example using the two proposed descriptors. Figure 5(a) is the query image from the agricultural class, and Figs. 5(b) and 5(c) are the first 30 retrieved images for CGOT and CGWT, respectively. Note that these images are returned in order of descending similarity, which means higher-ranked images are more similar to the query image. According to the retrieval results, CGOT retrieves more similar images than CGWT. In addition, among the first 12 retrieved images, CGOT returns two irrelevant images while CGWT returns five, which also indicates the better performance of the CGOT descriptor.

3.5. Discussion

From the preceding remote sensing image retrieval experiments, some interesting points can be drawn.
In this study, all experiments are conducted on aerial remote sensing images from one public image database. However, not all of the selected images have a regular texture structure, which affects the performance of the proposed descriptors. In addition, the proposed descriptors are likely to be suitable for hyperspectral image retrieval, because hyperspectral images have high spectral resolution and more discriminative information can be extracted from their image bands.

4. Conclusion

With the rapid development of remote sensing technology, the amount of accessible remote sensing data has been increasing at an incredible rate, which not only provides researchers with more choices for various applications but also brings more challenges. Under these circumstances, CBIR is a good choice for the effective organization and management of massive remote sensing data. Traditionally, low-level features, particularly texture features, are widely used in the CBIR community for their special characteristics. Nevertheless, conventional texture features tend to be extracted directly from grayscale images and ignore the important complementary information between color bands. To exploit this complementary information for remote sensing image retrieval, the CGWT and CGOT representations have been proposed based on the Gabor filter and opponent process theory. The images are first filtered by Gabor filters with five scales and eight orientations, and then unichrome features, opponent features, and Gabor texture features are extracted. Finally, the CGWT and CGOT representations are constructed and used for remote sensing image retrieval. Considering the existence of the semantic gap and other difficulties, two similar evaluations, i.e., precision and precision-recall curves, are chosen to evaluate the performance of all texture features.
Results demonstrate that CGWT and CGOT perform better than GLCM, LBP, and wavelet texture, and that CGOT not only improves the performance on some image classes relative to CGWT but also increases the overall precision across all queried remote sensing images. In addition, a similarity measure for CGOT based on two existing distance measures has been defined; compared with some widely used distance measures, the proposed similarity measure shows better performance. In the future, the fusion mechanism of unichrome and opponent features and of Gabor texture and opponent features, as well as the influence of the color space on the proposed descriptors, will be considered.

Acknowledgments

The author would like to thank Shawn Newsam for his LULC data set and the anonymous reviewers for their comments and corrections. This work was supported in part by the National Science and Technology Specific Projects under Grant No. 2012YQ16018505 and the National Natural Science Foundation of China under Grant No. 61172174.

References

1. T. Ojala, M. Pietikäinen, and D. Harwood, "A comparative study of texture measures with classification based on featured distributions," Pattern Recognit. 29(1), 51–59 (1996). http://dx.doi.org/10.1016/0031-3203(95)00067-4
2. J. H. du Buf, M. Kardan, and M. Spann, "Texture feature performance for image segmentation," Pattern Recognit. 23(3), 291–309 (1990). http://dx.doi.org/10.1016/0031-3203(90)90017-F
3. A. W. Smeulders et al., "Content-based image retrieval at the end of the early years," IEEE Trans. Pattern Anal. Mach. Intell. 22(12), 1349–1380 (2000). http://dx.doi.org/10.1109/34.895972
4. R. Datta et al., "Image retrieval: ideas, influences, and trends of the new age," ACM Comput. Surv. 40(2), 5:1–5:60 (2008). http://dx.doi.org/10.1145/1348246.1348248
5. R. M. Haralick, K. Shanmugam, and I. H. Dinstein, "Textural features for image classification," IEEE Trans. Syst. Man Cybern. SMC-3(6), 610–621 (1973). http://dx.doi.org/10.1109/TSMC.1973.4309314
6. B. S. Manjunath and W. Y. Ma, "Texture features for browsing and retrieval of image data," IEEE Trans. Pattern Anal. Mach. Intell. 18(8), 837–842 (1996). http://dx.doi.org/10.1109/34.531803
7. T. Chang and C. C. Kuo, "Texture analysis and classification with tree-structured wavelet transform," IEEE Trans. Image Process. 2(4), 429–441 (1993). http://dx.doi.org/10.1109/83.242353
8. T. Ojala, M. Pietikainen, and T. Maenpaa, "Multiresolution gray-scale and rotation invariant texture classification with local binary patterns," IEEE Trans. Pattern Anal. Mach. Intell. 24(7), 971–987 (2002). http://dx.doi.org/10.1109/TPAMI.2002.1017623
9. C. H. Lin, R. T. Chen, and Y. K. Chan, "A smart content-based image retrieval system based on color and texture feature," Image Vision Comput. 27(6), 658–665 (2009). http://dx.doi.org/10.1016/j.imavis.2008.07.004
10. Y. D. Chun, N. C. Kim, and I. H. Jang, "Content-based image retrieval using multiresolution color and texture features," IEEE Trans. Multimedia 10(6), 1073–1084 (2008). http://dx.doi.org/10.1109/TMM.2008.2001357
11. S. Liapis and G. Tziritas, "Color and texture image retrieval using chromaticity histograms and wavelet frames," IEEE Trans. Multimedia 6(5), 676–686 (2004). http://dx.doi.org/10.1109/TMM.2004.834858
12. M. Flickner et al., "Query by image and video content: the QBIC system," Computer 28(9), 23–32 (1995). http://dx.doi.org/10.1109/2.410146
13. C. W. Niblack et al., "QBIC project: querying images by content, using color, texture, and shape," Proc. SPIE 1908, 173–187 (1993). http://dx.doi.org/10.1117/12.143648
14. J. Yue et al., "Content-based image retrieval using color and texture fused features," Math. Comput. Modell. 54(3), 1121–1127 (2011). http://dx.doi.org/10.1016/j.mcm.2010.11.044
15. M. Singha and K. Hemachandran, "Content based image retrieval using color and texture," SIPIJ 3(1), 39–57 (2012). http://dx.doi.org/10.5121/sipij.2012.3104
16. M. H. Tsai et al., "Color-texture-based image retrieval system using Gaussian Markov random field model," Math. Prob. Eng. 2009, 1–17 (2010). http://dx.doi.org/10.1155/2009/410243
17. T. Mäenpää and M. Pietikäinen, "Classification with color and texture: jointly or separately," Pattern Recognit. 37(8), 1629–1640 (2004). http://dx.doi.org/10.1016/j.patcog.2003.11.011
18. L. M. Hurvich and D. Jameson, "An opponent-process theory of color vision," Psychol. Rev. 64(6), 384–404 (1957). http://dx.doi.org/10.1037/h0041403
19. A. Jain and G. Healey, "A multiscale representation including opponent color features for texture recognition," IEEE Trans. Image Process. 7(1), 124–128 (1998). http://dx.doi.org/10.1109/83.650858
20. M. Shi and G. Healey, "Hyperspectral texture recognition using a multiscale opponent representation," IEEE Trans. Geosci. Remote Sens. 41(5), 1090–1095 (2003). http://dx.doi.org/10.1109/TGRS.2003.811076
21. J. Y. Choi, Y. M. Ro, and K. N. Plataniotis, "Color local texture features for color face recognition," IEEE Trans. Image Process. 21(3), 1366–1380 (2012). http://dx.doi.org/10.1109/TIP.2011.2168413
22. C. Liu and H. Wechsler, "Gabor feature based classification using the enhanced Fisher linear discriminant model for face recognition," IEEE Trans. Image Process. 11(4), 467–476 (2002). http://dx.doi.org/10.1109/TIP.2002.999679
23. Y. Yang and S. Newsam, "Geographic image retrieval using local invariant features," IEEE Trans. Geosci. Remote Sens. 51(2), 818–832 (2013). http://dx.doi.org/10.1109/TGRS.2012.2205158
24. V. Risojević and Z. Babić, "Fusion of global and local descriptors for remote sensing image classification," IEEE Geosci. Remote Sens. Lett. 10(4), 836–840 (2013). http://dx.doi.org/10.1109/LGRS.2012.2225596
25. E. Aptoula, "Remote sensing image retrieval with global morphological texture descriptors," IEEE Trans. Geosci. Remote Sens. 52(5), 3023–3034 (2013). http://dx.doi.org/10.1109/TGRS.2013.2268736
26. H. Müller, S. Marchand-Maillet, and T. Pun, "The truth about Corel-evaluation in image retrieval," in Image and Video Retrieval, pp. 38–49, Springer, Berlin Heidelberg (2002). http://dx.doi.org/10.1007/3-540-45479-9_5
27. Y. Rubner, C. Tomasi, and L. J. Guibas, "The earth mover's distance as a metric for image retrieval," Int. J. Comput. Vision 40(2), 99–121 (2000). http://dx.doi.org/10.1023/A:1026543900054
Biography

Zhenfeng Shao received his PhD degree from Wuhan University, China, in 2004. He is now a professor at the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, China. His research interests are image retrieval, image fusion, and urban remote sensing applications.

Weixun Zhou received his BS degree from Anhui University of Science and Technology, Anhui, China, in 2012. He is now working toward his master's degree at the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, China. His research interests are remote sensing image retrieval and image processing.

Lei Zhang received her BS degree from Xinyang Normal University, Henan, China, in 2011. She is currently working toward her PhD degree at the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, China. Her research interests include dimensionality reduction, hyperspectral classification, sparse representation, and pattern recognition in remote sensing images.

Jihu Hou received his BS degree from Hubei University, China, in 2012. He is now working toward his master's degree at the State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, China. His research interests are remote sensing image retrieval, image processing, and GIS applications.