Applying object-based image analysis and knowledge-based classification to ADS-40 digital aerial photographs to facilitate complex forest land cover classification
Yi-Ta Hsieh, Chaur-Tzuhn Chen, Jan-Chang Chen
Abstract
In general, considerable human and material resources are required to perform a forest inventory survey. Using remote sensing technologies to reduce forest inventory costs has thus become an important topic in forest inventory studies. Leica ADS-40 digital aerial photographs offer high spatial resolution, high radiometric resolution, and a wealth of spectral information; as a result, they have been widely used in forest inventories. We classified ADS-40 digital aerial photographs according to the complex forest land cover types listed in the Fourth Forest Resource Survey in an effort to establish a classification method for these images. The images were classified using the knowledge-based classification method in combination with object-based analysis techniques, decision tree classification techniques, classification parameters such as object texture, shape, and spectral characteristics, a class-based classification method, and geographic information system mapping information. Finally, the results were compared with manually interpreted aerial photographs. Images were classified using a hierarchical classification method comprising four classification levels (levels 1 to 4). The classification overall accuracies (OAs) of levels 1 to 4 ranged from 64.29% to 98.50%. The final comparisons showed that the proposed classification method achieved an OA of 78.20% and a kappa coefficient of 0.7597. On the basis of the image classification results, classification errors occurred mostly in images of sunlit crowns because the image values of individual trees varied; this variance was caused by the crown structure and the incident angle of the sun. These errors lowered image classification accuracy and warrant further study. This study demonstrates the high feasibility of mapping complex forest land cover types using ADS-40 digital aerial photographs.

1.

Introduction

Aerial photography technology has undergone sophisticated development in recent years. The use of aerial photography to classify forest types, perform land cover change studies, and monitor disaster areas for the forestry industry has subsequently become a crucial source of forest inventory-related information.1–6 The forest land cover situation in Taiwan is remarkably complex, and mixed forest types are markedly prevalent. The artificial stereoscopic interpretation method is generally adopted when aerial photographs are used to determine forest types in Taiwan; however, this method often produces inconsistent interpretation results. Image classification is another method that can be employed to determine forest types. This method generally examines differences in image pixel values, and the results are used for subsequent analyses. When high-resolution images are used for analyses, they provide more detailed spatial information. However, conventional pixel value-based classification methods create excessively large data volumes during calculations and ignore the information carried by adjacent pixels, which results in errors.7,8 When performing image classifications, the use of spatial correlation can effectively improve classification accuracy.7–10 In high-resolution images, because terrain features may consist of several pixels, the characteristics of objects forming the terrain features can be identified more easily through manual interpretation. Therefore, many studies have performed high-resolution image classifications using object-based image analysis (OBIA).11–13 OBIA primarily uses image segmentation techniques to divide highly homogeneous adjacent pixels of an image into spatially correlated objects.11 Next, the unequally sized image objects are separated into different categories. Image segmentation techniques are thus viewed as a type of classification technique that integrates spatial and spectral information.7,8,11–13 In addition, the shapes of image blocks obtained from the segmentation process can be used to identify terrain features, in a process similar to manual interpretation.7 All image blocks feature their own spectra and shapes.14 Similarly, knowledge-based systems can be used to interpret and classify the terrain features of image blocks. By adopting hierarchical logic and adding appropriate spatial characteristic-related information, classification accuracy can be improved,1,12,13,15 particularly when combined with statistical models such as classification and regression trees (CART)16–18 and random forests.13

This study investigated the feasibility of using high-resolution digital images in forest land cover classification. The feasibility test procedure is as follows: first, we selected ADS-40 digital airborne multispectral photographs as the study materials. Next, we combined OBIA techniques with a knowledge-based classification method. Finally, we performed forest land cover classification using a stratified classification method.

2.

Research Area and Materials

2.1.

Overview of the Research Area

The research site is located at the Da-Chia stream working circle, Central Taiwan (longitude 120°43′ to 121°26′; latitude 24°03′ to 24°28′). This area is under the jurisdiction of the Dongshih Forest District Office (of the Forestry Bureau) and faces the Liwuhsi Business Area to the east, the Luanta Business Area to the southwest, the Puli Business Area to the south, and the Nanchuang, Taipingshan, Nan’ao, and Ta’anhsi Right Bank Business Areas to the north (Fig. 1). The forest land cover at this site primarily includes coniferous trees, broadleaf trees, bamboo forests, fruit trees, farmland, grassland, landslide areas, wetland, roads, and bare ground.

Fig. 1

Image and map of the research area.


2.2.

Images

This study used aerial photographs taken by the Aerial Survey Office on September 21, 2008, using a Leica ADS-40 airborne digital scanner; a series of orthorectification-related image processing steps was performed to create orthoimages.19 The spatial resolution of the Leica ADS-40 orthoimages was 25 cm, providing detailed surface properties. The Leica ADS-40 airborne digital scanner was able to simultaneously acquire information from 11 wave bands. The panchromatic charge-coupled device (CCD) of the scanner contained three line scan cameras that captured images from a forward view angle of 27 deg, a nadir view angle of 2 deg, and a backward view angle of 14 deg. In addition, four multispectral line scan CCDs were installed at a nadir view angle of 0 deg and a backward view angle of 16 deg to capture red, green, blue, and near-infrared (NIR) wave band information. The wavelength range of each band is as follows: panchromatic, 465 to 680 nm; red, 610 to 660 nm; green, 535 to 585 nm; blue, 430 to 490 nm; and NIR, 835 to 885 nm. In addition, the images presented 14-bit radiometric resolution and excellent signal-to-noise ratios. The multispectral bands were analyzed in this study. The images were stored in a 16-bit format but retained their original radiometric resolution (multispectral bands = 13 bits).

2.3.

Ground Truth Data and Supporting Images

Ground truth data for the Dajia River Business Area (i.e., the sample area) mainly comprise land cover type mapping (prepared by the Taiwan Aerial Survey Office) as well as national land cover mapping, natural vegetation mapping, and the Third Forest Resource Survey mapping (compiled by the National Land Surveying and Mapping Center). These reference data were combined with aerial photographs to determine terrain objects and types, and ground verification was carried out on September 16 and 17, 2011 (Fig. 2).

Fig. 2

Ground verification photos for each land cover type with coordinates (WGS 84: x, y).


In addition, digital elevation model data for the area of Taiwan were collected by the Aerial Survey Office, Forestry Bureau. The data were produced by analyzing and digitizing 3-D aerial photos, with a grid resolution of 20 m × 20 m and a map scale of 1:5000.

3.

Research Method

3.1.

Object-Based Image Analysis Performed During the Image Preprocessing Stage

3.1.1.

Image segmentation

This study performed an image segmentation scale experiment to delineate terrain boundaries in the images. The segmentation method primarily used the discontinuity and heterogeneity observed in the spectral responses of the original images to break the images into smaller blocks. These small blocks were then set as the basic units for image reconstruction in subsequent classifications. To effectively extract complete terrain features, users must carefully assess the image segmentation situation and set appropriate segmentation criteria. Likewise, the selection of a favorable segmentation scale facilitated the extraction of the desired terrain features.

This study performed image segmentation using the remote sensing software ERDAS IMAGINE® image segmentation module and assessed the image segmentation situations using parameter adjustments. For example, the “edge detection threshold” function was used to determine whether the edge threshold existed between two pixels; the “minimal value difference” function was used to identify the difference between an object and its surrounding objects; and the variance factor was used to identify the extent of pixel value changes in images. The extent of pixel value changes in images was assessed to determine whether it fell within the threshold and, subsequently, ascertain whether the object block range was to be extended. The parameter settings influenced the number of object blocks segmented. To find the optimal image segmentation scale to objectify the images, the following segmentation parameters were used, as shown in Table 1.

Table 1

Segmentation parameter settings for the images.

Code         Threshold   Minimum length   Minimal value difference   Coefficient of variation
Segment200   200         3                200                        3.5
Segment250   250         3                250                        3.5
Segment260   260         3                260                        3.5
Segment280   280         3                280                        3.5
Segment330   330         3                330                        3.5
Segment350   350         3                350                        3.5
Segment370   370         3                370                        3.5
Segment400   400         3                400                        3.5
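
The ERDAS IMAGINE segmentation module is proprietary, so the following sketch is an illustrative stand-in only: it reproduces the general region-merging idea with the Felzenszwalb graph-based segmenter from scikit-image, where the scale argument loosely plays the role of the segmentation scales above (larger values merge more pixels into fewer, larger blocks). The file name and band order are assumptions.

```python
# Illustrative stand-in for the ERDAS IMAGINE segmentation step using the
# Felzenszwalb graph-based segmenter from scikit-image. The "scale" argument
# only loosely mimics the paper's segmentation scales (200 vs. 400): larger
# values tolerate more within-object variation and so merge more pixels,
# yielding fewer, larger blocks. File name and band order are assumptions.
import numpy as np
import rasterio                                  # assumed GeoTIFF input
from skimage.segmentation import felzenszwalb

with rasterio.open("ads40_subset.tif") as src:   # hypothetical 4-band subset
    r, g, b, nir = src.read().astype("float32")  # bands: R, G, B, NIR

brightness = (r + g + b + nir) / 4.0             # Eq. (1) in Table 2

for scale in (200, 400):
    labels = felzenszwalb(brightness, scale=scale, sigma=0.8, min_size=50)
    print(f"scale={scale}: {labels.max() + 1} segments")  # fewer at 400
```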

3.1.2.

Creating object images

After ADS-40 digital aerial photographs were segmented using segmentation scales of varying sizes, the object image creation process began. The original four wave bands, vegetation index, gray-level cooccurrence matrix (GLCM), and block area (i.e., areas of blocks created by the respective segmentation scale) of the images were obtained to create the object images. In this study, classification image-related data from 26 wave bands were obtained, as shown in Table 2.

Table 2

Image-related data.

Image-related data (computed at segmentation scales 200 and 400 unless noted; resulting layer names in parentheses):

- Red light wave band: R (red light 200 and red light 400)
- Green light wave band: G (green light 200 and green light 400)
- Blue light wave band: B (blue light 200 and blue light 400)
- NIR light wave band: NIR (NIR light 200 and NIR light 400)
- Brightness, Eq. (1), Zhou et al.20: $\mathrm{Brightness} = (R + G + B + \mathrm{NIR})/4$ (BRI 200 and BRI 400)
- Normalized difference vegetation index, Eq. (2), Rouse et al.21: $\mathrm{NDVI} = (\mathrm{NIR} - R)/(\mathrm{NIR} + R)$ (NDVI 200 and NDVI 400)
- Simple ratio vegetation index, Eq. (3), Jordan22: $\mathrm{SR} = \mathrm{NIR}/R$ (SR 200 and SR 400)
- GLCM contrast, Eq. (4), Haralick et al.23: $\mathrm{Contrast} = \sum_{i=0}^{N-1}\sum_{j=0}^{N-1} P(i,j)\,(i-j)^2$ (contrast 200 and contrast 400)
- GLCM dissimilarity, Eq. (5), Haralick et al.23: $\mathrm{Dissimilarity} = \sum_{i=0}^{N-1}\sum_{j=0}^{N-1} P(i,j)\,|i-j|$ (dissimilarity 200 and dissimilarity 400)
- GLCM homogeneity, Eq. (6), Haralick et al.23: $\mathrm{Homogeneity} = \sum_{i=0}^{N-1}\sum_{j=0}^{N-1} \frac{P(i,j)}{1+(i-j)^2}$ (homogeneity 200 and homogeneity 400)
- Block area calculation, one layer per segmentation scale (200, 250, 260, 280, 330, 350, 370, and 400): area 200, area 250, area 260, area 280, area 330, area 350, area 370, and area 400
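
As an illustration of how the Table 2 layers could be derived, the following minimal sketch (assuming the bands and label image from the previous snippet, and scikit-image >= 0.19 for the gray-spelled GLCM functions) computes per-object NDVI means, object areas, and GLCM texture for one object; it is a stand-in, not the authors' exact workflow.

```python
# Minimal sketch of building the Table 2 object layers for one segmentation
# scale: per-object means of NDVI, object areas, and GLCM texture (Eqs. 2-6).
# Assumes r, nir, and the "labels" image from the previous sketch. GLCM is
# computed on the bounding box of a single object here for brevity; a full
# run would loop over all objects.
import numpy as np
from scipy import ndimage
from skimage.feature import graycomatrix, graycoprops

eps = 1e-6
ndvi = (nir - r) / (nir + r + eps)                        # Eq. (2)
sr = nir / (r + eps)                                      # Eq. (3)

ids = np.arange(labels.max() + 1)
ndvi_mean = ndimage.mean(ndvi, labels=labels, index=ids)  # per-object mean
area = ndimage.sum(np.ones_like(labels), labels=labels, index=ids)  # pixels

# GLCM texture for the first object, on NIR quantized to 32 grey levels
quant = np.floor((nir - nir.min()) / (np.ptp(nir) + eps) * 31).astype(np.uint8)
box = ndimage.find_objects(labels + 1)[0]                 # bbox of object 0
glcm = graycomatrix(quant[box], distances=[1], angles=[0],
                    levels=32, symmetric=True, normed=True)
contrast = graycoprops(glcm, "contrast")[0, 0]            # Eq. (4)
dissimilarity = graycoprops(glcm, "dissimilarity")[0, 0]  # Eq. (5)
homogeneity = graycoprops(glcm, "homogeneity")[0, 0]      # Eq. (6)
```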

3.2.

Training Data Assessment

In this study, aerial photographs were used to interpret and determine the training data, and land cover mapping was used to support the interpretation process. The land cover classes primarily included coniferous trees, broadleaf trees, bamboo forests, fruit trees, farmland, grassland, landslide areas, wetlands, roads, and bare ground. The training data (50 training points per class) were used to analyze each variable in this study.

To improve the image classification accuracy, the training data underwent assessment in which a transformed divergence (TD) test was performed on each class of training data. The TD indices are as follows:

Eq. (7)

$$D_{ij}^{T} = 2000\left[1 - \exp\left(-\frac{D_{ij}}{8}\right)\right],$$

Eq. (8)

$$D_{ij} = \frac{1}{2}\,\mathrm{tr}\left[\left(C_i - C_j\right)\left(C_j^{-1} - C_i^{-1}\right)\right] + \frac{1}{2}\,\mathrm{tr}\left[\left(C_i^{-1} + C_j^{-1}\right)\left(\mu_i - \mu_j\right)\left(\mu_i - \mu_j\right)^{T}\right],$$
where i and j are the categories being compared; C_i is the covariance matrix of category i; μ_i is the mean vector of category i; tr denotes the matrix trace (the sum of the diagonal elements); and D_ij^T is the TD between categories i and j.
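
A minimal NumPy implementation of Eqs. (7) and (8) is sketched below, with synthetic demo data standing in for the 50 training points per class used in this study.

```python
# Minimal implementation of Eqs. (7) and (8): transformed divergence between
# two classes from their training samples (rows = samples, columns = the
# object-image variables). TD = 2000 indicates complete separability.
import numpy as np

def transformed_divergence(x_i, x_j):
    ci, cj = np.cov(x_i, rowvar=False), np.cov(x_j, rowvar=False)
    ci_inv, cj_inv = np.linalg.inv(ci), np.linalg.inv(cj)
    dmu = (x_i.mean(axis=0) - x_j.mean(axis=0)).reshape(-1, 1)
    d = 0.5 * np.trace((ci - cj) @ (cj_inv - ci_inv)) \
        + 0.5 * np.trace((ci_inv + cj_inv) @ dmu @ dmu.T)   # Eq. (8)
    return 2000.0 * (1.0 - np.exp(-d / 8.0))                # Eq. (7)

# Synthetic demo: 50 training points per class, as in Sec. 3.2
rng = np.random.default_rng(0)
a = rng.normal(0.0, 1.0, size=(50, 4))
b = rng.normal(4.0, 1.0, size=(50, 4))
print(round(transformed_divergence(a, b), 2))  # ~2000 for separable classes
```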

3.3.

Image Classification

3.3.1.

Classification rules

The classification method first selected terrain types that possessed unique features and were highly distinguishable. Next, terrain types were divided into different subcategories in a hierarchical format, which effectively reduced confusion between terrain types. Subsequently, logical classification rules were set by referring to expert interpretation knowledge, and a logical procedure was established by making logical inferences. This simplified the complex classification process and allowed intricate terrain features to be effectively classified.

The stratified classification method was divided into four levels: level 1 comprised shaded and nonshaded areas; level 2 consisted of vegetation-covered and nonvegetation-covered areas; level 3 incorporated high and low biomass areas; and level 4 involved terrain feature-related classifications. The stratified classification diagram is shown in Fig. 3.

Fig. 3

Stratified classification rules.

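The rule hierarchy of Fig. 3 can be summarized as a nested decision applied to each image object. In the following sketch, all names are placeholders: t_shadow comes from the level 1 bimodal split, and the callables stand in for the CART rules and support-mapping conditions described in Sec. 3.3.2.

```python
# Schematic of the four-level stratified rule base in Fig. 3, expressed as a
# nested decision for one image object. All names are placeholders: t_shadow
# comes from the level-1 bimodal split, and the is_vegetation /
# is_high_biomass / leaf_class callables stand in for the learned CART rules
# and support-mapping conditions of Sec. 3.3.2.
def classify_object(obj, t_shadow, is_vegetation, is_high_biomass, leaf_class):
    """obj: dict of object features (brightness, NDVI, texture, area, ...)."""
    if obj["brightness"] < t_shadow:              # level 1: shadow split
        return "shaded area"
    if not is_vegetation(obj):                    # level 2: vegetation cover
        return leaf_class(obj, stratum="nonvegetation")  # level 4 terrain types
    if is_high_biomass(obj):                      # level 3: biomass split
        return leaf_class(obj, stratum="woody")          # level 4 terrain types
    return leaf_class(obj, stratum="herbaceous")         # level 4 terrain types
```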

3.3.2.

Knowledge-based classification

This study employed knowledge-based classification and constructed a stratified classification model using known classification knowledge. Next, the CART algorithm was used to assess the training data of each terrain type, from which classification rules were established. Concurrently, supporting mapping data were used to create the classification criteria. Classification methods for levels 1 to 4 are shown in Table 3.

Table 3

Classification methods for different classification level.

Classification level   Classification algorithm
Level 1                Brightness with bimodal histogram splitting method
Level 2                CART algorithm
Level 3                CART algorithm
Level 4                CART algorithm

Level 1 classification

In level 1 classification, shadow detection procedures were initially performed to divide images into shadow and nonshadow areas. Shadow detection was conducted using brightness, which this study defined as the mean of the R, G, B, and NIR wave bands [Eq. (1) in Table 2].

The difference between the shadow and nonshadow areas was enhanced using the brightness of the image before the image was divided into the respective areas. Because the brightness values of the shadow and nonshadow areas presented a bimodal distribution, the brightness threshold between these areas can be determined using the bimodal histogram splitting method: the minimum (trough) value between the two brightness peaks in the histogram serves as the threshold that differentiates the shadow from the nonshadow areas.
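
A minimal sketch of this bimodal splitting step follows; it assumes a 2-D brightness array, and the peak-picking route via SciPy is an implementation choice, not necessarily the authors' procedure.

```python
# Minimal sketch of the bimodal histogram split: locate the two tallest peaks
# of the brightness histogram and take the trough between them as the shadow
# threshold. Peak picking via SciPy is an implementation choice here.
import numpy as np
from scipy.signal import find_peaks

def bimodal_threshold(brightness, bins=256):
    counts, edges = np.histogram(brightness.ravel(), bins=bins)
    peaks, _ = find_peaks(counts)                 # candidate histogram peaks
    p1, p2 = sorted(sorted(peaks, key=lambda p: counts[p])[-2:])  # two tallest
    trough = p1 + int(np.argmin(counts[p1:p2 + 1]))  # deepest valley between
    return 0.5 * (edges[trough] + edges[trough + 1]) # centre of trough bin

shadow_mask = brightness < bimodal_threshold(brightness)  # level-1 split
```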

Levels 2 to 4 classification

After level 1 classification divided the image into shadow and nonshadow areas, the nonshadow areas were used for the classification of levels 2 to 4, which was performed using the CART algorithm. First, the training data of each terrain type were selected, and classification variables or wave band image-related data were extracted. Next, the CART algorithm proposed by Breiman et al.24 was used to establish classification rules. CART is a technique for making classifications and predictions in data mining research. By analyzing the original data, appropriate classification rules are established and used to make predictions for data with unknown outcomes; the process involves the use of single-input-variable functions to construct dichotomous decision trees. Using cross validation, this study randomly distributed the data into n groups; n − 1 groups were used to build a model, and the remaining group was used to verify it, the process continuing until every group had served as the verification set. The maximum difference in risk was selected as the postpruning rule to create tree structures with minimum risk values. Next, changes in the misclassification rate were investigated for all tree structure sizes, and the models were corrected to identify the optimal model size. The classification threshold values provided by the decision trees can then be used to set up classification rules. In addition, supporting data or classification conditions may be added according to the literature or expert experience to enhance image classification accuracy.
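
As an illustrative stand-in for this procedure, the following sketch uses scikit-learn's CART implementation with cross-validated cost-complexity pruning, which approximates (but is not identical to) the risk-based pruning described above; X and y are hypothetical object-feature and label arrays.

```python
# Illustrative stand-in for the levels 2-4 rule learning: scikit-learn's CART
# with cross-validated cost-complexity pruning, approximating the risk-based
# pruning described above. X is a hypothetical (n_objects x n_features)
# matrix of Table 2 variables; y holds the terrain-type labels.
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.model_selection import cross_val_score

def fit_pruned_cart(X, y, n_folds=10):
    path = DecisionTreeClassifier(random_state=0).cost_complexity_pruning_path(X, y)
    scores = [cross_val_score(DecisionTreeClassifier(ccp_alpha=a, random_state=0),
                              X, y, cv=n_folds).mean()
              for a in path.ccp_alphas]           # CV score per pruning level
    best = path.ccp_alphas[int(np.argmax(scores))]
    return DecisionTreeClassifier(ccp_alpha=best, random_state=0).fit(X, y)

# tree = fit_pruned_cart(X, y)
# print(export_text(tree))  # split thresholds double as classification rules
```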

3.3.3.

Classification accuracy assessment

Image classification accuracy assessment refers to the process in which the image results obtained from the image classification process are compared with current ground truth data to determine the classification accuracy. In this study, 633 sample location data were randomly selected and paired with aerial photographs for verification. The verification indicators that measured the accuracy of the classification results consisted of producer’s accuracy, omission error, user’s accuracy, commission error (CE), overall accuracy (OA), and the kappa index.
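
For concreteness, these indicators can be computed directly from a confusion matrix laid out as in Table 6 (rows = classified types, columns = reference types); a minimal sketch:

```python
# The Sec. 3.3.3 indicators computed from a confusion matrix arranged as in
# Table 6: rows = classified types, columns = reference types.
import numpy as np

def accuracy_report(cm):
    cm = np.asarray(cm, dtype=float)
    n, diag = cm.sum(), np.diag(cm)
    users = diag / cm.sum(axis=1)            # user's accuracy; CE = 1 - UA
    producers = diag / cm.sum(axis=0)        # producer's accuracy; OE = 1 - PA
    oa = diag.sum() / n                      # overall accuracy
    pe = (cm.sum(axis=1) * cm.sum(axis=0)).sum() / n ** 2  # chance agreement
    kappa = (oa - pe) / (1.0 - pe)           # kappa index
    return users, producers, oa, kappa

# Toy example: accuracy_report(np.array([[40, 10], [5, 45]])) gives OA = 0.85
```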

4.

Results

4.1.

Image Segmentation

The selection of a favorable segmentation scale facilitates the extraction of terrain features. Images that underwent image segmentation using the segmentation scale 200 generated a higher number of image blocks, whereas those that underwent image segmentation using the segmentation scale 400 produced a lower number of image blocks because its parameter values were more tolerant of image pixel value changes (Table 4).

Table 4

Comparison between the number of segmented blocks.

Segmentation scale   Number of segmented blocks
Segment200           3,381,578
Segment400           1,117,689

Figure 4 shows the segmented blocks of forest lands obtained using segmentation scales 200 and 400. According to the figure, the segmentation scale 200 produced more intricate images in which each canopy crown was segmented into multiple blocks. Conversely, the segmentation scale 400 produced more complete images in which each crown was segmented into one block; however, many crown clusters were not segmented. Figure 4 also compares the segmentation results of coniferous and broadleaf forests; the segmentation scale 200 oversegmented the broadleaf forests but favorably segmented the coniferous forests. Such a result may be caused by coniferous trees featuring smaller crowns, which enabled the smaller segmentation scale to display superior results. By contrast, the segmentation scale 400 produced more favorable broadleaf forest segmentation results but failed to segment a number of crowns in coniferous forests.

Fig. 4

Segmentation results for a coniferous forest obtained using segmentation scales (a) 200 and (b) 400; segmentation results for a broadleaf forest obtained using segmentation scales (c) 200 and (d) 400.


Concerning segmentation images of landslide areas and wetlands, the segmentation scale 400 produced more complete object blocks than the segmentation scale 200. However, the former showed slight confusion at the boundary between the landslide areas and wetlands, whereas the latter showed no boundary confusion (Fig. 5).

Fig. 5

Boundary detection results of landslide areas and wetlands produced using segmentation scales (a and b) 200 and (c and d) 400.


Both segmentation scales 200 and 400 showed favorable terrain boundary detection results for roads, buildings, and farmlands in which the latter produced more complete blocks (Fig. 6). Regarding the segmentation results for farmlands, the segmentation scale 200 was able to accurately detect terrain boundary between vegetation and nonvegetation farmlands but oversegmented farmlands that grew the same crops. Conversely, the segmentation scale 400 showed more complete farmland segmentation blocks (Fig. 6). The completeness of the segmentation blocks improved the accuracy of the image classification results and prevented fragmented pixels of images from other image categories from affecting the classification accuracy.7

Fig. 6

Image segmentation results for nonforest areas obtained using segmentation scales (a) 200 and (b) 400.


4.2.

Divergence Test

To understand the image divergence situation for each terrain category, this study performed a divergence test on the training data in which 26 wave bands of the multiscale images were assessed. The TD value was 2000 for all terrain category pairs, indicating complete separability between the categories.

4.3.

Classification Accuracy Assessment

In this study, images were classified using a hierarchical classification method comprising four classification levels. The OA and kappa coefficients for each level and classification are shown in Table 5, whereas the classification results for levels 1, 2, and 3 are shown in Fig. 7.

Table 5

Classification results of different level classifications.

Level   Classification                                        Overall accuracy (%)   Kappa coefficient
1       Shaded and nonshaded areas                            98.50                  0.9608
2       Vegetation-covered and nonvegetation-covered areas    96.67                  0.9333
3       Woody and herbaceous species areas                    95.00                  0.8854
4       Herbaceous species areas                              93.40                  0.8639
4       Woody species areas                                   64.29                  0.5229
4       Nonvegetation-covered areas                           85.09                  0.8125
4       Terrain types                                         78.20                  0.7597

Fig. 7

Classification results for levels 1, 2, and 3.


The classification results for vegetation-covered and nonvegetation-covered areas were combined in Fig. 8. A total of 633 inspection areas were selected for a classification accuracy assessment; the results showed an OA of 78.20% and an overall kappa coefficient of 0.7597. The confusion matrix is shown in Table 6.

Fig. 8

(a) Original image and (b) image classification results.


Table 6

Confusion matrix of the final classification results.

Type (classified)     CT     BT     BF     FT     FL     GL     LA     WL     RD     BD     BG    Sum   User's accuracy (%)   CE rate (%)
Coniferous tree       31     30      0      0      0      0      0      0      0      0      0     61   50.82                 49.18
Broadleaf tree         7     56      2      0      0      0      0      0      0      0      0     65   86.15                 13.85
Bamboo forest         14     30     17      3      0      0      0      0      0      0      0     64   26.56                 73.44
Fruit tree             0      4      0     58      0      0      0      0      0      0      0     62   93.55                  6.45
Farmland               0      0      0      0     59      6      0      0      0      0      0     65   90.77                  9.23
Grassland              0      0      0      0      1     40      0      0      0      0      0     41   97.56                  2.44
Landslide area         0      0      0      0      0      0     55      3      1      0      7     66   83.33                 16.67
Wetland                0      0      0      0      0      0      4     39      1      0      1     45   86.67                 13.33
Road                   0      0      0      0      0      0      1      1     49      3      6     60   81.67                 18.33
Building               0      0      0      0      0      0      0      0      2     41      2     45   91.11                  8.89
Bare ground            0      0      0      0      0      0      3      0      4      2     50     59   84.75                 15.25
Sum                   52    120     19     61     60     46     63     43     57     46     66    633
Producer's acc. (%) 59.62  46.67  89.47  95.08  98.33  86.96  87.30  90.70  85.96  89.13  75.76
Omission rate (%)   40.38  53.33  10.53   4.92   1.67  13.04  12.70   9.30  14.04  10.87  24.24

Column abbreviations (reference types): CT = coniferous tree, BT = broadleaf tree, BF = bamboo forest, FT = fruit tree, FL = farmland, GL = grassland, LA = landslide area, WL = wetland, RD = road, BD = building, BG = bare ground.
Overall accuracy: 78.20%; overall kappa coefficient: 0.7597.

5.

Discussion

Among vegetation types, the classification results for farmlands, fruit trees, and grasslands were superior: their omission rates were 1.67%, 4.92%, and 13.04%, respectively, and their CE rates were 9.23%, 6.45%, and 2.44%, respectively (Table 6). The use of support mapping and stratified classification allowed these terrain types to achieve superior classification accuracy. However, farmlands and grasslands, both featuring herbaceous species, showed poorer classification results when a general classification model was used because they featured similar spectral reflectance values. The use of geographical data as supporting data reduces classification errors, showing that such data are a key to improving knowledge-based classification accuracy.25,26 Therefore, this study formulated classification rules by combining empirical knowledge and logic with support mapping in the knowledge-based system, whereby the classification range for farmlands was controlled.

The classification results for coniferous trees, broadleaf trees, and bamboo forests were inferior: their omission rates were 40.38%, 53.33%, and 10.53%, respectively, and their CE rates were 49.18%, 13.85%, and 73.44%, respectively (Table 6). Coniferous trees were misclassified as broadleaf trees and bamboo forests; the misclassification of coniferous trees as bamboo forests was the most substantial. Similarly, broadleaf trees were misclassified as coniferous trees, bamboo forests, and fruit trees; the misclassification of broadleaf trees as bamboo forests was the most severe. However, misclassification of fruit trees was less frequently observed, which was attributed to the inclusion of support mapping data. According to Fig. 9, misclassification of broadleaf trees generally occurred in blocks with bright crowns.

Fig. 9

Misclassification of broadleaf trees because of crowns; (a) original image, (b) object image obtained using segmentation scale 200, (c) object image obtained using segmentation scale 400, and (d) image classification.


In a study on the use of digital aerial photographs to detect pests and diseases in trees of different classifications, Meddens et al.22 showed that misclassifications were prone to occur in pixels containing bright crowns as a result of strong sunlight. Similar results were observed in this study: both segmentation scales 200 and 400 segmented crowns with relatively brighter pixels into independent object blocks, and misclassifications were more likely to occur in such blocks, confirming the effect of bright crown blocks on classification accuracy. Figure 9 shows how the bright crown blocks of broadleaf trees were more prone to being classified as bamboo forests, which contributed to the high CE rates of bamboo forests and the high omission rates of broadleaf trees.

For nonvegetation-covered areas, wetlands showed the highest classification accuracy, followed by buildings, landslide areas, roads, and bare grounds. Misclassifications were effectively lowered for roads and buildings because of the inclusion of support mapping, which limited the distribution range for each terrain feature and lowered the CEs between the terrain features.

Wetlands were sometimes misclassified as landslide areas and vice versa; for other terrain features, misclassifications were less frequently observed. Reference 27 performed image classification using an unsupervised classification method and found that the misclassification of water bodies as roads (and vice versa) was frequent. Conversely, in this study, the use of knowledge-based classification in combination with classification rules established using support mapping effectively reduced the misclassification of wetlands as roads (and vice versa). Concerning wetlands that were misinterpreted as landslide areas, we compared the original images of such wetlands and found that image blocks with CEs generally comprised brighter image blocks; for example, CEs were mostly observed in wetlands containing a water body reflecting strong sunlight. Bare grounds displayed the lowest classification accuracy; the instances in which they were repeatedly misclassified as landslide areas or roads may be attributed to the three land types sharing similar spectral characteristics.

The aforementioned terrain classification results showed that by adopting knowledge-based classification rules and limiting the distribution areas for each terrain feature, the complexity of terrain classification can be lowered and classification accuracy can be elevated.

6.

Conclusion

In this study, we classified images using the knowledge-based classification method in combination with object-oriented analysis techniques, hierarchical classification rules, and decision tree classification techniques. Subsequently, classification parameters, such as object texture, shape, and spectral characteristics, were used to form a class-based classification method and used together with geographic information system mapping information. The results were then compared with manually interpreted aerial photographs. The comparisons showed that the above classification method achieved an OA of 78.20% and a kappa coefficient of 0.7597. Such results indicate that by using a knowledge-based classification method, formulating classification rules from geographical and system data as support mapping, and incorporating the concept of stratified classification, the likelihood of misclassification can be reduced and classification accuracy enhanced. On the basis of the image classification results, some classification errors occurred in images of sunlit crowns because the image values of individual trees varied; this variance was caused by the crown structure and the incident angle of the sun. These errors lowered image classification accuracy and warrant further study.

Acknowledgments

The authors express their sincere gratitude to the Aerial Survey Office (Forestry Bureau of the Council of Agriculture) for assisting this study through a project grant (2011 agricultural science-8.7.1-affairs-e2) and providing the required images.

References

1. 

T. R. Martha et al., “Characterising spectral, spatial and morphometric properties of landslides for semi-automatic detection using object-oriented methods,” Geomorphology, 116 (1–2), 24 –36 (2010). http://dx.doi.org/10.1016/j.geomorph.2009.10.004 Google Scholar

2. 

I. Korpela, “Individual tree measurements by means of digital aerial photogrammetry,” Silva Fennica Monographs, 1 –93 Finnish Society of Forest Science, Helsinki, Finland (2004). Google Scholar

3. 

M. Gillis and D. G. Leckie, “Forest inventory update in Canada,” For. Chron., 72 138 –156 (1996). http://dx.doi.org/10.5558/tfc72138-2 Google Scholar

4. 

V. Heikkinen et al., “An SVM classification of tree species radiometric signatures based on the Leica ADS40 sensor,” IEEE Trans. Geosci. Remote Sens., 49 4539 –4551 (2011). http://dx.doi.org/10.1109/TGRS.2011.2141143 IGRSD2 0196-2892 Google Scholar

5. 

P. Pant et al., “Evaluation of simulated bands in airborne optical sensors for tree species identification,” Remote Sens. Environ., 138 27 –37 (2013). http://dx.doi.org/10.1016/j.rse.2013.07.016 Google Scholar

6. 

I. Korpela et al., “Tree species identification in aerial image data using directional reflectance signatures,” Silva Fennica, 48 (3), 1087 (2014). http://dx.doi.org/10.14214/sf.1087 Google Scholar

7. 

Y. W. Cheng et al., “Object-oriented image classification with high spatial resolution satellite imagery,” J. Photogramm. Remote Sens., 13 (4), 273 –289 (2008). http://dx.doi.org/10.6574/JPRS.2008.13(4).5 Google Scholar

8. 

Y. Gao et al., “Comparison of pixel-based and object-oriented image classification approaches - a case study in a coal fire area, Wuda, Inner Mongolia, China,” Int. J. Remote Sens., 27 4039 –4055 (2006). http://dx.doi.org/10.1080/01431160600702632 Google Scholar

9. 

V. Walter, “Object-based classification of remote sensing data for change detection,” J. Photogramm. Remote Sens., 58 225 –238 (2004). http://dx.doi.org/10.1016/j.isprsjprs.2003.09.007 Google Scholar

10. 

G. X. Hsiao et al., “The evaluation of image classification methods for rice paddy interpretation,” J. Photogramm. Remote Sens., 9 (1), 13 –26 (2004). http://dx.doi.org/10.6574/JPRS.2004.9(1).2 Google Scholar

11. 

T. Blaschke et al., “Object based image analysis for remote sensing,” ISPRS Int. J. Photogramm. Remote Sens., 65 (1), 2 –16 (2010). http://dx.doi.org/10.1016/j.isprsjprs.2009.06.004 Google Scholar

12. 

J. Mullerova et al., “Remote sensing as a tool for monitoring plant invasions: testing the effects of data resolution and image classification approach on the detection of a model plant species Heracleum mantegazzianum (giant hogweed),” Int. J. Appl. Earth Obs. Geoinf., 25 55 –65 (2013). http://dx.doi.org/10.1016/j.jag.2013.03.004 Google Scholar

13. 

A. Juel et al., “Spatial application of random forest models for fine-scale coastal vegetation classification using object based analysis of aerial orthophoto and DEM data,” Int. J. Appl. Earth Obs. Geoinf., 42 106 –114 (2015). http://dx.doi.org/10.1016/j.jag.2015.05.008 Google Scholar

14. 

K. Takashi et al., “Classification of the SPOT image using spectral and spatial features of primitive regions that have nearly uniform color,” IEEE Trans. Geosci. Remote Sens., 28 (4), 749 –752 (1990). http://dx.doi.org/10.1109/TGRS.1990.573011 Google Scholar

15. 

W. K. Huang et al., “Applying object-oriented analysis to segmentation and classification of landslide and artificial facilities with remote sensing images,” J. Photogramm. Remote Sens., 15 (1), 29 –49 (2010). http://dx.doi.org/10.6574/JPRS.2010.15(1).3 Google Scholar

16. 

M. Xu et al., “Decision tree regression for soft classification of remote sensing data,” Remote Sens. Environ., 97 322 –336 (2005). http://dx.doi.org/10.1016/j.rse.2005.05.008 Google Scholar

17. 

T. R. Tooke et al., “Extracting urban vegetation characteristics using spectral mixture analysis and decision tree classifications,” Remote Sens. Environ., 113 398 –407 (2009). http://dx.doi.org/10.1016/j.rse.2008.10.005 Google Scholar

18. 

C. Hladik and M. Alber, “Classification of salt marsh vegetation using edaphic and remote sensing-derived variables,” Estuarine Coastal Shelf Sci., 141 47 –57 (2014). http://dx.doi.org/10.1016/j.ecss.2014.01.011 Google Scholar

19. 

K. S. Ye et al., “Applying digital aerial images in natural hazards mitigation in Taiwan,” J. Photogramm. Remote Sens., 15 (1), 123 –140 (2010). http://dx.doi.org/10.6574/JPRS.2010.15(1).9 SASOBR 0549-4451 Google Scholar

20. 

W. Zhou et al., “Object-based land cover classification of shaded areas in high spatial resolution imagery of urban areas: a comparison study,” Remote Sens. Environ., 113 1769 –1777 (2009). http://dx.doi.org/10.1016/j.rse.2009.04.007 Google Scholar

21. 

L. T. Waser et al., “Semi-automatic classification of tree species in different forest ecosystems by spectral and geometric variables derived from Airborne Digital Sensor (ADS40) and RC30 data,” Remote Sens. Environ., 115 76 –85 (2011). http://dx.doi.org/10.1016/j.rse.2010.08.006 Google Scholar

22. 

J. H. Meddens et al., “Evaluating the potential of multispectral imagery to map multiple stages of tree mortality,” Remote Sens. Environ., 115 1632 –1642 (2011). http://dx.doi.org/10.1016/j.rse.2011.02.018 Google Scholar

23. 

J. Rouse et al., “Monitoring the vernal advancement and retrogradation (green wave effect) of natural vegetation,” Texas A&M University, College Station (1973). Google Scholar

24. 

L. Breiman et al., Classification and Regression Trees, 357 Chapman and Hall, New York (1984). Google Scholar

25. 

F. Jordan, “Derivation of leaf area index from quality of light on the forest floor,” Ecology, 50 663 –666 (1969). http://dx.doi.org/10.2307/1936256 ECGYAQ 0094-6621 Google Scholar

26. 

R. M. Haralick et al., “Textural features for image classification,” IEEE Trans. Syst. Man Cybern. SMC, 3 (6), 610 –621 (1973). http://dx.doi.org/10.1109/TSMC.1973.4309314 Google Scholar

27. 

R. Y. Lee and C. W. Chuang, “The study on classification accuracy of landuse categories from high resolution satellite images using image segmentation algorithm,” J. Photogramm. Remote Sens., 11 (4), 403 –415 (2006). http://dx.doi.org/10.6574/JPRS.2006.11(4).4 Google Scholar

Biography

Yi-Ta Hsieh is a postdoctoral researcher at the National Pingtung University of Science and Technology, Department of Forestry. He received his PhD from the Graduate Institute of Bioresources, National Pingtung University of Science and Technology, in 2015. His current research interests include remote sensing of forests.

Chaur-Tzuhn Chen is a professor at the National Pingtung University of Science and Technology, Department of Forestry. His current research interests include forest management and remote sensing of environment.

Jan-Chang Chen is an associate professor at the National Pingtung University of Science and Technology, Department of Forestry. His current research interests include climate change, forest mensuration, and remote sensing of forests.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Yi-Ta Hsieh, Chaur-Tzuhn Chen, and Jan-Chang Chen "Applying object-based image analysis and knowledge-based classification to ADS-40 digital aerial photographs to facilitate complex forest land cover classification," Journal of Applied Remote Sensing 11(1), 015001 (10 January 2017). https://doi.org/10.1117/1.JRS.11.015001
Received: 4 May 2016; Accepted: 9 December 2016; Published: 10 January 2017