Classifying Multilevel Imagery From SAR and Optical Sensors by Decision Fusion

Abstract

A strategy for the joint classification of multiple segmentation levels from multisensor imagery, using synthetic aperture radar (SAR) and optical data, is introduced. First, the two data sets are segmented separately, creating independent aggregation levels at different scales. Each individual level from the two sensors is then preclassified by a support vector machine (SVM). The original outputs of each SVM, i.e., images showing the distances of the pixels to the hyperplane fitted by the SVM, are used in a decision fusion to determine the final classes. The fusion strategy is based on an additional classifier that is applied to the preclassification results. Both a second SVM and random forests (RF) were tested for the decision fusion. The results are compared with those of SVM and RF applied to the full data set without preclassification. Both the integration of multilevel information and the use of multisensor imagery increase the overall accuracy. It is shown that the classification of multilevel, multisource data sets with SVM and RF is feasible and does not require the definition of ideal aggregation levels. The proposed decision fusion approach, which applies RF to the preclassification results, outperforms all other approaches.
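The sketch below illustrates the stacked decision-fusion idea from the abstract using scikit-learn: an SVM is trained per segmentation level, its decision values (distances to the hyperplane) are collected as new features, and a random forest is trained on the stacked values. The variable names, feature dimensions, and synthetic data are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of SVM preclassification per level followed by RF decision fusion.
# Data, level names, and feature sizes are hypothetical placeholders.
import numpy as np
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Hypothetical per-pixel features for several aggregation levels of two sensors:
# each entry is an (n_pixels, n_features) array for one segmentation level.
n_pixels, n_classes = 2000, 4
levels = {
    "sar_level_1": rng.normal(size=(n_pixels, 6)),
    "sar_level_2": rng.normal(size=(n_pixels, 6)),
    "optical_level_1": rng.normal(size=(n_pixels, 8)),
    "optical_level_2": rng.normal(size=(n_pixels, 8)),
}
y = rng.integers(0, n_classes, size=n_pixels)

idx_train, idx_test = train_test_split(
    np.arange(n_pixels), test_size=0.5, random_state=0
)

# Step 1: preclassify each level with an SVM and keep the raw decision values
# (per-class distances to the separating hyperplanes) as new features.
decision_stacks = []
for name, X in levels.items():
    svm = SVC(kernel="rbf", decision_function_shape="ovr")
    svm.fit(X[idx_train], y[idx_train])
    decision_stacks.append(svm.decision_function(X))  # (n_pixels, n_classes)

fusion_features = np.hstack(decision_stacks)

# Step 2: decision fusion -- train a second-stage classifier (a random forest,
# the best-performing variant in the abstract) on the stacked SVM outputs.
rf = RandomForestClassifier(n_estimators=200, random_state=0)
rf.fit(fusion_features[idx_train], y[idx_train])
print("Fused accuracy:", rf.score(fusion_features[idx_test], y[idx_test]))
```

In practice the second-stage training set would be derived from held-out or cross-validated first-stage predictions to avoid optimistic stacking; the single split here keeps the example short.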

Publication
IEEE Transactions on Geoscience and Remote Sensing