

Gonzalez Perez, A., B. Wilkinson, A. Abd-Elrahman, R. R. Carthy, and D. J. Johnson. 2022. Deep and machine learning image classification of coastal wetlands using unpiloted aircraft system multispectral images and lidar datasets. Remote Sensing, 14(16), 3937. https://doi.org/10.3390/rs14163937

Abstract

Recent developments in deep learning architectures create opportunities to accurately classify high-resolution Unoccupied Aerial System (UAS) images of natural coastal systems and mandate continuous evaluation of algorithm performance. We evaluated the performance of four machine learning techniques applied to UAS multispectral aerial imagery and canopy height models (CHMs) prepared using both UAS-acquired lidar and structure-from-motion (SfM) point clouds combined with a digital terrain model (DTM) from a publicly accessible lidar dataset. We assessed the performance of the U-Net and DeepLabv3 deep convolutional network architectures and two traditional machine learning techniques (support vector machine (SVM) and random forest (RF)) applied to seventeen coastal land cover types in the Wolf Branch Creek Coastal Nature Preserve in west Florida. Twelve combinations of spectral bands and CHMs were used to train the classifiers. The classification algorithms were trained with a total of 3,094 polygons created from an object-based image segmentation step. A total of 747 random ground-truth accuracy assessment points, representing all 17 classes, were used to assess the classification results. Our results using the spectral bands showed that the U-Net (83.80%–85.27% overall accuracy) and DeepLabv3 (75.20%–83.50% overall accuracy) deep learning techniques outperformed the SVM (60.50%–71.10% overall accuracy) and RF (57.40%–71.00% overall accuracy) machine learning algorithms. Adding the CHM to the spectral bands slightly increased the overall accuracy of the deep learning models, with some vegetation classes benefiting from the additional information, and notably improved the SVM and RF results. Similarly, adding bands beyond the three visible bands, namely near-infrared and red edge, increased the performance of the machine learning classifiers but had minimal impact on the deep learning classification results. The differences in overall accuracy produced by using UAS-based lidar versus SfM point clouds as supplementary geometric information were minimal across all classification techniques. Our results highlight the advantage of using deep learning networks to classify high-resolution UAS images of highly diverse coastal landscapes. We also found that using low-cost, three-visible-band imagery is feasible, without a significant reduction in classification accuracy, when deep learning models are adopted.
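
For readers interested in what this kind of workflow can look like in practice, the sketch below is a minimal illustration (not the authors' code): it derives a CHM by differencing a surface model and a terrain model, stacks the CHM with multispectral bands into per-pixel features, and trains the two traditional classifiers evaluated in the study (RF and SVM) with scikit-learn and rasterio. All file names, band counts, and parameters are hypothetical placeholders.

# Minimal illustrative sketch (not the authors' code): derive a CHM, stack it with
# UAS spectral bands, and train RF and SVM pixel classifiers. File names, band
# counts, and parameters are hypothetical.
import numpy as np
import rasterio
from sklearn.ensemble import RandomForestClassifier
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

# Hypothetical co-registered rasters: a 5-band UAS orthomosaic (blue, green, red,
# red edge, NIR), a surface model from lidar or SfM, a terrain model, and training
# labels rasterized from segmentation polygons (0 = unlabeled).
with rasterio.open("uas_multispectral.tif") as src:
    bands = src.read().astype("float32")               # shape: (5, rows, cols)
with rasterio.open("dsm.tif") as src:
    dsm = src.read(1).astype("float32")
with rasterio.open("dtm.tif") as src:
    dtm = src.read(1).astype("float32")
with rasterio.open("training_labels.tif") as src:
    labels = src.read(1)

# Canopy height model: surface height minus ground height, clipped at zero.
chm = np.clip(dsm - dtm, 0.0, None)

# Stack spectral bands and the CHM into a per-pixel feature matrix.
features = np.concatenate([bands, chm[np.newaxis]], axis=0)   # (6, rows, cols)
X = features.reshape(features.shape[0], -1).T                 # (n_pixels, 6)
y = labels.ravel()
labeled = y > 0                                               # keep labeled pixels only

X_train, X_test, y_train, y_test = train_test_split(
    X[labeled], y[labeled], test_size=0.3, stratify=y[labeled], random_state=42)

for name, clf in [("RF", RandomForestClassifier(n_estimators=200, n_jobs=-1)),
                  ("SVM", SVC(kernel="rbf", gamma="scale"))]:
    clf.fit(X_train, y_train)
    print(name, "overall accuracy:", accuracy_score(y_test, clf.predict(X_test)))

Note that the study trained the classifiers on polygons produced by an object-based segmentation step and evaluated the deep networks (U-Net, DeepLabv3) on image tiles; the sketch above only illustrates the band-plus-CHM feature stacking for the traditional pixel-based classifiers.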