Indigenous forest classification in New Zealand – A comparison of classifiers and sensors

Ning Ye*, Justin Morgenroth, Cong Xu, Na Chen

*Corresponding author for this work

Research output: Contribution to journal › Article › Academic › peer-review

4 Citations (Scopus)


Understanding the composition of New Zealand’s woody vegetation communities, and how they change, is important for effective management. However, past national-scale mapped classifications emphasised mature rather than seral vegetation communities, and forests were mapped at relatively coarse spatial resolutions. The integration of Sentinel-2 and PlanetScope imagery provides an opportunity for low-cost, high-accuracy forest mapping.

This study investigates the feasibility of integrated imagery for detailed forest mapping. Free satellite data (Sentinel-2, PlanetScope, and their fused product) were compared with commercial data (WorldView-2, and WorldView-2 resampled to the Sentinel-2 and PlanetScope spatial resolutions) by conducting pixel-based classification with three machine learning classifiers (Support Vector Machine with a radial basis function kernel, Random Forest, and Artificial Neural Network). The combinations of imagery type and classifier were assessed for their potential to map nine land cover classes in podocarp forest in New Zealand’s central North Island: conifer, low layer vegetation, broadleaf evergreen, highland softwood, wetland vegetation, water, dead tree, lowland softwood, and low-density vegetation and bare soil. Spectral features (single bands and indices), textural features, and an 8 m resolution digital terrain model (DTM) were used in the classifications; the relative importance of these input features was also assessed.
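The classifier-comparison design described above can be sketched in a few lines. This is a minimal illustration, not the authors' pipeline: it uses synthetic per-pixel features (a stand-in for the stacked bands, indices, textures, and DTM) and scikit-learn implementations of the three classifier families named in the study; all parameter values are illustrative.

```python
# Sketch of a pixel-based classifier comparison on synthetic features.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Stand-in for a per-pixel feature stack (bands, indices, GLCM textures, DTM).
X, y = make_classification(n_samples=2000, n_features=20, n_informative=12,
                           n_classes=5, n_clusters_per_class=1, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

classifiers = {
    "SVM (RBF kernel)": SVC(kernel="rbf", C=10, gamma="scale"),
    "Random Forest": RandomForestClassifier(n_estimators=500, random_state=0),
    "ANN (MLP)": MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000,
                               random_state=0),
}
for name, clf in classifiers.items():
    clf.fit(X_tr, y_tr)
    print(f"{name}: overall accuracy = {clf.score(X_te, y_te):.3f}")
```

In the study itself, each classifier would be trained and evaluated once per imagery type (Sentinel-2, PlanetScope, fused, and the WorldView-2 variants), giving the grid of accuracies reported below.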

Overall classification accuracy depended on the combination of classifier and imagery, with different combinations yielding accuracies between 0.669 and 0.956. The best overall accuracy (0.956) was achieved by integrating Sentinel-2 and PlanetScope imagery, exceeding even that of WorldView-2 (0.951). The digital terrain model was the most important feature in all scenarios; the Gray-Level Co-Occurrence Matrix mean was the most important texture variable for the WorldView-2 and integrated images. The original bands, as well as the GI, Norm-G, and SR-NIRR indices, were also important for vegetation classification.
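The two analysis steps mentioned above can be illustrated on synthetic band values: deriving spectral indices and ranking input features with a Random Forest. The index formulas here follow common definitions (GI as green/red, Norm-G as green normalised by NIR + red + green, SR-NIRR as the NIR/red simple ratio) and may differ from those used in the paper; the toy labels are deliberately driven by elevation so the DTM dominates the ranking, echoing the study's finding.

```python
# Sketch: spectral indices plus Random Forest feature importance.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
n = 1000
green, red, nir, dtm = (rng.uniform(0.05, 0.6, n) for _ in range(4))

features = np.column_stack([
    green, red, nir, dtm,
    green / red,                   # GI (green index)
    green / (nir + red + green),   # Norm-G (normalised green)
    nir / red,                     # SR-NIRR (simple ratio NIR/red)
])
labels = (dtm > 0.3).astype(int)   # toy classes driven by elevation

rf = RandomForestClassifier(n_estimators=300, random_state=0)
rf.fit(features, labels)
names = ["green", "red", "nir", "dtm", "GI", "Norm-G", "SR-NIRR"]
for name, imp in sorted(zip(names, rf.feature_importances_),
                        key=lambda t: -t[1]):
    print(f"{name}: {imp:.3f}")
```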
Original language: English
Article number: 102395
Journal: International Journal of Applied Earth Observation and Geoinformation
Publication status: Published - 1 Oct 2021


Keywords

  • Artificial neural network
  • Image fusion
  • Land cover
  • Land use
  • Pixel-based classification
  • Vegetation classification


