Combining visual interpretation and image segmentation to derive canopy cover index from high resolution satellite imagery in functionally diverse coniferous forests
Published: 2024-02-29
Issue: 1
Volume: 52
Page: 13637
ISSN: 1842-4309
Container-title: Notulae Botanicae Horti Agrobotanici Cluj-Napoca
Short-container-title: Not Bot Horti Agrobo
Author:
BARNOAIEA Ionuţ, PALAGHIANU Ciprian, DRĂGOI Marian
Abstract
Forest canopy cover is one of the most significant structural parameters of a forest stand that can be estimated using aerial and satellite remote sensing. Although sub-pixel analysis can be used to estimate the index on low-resolution imagery, high-resolution imagery provides more accurate detail on forest canopy variability for ecological and forestry applications. However, the high variability of such images demands a more advanced approach to canopy cover measurement than the visual interpretation of single images or stereo pairs; these traditional methods are inefficient and limited in providing a comprehensive and accurate canopy cover assessment. An improved method could involve classifying high spatial resolution images, separating and extracting the areas corresponding to canopy gaps, and generating canopy cover maps. This study offers valuable insights and reveals key differences between three methods for estimating canopy cover: ground measurements, visual photo interpretation, and automatic extraction from classified images using pixel- and object-based methods. A texture analysis approach was used to separate the “shadow” objects corresponding to gaps in the canopy from the shadows cast on lower trees. The sample plot-based visual interpretation of the images showed that ground-based and satellite-derived canopy cover values were comparable (correlation coefficient of 0.74 across all plots), with lower correlations (r = 0.39) for uneven-aged stands. The results support the use of the texture analysis method, which achieved satisfactory accuracy (forest canopy cover differences of at most 0.06 between the ground, photo-interpreted, and extracted datasets). The method could be further integrated with complementary data such as LiDAR or hyperspectral imagery.
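The abstract summarizes rather than specifies the extraction step, so the following is only a minimal Python sketch of the idea, assuming an 8-bit grayscale band as input and using scikit-image's gray-level co-occurrence matrix (GLCM) utilities; the function name canopy_cover_from_band and both threshold values are hypothetical placeholders, not the study's actual parameters or implementation.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from skimage.measure import label, regionprops

def canopy_cover_from_band(band, shadow_thresh=60, homogeneity_thresh=0.7):
    """Illustrative canopy cover estimate from one 8-bit image band.

    Both thresholds are hypothetical placeholders, not the study's values.
    """
    band = np.asarray(band, dtype=np.uint8)

    # 1. Dark pixels are shadow candidates: either real canopy gaps
    #    or shadows cast on lower trees.
    shadow_mask = band < shadow_thresh
    labeled = label(shadow_mask)

    gap_area = 0
    for region in regionprops(labeled):
        minr, minc, maxr, maxc = region.bbox
        window = band[minr:maxr, minc:maxc]
        if window.size < 4:  # too small for a meaningful GLCM
            continue

        # 2. Texture of the object's window via a gray-level co-occurrence
        #    matrix: gaps (bare ground, low vegetation) tend to be smoother
        #    than shadows falling on understorey crowns, which retain
        #    crown texture.
        glcm = graycomatrix(window, distances=[1],
                            angles=[0, np.pi / 2],
                            levels=256, symmetric=True, normed=True)
        if graycoprops(glcm, 'homogeneity').mean() > homogeneity_thresh:
            gap_area += region.area  # keep only texturally smooth gaps

    # 3. Canopy cover index = 1 - (gap area / total area).
    return 1.0 - gap_area / band.size
```

Applied plot by plot, the returned fraction could then be compared against the ground-measured and photo-interpreted values, e.g. with a Pearson correlation, which is the kind of comparison behind the r values quoted above.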
Publisher
University of Agricultural Sciences and Veterinary Medicine Cluj-Napoca