Crop and Weed Segmentation and Fractal Dimension Estimation Using Small Training Data in Heterogeneous Data Environment
Published: 2024-05-10
Issue: 5
Volume: 8
Page: 285
ISSN: 2504-3110
Container-title: Fractal and Fractional
Short-container-title: Fractal Fract
Language: en
Author:
Rehan Akram 1, Jin Seong Hong 1, Seung Gu Kim 1, Haseeb Sultan 1, Muhammad Usman 1, Hafiz Ali Hamza Gondal 1, Muhammad Hamza Tariq 1, Nadeem Ullah 1, Kang Ryoung Park 1
Affiliation:
1. Division of Electronics and Electrical Engineering, Dongguk University, 30 Pildong-ro 1-gil, Jung-gu, Seoul 04620, Republic of Korea
Abstract
The segmentation of crops and weeds from camera-captured images is a demanding research area for advancing agricultural and smart-farming systems. Previously, crop and weed segmentation was conducted in a homogeneous data environment, where the training and testing data came from the same database. In real-world applications, however, the data environment is often heterogeneous: a system trained on one database must be tested on a different database without additional training. This study pioneers the use of heterogeneous data for crop and weed segmentation, addressing the resulting degradation in accuracy. By adjusting the mean and standard deviation of the input images, we minimize the variability in pixel values and contrast, enhancing segmentation robustness. Unlike previous methods that rely on extensive training data, our approach achieves real-world applicability with just one training sample for deep-learning-based semantic segmentation. Moreover, we seamlessly integrate fractal dimension estimation into our system as an end-to-end task, providing important information on the distributional characteristics of crops and weeds. We evaluated our framework on the BoniRob dataset and the CWFID. When trained on the BoniRob dataset and tested on the CWFID, we obtained a mean intersection over union (mIoU) of 62% and an F1-score of 75.2%; when trained on the CWFID and tested on the BoniRob dataset, we obtained an mIoU of 63.7% and an F1-score of 74.3%. We confirmed that these values are higher than those obtained by state-of-the-art methods.
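The two image-level operations named in the abstract, matching an image's mean and standard deviation to reference statistics and estimating a fractal dimension, can be sketched as follows. This is a minimal illustration under our own assumptions, not the authors' implementation: the function names, the clipping to [0, 255], and the use of a log-log least-squares fit for the box-counting dimension are choices made here for concreteness.

```python
import numpy as np

def match_mean_std(src, ref_mean, ref_std, eps=1e-8):
    """Shift and scale pixel values of `src` so its global mean and
    standard deviation match the reference statistics, reducing the
    brightness/contrast gap between heterogeneous databases."""
    src = src.astype(np.float64)
    out = (src - src.mean()) / (src.std() + eps) * ref_std + ref_mean
    return np.clip(out, 0.0, 255.0)

def box_counting_dimension(mask):
    """Estimate the fractal (box-counting) dimension of a binary
    segmentation mask, e.g. a predicted crop or weed region."""
    mask = np.asarray(mask, dtype=bool)
    # crop to the largest power-of-two square so boxes tile evenly
    n = 2 ** int(np.floor(np.log2(min(mask.shape))))
    mask = mask[:n, :n]
    sizes, counts = [], []
    s = n
    while s >= 1:
        # count boxes of side s containing at least one foreground pixel
        view = mask.reshape(n // s, s, n // s, s)
        occupied = int(view.any(axis=(1, 3)).sum())
        if occupied > 0:
            sizes.append(s)
            counts.append(occupied)
        s //= 2
    # the slope of log(count) versus log(1/size) estimates the dimension
    slope, _ = np.polyfit(np.log(1.0 / np.array(sizes)), np.log(counts), 1)
    return slope
```

For a sanity check, a completely filled square mask yields a dimension close to 2, while sparser, more fragmented weed regions fall between 1 and 2; in a heterogeneous setting, `ref_mean` and `ref_std` would be computed once from the training database and applied to every test image.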
Funder
Ministry of Science and ICT; Information Technology Research Center; Institute for Information & Communications Technology Planning & Evaluation
References (70 articles; first five shown)
1. Jiang, Y., and Li, C. (2020). Convolutional Neural Networks for Image-Based High-Throughput Plant Phenotyping: A Review. Plant Phenomics.
2. Fathipoor, H., et al. (2019). Corn Forage Yield Prediction Using Unmanned Aerial Vehicle Images at Mid-Season Growth Stage. J. Appl. Remote Sens.
3. Yang, Q., Wang, Y., Liu, L., and Zhang, X. (2024). Adaptive Fractional-Order Multi-Scale Optimization TV-L1 Optical Flow Algorithm. Fractal Fract., 8.
4. Huang, T., Wang, X., Xie, D., Wang, C., and Liu, X. (2023). Depth Image Enhancement Algorithm Based on Fractional Differentiation. Fractal Fract., 7.
5. Bai, X., Zhang, D., Shi, S., Yao, W., Guo, Z., and Sun, J. (2023). A Fractional-Order Telegraph Diffusion Model for Restoring Texture Images with Multiplicative Noise. Fractal Fract., 7.