Affiliation:
1. College of Mechanical and Electronic Engineering, Northwest A&F University, Xianyang 712100, China
2. College of Information Engineering, Northwest A&F University, Xianyang 712100, China
3. College of Water Resources and Architectural Engineering, Northwest A&F University, Xianyang 712100, China
4. Shanghai Institute of Satellite Engineering, Shanghai 200000, China
Abstract
Neural network models play an important role in crop extraction from remote sensing data, but their performance degrades when the input data are high-dimensional. To address this challenge for multi-source Gaofen satellite data, a novel method is proposed that combines dimensionality reduction and crop classification: a stacked autoencoder network reduces the data dimensionality, and a convolutional neural network performs the classification. By exploiting multi-dimensional remote sensing information while mitigating the impact of dimensionality on classification accuracy, the method improves the effectiveness of crop classification. It was applied to extracting crop-planting areas in the Yangling Agricultural Demonstration Zone from multi-temporal spectral data collected by the Gaofen satellites. The results demonstrate that the fusion network, which extracts low-dimensional features, offers an advantage in classification accuracy. The proposed model was also compared with the decision tree (DT), random forest (RF), support vector machine (SVM), hyperspectral image classification based on a convolutional neural network (HICCNN), and a characteristic-selection classification method based on a convolutional neural network (CSCNN). The overall accuracy of the proposed method reaches 98.57%, which is 7.95%, 4.69%, 5.68%, 1.21%, and 1.10% higher than the above methods, respectively. The effectiveness of the proposed model was verified through experiments, and it also shows strong robustness when classifying new data: when extracting the crop area of the entire Yangling District, the errors for wheat and corn are only 9.6% and 6.3%, respectively, and the extraction results accurately reflect the actual planting situation of the crops.
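The pipeline described in the abstract — a stacked autoencoder compressing high-dimensional multi-temporal spectra, followed by a convolutional classifier — can be sketched as below. This is a minimal illustration, not the paper's implementation: the layer sizes (40 input bands, an 8-dimensional latent code), the single 3×3 convolution with global average pooling, and the 4 crop classes are all hypothetical, and the weights are random rather than trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(0.0, x)

# --- Stacked autoencoder (encoder half only): maps each pixel's
# multi-temporal spectral vector to a low-dimensional code.
# Hypothetical sizes: 40 input bands -> 20 hidden -> 8 latent features.
W1 = rng.normal(0, 0.1, (40, 20)); b1 = np.zeros(20)
W2 = rng.normal(0, 0.1, (20, 8));  b2 = np.zeros(8)

def encode(pixels):
    """pixels: (n, 40) spectral vectors -> (n, 8) latent codes."""
    h = relu(pixels @ W1 + b1)
    return relu(h @ W2 + b2)

# --- CNN classifier head over a patch of encoded features:
# one 3x3 convolution + global average pooling + softmax, standing in
# for the paper's (unspecified) convolutional architecture.
K = rng.normal(0, 0.1, (3, 3, 8, 4))  # 3x3 kernel, 8 channels in, 4 classes

def classify(patch):
    """patch: (H, W, 8) encoded feature patch -> (4,) class probabilities."""
    H, Wd, _ = patch.shape
    out = np.zeros((H - 2, Wd - 2, 4))      # valid convolution
    for i in range(H - 2):
        for j in range(Wd - 2):
            window = patch[i:i + 3, j:j + 3, :]
            out[i, j] = np.tensordot(window, K, axes=([0, 1, 2], [0, 1, 2]))
    logits = out.mean(axis=(0, 1))          # global average pooling
    e = np.exp(logits - logits.max())       # numerically stable softmax
    return e / e.sum()

# Full pipeline: encode per-pixel spectra, reshape into a patch, classify.
pixels = rng.random((25, 40))               # a 5x5 patch of 40-band pixels
codes = encode(pixels)                      # (25, 8)
probs = classify(codes.reshape(5, 5, 8))    # (4,) class probabilities
```

In the paper's setting the autoencoder would first be trained to reconstruct the spectra (so the code retains the informative variance) before its encoder is frozen and fed to the CNN; here both stages are shown with random weights purely to make the data flow and tensor shapes concrete.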
Funder
National Natural Science Foundation of China
Subject
General Earth and Planetary Sciences
Cited by: 6 articles.