Synergy of Sentinel-1 and Sentinel-2 Imagery for Crop Classification Based on DC-CNN
Published: 2023-05-24
Issue: 11
Volume: 15
Page: 2727
ISSN: 2072-4292
Container-title: Remote Sensing
Language: en
Short-container-title: Remote Sensing
Author:
Zhang Kaixin 1,2,3; Yuan Da 1,2,3; Yang Huijin 1,2,3,4; Zhao Jianhui 1,2,3,4 (ORCID); Li Ning 1,2,3,4 (ORCID)
Affiliation:
1. School of Computer and Information Engineering, Henan University, Kaifeng 475004, China
2. Henan Province Engineering Research Center of Spatial Information Processing, Kaifeng 475004, China
3. Henan Key Laboratory of Big Data Analysis and Processing, Kaifeng 475004, China
4. College of Agriculture, Henan University, Kaifeng 475004, China
Abstract
Over the years, remote sensing has become an important means of obtaining accurate agricultural production information, such as crop type distribution, owing to its wide coverage and short observation period. The cooperative use of multi-source remote sensing imagery has become a new development trend in crop classification. In this paper, the polarimetric components of Sentinel-1 (S-1), obtained with a new model-based decomposition method adapted to dual-polarized SAR data, were introduced into crop classification for the first time. Furthermore, a Dual-Channel Convolutional Neural Network (DC-CNN) with feature extraction, feature fusion, and encoder-decoder modules was constructed for crop classification based on S-1 and Sentinel-2 (S-2). The two branches learn from each other by sharing parameters, which effectively integrates the features extracted from the multi-source data and yields a high-precision crop classification map. In the proposed method, the backscattering components (VV, VH) and polarimetric components (volume scattering, remaining scattering) were first obtained from S-1, and the multispectral features were extracted from S-2. Four candidate combinations of multi-source features were formed from these features, and the optimal one was identified experimentally. The features of the optimal combination were then input into the corresponding network branches. In the feature extraction module, features with strong collaboration across the multi-source data were learned through parameter sharing, and they were deeply fused in the feature fusion and encoder-decoder modules to obtain more accurate classification results. The experimental results showed that the polarimetric components, which increased the separability between crop categories and reduced the misclassification rate, played an important role in crop classification. Among the four candidate feature combinations, combining S-1 and S-2 features yielded higher classification accuracy than using a single data source, and accuracy was highest when both polarimetric components were used simultaneously. On the basis of the optimal feature combination, the effectiveness of the proposed method was verified: DC-CNN reached an overall accuracy (OA) of 98.40%, with a Kappa of 0.98 and a Macro-F1 of 0.98, compared with 2D-CNN (OA 94.87%, Kappa 0.92, Macro-F1 0.95), FCN (OA 96.27%, Kappa 0.94, Macro-F1 0.96), and SegNet (OA 96.90%, Kappa 0.95, Macro-F1 0.97). These results demonstrate that the proposed method has significant potential for crop classification.
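The dual-branch idea described in the abstract can be illustrated with a minimal sketch. The following PyTorch snippet is a hypothetical illustration, not the authors' released DC-CNN: each source (S-1 with VV, VH, volume scattering, and remaining scattering; S-2 with multispectral bands) gets its own input projection, both branches then pass through the same shared convolutional extractor (parameter sharing), the branch features are fused by channel concatenation, and a small encoder-decoder head produces per-pixel class logits. All channel counts, layer widths, and class numbers are assumed values for illustration.

```python
# Hypothetical dual-branch sketch in the spirit of DC-CNN (assumed design,
# not the authors' implementation): per-source projections, a shared
# feature extractor, concatenation-based fusion, and an encoder-decoder head.
import torch
import torch.nn as nn

class DualBranchCropNet(nn.Module):
    def __init__(self, s1_channels=4, s2_channels=10, feat=32, num_classes=6):
        super().__init__()
        # Per-source 1x1 projections to a common feature width (assumption).
        self.s1_in = nn.Conv2d(s1_channels, feat, kernel_size=1)
        self.s2_in = nn.Conv2d(s2_channels, feat, kernel_size=1)
        # Shared feature extractor: the same weights process both branches.
        self.shared = nn.Sequential(
            nn.Conv2d(feat, feat, 3, padding=1), nn.BatchNorm2d(feat), nn.ReLU(),
            nn.Conv2d(feat, feat, 3, padding=1), nn.BatchNorm2d(feat), nn.ReLU(),
        )
        # Fusion by channel concatenation, then a small encoder-decoder head.
        self.encoder = nn.Sequential(
            nn.Conv2d(2 * feat, 2 * feat, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),  # downsample by 2
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(2 * feat, feat, 2, stride=2),  # upsample back
            nn.ReLU(),
            nn.Conv2d(feat, num_classes, 1),  # per-pixel class logits
        )

    def forward(self, s1, s2):
        f1 = self.shared(self.s1_in(s1))    # S-1 branch features
        f2 = self.shared(self.s2_in(s2))    # S-2 branch features
        fused = torch.cat([f1, f2], dim=1)  # feature fusion
        return self.decoder(self.encoder(fused))

# Example: one 64x64 patch with 4 S-1 channels (VV, VH, volume scattering,
# remaining scattering) and 10 S-2 bands; output is a per-pixel logit map.
model = DualBranchCropNet()
logits = model(torch.randn(1, 4, 64, 64), torch.randn(1, 10, 64, 64))
print(logits.shape)  # torch.Size([1, 6, 64, 64])
```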
Funder
the National Natural Science Foundation of China
the Plan of Science and Technology of Henan Province
the College Key Research Project of Henan Province
the Plan of Science and Technology of Kaifeng City
the Key Laboratory of Natural Resources Monitoring and Regulation in Southern Hilly Region, Ministry of Natural Resources of the People’s Republic of China
the Key Laboratory of Land Satellite Remote Sensing Application, Ministry of Natural Resources of the People’s Republic of China
Subject
General Earth and Planetary Sciences
Cited by: 5 articles.