EDPNet: An Encoding–Decoding Network with Pyramidal Representation for Semantic Image Segmentation
Authors:
Chen Dong 1, Li Xianghong 1, Hu Fan 1, Mathiopoulos P. Takis 2, Di Shaoning 3, Sui Mingming 1, Peethambaran Jiju 4
Affiliations:
1. College of Civil Engineering, Nanjing Forestry University, Nanjing 210037, China
2. Department of Informatics and Telecommunications, National and Kapodistrian University of Athens, 15784 Athens, Greece
3. School of Geosciences and Info Physics, Central South University, Changsha 410083, China
4. Department of Mathematics and Computing Science, Saint Mary’s University, Halifax, NS B3P 2M6, Canada
Abstract
This paper proposes EDPNet, an encoding–decoding network with a pyramidal representation module designed for efficient semantic image segmentation. During encoding, an enhanced version of the Xception network, Xception+, is employed as the backbone to learn discriminative feature maps. These discriminative features are then fed into the pyramidal representation module, which learns and refines context-augmented features through multi-level feature representation and aggregation. During decoding, the encoded semantic-rich features are progressively recovered with the assistance of a simplified skip connection mechanism, which concatenates high-level encoded features carrying rich semantic information with low-level features carrying spatial detail. The resulting hybrid encoding–decoding and pyramidal representation provides global-aware perception and captures the fine-grained contours of diverse geographical objects with high computational efficiency. The performance of EDPNet was compared against PSPNet, DeepLabv3, and U-Net on four benchmark datasets, namely eTRIMS, Cityscapes, PASCAL VOC2012, and CamVid. EDPNet achieved the highest accuracy, with mIoUs of 83.6% and 73.8% on the eTRIMS and PASCAL VOC2012 datasets, respectively, while its accuracy on the other two datasets was comparable to that of PSPNet, DeepLabv3, and U-Net. EDPNet was also the most computationally efficient of the compared models on all datasets.
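The abstract describes an encoder–pyramid–decoder layout with a single low/high-level skip concatenation. The following is a minimal PyTorch sketch of that layout for illustration only: the stand-in encoder stages, module names (PyramidalRepresentation, EDPNetSketch), channel sizes, and pooling bins are assumptions and do not reproduce the authors' Xception+ backbone or the exact pyramidal representation module of EDPNet.

```python
# Minimal sketch of an encoder -> pyramidal context module -> decoder with one skip
# concatenation, assuming a pyramid-pooling-style context module. Not the authors' code.
import torch
import torch.nn as nn
import torch.nn.functional as F


class PyramidalRepresentation(nn.Module):
    """Aggregates context at several pooling scales (bins of 1, 2, 3, 6 are assumed)."""

    def __init__(self, in_ch, out_ch, bins=(1, 2, 3, 6)):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.AdaptiveAvgPool2d(b),
                nn.Conv2d(in_ch, out_ch, 1, bias=False),
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for b in bins
        ])
        self.fuse = nn.Conv2d(in_ch + out_ch * len(bins), out_ch, 3, padding=1)

    def forward(self, x):
        size = x.shape[2:]
        # Upsample every pooled branch back to the input resolution, then aggregate.
        feats = [x] + [
            F.interpolate(branch(x), size=size, mode="bilinear", align_corners=False)
            for branch in self.branches
        ]
        return self.fuse(torch.cat(feats, dim=1))


class EDPNetSketch(nn.Module):
    """Encoder (backbone stand-in) -> pyramidal module -> decoder with skip concatenation."""

    def __init__(self, num_classes, low_ch=64, high_ch=512):
        super().__init__()
        # Stand-in encoder stages; the paper uses an enhanced Xception (Xception+).
        self.stem = nn.Sequential(nn.Conv2d(3, low_ch, 3, stride=2, padding=1), nn.ReLU(inplace=True))
        self.deep = nn.Sequential(nn.Conv2d(low_ch, high_ch, 3, stride=4, padding=1), nn.ReLU(inplace=True))
        self.pyramid = PyramidalRepresentation(high_ch, 256)
        # Simplified skip connection: concatenate upsampled semantic features with low-level features.
        self.decoder = nn.Sequential(
            nn.Conv2d(256 + low_ch, 256, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(256, num_classes, 1),
        )

    def forward(self, x):
        low = self.stem(x)                    # low-level features with spatial detail
        high = self.pyramid(self.deep(low))   # context-augmented, semantic-rich features
        high = F.interpolate(high, size=low.shape[2:], mode="bilinear", align_corners=False)
        logits = self.decoder(torch.cat([high, low], dim=1))
        return F.interpolate(logits, size=x.shape[2:], mode="bilinear", align_corners=False)


# Example usage on a dummy batch:
# model = EDPNetSketch(num_classes=19)
# out = model(torch.randn(1, 3, 512, 512))   # -> (1, 19, 512, 512) per-pixel class logits
```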
Funder
National Natural Science Foundation of China; Natural Science Foundation of Jiangsu Province; Qinglan Project of Jiangsu Province, China; Key Laboratory of Land Satellite Remote-Sensing Applications, Ministry of Natural Resources of the People’s Republic of China
Subject
Electrical and Electronic Engineering; Biochemistry; Instrumentation; Atomic and Molecular Physics, and Optics; Analytical Chemistry
Cited by
3 articles.