Affiliation:
1. SRM Institute of Science and Technology
Abstract
The categorization of pixels in hyperspectral images into distinct classes, such as land-cover types, crops, and urban areas in satellite imagery, is a crucial task in remote sensing and image analysis. Convolutional autoencoders (CAEs), a class of neural network architectures able to extract both spatial and spectral information from hyperspectral data, are well suited to this task. This abstract discusses a technique for hyperspectral image classification using CAEs. A CAE is trained as an unsupervised feature extractor: the encoder learns to extract useful spectral and spatial features from the hyperspectral data, and the decoder learns to reconstruct the original data from the lower-dimensional latent representation. Once the CAE has been trained, the encoder is used to extract features from the hyperspectral images, and a classifier is trained on top of these features for supervised classification.
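The sketch below illustrates the two-stage pipeline the abstract describes, not the authors' actual code: a CAE is first trained with a reconstruction loss, then its frozen encoder feeds a small classifier. It assumes PyTorch, per-pixel patches of an assumed size, and placeholder values for the number of bands, latent dimension, and class count.

```python
# Minimal sketch of the described CAE pipeline (illustrative, not the paper's code).
# Assumptions: PyTorch, 200 spectral bands, 7x7 spatial patches, 16 classes.

import torch
import torch.nn as nn

NUM_BANDS = 200    # assumed number of spectral bands
PATCH_SIZE = 7     # assumed spatial patch size around each pixel
LATENT_DIM = 64    # assumed size of the latent representation
NUM_CLASSES = 16   # assumed number of land-cover classes


class ConvAutoencoder(nn.Module):
    """Encoder compresses a hyperspectral patch to a latent vector;
    decoder reconstructs the original patch from that vector."""

    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(NUM_BANDS, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(128, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(64 * PATCH_SIZE * PATCH_SIZE, LATENT_DIM),
        )
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 64 * PATCH_SIZE * PATCH_SIZE),
            nn.ReLU(),
            nn.Unflatten(1, (64, PATCH_SIZE, PATCH_SIZE)),
            nn.ConvTranspose2d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.ConvTranspose2d(128, NUM_BANDS, kernel_size=3, padding=1),
        )

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z


# Stage 1: unsupervised training of the CAE with a reconstruction loss.
cae = ConvAutoencoder()
optimizer = torch.optim.Adam(cae.parameters(), lr=1e-3)
patches = torch.rand(32, NUM_BANDS, PATCH_SIZE, PATCH_SIZE)  # stand-in batch
reconstruction, _ = cae(patches)
loss = nn.MSELoss()(reconstruction, patches)
loss.backward()
optimizer.step()

# Stage 2: freeze the encoder, extract features, train a supervised classifier.
classifier = nn.Linear(LATENT_DIM, NUM_CLASSES)
with torch.no_grad():
    features = cae.encoder(patches)              # spectral-spatial features
labels = torch.randint(0, NUM_CLASSES, (32,))    # stand-in ground-truth labels
ce_loss = nn.CrossEntropyLoss()(classifier(features), labels)
```

In practice the classifier in stage 2 could equally be an SVM or a deeper network trained on the extracted latent features; the two-stage structure is the essential point.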
Publisher
Research Square Platform LLC