Affiliation:
1. School of Civil Engineering, Beijing Jiaotong University, Beijing, China
2. Beijing Key Laboratory of Track Engineering, Beijing Jiaotong University, Beijing, China
3. School of Highway, Chang'an University, Xi'an, China
4. School of Physical Science and Engineering, Beijing Jiaotong University, Beijing, China
Abstract
The noise within a train is a paradox: while harmful to passenger health, it is useful to operators because it provides insight into the working status of vehicles and tracks. Methods for identifying defects from interior noise signals are emerging, and representation learning is the foundation that allows deep neural network models to capture the key information and structure of the data. To provide foundational data for track fault detection, a representation learning framework for interior noise, named the interior noise representation framework, is introduced. The method comprises: (i) a wavelet-transform representation of the original noise signal, with a soft and hard denoising module designed for dataset denoising; (ii) a deep residual convolutional denoising variational autoencoder (VAE) module that performs representation learning with a VAE and deep residual convolutional neural networks, enabling richer data augmentation for sparsely labeled samples by manipulating the embedding space; and (iii) a deep embedding clustering submodule that balances reconstruction and clustering features through their joint optimization, categorizing metro noise into three distinct classes and effectively discriminating significantly different features. The experimental results show that, compared with traditional mechanism-based models for characterizing interior noise, this approach offers a data-driven general analysis framework and provides a foundational model for downstream tasks.
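For concreteness, the following is a minimal sketch of the soft/hard wavelet-threshold denoising step described in (i), assuming a 1-D interior-noise recording, the PyWavelets library, and a Donoho-Johnstone universal threshold; the wavelet family, decomposition level, and threshold rule are illustrative assumptions, not the paper's settings.

```python
# Hypothetical sketch of soft/hard wavelet-threshold denoising for a 1-D signal.
import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db8", level=4, mode="soft"):
    """Threshold the wavelet detail coefficients of a noise recording.

    mode="soft" shrinks coefficients toward zero; mode="hard" zeroes
    coefficients below the threshold and keeps the rest unchanged.
    """
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    # Noise scale estimated from the finest detail band via the median
    # absolute deviation, then the universal threshold sqrt(2 log N).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(signal)))
    denoised = [coeffs[0]] + [pywt.threshold(c, thresh, mode=mode) for c in coeffs[1:]]
    return pywt.waverec(denoised, wavelet)[: len(signal)]

if __name__ == "__main__":
    # Example: recover a synthetic tone buried in broadband noise.
    t = np.linspace(0.0, 1.0, 16000)
    clean = np.sin(2 * np.pi * 2000 * t)
    noisy = clean + 0.3 * np.random.randn(t.size)
    recovered = wavelet_denoise(noisy, mode="soft")
```

Likewise, the joint optimization of reconstruction and clustering features in modules (ii) and (iii) can be sketched as a VAE loss combined with the soft-assignment KL term of deep embedding clustering; the latent dimension, number of clusters, and loss weights below are placeholders rather than the authors' implementation.

```python
# Minimal PyTorch sketch (assumed framework) of a joint VAE + deep embedding
# clustering (DEC) objective on the latent space.
import torch
import torch.nn.functional as F

def dec_soft_assignment(z, centroids, alpha=1.0):
    # Student's t-kernel similarity between embeddings z (B, D) and
    # cluster centroids (K, D), as in DEC; rows sum to one.
    dist_sq = torch.cdist(z, centroids) ** 2
    q = (1.0 + dist_sq / alpha) ** (-(alpha + 1.0) / 2.0)
    return q / q.sum(dim=1, keepdim=True)

def dec_target_distribution(q):
    # Sharpened target distribution that emphasizes confident assignments.
    p = q ** 2 / q.sum(dim=0)
    return p / p.sum(dim=1, keepdim=True)

def joint_loss(x, x_hat, mu, logvar, q, beta=1.0, gamma=0.1):
    # Reconstruction term + VAE prior KL + clustering KL(P || Q).
    recon = F.mse_loss(x_hat, x, reduction="mean")
    kl_vae = -0.5 * torch.mean(1.0 + logvar - mu.pow(2) - logvar.exp())
    p = dec_target_distribution(q).detach()
    kl_cluster = F.kl_div(q.log(), p, reduction="batchmean")
    return recon + beta * kl_vae + gamma * kl_cluster

if __name__ == "__main__":
    # Toy usage with random tensors; three clusters mirror the three noise classes.
    z = torch.randn(8, 16)
    centroids = torch.randn(3, 16)
    q = dec_soft_assignment(z, centroids)
    x = torch.randn(8, 1, 256)
    loss = joint_loss(x, x + 0.01 * torch.randn_like(x),
                      mu=torch.zeros(8, 16), logvar=torch.zeros(8, 16), q=q)
```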
Funder
Beijing Municipal Natural Science Foundation
Fundamental Research Funds for the Central Universities