Affiliation:
1. University of Szeged, Szeged
2. MTA-SZTE Research Group on Artificial Intelligence of the Hungarian Academy of Sciences, Hungary
Abstract
Wireless sensors are recent, portable, low-powered devices designed to record and transmit observations of their environment, such as speech. To allow portability they are built to be small and light; this, however, along with their low power consumption, usually means that they have only quite basic recording equipment (e.g. a microphone) installed. Recent speech technology applications typically require several dozen hours of audio recordings (nowadays even hundreds of hours is common), which is usually not available as material recorded by such sensors. Since systems trained on studio-quality utterances tend to perform suboptimally on such recordings, a sensible idea is to adapt models that were trained on existing, larger, noise-free corpora. In this study, we experimented with adapting Deep Neural Network-based acoustic models trained on noise-free speech data to perform speech recognition on utterances recorded by wireless sensors. In the end, we achieved a 5% relative error reduction compared to training only on the restricted subset of sensor-recorded utterances.
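The abstract does not spell out the adaptation procedure itself. As a minimal sketch of the general idea, the snippet below continues training ("fine-tunes") a DNN acoustic model that was pre-trained on clean speech, using a small amount of sensor-recorded data. It is written in PyTorch rather than the authors' toolkit, and all layer sizes, file names, and training settings are illustrative assumptions, not values from the paper.

```python
# Hedged illustration (not the authors' code): adapt a clean-speech DNN
# acoustic model to sensor-recorded utterances by continued training.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

N_FEATS, N_STATES = 40, 1000   # e.g. filter-bank features, tied HMM states (assumed sizes)

# Simple feed-forward network standing in for the pre-trained acoustic model.
model = nn.Sequential(
    nn.Linear(N_FEATS, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, N_STATES),
)
# In practice the weights would come from training on the large, noise-free
# corpus; here one would load such a checkpoint (path is hypothetical):
# model.load_state_dict(torch.load("clean_speech_dnn.pt"))

# Hypothetical adaptation set: frame-level features and state labels extracted
# from the restricted, sensor-recorded subset (random tensors as placeholders).
feats = torch.randn(5000, N_FEATS)
labels = torch.randint(0, N_STATES, (5000,))
loader = DataLoader(TensorDataset(feats, labels), batch_size=256, shuffle=True)

# Continue training with a small learning rate so the clean-speech weights are
# only gently shifted toward the sensor's channel and noise conditions.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
model.train()
for epoch in range(5):
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```

The small learning rate and few epochs reflect the usual concern when adapting to a limited target set: adjusting the model enough to match the sensor recordings without overwriting what was learned from the larger, noise-free corpus.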
Cited by: 7 articles.