Abstract
In this paper, we propose a method to generate audio output from spectroscopy data in order to discriminate between two classes of data based on the features of a spectral dataset. We first perform spectral pre-processing and feature extraction, followed by machine learning for dimensionality reduction. The extracted features are then mapped to the parameters of a sound synthesiser to generate audio samples, from which statistical results are computed and important descriptors for the classification of the dataset are identified. To optimise the process, we compare Amplitude Modulation (AM) and Frequency Modulation (FM) synthesis on two real-life datasets to evaluate the performance of sonification as a method for discriminating data. FM synthesis provides higher subjective classification accuracy than AM synthesis. We then compare the dimensionality reduction methods of Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA) to further optimise our sonification algorithm. Using FM synthesis as the sound synthesiser and PCA as the dimensionality reduction method yields mean classification accuracies of 93.81% and 88.57% for the coffee dataset and the fruit puree dataset respectively. These results indicate that this spectroscopic analysis model provides relevant information on the spectral data and, most importantly, can discriminate accurately between the two spectra, thus offering a complementary tool to supplement current methods.
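The pipeline the abstract describes (dimensionality reduction of spectra, then mapping the reduced features to synthesiser parameters) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the parameter ranges, the choice of three PCA components, and the `fm_tone`/`sonify` helpers are all assumptions made for demonstration.

```python
# Illustrative sketch: spectra -> PCA -> FM synthesis parameters -> audio.
import numpy as np
from sklearn.decomposition import PCA

def fm_tone(carrier_hz, mod_hz, mod_index, dur=1.0, sr=8000, amp=0.5):
    """FM synthesis: y(t) = A * sin(2*pi*fc*t + I * sin(2*pi*fm*t))."""
    t = np.linspace(0.0, dur, int(sr * dur), endpoint=False)
    return amp * np.sin(2 * np.pi * carrier_hz * t
                        + mod_index * np.sin(2 * np.pi * mod_hz * t))

def sonify(spectra, n_components=3):
    """Map the first PCA components of each spectrum to FM parameters."""
    pca = PCA(n_components=n_components)
    scores = pca.fit_transform(spectra)  # shape (n_samples, n_components)
    # Rescale each component to an audible parameter range; the specific
    # ranges below are illustrative choices, not taken from the paper.
    lo, hi = scores.min(axis=0), scores.max(axis=0)
    norm = (scores - lo) / np.where(hi - lo == 0, 1.0, hi - lo)
    tones = []
    for s in norm:
        carrier = 220 + 660 * s[0]   # carrier frequency, 220-880 Hz
        mod = 20 + 180 * s[1]        # modulator frequency, 20-200 Hz
        index = 1 + 9 * s[2]         # modulation index, 1-10
        tones.append(fm_tone(carrier, mod, index))
    return np.array(tones)

# Toy data standing in for two spectral classes (e.g. Arabica vs Robusta).
rng = np.random.default_rng(0)
base = np.sin(np.linspace(0, 4 * np.pi, 256))
class_a = base + 0.05 * rng.standard_normal((10, 256))
class_b = 1.5 * base + 0.05 * rng.standard_normal((10, 256))
audio = sonify(np.vstack([class_a, class_b]))
print(audio.shape)  # one 1-second tone per input spectrum
```

Each row of `audio` is one audible rendering of one spectrum; in the paper's setup, listeners then classify the samples by ear, and descriptors that drive the audible differences are identified statistically.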
Funder
Birmingham City University
Publisher
Springer Science and Business Media LLC
Subject
Computer Vision and Pattern Recognition, Linguistics and Language, Human-Computer Interaction, Language and Linguistics, Software
References
40 articles.
1. Ahmad, A., Adie, S. G., Wang, M., & Boppart, S. A. (2010). Sonification of optical coherence tomography data and images. Optics Express, 18(10), 9934–9944. https://doi.org/10.1364/OE.18.009934.
2. Beghi, R., Giovanelli, G., Malegori, C., Giovenzana, V., & Guidetti, R. (2014). Testing of a VIS-NIR system for the monitoring of long-term apple storage. Food and Bioprocess Technology, 7(7), 2134–2143. https://doi.org/10.1007/s11947-014-1294-x.
3. Belhumeur, P. N., Hespanha, J. P., & Kriegman, D. J. (1997). Eigenfaces vs. fisherfaces: Recognition using class specific linear projection. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(7), 711–720. https://doi.org/10.1109/34.598228.
4. Briandet, R., Kemsley, E. K., & Wilson, R. H. (1996). Discrimination of Arabica and Robusta in instant coffee by Fourier transform infrared spectroscopy and chemometrics. Journal of Agricultural and Food Chemistry, 44(1), 170–174. https://doi.org/10.1021/jf950305a.
5. Cassidy, R. J., Berger, J., Lee, K., Maggioni, M., & Coifman, R. R. (2004). Auditory display of hyperspectral colon tissue images using vocal synthesis models. In Proceedings of ICAD 04-10th meeting of the international conference on auditory display, Georgia Institute of Technology.
Cited by
2 articles.
1. SPECTROSCOPY DATA CALIBRATION USING STACKED ENSEMBLE MACHINE LEARNING;IIUM Engineering Journal;2024-01-01
2. Advanced Optical Technologies in Food Quality and Waste Management;Innovation in the Food Sector Through the Valorization of Food and Agro-Food By-Products;2021-07-14