Semantic ontologies for multimedia indexing (SOMI)
Authors:
Issam Bendib, Mohamed Ridda Laouar, Richard Hacken, Mathew Miles
Abstract
Purpose
– The overwhelming speed and scale of digital media production greatly outpace conventional human indexing methods. Managing Big Data speech resources in e-libraries requires an automated metadata solution. The paper aims to discuss these issues.
Design/methodology/approach
– A conceptual model called semantic ontologies for multimedia indexing (SOMI) allows for the assembly of speech objects, the encapsulation of semantic associations between phonic units and the definition of indexing techniques designed to invoke and maximize the semantic ontologies for indexing. A literature review and architectural overview are followed by evaluation techniques and a conclusion.
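To illustrate the kind of structures such a model implies, a minimal Python sketch is given below; the class and field names are assumptions chosen for illustration, not the authors' implementation of SOMI.

```python
# Illustrative sketch only: names and fields are assumptions, not the SOMI implementation.
from dataclasses import dataclass, field


@dataclass
class PhonicUnit:
    """A recognized speech fragment with its time span and ASR confidence."""
    text: str
    start_s: float
    end_s: float
    confidence: float


@dataclass
class SpeechObject:
    """A speech resource (e.g. a recorded talk) assembled from phonic units."""
    resource_id: str
    units: list[PhonicUnit] = field(default_factory=list)


@dataclass
class SemanticIndexEntry:
    """Links an ontology concept to the places it is (probably) spoken."""
    concept_uri: str  # concept in the domain ontology
    occurrences: list[tuple[str, float, float, float]] = field(default_factory=list)
    # each occurrence: (resource_id, start_s, end_s, score)
```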
Findings
– This approach is only possible because of recent innovations in automated speech recognition. The introduction of semantic keyword spotting allows for indexing models that disambiguate and prioritize meaning using probability algorithms within a word confusion network. AI error-training procedures are then used to optimize each index item.
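A minimal sketch of keyword spotting over a word confusion network follows, assuming the network is given as a sequence of slots mapping candidate words to posterior probabilities; it illustrates the general technique rather than the SOMI algorithm itself, and the threshold value is an arbitrary example.

```python
# Keyword spotting over a word confusion network (WCN), represented here as a
# list of slots, each a dict mapping candidate words to posterior probabilities.
def keyword_posterior(wcn, keyword):
    """Return the highest posterior with which `keyword` appears in any slot."""
    best = 0.0
    for slot in wcn:
        best = max(best, slot.get(keyword, 0.0))
    return best


def spot_keywords(wcn, keywords, threshold=0.3):
    """Keep only keywords whose best posterior clears the threshold."""
    scores = {kw: keyword_posterior(wcn, kw) for kw in keywords}
    return {kw: p for kw, p in scores.items() if p >= threshold}


# Toy example: two competing hypotheses in the second slot.
wcn = [{"semantic": 0.9}, {"ontology": 0.6, "oncology": 0.4}, {"indexing": 0.8}]
print(spot_keywords(wcn, ["ontology", "oncology", "indexing"]))
# {'ontology': 0.6, 'oncology': 0.4, 'indexing': 0.8}
```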
Research limitations/implications
– Validation and implementation of this approach within the field of digital libraries still remain under development, but rapid developments in technology and research show rich conceptual promise for automated speech indexing.
Practical implications
– The SOMI process has been preliminarily tested, showing that hybrid semantic-ontological approaches produce better accuracy than semantic automation alone.
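One way such a hybrid could be realized is sketched below: an ASR keyword posterior is blended with a semantic proximity weight derived from the ontology. The blending weight and the distance-to-weight mapping are assumptions made for illustration, not figures or formulas from the paper.

```python
# Hedged sketch of hybrid scoring: recognition confidence blended with semantic proximity.
def hybrid_score(asr_posterior, ontology_distance, alpha=0.6):
    """Blend ASR confidence with ontology proximity (distance 0 = same concept)."""
    semantic_weight = 1.0 / (1.0 + ontology_distance)  # decays with ontology distance
    return alpha * asr_posterior + (1.0 - alpha) * semantic_weight


# A modestly recognized term that matches the query concept can outrank a
# confidently recognized but semantically distant term.
print(hybrid_score(asr_posterior=0.50, ontology_distance=0))  # 0.50*0.6 + 1.00*0.4 = 0.70
print(hybrid_score(asr_posterior=0.85, ontology_distance=5))  # 0.85*0.6 + (1/6)*0.4 ≈ 0.58
```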
Social implications
– Even as testing proceeds on recorded conference talks at the University of Tebessa (Algeria), other digital archives can look toward similar indexing. This will mean greater access to sound file metadata.
Originality/value
– Huge masses of spoken data, unmanageable for a human indexer, can prospectively find semantically sorted and prioritized indexing – not transcription, but generated metadata – automatically, quickly and accurately.
Subject
Library and Information Sciences, Information Systems