Abstract
In this paper, we present a new system for classifying TV programs into predefined categories based on analysis of their audio and video content. Such a system is useful in intelligent display and storage devices that select channels and record or skip content according to the consumer's preferences. Distinguishable patterns exist across different categories of TV programs in terms of human faces and audio. Four categories are of interest: news, cartoon, variety, and sport. News and variety programs exhibit smaller frame-to-frame differences than sport and cartoon programs. For audio features, we apply short-time energy, zero-crossing rate, spectral centroid, and the short-time Fourier transform. For face features, Haar-like features are first employed for face detection, and eigenfaces are then applied for feature extraction. A neural network is then used for classification. Experimental results show a classification accuracy of 95%, which improves on previously reported results.
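The audio features named above can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation: the frame length, hop size, Hann window, and the synthetic 440 Hz test tone are illustrative assumptions.

```python
import numpy as np

def frame_signal(x, frame_len=1024, hop=512):
    """Split a 1-D signal into overlapping frames (sizes are assumed, not from the paper)."""
    n = 1 + max(0, (len(x) - frame_len) // hop)
    return np.stack([x[i * hop : i * hop + frame_len] for i in range(n)])

def short_time_energy(frames):
    """Mean squared amplitude per frame."""
    return np.mean(frames ** 2, axis=1)

def zero_crossing_rate(frames):
    """Fraction of adjacent-sample pairs whose sign changes, per frame."""
    return np.mean(np.abs(np.diff(np.sign(frames), axis=1)) > 0, axis=1)

def spectral_centroid(frames, sr):
    """Magnitude-weighted mean frequency of each frame's short-time spectrum."""
    win = np.hanning(frames.shape[1])          # window to limit spectral leakage
    mag = np.abs(np.fft.rfft(frames * win, axis=1))
    freqs = np.fft.rfftfreq(frames.shape[1], 1.0 / sr)
    return (mag @ freqs) / (np.sum(mag, axis=1) + 1e-10)

# Example: one second of a 440 Hz tone at 16 kHz stands in for a TV audio track.
sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 440 * t)

frames = frame_signal(x)
features = np.stack([short_time_energy(frames),
                     zero_crossing_rate(frames),
                     spectral_centroid(frames, sr)], axis=1)
```

Each row of `features` is one frame's (energy, ZCR, centroid) vector; vectors like these, concatenated with the face features, would feed the neural-network classifier.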
Publisher
Trans Tech Publications, Ltd.