Speech and Vision Lab

Combining evidence from subsegmental and segmental features for audio clip classification
Research Area: Uncategorized Year: 2008
Type of Publication: In Proceedings Keywords: audio signal processing, hidden Markov models, signal classification, audio clip classification, audio components, audio-specific excitation source, autoassociative neural networks, linear prediction residual, multilayer perceptron, sp
Authors: Anvita Bajpai, B. Yegnanarayana  
In this paper, we demonstrate the complementary nature of audio-specific excitation source (subsegmental) information present in the linear prediction (LP) residual to the information derived using spectral (segmental) features, for audio clip classification. The classes considered for study are advertisement, cricket, cartoon, football and news, and the data is collected from TV broadcast with large intra-class variability. A baseline system based on segmental features and hidden Markov models (HMM) gives a classification accuracy of 62.08%. Another baseline system, based on subsegmental features present in the LP residual, built using autoassociative neural networks (AANN) to model audio components and a multilayer perceptron (MLP) to classify audio, gives a classification accuracy of 52.72%. The two systems combined at the abstract level give a classification accuracy of 86.96%, indicating their complementary nature. Rank- and measurement-level combination of the two systems further enhances the classification accuracy to 92.97%.
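The measurement-level combination mentioned above can be illustrated with a minimal sketch of score fusion between two classifiers. The class names match the study; the scores, the equal weighting, and the function names are illustrative assumptions, not taken from the paper:

```python
# Hedged sketch of measurement-level fusion of two classifiers.
# Scores and the 50/50 weighting are illustrative, not from the paper.

CLASSES = ["advertisement", "cricket", "cartoon", "football", "news"]

def fuse_measurement(scores_a, scores_b, w=0.5):
    """Weighted sum of sum-normalized scores from two systems."""
    def normalize(s):
        total = sum(s)
        return [x / total for x in s]
    a, b = normalize(scores_a), normalize(scores_b)
    return [w * x + (1 - w) * y for x, y in zip(a, b)]

def classify(fused):
    """Pick the class with the highest fused score."""
    return CLASSES[max(range(len(fused)), key=fused.__getitem__)]

# Hypothetical example: the segmental (HMM) system leans toward "news",
# the subsegmental (AANN/MLP) system toward "cricket"; the fused
# evidence resolves the disagreement.
hmm_scores = [0.1, 0.3, 0.1, 0.1, 0.4]
aann_scores = [0.1, 0.5, 0.2, 0.1, 0.1]
print(classify(fuse_measurement(hmm_scores, aann_scores)))  # → cricket
```

Rank-level fusion would instead combine the class orderings produced by each system (e.g., summing ranks), which is less sensitive to score calibration differences between the two models.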