Speech and Vision Lab

Exploiting contextual information for improved phoneme recognition
Research Area: Uncategorized Year: 2008
Type of Publication: In Proceedings Keywords: hidden Markov models, multilayer perceptrons, speech processing, speech recognition, artificial neural networks, contextual information, hierarchical estimation, phoneme posterior probabilities, phoneme recognition
Authors: J. Pinto, B. Yegnanarayana, H. Hermansky, M. Magimai-Doss
In this paper, we investigate the significance of contextual information in a phoneme recognition system based on the hidden Markov model - artificial neural network (HMM-ANN) paradigm. Contextual information is probed both at the feature level and at the output of the multilayered perceptron. At the feature level, we analyze and compare different methods of modeling sub-phonemic classes. To exploit the contextual information at the output of the multilayered perceptron, we propose a hierarchical estimation of phoneme posterior probabilities. The best phoneme recognition accuracy (excluding silence) of 73.4% on the TIMIT database is comparable to that of state-of-the-art systems, though our emphasis is on the analysis of contextual information.
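The hierarchical estimation described in the abstract can be sketched as a two-stage pipeline: a first MLP maps acoustic features to per-frame phoneme posteriors, and a second MLP takes a temporal context window of those posteriors and re-estimates the posterior for the centre frame. The sketch below is a minimal illustration with untrained random weights and hypothetical dimensions (39 features, 40 phoneme classes, a +/-4 frame context); it shows only the data flow, not the paper's actual architecture or training.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def mlp(x, w1, b1, w2, b2):
    # One hidden layer with tanh, softmax output (posterior probabilities).
    h = np.tanh(x @ w1 + b1)
    return softmax(h @ w2 + b2)

# Hypothetical sizes: 100 frames, 39-dim features, 40 phoneme classes.
n_frames, n_feat, n_phones, ctx, hidden = 100, 39, 40, 4, 64
feats = rng.normal(size=(n_frames, n_feat))  # stand-in acoustic features

# Stage 1: acoustic features -> frame-level phoneme posteriors.
w1a = rng.normal(size=(n_feat, hidden)) * 0.1; b1a = np.zeros(hidden)
w2a = rng.normal(size=(hidden, n_phones)) * 0.1; b2a = np.zeros(n_phones)
posteriors = mlp(feats, w1a, b1a, w2a, b2a)          # (n_frames, n_phones)

# Stage 2: stack a window of 2*ctx+1 posterior frames around each frame
# and feed it to a second MLP that re-estimates the centre-frame posterior.
win = 2 * ctx + 1
padded = np.pad(posteriors, ((ctx, ctx), (0, 0)), mode="edge")
stacked = np.stack([padded[t:t + win].ravel() for t in range(n_frames)])
w1b = rng.normal(size=(win * n_phones, hidden)) * 0.1; b1b = np.zeros(hidden)
w2b = rng.normal(size=(hidden, n_phones)) * 0.1; b2b = np.zeros(n_phones)
refined = mlp(stacked, w1b, b1b, w2b, b2b)           # (n_frames, n_phones)
```

In a real system both MLPs would be trained on phoneme-labelled frames, and the refined posteriors would be scaled by priors and used as HMM emission likelihoods.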
Digital version