Speech and Vision Lab

Use of Articulatory Bottle-Neck Features for Query-by-Example Spoken Term Detection in Low Resource Scenarios
Research Area: Uncategorized
Year: 2014
Type of Publication: In Proceedings
Keywords: Query-by-example spoken term detection, multi-layer perceptron, articulatory features, bottle-neck features, low resource
Authors: Gautam Varma Mantena, Kishore S. Prahallad
Abstract:
For query-by-example spoken term detection (QbE-STD), the generation of phone posteriorgrams requires labelled data, which is difficult to obtain for low-resource languages. One solution is to build models from resource-rich languages and use them in the low-resource scenario. However, phone classes are not universal across languages, so alternative representations such as articulatory classes are explored. In this paper, we use articulatory information and its derivatives such as bottle-neck (BN) features (also referred to as articulatory BN features) for QbE-STD. We obtain Gaussian posteriorgrams of articulatory BN features in tandem with acoustic parameters such as frequency domain linear prediction cepstral coefficients to perform the search. We compare the search performance of articulatory and phone BN features and show that articulatory BN features are a better representation. We also provide experimental results showing that small amounts (30 min) of training data can be used to derive articulatory BN features.
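
The pipeline described in the abstract (frame-level features, an unsupervised Gaussian posteriorgram representation, and a template-matching search) can be illustrated with the minimal sketch below. It is an assumption-laden outline, not the paper's implementation: the articulatory bottle-neck MLP and the frequency domain linear prediction front end are assumed to exist elsewhere, the GMM size and distance measure are illustrative, and a plain subsequence DTW stands in for whatever search variant the paper uses.

```python
# Sketch: Gaussian posteriorgrams over (articulatory BN + acoustic) frame features
# and a simple subsequence-DTW search for QbE-STD. Feature extraction is assumed
# to be done elsewhere; names and dimensions here are illustrative only.
import numpy as np
from sklearn.mixture import GaussianMixture


def train_posteriorgram_gmm(frame_features, n_components=128, seed=0):
    """Fit an unsupervised GMM on stacked frame features of shape (T, D)."""
    gmm = GaussianMixture(n_components=n_components,
                          covariance_type="diag", random_state=seed)
    gmm.fit(frame_features)
    return gmm


def gaussian_posteriorgram(gmm, frame_features):
    """Per-frame posterior over GMM components -> array of shape (T, n_components)."""
    return gmm.predict_proba(frame_features)


def frame_distance(q, r):
    """Distance between two posterior vectors: negative log of their inner product."""
    return -np.log(np.maximum(q @ r, 1e-10))


def dtw_search(query_pg, utterance_pg):
    """Subsequence DTW: best length-normalised cost of aligning the query
    posteriorgram anywhere inside the utterance posteriorgram (lower = better)."""
    Q, U = len(query_pg), len(utterance_pg)
    D = np.full((Q + 1, U + 1), np.inf)
    D[0, :] = 0.0  # the query may start at any utterance frame
    for i in range(1, Q + 1):
        for j in range(1, U + 1):
            cost = frame_distance(query_pg[i - 1], utterance_pg[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[Q, 1:].min() / Q
```

In use, every query and test utterance would be mapped to a posteriorgram with the same GMM, each utterance scored with `dtw_search`, and utterances ranked by ascending score to produce the detection list.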
   
