Research Area: Uncategorized
Year: 2002
Type of Publication: Article
Keywords: Autoassociative neural network models; Training error surface; Annealing gain parameter; Speaker verification
Authors: B. Yegnanarayana, Kishore S. Prahallad
Note: http://www.sciencedirect.com/science/article/B6T08-459952R-2/2/a53c123eaecb7ccb7b50baec88885192
Abstract:
The objective in any pattern recognition problem is to capture the characteristics common to each class from the feature vectors of the training data. While Gaussian mixture models appear general enough to characterize the distribution of the given data, the model is constrained by the assumptions that the shape of each component of the distribution is Gaussian and that the number of mixtures is fixed a priori. In this context, we investigate the potential of non-linear models such as autoassociative neural network (AANN) models, which perform identity mapping of the input space. We show that the training error surface realized by the neural network model in the feature space is useful for studying the characteristics of the distribution of the input data. We also propose a method of obtaining an error surface that matches the distribution of the given data. The distribution-capturing ability of AANN models is illustrated in the context of speaker verification.
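The identity-mapping idea in the abstract can be sketched as a small bottleneck network trained to reconstruct its own input: the reconstruction error over the feature space then plays the role of the training error surface that reflects the data distribution. The sketch below is an illustrative assumption, not the paper's configuration; the dimensions, tanh activation, learning rate, and synthetic data are all chosen only for the demonstration.

```python
import numpy as np

# Minimal AANN sketch (illustrative, not the authors' architecture):
# a linear-tanh-linear network with a bottleneck, trained by gradient
# descent to reproduce its input. Reconstruction error drops for data
# drawn from the training distribution.
rng = np.random.default_rng(0)

# Synthetic "feature vectors": 5-D points lying near a 2-D subspace.
latent = rng.normal(size=(200, 2))
basis = rng.normal(size=(2, 5))
X = latent @ basis + 0.01 * rng.normal(size=(200, 5))

d_in, d_hid = 5, 2                     # bottleneck forces compression
W1 = 0.1 * rng.normal(size=(d_in, d_hid))
b1 = np.zeros(d_hid)
W2 = 0.1 * rng.normal(size=(d_hid, d_in))
b2 = np.zeros(d_in)

def forward(X):
    H = np.tanh(X @ W1 + b1)           # compression layer
    Y = H @ W2 + b2                    # reconstruction layer
    return H, Y

def mse(X):
    _, Y = forward(X)
    return float(np.mean((Y - X) ** 2))

lr = 0.05
err_before = mse(X)
for _ in range(500):
    H, Y = forward(X)
    E = Y - X                          # reconstruction error
    gW2 = H.T @ E / len(X)
    gb2 = E.mean(axis=0)
    dH = (E @ W2.T) * (1 - H ** 2)     # backprop through tanh
    gW1 = X.T @ dH / len(X)
    gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2
err_after = mse(X)
```

After training, vectors from the training distribution reconstruct with low error, while vectors far from it typically do not; a threshold on this error is one way such a model can be used as a verification score.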