Combined Parameter Training and Reduction in Tied-Mixture HMM Design

Signal Compression Laboratory Research Project

 

Researcher: Liang Gu
Faculty: Prof. Kenneth Rose
Research Focus: Model accuracy and robustness have long been central goals in speech recognition. While the accuracy of the HMM and the tied-mixture HMM (TMHMM) can be enhanced by increasing the number of free parameters, robustness is reduced for a fixed training set. In this presentation, we introduce a new TMHMM design method, the Combined Parameter Training and Reduction (CTR) algorithm. Parameter reduction is applied to the TMHMM both as a training method and as a parameter-sharing method, together with several new algorithms such as minimum-entropy-based Gaussian pdf reduction and dynamic weight distribution. Experiments show that the CTR algorithm reduces the recognition error rate by 30% and 50% relative to the TMHMM and the CHMM, respectively. To avoid the distance threshold required in state tying, we have further developed the Shared-Mixture HMM, which achieves an 8% improvement over the CTR-TMHMM in our experiments.
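The minimum-entropy-based Gaussian pdf reduction named above is not detailed on this page; the sketch below illustrates one plausible reading of such a reduction step, in which a shared Gaussian codebook is shrunk by repeatedly merging the pair of diagonal-covariance Gaussians whose moment-matched merge causes the smallest weighted entropy increase. The function names, the diagonal-covariance assumption, and the greedy pairwise criterion are illustrative assumptions, not the published algorithm.

```python
# Hypothetical sketch of a minimum-entropy-style Gaussian pdf reduction step
# for a tied-mixture codebook. Assumes diagonal-covariance Gaussians and a
# greedy pairwise merge criterion; names and criterion are illustrative only.
import numpy as np

def gaussian_entropy(var):
    """Differential entropy (nats) of a diagonal-covariance Gaussian."""
    return 0.5 * np.sum(np.log(2.0 * np.pi * np.e * var))

def merge(w1, m1, v1, w2, m2, v2):
    """Moment-matched merge of two weighted diagonal Gaussians."""
    w = w1 + w2
    m = (w1 * m1 + w2 * m2) / w
    v = (w1 * (v1 + m1**2) + w2 * (v2 + m2**2)) / w - m**2
    return w, m, v

def reduce_codebook(weights, means, variances, target_size):
    """Greedily merge Gaussians until the codebook holds target_size pdfs."""
    comps = list(zip(weights, means, variances))
    while len(comps) > target_size:
        best = None
        for i in range(len(comps)):
            for j in range(i + 1, len(comps)):
                wi, mi, vi = comps[i]
                wj, mj, vj = comps[j]
                w, m, v = merge(wi, mi, vi, wj, mj, vj)
                # Entropy increase incurred by replacing the pair with its merge
                d = (w * gaussian_entropy(v)
                     - wi * gaussian_entropy(vi)
                     - wj * gaussian_entropy(vj))
                if best is None or d < best[0]:
                    best = (d, i, j, (w, m, v))
        _, i, j, merged = best
        comps = [c for k, c in enumerate(comps) if k not in (i, j)] + [merged]
    return comps
```

For instance, reduce_codebook(w, mu, var, 256) would shrink a 512-Gaussian codebook to 256 shared pdfs. The per-merge search here is quadratic in the codebook size, so a practical implementation would cache the pairwise scores between iterations.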
Presentation:

Reduced Tied-Mixture HMM and Shared-Mixture HMM