Word Recognition using GMMs
ECE5526 FINAL PROJECT
SPRING 2011
JIM BRYAN
Abstract
Provide an in-depth look at how GMMs can be used for word recognition
based on MATLAB's Statistics Toolbox.
The isolated digit recognizer is based on a voice activity detector using
energy thresholding and zero-crossing detection. Moreover, the recognizer
uses MFCCs as the basis for acoustic speech representation. These are
standard voice-processing techniques with which the reader is assumed to
be familiar. The focus of this presentation is on the details of the GMM
implementation in MATLAB, with the idea that a good understanding of the
MATLAB approach will yield insight into other system implementations such
as Sphinx and HTK.
Word recognition comprises two components: model training and model
testing.
The Statistics Toolbox function gmdistribution.fit is used for training
The Statistics Toolbox function posterior is used for testing
The purpose of this effort is to train and run the recognizer, and to
understand the basic functionality of the gmdistribution.fit and
posterior function calls.
Introduction
Based on
MATLAB Digest - January 2010
Developing an Isolated Word Recognition System in MATLAB
By Daryl Ning
Describe the MATLAB GUI-based recognizer application
Provide introductory material on GMMs using a simple 2-mixture
example with 2 models
Discuss in detail the algorithms used to determine the best model
match
Show examples of MATLAB's Statistics Toolbox representation of GMMs
Run the simulation
Discuss simulation results and show possible improvements
Summary
Conclusions
Areas for further study
Isolated digit recognizer overview
Uses an 8-mixture GMM per digit to train and recognize an individual
user's voice
The MATLAB GUI-based digit recognizer uses the following
toolboxes:
Signal Processing Toolbox provides filtering and signal-processing
functions
Statistics Toolbox is used to implement the GMM Expectation-Maximization
algorithm to build the GMMs and to compute the
Mahalanobis distance during recognition
Data Acquisition Toolbox is used to stream the microphone
input to MATLAB for continuous recognition
Single-digit recognizer implemented using a dictionary of
digits 0-9
Training is done with 30-second captures of repeated
utterances of the given digit, read using the wavread function in
MATLAB
Overview Continued
Uses the laptop's internal microphone
Sample rate is 8 ksps
Uses 20 msec frames with a 10 msec overlap; frame size is
160 samples per frame
Uses a simple voice activity detector based on an energy
threshold and zero crossings per second, for both
training and recognition
The voice-activity energy and zero-crossing thresholds
are programmable and must be the same for training
and recognition
No model for silence or missed digits, so the
recognizer always displays the closest digit
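As a rough illustration of the energy/zero-crossing idea, a per-frame detector might look like the Python sketch below. The exact decision logic is not given in the deck, so the combined rule (both features must exceed their programmable thresholds) is an assumption, not the author's code.

```python
import math

def frame_features(frame):
    """Energy and zero-crossing count for one frame of samples."""
    energy = sum(s * s for s in frame)
    zero_crossings = sum(
        1 for a, b in zip(frame, frame[1:]) if (a < 0) != (b < 0)
    )
    return energy, zero_crossings

def is_speech(frame, energy_thresh, zc_thresh):
    # Assumed decision rule: flag a frame as speech when both the
    # energy and the zero-crossing count exceed their thresholds.
    energy, zc = frame_features(frame)
    return energy > energy_thresh and zc > zc_thresh

# A 160-sample frame of a 1 kHz tone at Fs = 8 kHz vs. a near-silent frame.
tone = [math.sin(2 * math.pi * 1000 * n / 8000) for n in range(160)]
silence = [0.0005] * 160
print(is_speech(tone, 1.0, 10), is_speech(silence, 1.0, 10))  # True False
```

Because the same thresholds gate both training and recognition, mismatched settings would train the models on differently segmented speech than they are tested on.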
GMM training and recognizer Matlab
function calls
The recognizer computes the posterior probabilities using
the Statistics Toolbox function posterior
posterior accepts a GMM object/model as its input, along
with an input data set, and returns a negative log-likelihood
that measures how well the data set matches the model
The smallest negative log-likelihood corresponds to the highest
posterior probability
The recognizer scores the current word against each model
in the dictionary. The model with the smallest negative
log-likelihood is the recognized digit.
A gmm object is created during training for each
dictionary entry, in this case digits 0-9, using the function
call gmdistribution.fit.
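To make the decision rule concrete, here is a toy Python sketch (not the deck's MATLAB code) with hypothetical 1-D two-mixture models: score the frames against every dictionary model and pick the smallest negative log-likelihood. In the real recognizer each model is a 39-dimensional, 8-mixture GMM fit by gmdistribution.fit.

```python
import math

# Hypothetical 1-D stand-ins for two dictionary models:
# (weights, means, variances) of a 2-mixture GMM each.
models = {
    "one": ([0.5, 0.5], [0.0, 1.0], [0.25, 0.25]),
    "two": ([0.5, 0.5], [4.0, 5.0], [0.25, 0.25]),
}

def gmm_logpdf(x, weights, means, variances):
    """log p(x) for a 1-D Gaussian mixture, via a stable log-sum-exp."""
    terms = [
        math.log(w) - 0.5 * (math.log(2 * math.pi * v) + (x - m) ** 2 / v)
        for w, m, v in zip(weights, means, variances)
    ]
    top = max(terms)
    return top + math.log(sum(math.exp(t - top) for t in terms))

def recognize(frames, models):
    """Smallest negative log-likelihood over all frames wins."""
    nlogl = {
        name: -sum(gmm_logpdf(x, *params) for x in frames)
        for name, params in models.items()
    }
    return min(nlogl, key=nlogl.get)

print(recognize([0.1, 0.9, 0.4], models))  # frames near 0-1 -> one
```

Because every dictionary entry must be scored for every word, this search is linear in the dictionary size, which is one reason the deck later reports the real-time recognizer as slow.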
Example using 2 GMMs with 2 mixtures
[Figure: pdf(obj2,[x y]) evaluated over the x-y plane, showing two
2-mixture models, gmm1 and gmm2; both axes span roughly -8 to 6]
Posterior
posterior extracts the gmdistribution object parameters
necessary to call wdensity
wdensity performs the actual log-likelihood
calculation for the GMM, given the data set
wdensity returns two arrays:
log_lh is an array of size length(data) x order(GMM)
mahalaD is an array of size length(data) x order(GMM);
this is the squared form, not the actual Mahalanobis distance:
mahalaD = (x − μ) Σ⁻¹ (x − μ)ᵀ
Estep calculates the log-likelihood based on the
log_lh array and returns ll, the log-likelihood
of data x given the model
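The Estep reduction from log_lh to ll can be sketched as a numerically stable log-sum-exp over mixtures, summed over frames. This is a Python illustration of the computation, not the Toolbox source:

```python
import math

def estep_ll(log_lh):
    """Total log-likelihood ll of the data given the model:
    per-frame log-sum-exp over mixtures, summed over frames."""
    ll = 0.0
    for row in log_lh:
        top = max(row)  # subtract the row max for numerical stability
        ll += top + math.log(sum(math.exp(v - top) for v in row))
    return ll

# One frame, two mixtures each contributing probability 0.5:
# log(0.5 + 0.5) = 0.
print(estep_ll([[math.log(0.5), math.log(0.5)]]))  # 0.0
```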
Wdensity function description
Example function call:
[log_lh, mahalaD] = wdensity(X, mu, Sigma, p, sharedCov,
CovType)
where X is the input data
mu is an array of means, with mu(j,:) corresponding to the jth
mean vector
Sigma is an array of arrays, with Sigma(:,:,j) corresponding to
the jth covariance in the model
p contains the mixture weights
sharedCov indicates whether the covariance matrix is
common to all mixtures
CovType may be either diagonal or full
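Under a diagonal-covariance assumption, a wdensity-style computation of log_lh and mahalaD can be sketched as follows. This is a Python illustration with hypothetical argument names, not the Toolbox implementation:

```python
import math

def wdensity_sketch(X, mu, sigma2, p):
    """Per-frame, per-mixture weighted log density and squared
    Mahalanobis term for a diagonal-covariance GMM.
    X: frames (lists of features); mu, sigma2: per-mixture means and
    diagonal variances; p: mixture weights.
    Returns log_lh and mahalaD, each of size len(X) x len(p)."""
    log_lh, mahalaD = [], []
    for x in X:
        ll_row, md_row = [], []
        for m, s2, w in zip(mu, sigma2, p):
            # mahalaD entry: (x - mu) Sigma^-1 (x - mu)'
            md = sum((xi - mi) ** 2 / si for xi, mi, si in zip(x, m, s2))
            logdet = sum(math.log(si) for si in s2)
            d = len(x)
            # log of the weighted Gaussian density for this mixture
            ll_row.append(
                math.log(w) - 0.5 * (d * math.log(2 * math.pi) + logdet + md)
            )
            md_row.append(md)
        log_lh.append(ll_row)
        mahalaD.append(md_row)
    return log_lh, mahalaD

# One 2-D frame exactly at the mean of a single unit-variance mixture:
log_lh, mahalaD = wdensity_sketch([[0.0, 0.0]], [[0.0, 0.0]],
                                  [[1.0, 1.0]], [1.0])
print(mahalaD[0][0])           # 0.0
print(round(log_lh[0][0], 4))  # -log(2*pi), about -1.8379
```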
Nlogl = -ll
Example Nlogl values for the two-model example (rows = data sets,
columns = models):
55.3416 109.3820
184.7868 42.8043
Voice detect using default thresholds (1, 1), digit = one
[Figures: input waveform and voice-activity detector output around
the utterance; x-axes in samples (x 10^4)]
Training transcript reads each recording and
calls trainmodels
y = wavread('one.wav');
trainmodels(y,'one');
y = wavread('two.wav');
trainmodels(y,'two');
y = wavread('three.wav');
trainmodels(y,'three');
y = wavread('four.wav');
trainmodels(y,'four');
y = wavread('five.wav');
trainmodels(y,'five');
y = wavread('six.wav');
trainmodels(y,'six');
y = wavread('seven.wav');
trainmodels(y,'seven');
y = wavread('eight.wav');
trainmodels(y,'eight');
y = wavread('nine.wav');
trainmodels(y,'nine');
y = wavread('zero.wav');
trainmodels(y,'zero');
GMM dimensions for typical
utterance
Assume average digit length is 300 msec
Fs = 8000 Hz
1/Fs = 125 μsec
160 samples / Fs = 20 msec
Since overlap-and-add uses a 50% Hamming window, one
frame occurs every 10 msec
Average number of frames per word: 300/10 = 30
MFCC takes in 30x160 samples and produces 30x39
MFCC vectors on average
Average size of the log_lh array per word for 8 Gaussian
mixtures = 30x8
Log-likelihood is based on this average 30x8 matrix
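The sizing arithmetic above can be checked directly. The 39-dimensional MFCC vectors and 8-mixture figures are taken from the slides; the 300 msec word length is the slide's stated assumption:

```python
fs = 8000            # sample rate (Hz)
frame_len = 160      # samples per frame -> 160 / 8000 s = 20 msec
hop_ms = 10          # one new frame every 10 msec (50% overlap)
word_ms = 300        # assumed average digit duration

frame_ms = 1000 * frame_len // fs        # frame duration in msec
frames_per_word = word_ms // hop_ms      # frames per average word
mfcc_shape = (frames_per_word, 39)       # 39-dimensional MFCC vectors
log_lh_shape = (frames_per_word, 8)      # 8 Gaussian mixtures per model

print(frame_ms, frames_per_word, mfcc_shape, log_lh_shape)
# 20 30 (30, 39) (30, 8)
```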
Voice activity detect filter implemented as a
128-tap FIR filter based on a Chebyshev
window with 40 dB sidelobe attenuation
Voice detector uses a 125-750 Hz, 128-tap Chebyshev
bandpass filter with 40 dB sidelobe suppression,
with a 20 msec pre-oneshot and a 40 msec post-oneshot
[Figure: voice-detected waveform, digit = one; amplitude roughly
±1000, x-axis in samples (x 10^4)]
Training Vector for digit one after
modified VA detection
[Figure: training-vector waveform; amplitude on the order of ±1.5 x
10^6, x-axis in samples (x 10^5)]
Scoring
Difficult to score based on the real-time recognizer:
The recognizer fires on ambient noise
The recognizer is slow, as it must perform the GMM calculation for
every dictionary entry
A recorded test set, counting from 1-9 then 0, produced 70%
accuracy; two, seven, and eight did not classify correctly
Had to lower the zero-crossing threshold for the test to collect all
the utterances
Accuracy might be limited by insufficient training data
Could have bad models for some of the classes
Hand scoring is difficult because each utterance must be correctly
labeled for the classifier; seven had a null portion in the middle
The laptop's fan kicked on during training; this caused
ambient noise, so the training data set was not perfect
Test Set counting 1-9,0 and repeat
frame based with silence removed
[Figure: test-set waveform, frame-based with silence removed;
amplitude roughly ±0.03, x-axis in samples (x 10^4)]
Summary
An 8-mixture GMM recognizer for speech was demonstrated.
Using only a small training set and a laptop microphone, digit
recognition was demonstrated at an 8000 Hz sample rate
Care and feeding of the GMMs is very important for successful
implementation.
Garbage in, garbage out is especially true for speech recognition
Background noise is a very big problem in accurate speech
recognition. Adaptive noise cancellation using a second
microphone for just the background noise should improve accuracy
The voice activity detector is a critical component of the recognizer
Scoring is also a difficult problem as the acoustic data must be
synchronized with the dictionary to provide accurate results
Marking the speech pattern and word isolation is not without
difficulties as pauses between syllables occur during a single
utterance
Conclusion
GMMs are very powerful models for speech recognition.
Scoring the models is difficult. The EM algorithm will
produce different models based on the random seeding of
the starting conditions.
Simple utterances of ~15 repetitions per digit are not sufficient for
good GMM accuracy
The voice activity detector plays a significant part in the
training and testing of the data
A new voice activity detector did not magically produce
100 percent scoring accuracy with a recorded test wav file
Noise cancellation techniques and sophisticated voice
detection algorithms are necessary for good performance
as well as model optimization
Areas for further
investigation
Automate the scoring process
Improve the Voice activity detector in the real
time recognizer
Add a second microphone for adaptive noise
cancellation
Convert GMMs to a combination of GMMs and HMMs
so the dictionary search isn't so computationally
intensive
Modify the number of mixtures of the GMMs with
HMM phonetic implementation
HMMs will allow for continuous digit recognition