
#Hidden Markov Model (Continuous)

##Description

This class implements a Hidden Markov Model (HMM) classifier. The GRT HMM algorithm lets you select between using a Discrete HMM or a Continuous HMM. This example demonstrates how to use a Continuous HMM. If you require a Discrete HMM, you should read the HMM Discrete tutorial.

Hidden Markov Models are powerful classifiers that work well on temporal classification problems when you have a large training dataset. If you only have a few training samples, the GRT Continuous HMM can work better than the Discrete HMM; alternatively, the GRT DTW algorithm might work better.

The HMM algorithm is part of the GRT classification modules.

##Advantages

Hidden Markov Models are powerful classifiers that work well on temporal classification problems when you have a large training dataset. If you only have a few training samples, then the GRT DTW algorithm might work better.

##Disadvantages

The main limitation of the Continuous HMM algorithm is that the speed and accuracy of the classifier can vary dramatically with the HMM sigma and downsample parameters.

The sigma parameter controls how 'close' each input vector needs to be to the model to be considered a good match (i.e., sigma controls how wide the Gaussian is at each state in the model). Setting sigma too low will result in all of your model distances being zero (or NaN). Alternatively, setting sigma too high will result in every model being a good match for the input data, regardless of the actual input values, which may result in one model always being recognized regardless of the input. The best sigma will vary depending on the range of your sensor data (e.g., is your sensor data in the range [0 1], [0 1024], or [-5000 5000]?) and the amount of variance in your gestures at each state in the model (i.e., do different users perform each gesture in a different way, or are all users very consistent?).

The downsample parameter controls how much each training timeseries is downsampled (via averaging) to create each state in the model. A downsample factor of 10, for example, would result in a model with K states, where K = round( training sample length / 10 ). The Gaussian at each state is then computed by averaging the 10 samples of the training sample at that state. A higher downsample factor will greatly speed up the time required to compute the distance between each timeseries in the HMM model and the input timeseries; however, increasing the downsample factor too much will average out important information in the training timeseries. A downsample factor of 5 or 10 works well for most cases.

Selecting these parameters can involve a large amount of trial and error.
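As a rough sketch of how these two parameters are typically set in code (the setter names below follow the GRT HMM API used in the library's examples; the values themselves are placeholders you would need to tune for your own data):

```cpp
#include <GRT/GRT.h>
using namespace GRT;

HMM hmm;

//Select the Continuous HMM variant
hmm.setHMMType( HMM_CONTINUOUS );

//Average every 5 samples of a training timeseries into one model state
hmm.setDownsampleFactor( 5 );

//Set the width of the Gaussian at each state; tune this to the range of your sensor data
hmm.setSigma( 20.0 );
```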

##Training Data Format

You should use the TimeSeriesClassificationData data structure to train the HMM classifier.
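A minimal sketch of building a TimeSeriesClassificationData set, assuming the GRT API (setNumDimensions, addSample, and save; the dimensions, values, and filename here are placeholders):

```cpp
#include <GRT/GRT.h>
using namespace GRT;

TimeSeriesClassificationData trainingData;
trainingData.setNumDimensions( 3 );  //e.g., a 3-axis accelerometer

//Build one timeseries: each row is one sample in time, each column one input dimension
MatrixFloat timeseries( 100, 3 );
for(UINT i=0; i<timeseries.getNumRows(); i++){
    for(UINT j=0; j<timeseries.getNumCols(); j++){
        timeseries[i][j] = 0.0;  //Replace with your real sensor readings
    }
}

//Add the timeseries to the dataset with class label 1, then save it for training
trainingData.addSample( 1, timeseries );
trainingData.save( "HMMTrainingData.grt" );
```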

##Example Code

Continuous HMM Example
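The linked example walks through the full pipeline. As a hedged sketch of what that pipeline looks like, assuming the GRT API calls shown above (the training data filename is a placeholder):

```cpp
#include <GRT/GRT.h>
#include <iostream>
using namespace GRT;

int main(){

    //Load the training data (assumes the file was created as shown above)
    TimeSeriesClassificationData trainingData;
    if( !trainingData.load( "HMMTrainingData.grt" ) ){
        std::cout << "Failed to load training data!\n";
        return 1;
    }

    //Create a new HMM and configure it as a Continuous HMM
    HMM hmm;
    hmm.setHMMType( HMM_CONTINUOUS );
    hmm.setDownsampleFactor( 5 );
    hmm.setSigma( 20.0 );

    //Train the model
    if( !hmm.train( trainingData ) ){
        std::cout << "Failed to train the HMM model!\n";
        return 1;
    }

    //Predict the class of a timeseries (here we simply reuse the first training sample)
    MatrixFloat testTimeseries = trainingData[0].getData();
    if( hmm.predict( testTimeseries ) ){
        std::cout << "Predicted class label: " << hmm.getPredictedClassLabel() << std::endl;
    }

    return 0;
}
```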