Gaussian Mixture Models in Python
Author: Jeremy Stober
Contact: stober@gmail.com
Version: 0.01
This is a standalone Pythonic implementation of Gaussian Mixture
Models. Several initialization strategies are included, along with a
standard EM algorithm for estimating the model parameters from data.
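The EM updates themselves are handled inside the package. For readers new to
the method, the following is a minimal, self-contained 1-D sketch of EM for a
Gaussian mixture in plain NumPy; it is independent of this package's GMM and
Normal classes, and the function and variable names are illustrative only.

import numpy as np

def em_gmm_1d(data, ncomps, niters=50):
    # Illustrative EM for a 1-D Gaussian mixture (not this package's code).
    n = data.shape[0]
    # Initialize means at random data points, a shared variance, uniform weights.
    means = np.random.choice(data, ncomps, replace=False)
    variances = np.full(ncomps, data.var())
    weights = np.full(ncomps, 1.0 / ncomps)
    for _ in range(niters):
        # E-step: responsibility of each component for each data point.
        dens = np.array([w / np.sqrt(2 * np.pi * v) * np.exp(-0.5 * (data - m) ** 2 / v)
                         for w, m, v in zip(weights, means, variances)])
        resp = dens / dens.sum(axis=0)          # shape (ncomps, n)
        # M-step: closed-form updates for weights, means, and variances.
        nk = resp.sum(axis=1)
        weights = nk / n
        means = (resp * data).sum(axis=1) / nk
        variances = (resp * (data - means[:, None]) ** 2).sum(axis=1) / nk
    return weights, means, variances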
Example code for the GMM and Normal classes can be found in the
src/test_*.py files. Both the GMM class and the underlying Normal class
support conditioning on data and marginalization over any subset of the
variables, which makes this implementation well suited to experimenting
with Gaussian Mixture Regression. For example, the following code learns
the cosine function:
import numpy as np
from gmm import GMM
from plot_gmm import draw2dgmm
from test_func import noisy_cosine
import pylab as pl

# Sample noisy (x, y) pairs from a cosine curve and stack them
# into an (n, 2) data matrix.
x, y = noisy_cosine()
data = np.vstack([x, y]).transpose()
pl.scatter(data[:, 0], data[:, 1])

# Fit a two-component, two-dimensional GMM, initialized with k-means.
gmm = GMM(dim=2, ncomps=2, data=data, method="kmeans")
draw2dgmm(gmm)

# Gaussian Mixture Regression: condition the mixture on x and use the
# mean of the resulting conditional distribution as the prediction for y.
nx = np.arange(0, 2 * np.pi, 0.1)
ny = []
for i in nx:
    ngmm = gmm.condition([0], [i])
    ny.append(ngmm.mean())

pl.plot(nx, ny, color='red')
pl.show()
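The same two calls give a point prediction at a single query value. A minimal
usage sketch, using only the condition and mean calls shown above (the query
value here is arbitrary):

# Predict y at x = pi/2 by conditioning on the first variable.
query = np.pi / 2
pred = gmm.condition([0], [query]).mean()
print(pred)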