nickgillian edited this page Aug 21, 2016

#Principal Component Analysis

##Description

The Principal Component Analysis class runs the PCA algorithm, a dimensionality reduction algorithm that projects an [M N] matrix (where M is the number of samples and N is the number of dimensions) onto a new K-dimensional subspace, where K is normally much smaller than N.

This projection or transformation is defined in such a way that the first principal component has the largest possible variance (that is, accounts for as much of the variability in the data as possible), and each succeeding component has the highest variance possible under the constraint that it be orthogonal to (i.e., uncorrelated with) the preceding components. Principal components are guaranteed to be independent only if the data set is jointly normally distributed. PCA is sensitive to the relative scaling of the original variables.
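The variance-maximization definition above can be stated compactly. Writing X for the mean-subtracted [M N] data matrix and \Sigma for its covariance matrix, the first principal component is the unit vector that maximizes the variance of the projected data:

```latex
w_{1} = \arg\max_{\|w\| = 1} \operatorname{Var}(X w) = \arg\max_{\|w\| = 1} w^{\top} \Sigma\, w
```

Each subsequent component w_k solves the same maximization subject to the additional constraint that w_k is orthogonal to w_1, \dots, w_{k-1}; the solutions are the eigenvectors of \Sigma, ordered by decreasing eigenvalue.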

The PCA algorithm will automatically mean-subtract the input data, and can also normalize the data if required. To use this algorithm, the user should first call the computeFeatureVector(...) function to build the PCA feature vector, and then call the project(...) function to project new data onto the new principal subspace.

##Applications

The PCA algorithm can be used as a dimensionality reduction algorithm, automatically reducing a large number of features to a smaller set of (hopefully) more useful features.
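The two-step flow described above (build the principal subspace, then project new data onto it) can be sketched in plain C++ for the 2-D case. This is a minimal illustration of the underlying computation, not the GRT implementation: the function name pcaProject1D is hypothetical, and a 2x2 closed-form eigensolver stands in for the general N-dimensional eigendecomposition the library would perform.

```cpp
#include <cassert>
#include <cmath>
#include <cstddef>
#include <vector>

// Minimal PCA sketch for 2-D samples (illustrative, not the GRT API):
// 1) mean-subtract the data, 2) build the 2x2 covariance matrix,
// 3) take its leading eigenvector in closed form, 4) project each
// sample onto that first principal component.
std::vector<double> pcaProject1D(const std::vector<std::vector<double>> &X) {
    const std::size_t M = X.size();

    // Step 1: mean-subtract each dimension (PCA is defined on centered data)
    double mx = 0, my = 0;
    for (const auto &p : X) { mx += p[0]; my += p[1]; }
    mx /= M; my /= M;

    // Step 2: unbiased 2x2 covariance matrix [[a, b], [b, c]]
    double a = 0, b = 0, c = 0;
    for (const auto &p : X) {
        const double x = p[0] - mx, y = p[1] - my;
        a += x * x; b += x * y; c += y * y;
    }
    a /= M - 1; b /= M - 1; c /= M - 1;

    // Step 3: leading eigenvalue/eigenvector of the symmetric 2x2 matrix,
    // in closed form (assumes b != 0, i.e. the dimensions are correlated)
    const double lambda = 0.5 * (a + c) +
        std::sqrt(0.25 * (a - c) * (a - c) + b * b);
    double ex = b, ey = lambda - a;              // unnormalized eigenvector
    const double len = std::sqrt(ex * ex + ey * ey);
    ex /= len; ey /= len;

    // Step 4: project each centered sample onto the first principal component
    std::vector<double> projected(M);
    for (std::size_t i = 0; i < M; ++i)
        projected[i] = (X[i][0] - mx) * ex + (X[i][1] - my) * ey;
    return projected;
}
```

For samples lying along the line y = 2x, the recovered component is the direction (1, 2)/sqrt(5), and the projection reduces each 2-D sample to a single coordinate along that line.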

##Example

PCA Example