Principal component analysis

From Citizendium
Revision as of 15:52, 4 August 2010 by imported>Alexander Wiebel (adde two links)
This article is a stub and thus not approved.
This editable Main Article is under development and subject to a disclaimer.

Principal Component Analysis (PCA) is a popular data analysis method with applications in data compression, signal and noise separation, model order reduction, pattern recognition and feature extraction. In some contexts, PCA is also known as the Karhunen–Loève transform or the Hotelling transform.

The Principal Components (PCs) of a data set are its orthogonal components, ascertained by eigenanalysis of the data set's covariance matrix. The most important components lie along the directions of largest variance, and they reveal the uncorrelated structure of the data set. The data set or signal can be processed independently along each of these directions. The goal of PCA, therefore, is to decompose a data set (such as an image from a satellite) into an orthogonal set of Principal Components so that it can be represented accurately without the need to store the entire signal.
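The eigenanalysis described above can be sketched in a few lines of NumPy. This is an illustrative outline, not a reference implementation; the small two-variable data set is hypothetical, chosen only so that the centring, covariance, eigendecomposition and projection steps are visible.

```python
import numpy as np

# Hypothetical data set: rows are observations, columns are variables.
X = np.array([[2.5, 2.4], [0.5, 0.7], [2.2, 2.9], [1.9, 2.2],
              [3.1, 3.0], [2.3, 2.7], [2.0, 1.6], [1.0, 1.1],
              [1.5, 1.6], [1.1, 0.9]])

# Centre the data, then form the covariance matrix.
Xc = X - X.mean(axis=0)
cov = np.cov(Xc, rowvar=False)

# Eigenanalysis: the eigenvectors are the principal components; the
# eigenvalues give the variance captured along each component.
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]          # sort by decreasing variance
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Project onto the first principal component: the data is now
# represented by one number per observation instead of two.
scores = Xc @ eigvecs[:, :1]

# Approximate reconstruction from the reduced representation.
X_approx = scores @ eigvecs[:, :1].T + X.mean(axis=0)
```

Because the components are sorted by eigenvalue, keeping only the leading ones discards the directions of least variance, which is what makes the reconstruction from fewer stored numbers accurate.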