Principal Component Analysis (PCA) is a handy statistical tool to always have available in your data analysis tool belt.

It’s a data reduction technique, which means it’s a way of capturing the variance in many variables in a smaller, easier-to-work-with set of variables.

There are many, many details involved, though, so here are a few things to remember as you run your PCA.

#### 1. The goal of PCA is to summarize the correlations among a set of observed variables with a smaller set of linear combinations.

So the first step your software takes is to create a correlation or covariance matrix of those variables; everything else is based on that matrix.

Some software programs allow you to use a correlation or covariance matrix as an input data set.

This comes in very handy if you either don’t have the original data or have missing data. In the case of missing data, you can use the unbiased EM estimates of the correlation matrix as input.
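As a quick illustration of that first step, PCA on a correlation matrix boils down to an eigendecomposition of that matrix. Here's a minimal sketch, using a made-up 3×3 correlation matrix:

```python
import numpy as np

# Hypothetical correlation matrix for three observed variables
R = np.array([
    [1.0, 0.6, 0.3],
    [0.6, 1.0, 0.5],
    [0.3, 0.5, 1.0],
])

# PCA on a correlation matrix is its eigendecomposition
eigenvalues, eigenvectors = np.linalg.eigh(R)

# eigh returns ascending order; flip so the largest component comes first
order = np.argsort(eigenvalues)[::-1]
eigenvalues = eigenvalues[order]
eigenvectors = eigenvectors[:, order]

print(eigenvalues)        # variance explained by each component
print(eigenvalues.sum())  # sums to 3, the number of variables
```

This is why a correlation matrix alone is enough input: the raw data are only needed to compute that matrix in the first place.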

#### 2. Because it’s trying to capture the total variance in the set of variables, PCA requires that the input variables have similar scales of measurement.

If the observed variables are all a set of 7-point Likert items, it's no problem. They're all measured on the same scale and the variances will be relatively similar.

But if you’re trying to combine correlated variables that all get at the size of trees (trunk diameter in cm, biomass of leaves in kg, number of branches, overall height in meters), those are going to be on vastly different scales. Variables whose numbers are simply larger will have much bigger variances just because the numbers are so big. (Remember that variances are squared values, so big numbers get amplified.)

If you’re starting with a covariance matrix, it’s a good idea to standardize those variables before you begin so that the variables with the biggest scales don’t overwhelm the PCA.

Alternatively, base it on the correlation matrix, since correlations are themselves standardized. This is generally an option in your software, and is likely the default. Just make sure.
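Here's a small sketch of why the scales matter, using simulated tree-size variables (the numbers are invented). On the covariance matrix, the variable with the biggest scale dominates the first component; on the correlation matrix, every variable contributes one unit of variance:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical tree-size variables on very different scales
height_m = rng.normal(20, 5, n)                    # meters
trunk_cm = 2 * height_m + rng.normal(0, 3, n)      # centimeters
biomass_kg = 10 * height_m + rng.normal(0, 40, n)  # kilograms

X = np.column_stack([height_m, trunk_cm, biomass_kg])

# Covariance-based PCA: the large-scale variable dominates
cov_eigs = np.sort(np.linalg.eigvalsh(np.cov(X, rowvar=False)))[::-1]

# Correlation-based PCA: each variable contributes variance 1
corr_eigs = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]

print(cov_eigs)   # first eigenvalue is huge, driven by biomass_kg's scale
print(corr_eigs)  # eigenvalues sum to 3, the number of variables
```

Standardizing each column of `X` before computing the covariance matrix gives the same result as using the correlation matrix directly.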

#### 3. Each component’s eigenvalue represents how much variance it explains.

PCA, by definition, creates as many components as there are original variables. But usually only a few capture enough variance to be useful.

When we say we have a two component solution, we’re actually saying that the first two components capture enough variance in the full set of variables to be useful.

These components are ordered in terms of the amount of variance each explains.

So the first explains the most variance, the second the next most, and so on. The variance each one explains is measured by its eigenvalue, which is scaled in terms of “number of variables' worth of variance.”

You’ll notice that when the PCA is based on the correlation matrix, the eigenvalues always add up to the total number of variables, since each standardized variable contributes exactly one unit of variance.

So if a component has an eigenvalue of 2.5, it explains as much variance as 2.5 of the original variables. That’s probably useful.

A component with a small eigenvalue, say .3, isn’t so useful. You’d be better off using an original variable, which has one variable's worth of variance (its own), than this component.
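To make the “variables' worth of variance” idea concrete, here's a sketch using a made-up correlation matrix of four equally intercorrelated variables. The eigenvalue-greater-than-1 cutoff (the Kaiser rule) shown at the end is one common, if rough, retention heuristic:

```python
import numpy as np

# Hypothetical correlation matrix: four variables, all pairwise r = .6
R = np.full((4, 4), 0.6)
np.fill_diagonal(R, 1.0)

eigs = np.sort(np.linalg.eigvalsh(R))[::-1]  # descending order

# Eigenvalues are in "variables' worth of variance": the first component
# explains 2.8 variables' worth; each remaining one only 0.4
print(eigs)               # [2.8, 0.4, 0.4, 0.4]
print(eigs / eigs.sum())  # proportion of total variance per component

# Kaiser rule: keep components whose eigenvalue exceeds 1
keep = eigs > 1.0
print(keep.sum())         # 1 component retained
```

With correlations this strong, a single component summarizes most of what the four variables share, which is exactly the data-reduction payoff PCA promises.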


Hello Karen,

Thank you for the tips on PCA.

Actually, I have a “parameter covariance matrix” I would like to perform PCA on, and I am not sure how to do that. To me, it is easy to perform PCA on variables/observations, but it is a bit of a challenge to do it using the parameter covariance matrix as the input data. Any ideas? Thank you.

You can do it by obtaining the eigenvalues and eigenvectors of that matrix; that way you will see the variance of each component and can decide which one(s) to use. The eigenvectors will give you the directions.
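In code, that suggestion looks something like the following sketch, using a made-up 3×3 parameter covariance matrix (any symmetric positive-definite matrix from a fitted model would work the same way):

```python
import numpy as np

# Hypothetical parameter covariance matrix from a fitted model
C = np.array([
    [0.80, 0.20, 0.10],
    [0.20, 0.50, 0.05],
    [0.10, 0.05, 0.30],
])

eigenvalues, eigenvectors = np.linalg.eigh(C)
order = np.argsort(eigenvalues)[::-1]
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

# Each column of eigenvectors is a principal direction in parameter space;
# its eigenvalue is the variance along that direction
print(eigenvalues)
print(eigenvalues / eigenvalues.sum())  # proportion of variance per direction
```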