Can We Use PCA for Reducing Both Predictors and Response Variables?

I recently gave a free webinar on Principal Component Analysis. We had almost 300 researchers attend and didn’t get through all the questions. This is part of a series of answers to those questions.

If you missed it, you can get the webinar recording here.

Question: Can we use PCA for reducing both predictors and response variables?

In fact, there were a few related but separate questions about using and interpreting the resulting component scores, so I’ll answer them together here.

How could you use the component scores?

A lot of times PCAs are used for further analysis — say, regression. How can we interpret the results of regression?

Let’s say I would like to interpret my regression results in terms of original data, but they are hiding under PCAs. What is the best interpretation that we can do in this case?

Answer:

So yes, the point of PCA is to reduce variables — create an index score variable that is an optimally weighted combination of a group of correlated variables.

And yes, you can use this index variable as either a predictor or response variable.

It is often used as a solution for multicollinearity among predictor variables in a regression model. Rather than include multiple correlated predictors, none of which comes out significant, you can combine them into a single component using PCA and use that component as the predictor instead.

It’s also used as a solution to avoid inflated familywise Type I error caused by running the same analysis on multiple correlated outcome variables. Combine the correlated outcomes using PCA, then use that as the single outcome variable. (This is, incidentally, what MANOVA does).

In both cases, you can no longer interpret the individual variables.

You may want to, but you can’t.
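
To make the second use above concrete, here is a minimal sketch in Python (scikit-learn and statsmodels, neither of which is mentioned in the webinar) of combining several correlated outcome measures into one component score and then running a single test on that score. The variable names and data are simulated purely for illustration.

```python
# Sketch: combine correlated outcomes into one PCA index, then run one test on it
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
group = rng.integers(0, 2, size=n)            # a binary predictor (e.g., treatment vs. control)
latent = 0.5 * group + rng.normal(size=n)     # shared signal driving all the outcomes

# Three correlated outcome measures of the same underlying construct
y1 = latent + rng.normal(scale=0.5, size=n)
y2 = latent + rng.normal(scale=0.5, size=n)
y3 = latent + rng.normal(scale=0.5, size=n)
Y = np.column_stack([y1, y2, y3])

# Standardize, then keep the first principal component as the combined outcome
Y_std = StandardScaler().fit_transform(Y)
pca = PCA(n_components=1)
y_index = pca.fit_transform(Y_std).ravel()
print("Variance explained by PC1:", pca.explained_variance_ratio_[0])

# One test on the single combined outcome instead of three separate tests
model = sm.OLS(y_index, sm.add_constant(group)).fit()
print(model.summary())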

Let’s use the example we used in the webinar. In this example, the ultimate research question was about predicting the expected life span of different mammal species. We found that we had a set of correlated predictor variables: weight, exposure while sleeping, hours of sleep per day, and a rating of how vulnerable the animal is to predation.

These four variables are clearly very distinct concepts. We may want to be able to understand and interpret the relationship between weight and life span.

And we may want to separately understand the relationship between exposure during sleep and lifespan.

They’re conceptually different.

Even so, in this data set, you can’t entirely distinguish between them. You can’t entirely isolate the effect of weight on lifespan if they’re too correlated.

Think about it — if all the zebras and bison sleep out in the open and weigh a lot and the bats and shrews sleep in enclosed spaces and weigh little, then you can’t separate out weight from sleep exposure in your data set.

And that's what our PCA told us: in this data set, it's really not possible to separate out the effects of these four variables. Just one index explains most of the information in all four.

So our combined index variable is what we have to interpret. If it turns out that being high on this combined variable predicts longer lifespan, you have to interpret your regression output that way.
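
For the predictor side, here is a minimal sketch of this mammal example in Python (again scikit-learn and statsmodels, which are my choices, not part of the webinar). The data are simulated stand-ins, not the actual mammal sleep data, and keeping only the first component is an assumption borrowed from the example.

```python
# Sketch: combine four correlated predictors into one component, regress lifespan on it
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
import statsmodels.api as sm

rng = np.random.default_rng(1)
n = 60
size_factor = rng.normal(size=n)   # latent "big, exposed, short-sleeping" dimension

df = pd.DataFrame({
    "weight":          size_factor + 0.4 * rng.normal(size=n),
    "sleep_exposure":  size_factor + 0.4 * rng.normal(size=n),
    "hours_of_sleep": -size_factor + 0.4 * rng.normal(size=n),
    "predation_risk":  size_factor + 0.4 * rng.normal(size=n),
})
lifespan = 5 + 2 * size_factor + rng.normal(size=n)

# Standardize the four correlated predictors and keep the first component
X_std = StandardScaler().fit_transform(df)
pca = PCA(n_components=1)
pc1 = pca.fit_transform(X_std).ravel()
print("PC1 loadings:", dict(zip(df.columns, pca.components_[0].round(2))))
print("Variance explained:", pca.explained_variance_ratio_[0].round(2))

# Regress lifespan on the single combined index, not on the four originals
model = sm.OLS(lifespan, sm.add_constant(pc1)).fit()
print(model.params)   # the slope is the effect of the combined index, not of weight alone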

 


Comments

  1. adrienne says

    Thank you so much for your insight into PCA; it has been helpful.
    As far as interpretation goes, however, if, as you said, you can no longer separate out the individual effects of X on Y after PCA, how do you then talk about and understand the relationship with the Y variable?
    For example: I have 2 variables, mean tree basal area and total tree basal area, that are highly correlated, and so I would like to use PCA to combine them. However, when I then do my analysis and look at the relationship with my Y variable (animal density), what does it mean?

  2. John Grenci says

    Hey Karen, thank you for your response. It is not something I do as a general rule in my job; I am just trying to learn about it. I will try to digest what you said. Thanks again, John

  3. John says

    Karen, I really hope you can help me. I have watched and read so many things on PCA, and none of them seem to BRING IT HOME, so to speak, when it comes to using it for regression analysis. You said something, though, that is different from everything I have read, and that is that you can't interpret the individual variables.

    “Think about it — if all the zebras and bison sleep out in the open and weigh a lot and the bats and shrews sleep in enclosed spaces and weigh little, then you can’t separate out weight from sleep exposure in your data set.”

    You see, I would have thought that in one of your components you would have had a high exposure while sleeping along with a high weight value (maybe that is component 1, where one has a coeff of 86 and the other 89), and component 2 has them with small coeffs. Aren't interpretations still interpretations? I.e., it just seems that since you have resolved the collinearity problem, there should be no issue with running a regression with both components against the desired dependent variable. Perhaps you get one of them significant (let's assume component 1) and then back out the original values to interpret.

    (I.e., I would think you could then plug in values for, in this case, the two variables, exposure and weight, and predict accordingly using the results from the regression: intercept + coeff*PC1.) Is that not correct? Let me know if my question does not make sense. Thanks for your help. John

    • Karen Grace-Martin says

      Hi John,

      First, let me start by saying I think I understand what you’re not getting and (if I’m right) I could probably get you to understand in a conversation. This medium may be insufficient. So if you’re interested in a further conversation, I would recommend our Statistically Speaking program. It’s a very inexpensive way to get this kind of help.

      That said, it sounds to me like maybe you're interpreting PC1 as High weight/high exposure and PC2 as Low weight/low exposure. But actually, PC1 is the quantity of weight and exposure, which are inseparable. So an animal with a high weight and a high exposure would have a strong positive component score on PC1, and an animal with a low weight and low exposure would have a strong negative score on PC1. Both come from the high component loadings of both animal weight and exposure on PC1.

      You certainly can run a regression with PC1 as a predictor. If you have y = b0 + b1*PC1, you will interpret b1 as the effect of the Weight/Exposure combination on Y. What I was saying you can't do is separately understand the effect of Weight on Y and the effect of Exposure on Y. That's what people try to do, and the fact that we've combined Weight and Exposure into one component indicates that you can't separate their effects on Y.
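
      To illustrate that last point, here is a minimal sketch in Python (scikit-learn, simulated data, made-up numbers): the fitted PCA turns any pair of weight and exposure values into a PC1 score, and the regression predicts from that score. The slope b1 applies to the combined score, not to weight or exposure separately.

      ```python
      # Sketch: predict for a new animal via its PC1 score, y = b0 + b1*PC1
      import numpy as np
      from sklearn.preprocessing import StandardScaler
      from sklearn.decomposition import PCA
      from sklearn.linear_model import LinearRegression

      rng = np.random.default_rng(2)
      latent = rng.normal(size=50)
      X = np.column_stack([latent + 0.3 * rng.normal(size=50),   # weight (arbitrary scale)
                           latent + 0.3 * rng.normal(size=50)])  # sleep exposure
      y = 10 + 3 * latent + rng.normal(size=50)                  # lifespan

      scaler = StandardScaler().fit(X)
      pca = PCA(n_components=1).fit(scaler.transform(X))
      pc1 = pca.transform(scaler.transform(X))
      reg = LinearRegression().fit(pc1, y)          # y = b0 + b1 * PC1

      # Predict for a new animal with given (weight, exposure) values
      new_animal = np.array([[1.2, 0.9]])
      new_pc1 = pca.transform(scaler.transform(new_animal))
      print("Predicted lifespan:", reg.predict(new_pc1)[0])
      print("b1 (effect of the combined score):", reg.coef_[0])
      ```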

  4. aineefarooq says

    Hello, can you please help me? My IV has two dimensions; I generated 5 factors for the first and 2 factors for the second, and my DV has 1 factor. I want to apply multiple regression. Please guide me on how to interpret it.
    IV (employee training), DV (employee performance)
    (1st dimension) factors 1, 2, 3, 4, 5 -> factor 1
    (2nd dimension) factors 1, 2 -> factor 1
    Or do I first have to look at the one-to-one effects, i.e., 1st factor to 1st, 2nd to 1st, 3rd to 1st?

