
Can We Use PCA for Reducing Both Predictors and Response Variables?

by Karen Grace-Martin

I recently gave a free webinar on Principal Component Analysis. We had almost 300 researchers attend and didn’t get through all the questions. This is part of a series of answers to those questions.

If you missed it, you can get the webinar recording here.

Question: Can we use PCA for reducing both predictors and response variables?

In fact, there were a few related but separate questions about using and interpreting the resulting component scores, so I’ll answer them together here.

How could you use the component scores?

A lot of times the component scores from a PCA are used in further analysis, say, a regression. How can we interpret the results of that regression?

Let’s say I would like to interpret my regression results in terms of the original variables, but they are hidden behind the principal components. What is the best interpretation we can do in this case?

Answer:

So yes, the point of PCA is to reduce variables: you create an index score variable that is an optimally weighted combination of a group of correlated variables.

And yes, you can use this index variable as either a predictor or response variable.

It is often used as a solution for multicollinearity among predictor variables in a regression model. Rather than include multiple correlated predictors, none of which is significant on its own, you can combine them with PCA and use the resulting component as the predictor.

It’s also used as a solution to avoid inflated familywise Type I error caused by running the same analysis on multiple correlated outcome variables. Combine the correlated outcomes using PCA, then use that as the single outcome variable. (This is, incidentally, what MANOVA does).

In both cases, you can no longer interpret the individual variables.

You may want to, but you can’t.
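To make the predictor-side use concrete, here is a minimal sketch assuming Python with numpy and scikit-learn. The data are simulated and the variable setup is invented for illustration; it is not the webinar’s data set.

```python
# Minimal sketch (simulated data): combine correlated predictors into one
# PCA index score and use that single score in a regression.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(42)
n = 200

# Three correlated predictors generated from one underlying factor
factor = rng.normal(size=n)
X = np.column_stack([
    factor + rng.normal(scale=0.3, size=n),
    factor + rng.normal(scale=0.3, size=n),
    factor + rng.normal(scale=0.3, size=n),
])
y = 2.0 * factor + rng.normal(size=n)      # outcome driven by the same factor

# Standardize, then keep only the first principal component as an index score
X_std = StandardScaler().fit_transform(X)
pca = PCA(n_components=1)
pc1 = pca.fit_transform(X_std)             # n x 1 matrix of component scores

print("Variance explained by PC1:", pca.explained_variance_ratio_[0])

# Regress the outcome on the single index instead of three collinear predictors
model = LinearRegression().fit(pc1, y)
print("Intercept:", model.intercept_, "Slope for PC1:", model.coef_[0])
# The slope is the effect of the combined index on y,
# not the separate effect of any one of the original predictors.
```

The outcome-side use is symmetric: combine several correlated response variables into their first component and use that single score as the dependent variable.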

Let’s use the example we used in the webinar. In this example, the ultimate research question was about predicting the expected life span of different mammal species. We found that we had a set of correlated predictor variables: weight, exposure while sleeping, hours of sleep per day, and a rating of how vulnerable the animal is to predation.

These four variables are clearly very distinct concepts. We may want to be able to understand and interpret the relationship between weight and life span.

And we may want to separately understand the relationship between exposure during sleep and lifespan.

They’re conceptually different.

Even so, in this data set, you can’t entirely distinguish between them. You can’t entirely isolate the effect of weight on lifespan if they’re too correlated.

Think about it — if all the zebras and bison sleep out in the open and weigh a lot and the bats and shrews sleep in enclosed spaces and weigh little, then you can’t separate out weight from sleep exposure in your data set.

And our PCA told us it’s really not possible to separate out the effects of these four variables: just one index explains most of the information in all four.

So our combined index variable is what we have to interpret. If it turns out that being high on this combined variable predicts longer lifespan, you have to interpret your regression output that way.
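Staying in the spirit of the mammal example, here is a sketch with simulated stand-ins for the four predictors (the webinar’s real data set is not reproduced here, and the signs and sizes of the relationships below are invented). The point is that all four variables carry weight on PC1, which is exactly why the slope for PC1 has to be read as the effect of the combined index.

```python
# Sketch only: simulated stand-ins for four correlated mammal predictors.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n = 60
size_factor = rng.normal(size=n)   # latent "big, exposed, short-sleeping" dimension

mammals = pd.DataFrame({
    "weight":                 size_factor + rng.normal(scale=0.4, size=n),
    "sleep_exposure":         size_factor + rng.normal(scale=0.4, size=n),
    "hours_of_sleep":        -size_factor + rng.normal(scale=0.4, size=n),
    "predation_vulnerability": size_factor + rng.normal(scale=0.4, size=n),
})

Z = StandardScaler().fit_transform(mammals)
pca = PCA().fit(Z)

# PC1 weights for each original variable (the eigenvector elements; some
# packages rescale these into loadings). All four variables contribute,
# which is why the effect can't be attributed to any one of them.
weights = pd.Series(pca.components_[0], index=mammals.columns)
print(weights)
print("Share of variance explained by PC1:", pca.explained_variance_ratio_[0])
```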





Comments

  1. adrienne says

    November 18, 2019 at 4:42 pm

    Thank you so much for your insight into PCA; it has been helpful.
    As far as interpretation goes, however, if, as you said, you can no longer separate out the individual effects of X on Y after PCA, how do you then talk about and understand the relationship with the Y variable?
    For example: I have 2 variables, mean tree basal area and total tree basal area, that are highly correlated, so I would like to use PCA to combine them. However, when I then do my analysis and look at the relationship with my Y variable (animal density), what does it mean?

  2. John Grenci says

    December 13, 2018 at 2:11 pm

    Hey Karen, thank you for the response. It is not something I do as a general rule in my job; I am just trying to learn about it. I will try to digest what you said. Thanks again. John

  3. John says

    December 12, 2018 at 10:49 pm

    Karen, I really hope you can help me. I have watched and read so many things on PCA, and none of them seem to BRING IT HOME, so to speak, when it comes to using it for regression analysis. You said something, though, that is different from everything I have read: that you can’t interpret the individual variables.

    “Think about it — if all the zebras and bison sleep out in the open and weigh a lot and the bats and shrews sleep in enclosed spaces and weigh little, then you can’t separate out weight from sleep exposure in your data set.”

    You see, I would have thought that in one of your components you would have had a high coefficient for exposure while sleeping along with a high one for weight (maybe that is component 1, where one has a coefficient of 86 and the other 89), and component 2 has them with small coefficients. Aren’t interpretations still interpretations? That is, it seems that since you have resolved the collinearity problem, there should be no issue with running a regression of the desired dependent variable on both components. Perhaps you get one of them significant (let’s assume component 1) and then back out the original values to interpret.

    (That is, I would think you could then plug in values for, in this case, the two variables, exposure and weight, and predict accordingly using the results from the regression: intercept + coefficient*PC1.) Is that not correct? Let me know if my question does not make sense. Thanks for your help. John

    • Karen Grace-Martin says

      December 13, 2018 at 11:21 am

      Hi John,

      First, let me start by saying I think I understand what you’re not getting and (if I’m right) I could probably get you to understand in a conversation. This medium may be insufficient. So if you’re interested in a further conversation, I would recommend our Statistically Speaking program. It’s a very inexpensive way to get this kind of help.

      That said, it sounds to me like maybe you’re interpreting PC1 as high weight/high exposure and PC2 as low weight/low exposure. But actually, PC1 measures the combined quantity of weight and exposure, which are inseparable. So an animal with a high weight and a high exposure would have a strong positive component score on PC1, and an animal with a low weight and low exposure would have a strong negative score on PC1. Both results come from the high component loadings of animal weight and exposure on PC1.

      You certainly can run a regression with PC1 as a predictor. If you have y = b0 + b1*PC1, you will interpret b1 as the effect of the Weight/Exposure combination on Y. What I was saying you can’t do is separately understand the effect of Weight on Y and the effect of Exposure on Y. That’s what people try to do, and the fact that we’ve combined Weight and Exposure into one component indicates you can’t separate their effects on Y.
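      Here is a small numeric sketch of that prediction. The component weights and regression coefficients below are invented, not taken from the webinar data; they are only there to show the arithmetic of predicting from a PC1 score.

      ```python
      # Invented numbers, just to show the arithmetic of predicting from a component score.

      # Suppose the PC1 weights (from a PCA on standardized variables) are:
      w_weight, w_exposure = 0.71, 0.70      # both load strongly and positively on PC1

      # And suppose the fitted regression is: lifespan_hat = b0 + b1 * PC1
      b0, b1 = 15.0, 4.0

      # A new animal, expressed in standardized units (z-scores):
      z_weight, z_exposure = 1.2, 0.9        # heavier and more exposed than average

      pc1_score = w_weight * z_weight + w_exposure * z_exposure
      lifespan_hat = b0 + b1 * pc1_score
      print(pc1_score, lifespan_hat)
      # b1 is the effect of a one-unit change in the weight/exposure index,
      # not the separate effect of weight or of exposure.
      ```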

  4. aineefarooq says

    February 10, 2018 at 8:54 am

    Hello, can you please help me? My IV has two dimensions; I generated 5 factors for the first dimension and 2 factors for the second. My DV has 1 factor. I want to apply multiple regression; please guide me on how to interpret it.
    IV (employee training), DV (employee performance)
    (1st dimension) factors 1, 2, 3, 4, 5 → factor 1
    (2nd dimension) factors 1, 2 → factor 1
    Or do I first have to look at the one-to-one effects, i.e., 1st factor to the 1st, 2nd to the 1st, 3rd to the 1st?


