One of the most common causes of multicollinearity arises when predictor variables are multiplied together to create an interaction term or a quadratic or higher-order term (X squared, X cubed, etc.).
Why does this happen? When all the X values are positive, higher values produce higher products and lower values produce lower products, so the product variable is highly correlated with its component variable. I will do a very simple example to clarify. (If the values are instead all negative, the same thing happens, but the correlation is negative.)
In a small sample, say you have the following values of a predictor variable X, sorted in ascending order:
2, 4, 4, 5, 6, 7, 7, 8, 8, 8
It is clear to you that the relationship between X and Y is not linear, but curved, so you add a quadratic term, X squared (X2), to the model. The values of X squared are:
4, 16, 16, 25, 36, 49, 49, 64, 64, 64
The correlation between X and X2 is .987, almost perfect.
To remedy this, you simply center X at its mean. The mean of X is 5.9, so to center X you create a new variable XCen = X - 5.9.
The correlation between XCen and XCen2 is -.54: still not 0, but much more manageable, and definitely low enough not to cause severe multicollinearity. Centering works because the low end of the scale now has large absolute values, so its squares become large too, just like those at the high end.
A scatterplot of XCen against XCen2 shows a lopsided parabola. If the values of X had been less skewed, it would be a perfectly balanced parabola, and the correlation between XCen and XCen2 would be 0.
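The correlations above are easy to verify. Here is a minimal sketch in Python (using numpy) with the same ten X values; the variable names are mine:

```python
import numpy as np

# The ten X values from the example above
x = np.array([2, 4, 4, 5, 6, 7, 7, 8, 8, 8])

# Correlation between X and X squared: nearly perfect
r_raw = np.corrcoef(x, x**2)[0, 1]
print(round(r_raw, 3))  # 0.987

# Center X at its mean (5.9), then square the centered variable
x_cen = x - x.mean()
r_cen = np.corrcoef(x_cen, x_cen**2)[0, 1]
print(round(r_cen, 2))  # -0.54
```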
I was recently asked about whether centering (subtracting the mean) a predictor variable in a regression model has the same effect as standardizing (converting it to a Z score). My response:
They are similar but not the same.
In centering, you are changing the values but not the scale. So a predictor that is centered at the mean has new values: the entire scale has shifted so that the mean now has a value of 0, but one unit is still one unit. The intercept will change, but the regression coefficient for that variable will not. Since the regression coefficient is interpreted as the effect on the mean of Y for each one-unit difference in X, it doesn't change when X is centered.
And incidentally, despite the name, you don't have to center at the mean. It is often convenient, but there can be advantages to choosing a more meaningful value that is also toward the center of the scale.
But a Z-score also changes the scale. A one-unit difference now means a one-standard deviation difference. You will interpret the coefficient differently. This is usually done so you can compare coefficients for predictors that were measured on different scales. I can’t think of an advantage for doing this for an interaction.
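This is easy to see numerically. A minimal sketch with numpy's least-squares fit on made-up data (all names and numbers here are mine, for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(50, 10, 200)        # predictor on an arbitrary scale
y = 3 * x + rng.normal(0, 5, 200)  # made-up response

# Raw slope: effect of a one-unit difference in X
slope_raw = np.polyfit(x, y, 1)[0]

# Centering shifts the scale but keeps one unit = one unit,
# so the slope is unchanged (only the intercept moves)
slope_cen = np.polyfit(x - x.mean(), y, 1)[0]

# Standardizing (a Z-score) changes the scale: the slope now
# measures a one-standard-deviation difference in X
z = (x - x.mean()) / x.std()
slope_z = np.polyfit(z, y, 1)[0]

print(np.isclose(slope_raw, slope_cen))          # True
print(np.isclose(slope_z, slope_raw * x.std()))  # True
```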
Statistical models, such as general linear models (linear regression, ANOVA, MANOVA), linear mixed models, and generalized linear models (logistic regression, Poisson regression, etc.), all have the same general form.
On the left side of the equation is one or more response variables, Y. On the right hand side is one or more predictor variables, X, and their coefficients, B. The variables on the right hand side can have many forms and are called by many names.
There are subtle distinctions in the meanings of these names. Unfortunately, though, there are two practices that make them more confusing than they need to be.
First, they are often used interchangeably. So one person may use “predictor variable” and “independent variable” interchangeably while another may not, and the listener may read in subtle distinctions that the speaker never intended.
Second, the same terms are used differently in different fields or research situations. So if you are an epidemiologist who does research on mostly observed variables, you probably have been trained with slightly different meanings to some of these terms than if you’re a psychologist who does experimental research.
Even worse, statistical software packages use different names for similar concepts, even among their own procedures. This inconsistency creates confusion rather than clarity. (It’s hard enough without switching the words!)
Here are some common terms that all refer to a variable in a model that is proposed to affect or predict another variable.
I’ll give you the different definitions and implications, but it’s very likely that I’m missing some. If you see a term that means something different than you understand it, please add it to the comments. And please tell us which field you primarily work in.
Predictor Variable, Predictor
This is the most generic of the terms. There are no implications for being manipulated, observed, categorical, or numerical. It does not imply causality.
A predictor variable is simply used for explaining or predicting the value of the response variable. Used predominantly in regression.
Independent Variable
I’ve seen Independent Variable (IV) used different ways.
1. It implies causality: the independent variable affects the dependent variable. This usage is predominant in ANOVA models where the Independent Variable is manipulated by the experimenter. If it is manipulated, it’s generally categorical and subjects are randomly assigned to conditions.
2. It does not imply causality, but it is a key predictor variable for answering the research question. In other words, it is in the model because the researcher is interested in understanding its relationship with the dependent variable. In other words, it’s not a control variable.
3. It does not imply causality or the importance of the variable to the research question, but it is uncorrelated with (independent of) all the other predictors.
Honestly, I only recently saw someone define the term Independent Variable this way. Under this definition, predictor variables cannot be independent variables if they are at all correlated with each other. It surprised me, but it’s good to know that some people mean this when they use the term.
Explanatory Variable
A predictor variable in a model where the main point is not to predict the response variable, but to explain a relationship between X and Y.
Control Variable
A predictor variable that could be related to or affecting the dependent variable, but not really of interest to the research question.
Covariate
Generally a continuous predictor variable. Used in both ANCOVA (analysis of covariance) and regression. Some people use this to refer to all predictor variables in regression, but it really means continuous predictors. Adding a covariate to ANOVA (analysis of variance) turns it into ANCOVA (analysis of covariance).
Sometimes covariate implies that the variable is a control variable (as opposed to an independent variable), but not always.
And sometimes people use covariate to mean control variable, either numerical or categorical.
Confounding Variable
This term is used differently in different fields. In experimental design, it’s used to mean a variable whose effect cannot be distinguished from the effect of an independent variable.
In observational fields, it’s used to mean one of two situations. The first is a variable that is so correlated with an independent variable that it’s difficult to separate out their effects on the response variable. The second is a variable that causes both the independent variable and the response, so that the independent variable’s apparent effect on the response is really due to the confounder.
The distinctions among those interpretations are slight but important.
Exposure Variable
This is a term for independent variable in some fields, particularly epidemiology. It’s the key predictor variable.
Risk Factor
Another epidemiology term for a predictor variable. Unlike the term “Factor” listed below, it does not imply a categorical variable.
Factor
A categorical predictor variable. It may or may not indicate a cause/effect relationship with the response variable (this depends on the study design, not the analysis).
Independent variables in ANOVA are almost always called factors. In regression, they are often referred to as indicator variables, categorical predictors, or dummy variables. They are all the same thing in this context.
Also, please note that Factor has completely different meanings in statistics, so it too got its own Confusing Statistical Terms article.
Feature
Used in Machine Learning and Predictive models, this is simply a predictor variable.
Grouping Variable
Same as a factor.
Fixed factor
A categorical predictor variable in which the specific values of the categories are intentional and important, often chosen by the experimenter. Examples include experimental treatments or demographic categories, such as sex and race.
Random factor
A categorical predictor variable in which the specific values of the categories were randomly sampled from a larger set of possible values. Generally used in mixed modeling. Examples include subjects or random blocks.
Blocking variable
This term is generally used in experimental design, but I’ve also seen it in randomized controlled trials.
A blocking variable is a variable that indicates an experimental block: a cluster or experimental unit that restricts complete randomization and that often results in similar response values among members of the block.
Blocking variables can be either fixed or random factors. They are never continuous.
Dummy variable
A categorical variable that has been dummy coded. Dummy coding (also called indicator coding) is usually used in regression models, but not ANOVA. A dummy variable can have only two values: 0 and 1. When a categorical variable has more than two values, it is recoded into multiple dummy variables.
Indicator variable
Same as dummy variable.
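To make this concrete, here is a tiny hand-rolled sketch in Python (the names are mine): a three-category variable becomes two 0/1 dummy variables, with one category serving as the reference.

```python
# A categorical variable with three values
treatment = ["control", "drug_a", "drug_b", "drug_a", "control"]

# Dummy (indicator) coding: k categories -> k-1 dummy variables,
# with "control" as the left-out reference category
drug_a = [1 if t == "drug_a" else 0 for t in treatment]
drug_b = [1 if t == "drug_b" else 0 for t in treatment]

print(drug_a)  # [0, 1, 0, 1, 0]
print(drug_b)  # [0, 0, 1, 0, 0]
```

A row with 0 on both dummies is in the reference category, "control".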
The Take Away Message
Whenever you’re using technical terms in a report, an article, or a conversation, it’s always a good idea to define your terms. This is especially important in statistics, which is used in many, many fields, each of which adds its own subtleties to the terminology.
I was recently asked about when to use one and two tailed tests.
The long answer is: Use one-tailed tests when you have a specific hypothesis about the direction of your relationship. Some examples: you hypothesize that one group mean is larger than the other; you hypothesize that the correlation is positive; you hypothesize that the proportion is below .5.
The short answer is: Never use one tailed tests.
Why?
1. Only a few statistical tests can even have one tail: z tests and t tests. So you’re severely limited. F tests, chi-square tests, etc. can’t accommodate one-tailed tests because their distributions are not symmetric. Most statistical methods, such as regression and ANOVA, are based on these tests, so you will rarely have the chance to use a one-tailed test.
2. Probably because they are rare, reviewers balk at one-tailed tests. They tend to assume that you are trying to artificially boost the power of your test. Theoretically, however, there is nothing wrong with them when the hypothesis and the statistical test are right for them.
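To see the mechanics, here is a sketch using scipy (the `alternative` argument requires scipy 1.6 or later; the data are made up). For a symmetric test statistic like t, the one-tailed p-value in the direction of the observed effect is half the two-tailed p-value:

```python
from scipy import stats

# Made-up sample with a positive mean; test H0: mean = 0
sample = [1.2, 0.8, 1.5, 1.1, 0.9, 1.4, 0.7]

t_two, p_two = stats.ttest_1samp(sample, 0, alternative="two-sided")
t_one, p_one = stats.ttest_1samp(sample, 0, alternative="greater")

# One tail is exactly half of two tails when the effect is in the
# hypothesized direction -- hence the suspicion of power-boosting
print(abs(p_one - p_two / 2) < 1e-12)  # True
```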
I just wanted to follow up on my last post about Regression without Intercepts.
Regression through the Origin means that you purposely drop the intercept from the model: when X = 0, Y must equal 0.
The thing to be careful about in choosing any regression model is that it fit the data well. Pretty much the only time that a regression through the origin will fit better than a model with an intercept is if the point X=0, Y=0 is required by the data.
Yes, leaving out the intercept will increase your df by 1, since you’re not estimating one parameter. But unless your sample size is really, really small, it won’t matter. So it really has no advantages.
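A small sketch of the difference, using numpy least squares on made-up data whose true intercept is clearly not 0 (y = 2x + 1; the names are mine):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
y = 2 * x + 1  # true relationship: intercept 1, slope 2

# With an intercept: add a column of ones to the design matrix
X = np.column_stack([np.ones_like(x), x])
intercept, slope = np.linalg.lstsq(X, y, rcond=None)[0]
print(round(intercept, 6), round(slope, 6))  # 1.0 2.0

# Through the origin: drop the column of ones
slope0 = np.linalg.lstsq(x[:, None], y, rcond=None)[0][0]
print(round(slope0, 3))  # 2.333, biased because the true intercept isn't 0
```

Forcing the line through the origin distorts the slope whenever the data do not actually pass through (0, 0).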
Should you drop outliers? Outliers are one of those statistical issues that everyone knows about, but most people aren’t sure how to deal with. Most parametric statistics, like means, standard deviations, and correlations, and every statistic based on these, are highly sensitive to outliers.
And since the assumptions of common statistical procedures, like linear regression and ANOVA, are also based on these statistics, outliers can really mess up your analysis.
Despite all this, as much as you’d like to, it is NOT acceptable to drop an observation just because it is an outlier. They can be legitimate observations and are sometimes the most interesting ones. It’s important to investigate the nature of the outlier before deciding.
If it is obvious that the outlier is due to incorrectly entered or measured data, you should drop the outlier:
For example, I once analyzed a data set in which a woman’s weight was recorded as 19 lbs. I knew that was physically impossible. Her true weight was probably 91, 119, or 190 lbs, but since I didn’t know which one, I dropped the outlier.
This also applies to a situation in which you know the datum did not accurately measure what you intended. For example, if you are testing people’s reaction times to an event, but you saw that the participant is not paying attention and randomly hitting the response key, you know it is not an accurate measurement.
If the outlier does not change the results but does affect assumptions, you may drop it. But note the deletion in a footnote of your paper.
Neither the presence nor absence of the outlier in the graph below would change the regression line:
More commonly, the outlier affects both results and assumptions. In this situation, it is not legitimate to simply drop the outlier. You may run the analysis both with and without it, but you should state, at least in a footnote, which data points you dropped and how the results changed.
If the outlier creates a strong association, you should drop the outlier and should not report any association from your analysis.
In the following graph, the relationship between X and Y is clearly created by the outlier. Without it, there is no relationship between X and Y, so the regression coefficient does not truly describe the effect of X on Y.
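A numerical sketch of that situation (the values are made up): four points with no relationship, plus one extreme point that manufactures a strong correlation.

```python
import numpy as np

# Four points with zero correlation between X and Y...
x = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([1.0, 2.0, 2.0, 1.0])
r_without = np.corrcoef(x, y)[0, 1]
print(round(r_without, 2))  # 0.0

# ...plus one extreme point that creates the "association"
x_out = np.append(x, 10.0)
y_out = np.append(y, 10.0)
r_with = np.corrcoef(x_out, y_out)[0, 1]
print(round(r_with, 2))  # 0.94
```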
So in those cases where you shouldn’t drop the outlier, what do you do?
One option is to try a transformation. Square root and log transformations both pull in high numbers. This can make assumptions work better if the outlier is a dependent variable and can reduce the impact of a single point if the outlier is an independent variable.
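For instance, a log transformation compresses the high end of the scale. A quick illustration with made-up numbers:

```python
import numpy as np

values = np.array([2.0, 3.0, 4.0, 5.0, 400.0])  # one extreme value

# On the raw scale, the largest value is 100 times the median
raw_ratio = values.max() / np.median(values)
print(raw_ratio)  # 100.0

# After a log transform, the gap shrinks dramatically
logged = np.log(values)
log_ratio = logged.max() / np.median(logged)
print(round(log_ratio, 1))  # 4.3
```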
Another option is to try a different model. This should be done with caution, but it may be that a non-linear model fits better. For example, in the previous graph, perhaps an exponential curve fits the data with the outlier intact.
Whichever approach you take, you need to know your data and your research area well. Try different approaches, and see which make theoretical sense.