Confusing Statistical Terms #1: The Many Names of Independent Variables

Statistical models, such as general linear models (linear regression, ANOVA, MANOVA), linear mixed models, and generalized linear models (logistic regression, Poisson regression, etc.), all have the same general form.

On the left side of the equation are one or more response variables, Y. On the right-hand side are one or more predictor variables, X, and their coefficients, B. The variables on the right-hand side can take many forms and are called by many names.
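For example, a model with one response variable and two predictors can be written as:

Y = B0 + B1X1 + B2X2 + ε

where B0 is the intercept, B1 and B2 are the coefficients of X1 and X2, and ε is the error term.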

There are subtle distinctions in the meanings of these names. Unfortunately, though, there are two practices that make them more confusing than they need to be.

First, they are often used interchangeably. So one person may use “predictor variable” and “independent variable” interchangeably while another person may not. The listener may then read in subtle distinctions that the speaker never intended.

Second, the same terms are used differently in different fields or research situations. If you are an epidemiologist who works mostly with observed variables, you were probably trained to use slightly different meanings for some of these terms than a psychologist who does experimental research.

Even worse, statistical software packages use different names for similar concepts, even among their own procedures. The result is often more confusion, not less. (Learning statistics is hard enough without the words switching on you!)

Here are some common terms that all refer to a variable in a model that is proposed to affect or predict another variable.

I’ll give you the different definitions and implications, but it’s very likely that I’m missing some. If a term here is used differently than you understand it, please add it to the comments. And please tell us which field you primarily work in.

Predictor Variable, Predictor

This is the most generic of the terms. It carries no implication that the variable is manipulated, observed, categorical, or numerical, and it does not imply causality.

A predictor variable is simply used for explaining or predicting the value of the response variable. Used predominantly in regression.

Independent Variable

I’ve seen Independent Variable (IV) used in different ways.

1. It implies causality: the independent variable affects the dependent variable. This usage is predominant in ANOVA models where the Independent Variable is manipulated by the experimenter. If it is manipulated, it’s generally categorical and subjects are randomly assigned to conditions.

2. It does not imply causality, but it is a key predictor variable for answering the research question. In other words, it is in the model because the researcher is interested in understanding its relationship with the dependent variable; it’s not a control variable.

3. It does not imply causality or the importance of the variable to the research question. But it is uncorrelated with (independent of) all other predictors.

Honestly, I only recently saw someone define the term Independent Variable this way. Under this definition, predictor variables can’t be called independent variables if they are at all correlated with each other. It surprised me, but it’s good to know that some people mean this when they use the term.

Explanatory Variable

A predictor variable in a model where the main point is not to predict the response variable, but to explain a relationship between X and Y.

Control Variable

A predictor variable that may be related to or affect the dependent variable, but is not itself of interest to the research question.

Covariate

Generally a continuous predictor variable. Used in both ANCOVA (analysis of covariance) and regression. Some people use this to refer to all predictor variables in regression, but it really means continuous predictors. Adding a covariate to ANOVA (analysis of variance) turns it into ANCOVA.
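As a minimal sketch of that last point, here is what it looks like in Python with statsmodels (the data and the variable names score, group, and age are all made up for illustration):

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical data: a response, a categorical factor, and a
# continuous covariate.
df = pd.DataFrame({
    "score": [78, 85, 90, 72, 88, 95, 70, 80, 92],
    "group": ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "age":   [23, 30, 35, 22, 31, 38, 25, 29, 36],
})

# One-way ANOVA: a single categorical factor
anova_model = smf.ols("score ~ C(group)", data=df).fit()

# ANCOVA: the same model plus a continuous covariate
ancova_model = smf.ols("score ~ C(group) + age", data=df).fit()

print(anova_lm(ancova_model))
```

The only difference between the two model formulas is the added continuous term.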

Sometimes covariate implies that the variable is a control variable (as opposed to an independent variable), but not always.

And sometimes people use covariate to mean control variable, either numerical or categorical.

This one is so confusing it got its own Confusing Statistical Terms article.

Confounding Variable, Confounder

These terms are used differently in different fields. In experimental design, it’s used to mean a variable whose effect cannot be distinguished from the effect of an independent variable.

In observational fields, it’s used to mean one of two situations. The first is a variable that is so correlated with an independent variable that it’s difficult to separate out their effects on the response variable. The second is a variable that affects both the independent variable and the response, so that what looks like an effect of the independent variable is actually due to the confounder.

The distinction between those two interpretations is slight but important.

Exposure Variable

This is a term for an independent variable in some fields, particularly epidemiology. It’s the key predictor variable.

Risk Factor

Another epidemiology term for a predictor variable. Unlike the term “Factor” listed below, it does not imply a categorical variable.

Factor

A categorical predictor variable. It may or may not indicate a cause/effect relationship with the response variable (this depends on the study design, not the analysis).

Independent variables in ANOVA are almost always called factors. In regression, they are often referred to as indicator variables, categorical predictors, or dummy variables. They are all the same thing in this context.

Also, please note that Factor has completely different meanings elsewhere in statistics, so it too got its own Confusing Statistical Terms article.

Feature

Used in machine learning and predictive modeling, this is simply a predictor variable.

Grouping Variable

Same as a factor.

Fixed factor

A categorical predictor variable in which the specific values of the categories are intentional and important, often chosen by the experimenter. Examples include experimental treatments or demographic categories, such as sex and race.

If you’re not doing a mixed model (and you should know if you are), all your factors are fixed factors. For a more thorough explanation of fixed and random factors, see Specifying Fixed and Random Factors in Mixed or Multi-Level Models.

Random factor

A categorical predictor variable in which the specific values of the categories were randomly selected from a larger population of possible values. Generally used in mixed modeling. Examples include subjects or random blocks.

For a more thorough explanation of fixed and random factors, see Specifying Fixed and Random Factors in Mixed or Multi-Level Models.
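To make the fixed/random distinction concrete, here is a minimal sketch using Python’s statsmodels (the data and the names y, treatment, and subject are made up): treatment is a fixed factor in the model formula, while subject is a random factor supplied through the groups argument, giving each subject its own random intercept.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical repeated-measures data: each subject is measured
# under both treatment conditions.
df = pd.DataFrame({
    "y":         [5.1, 6.3, 4.8, 6.9, 5.5, 7.0,
                  4.9, 6.1, 5.3, 6.6, 5.0, 6.4],
    "treatment": ["control", "drug"] * 6,
    "subject":   ["s1", "s1", "s2", "s2", "s3", "s3",
                  "s4", "s4", "s5", "s5", "s6", "s6"],
})

# 'treatment' is a fixed factor: its two levels are intentional and
# chosen by the experimenter. 'subject' is a random factor: these six
# subjects stand in for a larger population, so each gets its own
# random intercept via the groups argument.
model = smf.mixedlm("y ~ treatment", data=df, groups=df["subject"])
result = model.fit()
print(result.summary())
```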

Blocking variable

This term is generally used in experimental design, but I’ve also seen it in randomized controlled trials.

A blocking variable is a variable that indicates an experimental block: a cluster or experimental unit that restricts complete randomization and that often results in similar response values among members of the block.

Blocking variables can be either fixed or random factors. They are never continuous.

Dummy variable

A categorical variable that has been dummy coded. Dummy coding (also called indicator coding) is usually used in regression models, but not ANOVA. A dummy variable can take only two values: 0 and 1. When a categorical variable has more than two categories, it is recoded into multiple dummy variables.
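For example, here is a minimal sketch using pandas (the column name color and the data are made up) that recodes a three-category variable into two 0/1 dummy variables, leaving one category out as the reference:

```python
import pandas as pd

# A hypothetical three-category variable
df = pd.DataFrame({"color": ["red", "blue", "green", "blue", "red"]})

# A variable with k categories becomes k - 1 dummy variables.
# drop_first=True leaves out the first category ('blue', alphabetically)
# as the reference level; dtype=int gives 0/1 columns.
dummies = pd.get_dummies(df["color"], prefix="color",
                         drop_first=True, dtype=int)
print(dummies)
# The result has columns color_green and color_red; a row of all
# zeros means the original value was the reference category, 'blue'.
```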

Indicator variable

Same as dummy variable.

The Take Away Message

Whenever you’re using technical terms in a report, an article, or a conversation, it’s always a good idea to define your terms. This is especially important in statistics, which is used in many, many fields, each of which adds its own subtleties to the terminology.

 

Confusing Statistical Terms Series

Confusing Statistical Terms #1: The Many Names of Independent Variables

Confusing Statistical Terms #2: Alpha and Beta

Confusing Statistical Terms #3: Levels

Confusing Statistical Term #4: Hierarchical Regression vs. Hierarchical Model

Confusing Statistical Term #5: Covariate

Confusing Statistical Term #6: Factor

Confusing Statistical Term #7: GLM

 


Reader Interactions

Comments

  1. Tee says

    Thank you for these. The question that brought me here is, do I count control variables included in an analysis as part of my predictor variables when calculating my degrees of freedom?

  2. ben morris says

Thanks for this. When coding sports contests, I use “dummy variable” values of 1 for the first team, -1 for the opponent, and 0 for others. I’ve found that this reduces standard error relative to using 2 categorical variables, even though they are functionally equivalent.

    • Karen says

      Hi Ben,

      You’re right, that’s functionally equivalent, although it does change the interpretation of the coefficients. That’s generally called “effect coding” and it has real advantages, especially when there are interactions in the model.

      Karen

  3. Anthony Gambino says

    This is very helpful; however, there is one part that is slightly misleading. A dummy variable can take many more values than just 0 or 1. For example, simple contrast coding involves creating dummy variables such that, if you have k groups, you would give the observations in the group a dummy variable value of (k-1)/k, and all the other observations a dummy variable value of -1/k. Perhaps a better way of explaining the dummy variable would be to say that it is always one of two values (in the case of binary coding, either 1 or 0, and in the case of simple contrast coding, either (k-1)/k or -1/k). The reason this is important is that if your multiple regression model has an interaction term between two categorical variables, then coding with zeroes and ones does not work.

    • Karen says

      Hi Anthony,

      I’ve never heard of any contrast coding other than 0/1 being called dummy coding. There are many ways of coding contrasts other than this one.

      And regression models with interaction terms between two categorical variables do work with dummy coding. You just have to know how to interpret it. You’re right, though, that the interpretation it gives may not be the best choice to answer your research question, and another coding scheme may work better.

  4. Lara says

    Thank you so much for the information! Almost everything was pretty clear. However, I didn’t get the difference between predictor and independent variables.
    As far as I understood, a predictor variable in the following equation would be X1 as well as X2.

    Y = aX1 + cX2

    But this does not mean they are independent of each other, right?
    Then, is this correct? ‘All independent variables are predictors, but the opposite does not need to be true.’

    Thanks!

    • Karen says

      Hi Lara,

      You’re right: independent variable doesn’t mean independent of each other. It’s just in relation to the dependent variable. The idea is that an independent variable couldn’t possibly be affected by the dependent variable. The direction is very clear.

      The term predictor variable hedges a little. Use it when you can’t be sure of the direction of causality.

      Karen

