One of the difficult decisions in mixed modeling is deciding which factors are fixed and which are random. And as difficult as it is, it’s also very important. Correctly specifying the fixed and random factors of the model is vital to obtain accurate analyses.
Now, you may be thinking of the fixed and random effects in the model, rather than the factors themselves, as fixed or random. If so, remember that each term in the model (factor, covariate, interaction or other multiplicative term) has an effect. We’ll come back to how the model measures the effects for fixed and random factors.
Sadly, the definitions in many texts don’t help much with decisions to specify factors as fixed or random. Textbook examples are often artificial and hard to apply to the real, messy data you’re working with.
Here’s the real kicker. The same factor can often be fixed or random, depending on the researcher’s objective.
That’s right, there isn’t always a right or a wrong decision here. It depends on the inferences you want to make about this factor. This article outlines a different way to think about fixed and random factors.
Consider an experiment that examines beetle damage on cucumbers. The researcher replicates the experiment at five farms and on four fields at each farm.
There are two varieties of cucumbers, and the researcher measures beetle damage on each of 50 plants at the end of the season. The researcher wants to measure differences in how much damage the two varieties sustain.
The experiment then has the following factors: VARIETY, FARM, and FIELD.
Fixed factors can be thought of in terms of differences.
The effect of a categorical fixed factor is measured by differences from the overall mean.
The effect of a continuous fixed predictor (usually called a covariate) is defined by its slope: how the mean of the dependent variable changes with the value of the predictor.
The output for fixed predictors, then, gives estimates of mean differences or slopes.
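As a concrete illustration (with made-up numbers, not data from the cucumber study), the effect of a categorical fixed factor is each group mean's difference from the overall mean:

```python
from statistics import mean

# Hypothetical beetle-damage scores for two cucumber varieties
damage = {
    "variety_A": [12.0, 15.0, 11.0, 14.0],
    "variety_B": [20.0, 22.0, 19.0, 23.0],
}

grand_mean = mean(score for scores in damage.values() for score in scores)

# The fixed effect of each variety: its mean's difference from the grand mean
effects = {v: mean(scores) - grand_mean for v, scores in damage.items()}
print(effects)  # variety_A: -4.0, variety_B: 4.0
```

The two effects sum to zero by construction, which is exactly the "differences from the overall mean" framing above.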
Conclusions regarding fixed factors are particular to the values of these factors. For example, if one variety of cucumber has significantly less damage than the other, you can make conclusions about these two varieties. This says nothing about cucumber varieties that were not tested.
Random factors, on the other hand, are defined by a distribution and not by differences.
The values of a random factor are assumed to be chosen from a population with a normal distribution with a certain variance.
The output for a random factor is an estimate of this variance and not a set of differences from a mean. Effects of random factors are measured in terms of variance, not mean differences.
For example, we may find that the variance among fields makes up a certain percentage of the overall variance in beetle damage. But we’re not measuring how much higher the beetle damage is in one field compared to another. We only care that they vary, not how much they differ.
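A small simulation (a sketch with invented parameters, not estimates from any real study) shows what "measured as a variance" means. Farm effects are drawn from a normal distribution, and the random factor's contribution is a variance ratio rather than a set of mean differences:

```python
import random
from statistics import mean, pvariance

random.seed(1)

# Hypothetical parameters: farm effects come from a normal distribution
# with variance 9; plant-to-plant noise has variance 25.
farm_sd, plant_sd = 3.0, 5.0
n_farms, plants_per_farm = 500, 50

# The random factor's "effect" is this variance ratio, not a set of differences:
prop_due_to_farms = farm_sd**2 / (farm_sd**2 + plant_sd**2)
print(f"Proportion of variance due to farms: {prop_due_to_farms:.3f}")

# Simulate damage scores and confirm that farm means really do vary
farm_means = []
for _ in range(n_farms):
    farm_effect = random.gauss(0, farm_sd)
    scores = [10 + farm_effect + random.gauss(0, plant_sd)
              for _ in range(plants_per_farm)]
    farm_means.append(mean(scores))

# Variance of farm means is about farm variance + plant variance / plants per farm
print(f"Observed variance of farm means: {pvariance(farm_means):.2f}")
```

Note that the code never asks which farm is worse than which; it only quantifies how much farms vary.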
Software specification of fixed and random factors
Before we get into specifying factors as fixed and random, please note:
When you specify random effects in software, such as random intercepts and slopes, the random factor itself is the Subject variable. How you specify it differs across software procedures. Some ask for explicit code like “Subject=FARM.” Others simply put FARM after a || or some other indicator of the subject for which random intercepts and slopes are measured.
Some software, like SAS and SPSS, allow you to specify the random factors two ways. One is to list the random effects (like intercept) along with the random factor listed as a subject.
/Random Intercept | Subject(Farm)
The other is to list the random factors themselves.
Both of those give you the same output.
Other software, like R and Stata, only allow the former. And neither uses the term “Subject” anywhere. But that random factor, the subject, still needs to be specified after the | or || in the random part of the model.
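For reference, here is roughly how a random intercept for FARM (the "subject") is written in several packages. These are illustrative sketches under assumed variable names (damage, variety, farm, cucumbers) from the example above, not copy-paste-ready syntax; check your version's documentation:

```
SPSS:   MIXED damage BY variety
          /FIXED=variety
          /RANDOM=INTERCEPT | SUBJECT(farm)       * or equivalently: /RANDOM=farm

SAS:    PROC MIXED;
          CLASS variety farm;
          MODEL damage = variety;
          RANDOM intercept / SUBJECT=farm;        /* or equivalently: RANDOM farm; */

R:      lmer(damage ~ variety + (1 | farm), data = cucumbers)    # lme4

Stata:  mixed damage i.variety || farm:
```

Notice that in every package, the random factor (farm) appears somewhere in the random part of the model, whether or not the word "Subject" is used.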
Specifying fixed effects is pretty straightforward.
Situations that indicate a factor should be specified as fixed:
1. The factor is the primary treatment that you want to compare.
In our example, VARIETY is definitely fixed as the researcher wants to compare the mean beetle damage on the two varieties.
2. The factor is a secondary control variable, and you want to control for differences in the specific values of this factor.
Say the researcher chose these farms specifically for some feature they had, such as specific soil types or topographies that may affect beetle damage. If the researcher wants to compare the farms as representatives of those soil types, then FARM should be fixed.
3. The factor has only two values.
Even if everything else indicates that a factor should be random, if it has only two values, its variance cannot be estimated reliably, and it should be specified as fixed.
Situations that indicate a factor should be specified as random:
1. Your interest is in quantifying how much of the overall variance to attribute to this factor.
If you want to know how much of the variation in beetle damage you can attribute to the farm at which the damage took place, FARM would be random.
2. Your interest is not in knowing which specific means differ, but you want to account for the variation in this factor.
If the farms were chosen at random, FARM should be random.
This choice of the specific farms involved in the study is key. If you could rerun the study using different specific farms (different values of the Farm factor) and still draw the same conclusions, then Farm should be random. However, if you want to compare or control for these particular farms, then Farm is a fixed factor.
3. You would like to generalize the conclusions about this factor to the whole population.
There is nothing about comparing these specific fields that is of interest to the researcher. Rather, the researcher wants to generalize the results of this experiment to all fields, so FIELD is random.
4. Any interaction with a random factor is also random.
How the factors of a model are specified can have great influence on the results of the analysis and on the conclusions drawn.
One of the most confusing things about mixed models arises from the way they're coded in most statistical software. Of the packages I’ve used, only HLM sets it up differently, so this doesn’t apply to it.

But for the rest (SPSS, SAS, R’s lme and lmer, and Stata), the basic syntax requires the same pieces of information.
1. The dependent variable
2. The predictor variables for which to calculate fixed effects
Statistical models, such as general linear models (linear regression, ANOVA, MANOVA), linear mixed models, and generalized linear models (logistic regression, Poisson regression, etc.) all have the same general form.
On the left side of the equation is one or more response variables, Y. On the right hand side is one or more predictor variables, X, and their coefficients, B. The variables on the right hand side can have many forms and are called by many names.
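Spelled out in the Y, X, and B notation used here, that general form (for a single response variable) is:

Y = B0 + B1X1 + B2X2 + … + BkXk + error

where each B is a coefficient and the error term captures what the Xs don’t explain.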
There are subtle distinctions in the meanings of these names. Unfortunately, though, there are two practices that make them more confusing than they need to be.
First, they are often used interchangeably. One person may use “predictor variable” and “independent variable” interchangeably while another may not, so the listener may read subtle distinctions into a term that the speaker never intended.
Second, the same terms are used differently in different fields or research situations. So if you are an epidemiologist who does research on mostly observed variables, you probably have been trained with slightly different meanings to some of these terms than if you’re a psychologist who does experimental research.
Even worse, statistical software packages use different names for similar concepts, even among their own procedures. This quest for precision often creates more confusion. (It’s hard enough without switching the words!)
Here are some common terms that all refer to a variable in a model that is proposed to affect or predict another variable.
I’ll give you the different definitions and implications, but it’s very likely that I’m missing some. If you see a term that means something different than you understand it, please add it to the comments. And please tell us which field you primarily work in.
Predictor Variable, Predictor
This is the most generic of the terms. There are no implications for being manipulated, observed, categorical, or numerical. It does not imply causality.
A predictor variable is simply used for explaining or predicting the value of the response variable. Used predominantly in regression.
Independent Variable (IV)

I’ve seen Independent Variable (IV) used in several different ways.
1. It implies causality: the independent variable affects the dependent variable. This usage is predominant in ANOVA models where the Independent Variable is manipulated by the experimenter. If it is manipulated, it’s generally categorical and subjects are randomly assigned to conditions.
2. It does not imply causality, but it is a key predictor variable for answering the research question. In other words, it is in the model because the researcher is interested in understanding its relationship with the dependent variable. In other words, it’s not a control variable.
3. It does not imply causality or the importance of the variable to the research question. But it is uncorrelated (independent) of all other predictors.
Honestly, I only recently saw someone define the term Independent Variable this way. Under this definition, predictor variables cannot be independent variables if they are correlated with each other at all. It surprised me, but it’s good to know that some people mean this when they use the term.
Explanatory Variable

A predictor variable in a model where the main point is not to predict the response variable, but to explain the relationship between X and Y.
Control Variable

A predictor variable that may be related to or affect the dependent variable, but is not really of interest to the research question.
Covariate

Generally a continuous predictor variable, used in both ANCOVA (analysis of covariance) and regression. Some people use this term to refer to all predictor variables in regression, but it really means continuous predictors. Adding a covariate to ANOVA (analysis of variance) turns it into ANCOVA.
Sometimes covariate implies that the variable is a control variable (as opposed to an independent variable), but not always.
And sometimes people use covariate to mean control variable, either numerical or categorical.
This one is so confusing it got its own Confusing Statistical Terms article.
Confounding Variable, Confounder
These terms are used differently in different fields. In experimental design, it’s used to mean a variable whose effect cannot be distinguished from the effect of an independent variable.
In observational fields, it’s used to mean one of two situations. The first is a variable that is so correlated with an independent variable that it’s difficult to separate out their effects on the response variable. The second is a variable that causes the independent variable’s effect on the response.
The distinctions between those interpretations are slight but important.
Exposure Variable

This is a term for an independent variable in some fields, particularly epidemiology. It’s the key predictor variable.
Risk Factor

Another epidemiology term for a predictor variable. Unlike the term “Factor” listed below, it does not imply a categorical variable.
Factor

A categorical predictor variable. It may or may not indicate a cause/effect relationship with the response variable (this depends on the study design, not the analysis).
Independent variables in ANOVA are almost always called factors. In regression, they are often referred to as indicator variables, categorical predictors, or dummy variables. They are all the same thing in this context.
Also, please note that Factor has completely other meanings in statistics, so it too got its own Confusing Statistical Terms article.
Feature

Used in machine learning and predictive models, this is simply a predictor variable.
Grouping Variable

Same as a factor.
Fixed Factor

A categorical predictor variable in which the specific values of the categories are intentional and important, often chosen by the experimenter. Examples include experimental treatments or demographic categories, such as sex and race.
If you’re not doing a mixed model (and you should know if you are), all your factors are fixed factors. For a more thorough explanation of fixed and random factors, see Specifying Fixed and Random Factors in Mixed or Multi-Level Models.
Random Factor

A categorical predictor variable in which the specific values of the categories were randomly assigned. Generally used in mixed modeling. Examples include subjects or random blocks.
For a more thorough explanation of fixed and random factors, see Specifying Fixed and Random Factors in Mixed or Multi-Level Models.
Blocking Variable

This term is generally used in experimental design, but I’ve also seen it in randomized controlled trials.
A blocking variable is a variable that indicates an experimental block: a cluster or experimental unit that restricts complete randomization and that often results in similar response values among members of the block.
Blocking variables can be either fixed or random factors. They are never continuous.
Dummy Variable

A categorical variable that has been dummy coded. Dummy coding (also called indicator coding) is usually used in regression models, but not ANOVA. A dummy variable can have only two values: 0 and 1. When a categorical variable has more than two values, it is recoded into multiple dummy variables.
Indicator Variable

Same as a dummy variable.
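Dummy coding as described above can be sketched in a few lines of Python (the data and variable names here are hypothetical, and the choice of reference level is just one common convention):

```python
# Hypothetical three-level categorical variable
colors = ["red", "green", "blue", "green", "red"]

# Choose a reference level; each remaining level gets its own 0/1 dummy
levels = sorted(set(colors))        # ['blue', 'green', 'red']
reference, *coded_levels = levels   # 'blue' serves as the reference level

dummies = {
    level: [1 if value == level else 0 for value in colors]
    for level in coded_levels
}
print(dummies)
# A k-level variable needs k - 1 dummies: 'blue' is coded as all zeros
```

An observation in the reference category gets a 0 on every dummy, which is why a variable with k categories needs only k - 1 dummy variables.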
The Take Away Message
Whenever you’re using technical terms in a report, an article, or a conversation, it’s always a good idea to define your terms. This is especially important in statistics, which is used in many, many fields, each of which adds its own subtleties to the terminology.