
Specifying Fixed and Random Factors in Mixed Models

January 10th, 2022

One of the difficult decisions in mixed modeling is deciding which factors are fixed and which are random. And as difficult as it is, it’s also very important. Correctly specifying the fixed and random factors of the model is vital to obtain accurate analyses.

Now, you may think of the effects in the model, rather than the factors themselves, as what is fixed or random. If so, remember that each term in the model (factor, covariate, interaction, or other multiplicative term) has an effect. We’ll come back to how the model measures the effects of fixed and random factors.

Sadly, the definitions in many texts don’t help much with decisions to specify factors as fixed or random. Textbook examples are often artificial and hard to apply to the real, messy data you’re working with.

Here’s the real kicker. The same factor can often be fixed or random, depending on the researcher’s objective.

That’s right, there isn’t always a right or a wrong decision here. It depends on the inferences you want to make about this factor. This article outlines a different way to think about fixed and random factors.

An Example

Consider an experiment that examines beetle damage on cucumbers. The researcher replicates the experiment at five farms and on four fields at each farm.

There are two varieties of cucumbers, and the researcher measures beetle damage on each of 50 plants at the end of the season. The researcher wants to measure differences in how much damage the two varieties sustain.

The experiment then has the following factors: VARIETY, FARM, and FIELD.

Fixed factors can be thought of in terms of differences.

The effect of a categorical fixed factor is measured by differences from the overall mean.

The effect of a continuous fixed predictor (usually called a covariate) is defined by its slope: how the mean of the dependent variable changes as the value of the predictor changes.

The output for fixed predictors, then, gives estimates of mean differences or slopes.

Conclusions regarding fixed factors are particular to the values of these factors. For example, if one variety of cucumber has significantly less damage than the other, you can make conclusions about these two varieties. This says nothing about cucumber varieties that were not tested.

Random factors, on the other hand, are defined by a distribution and not by differences.

The values of a random factor are assumed to be drawn from a population that follows a normal distribution with some variance.

The output for a random factor is an estimate of this variance and not a set of differences from a mean. Effects of random factors are measured in terms of variance, not mean differences.

For example, we may find that the variance among fields makes up a certain percentage of the overall variance in beetle damage. But we’re not measuring how much higher the beetle damage is in one field compared to another. We only care that they vary, not how much they differ.
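One common way to express that percentage, assuming FARM and FIELD are both random factors, is as a ratio of variance components (the symbols here are illustrative, not the article’s notation):

\[
\frac{\sigma^2_{\text{field}}}{\sigma^2_{\text{farm}} + \sigma^2_{\text{field}} + \sigma^2_{\text{error}}}
\]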

Software Specification of Fixed and Random Factors

Before we get into specifying factors as fixed and random, please note:

When you’re specifying random effects in software, like random intercepts and slopes, the random factor itself is the Subject variable. The exact syntax differs across software procedures. Some specifically ask for code like “Subject=FARM.” Others just put FARM after a || or some other indicator of the subject for which random intercepts and slopes are measured.

Some software, like SAS and SPSS, allows you to specify the random factors in two ways. One is to list the random effects (like the intercept) along with the random factor listed as the subject.

/Random Intercept | Subject(Farm)

The other is to list the random factors themselves.

/Random Farm

Both of those give you the same output.

Other software, like R and Stata, allows only the former. And neither uses the term “Subject” anywhere. But that random factor, the subject, still needs to be specified after the | or || in the random part of the model.
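To make this concrete, here’s a minimal sketch of the two equivalent specifications in SAS PROC MIXED, using the cucumber example (the data set name cucumber and the variable names damage, variety, and farm are assumed for illustration):

proc mixed data=cucumber;
  class variety farm;
  model damage = variety;           /* fixed effect */
  random intercept / subject=farm;  /* random factor named as the subject */
run;

proc mixed data=cucumber;
  class variety farm;
  model damage = variety;
  random farm;                      /* random factor listed directly */
run;

Both runs estimate the same variance component for FARM.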

Specifying fixed effects is pretty straightforward.

Situations that indicate a factor should be specified as fixed:

1. The factor is the primary treatment that you want to compare.

In our example, VARIETY is definitely fixed as the researcher wants to compare the mean beetle damage on the two varieties.

2. The factor is a secondary control variable, and you want to control for differences in the specific values of this factor.

Say the researcher chose these farms specifically for some feature they had, such as specific soil types or topographies that may affect beetle damage. If the researcher wants to compare the farms as representatives of those soil types, then FARM should be fixed.

3. The factor has only two values.

Even if everything else indicates that a factor should be random, if it has only two values there is not enough information to estimate a variance, and it should be specified as fixed.

Situations that indicate a factor should be specified as random:

1. Your interest is in quantifying how much of the overall variance to attribute to this factor.

If you want to know how much of the variation in beetle damage you can attribute to the farm at which the damage took place, FARM would be random.

2. Your interest is not in knowing which specific means differ, but you want to account for the variation in this factor.

If the farms were chosen at random, FARM should be random.

This choice of the specific farms involved in the study is key. If you could rerun the study using different farms (different values of the FARM factor) and still draw the same conclusions, then FARM should be random. However, if you want to compare or control for these particular farms, then FARM is a fixed factor.

3. You would like to generalize the conclusions about this factor to the whole population.

There is nothing about comparing these specific fields that is of interest to the researcher. Rather, the researcher wants to generalize the results of this experiment to all fields, so FIELD is random.

4. Any interaction with a random factor is also random.

How the factors of a model are specified can have great influence on the results of the analysis and on the conclusions drawn.
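For the cucumber example, under the scenario where VARIETY is fixed and FARM and FIELD are both random (with fields nested within farms), a minimal sketch of the full specification in SAS PROC MIXED might look like this (data set and variable names assumed, as before):

proc mixed data=cucumber covtest;
  class variety farm field;
  model damage = variety / solution;  /* fixed: mean differences between varieties */
  random farm field(farm);            /* random: variance components for farms and for fields within farms */
run;

The output then reports estimated mean differences for VARIETY and variance estimates for FARM and FIELD(FARM).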


Multiple Imputation in a Nutshell

September 20th, 2021

Updated 9/20/2021

Imputation as an approach to missing data has been around for decades.

You probably learned about mean imputation in methods classes, only to be told to never do it for a variety of very good reasons. Mean imputation, in which each missing value is replaced, or imputed, with the mean of observed values of that variable, is not the only type of imputation, however.

Two Criteria for Better Imputations

Better, although still problematic, imputation methods have two qualities. They use other variables in the data set to predict the missing value, and they contain a random component.

Using other variables preserves the relationships among variables in the imputations. It feels like cheating, but it isn’t. It keeps estimates of relationships that involve the imputed variable from being attenuated (biased toward zero). Sure, underestimates are conservative, but they’re still biased.

The random component is important so that all missing values of a single variable are not exactly equal. Why is that important? If all imputed values are equal, standard errors for statistics using that variable will be artificially low.

There are a few different ways to meet these criteria. One example would be to use a regression equation to predict missing values, then add a random error term.
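Here’s a minimal sketch of that approach in SAS (the data set name have and the variable names y and x are hypothetical): fit the regression on the complete cases, then replace each missing value with its predicted value plus a normal random draw scaled by the root mean squared error.

proc reg data=have outest=est noprint;  /* fit y = b0 + b1*x on the complete cases */
  model y = x;
run;

data est;
  set est(rename=(Intercept=b0 x=b1 _RMSE_=rmse));
  keep b0 b1 rmse;                      /* keep only what the imputation step needs */
run;

data imputed;
  if _n_ = 1 then set est;              /* load b0, b1, rmse once; SET variables are retained */
  set have;
  if missing(y) then y = b0 + b1*x + rannor(27183)*rmse;  /* prediction plus random error */
run;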

Although this approach solves many of the problems inherent in mean imputation, one problem remains. Because the imputed value is an estimate (a predicted value), there is uncertainty about its true value. Every statistic has uncertainty, measured by its standard error. Statistics computed using imputed data have even more uncertainty than their standard errors measure.

Your statistical package, however, cannot distinguish between an imputed value and a real value, so it treats them as equally certain.

Since the standard errors of statistics based on imputed values are too small, corresponding p-values are also too small. P-values that are reported as smaller than they actually are? Those lead to Type I errors.

How Multiple Imputation Works

Multiple imputation solves this problem by incorporating the uncertainty inherent in imputation. It has four steps:

  1. Create m sets of imputations for the missing values using a good imputation process. This means it uses information from other variables and has a random component.
  2. The result is m full data sets. Each data set will have slightly different values for the imputed data because of the random component.
  3. Analyze each completed data set. Each set of parameter estimates will differ slightly because the data differs slightly.
  4. Combine the results, incorporating both the within-imputation and the between-imputation variation in the parameter estimates.

Remarkably, m, the number of imputations needed, can be as small as 5 to 10, although it depends on the percentage of data that are missing. A good multiple imputation model results in unbiased parameter estimates and a full sample size.
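In SAS, for example, these steps map onto PROC MI (steps 1 and 2), an ordinary analysis procedure run once per imputed data set (step 3), and PROC MIANALYZE (step 4). A minimal sketch for a regression analysis, with the data set name have and the variables y, x1, and x2 assumed for illustration:

proc mi data=have out=mi_out nimpute=5 seed=54321;  /* steps 1 and 2: five imputed data sets, stacked and indexed by _Imputation_ */
  var y x1 x2;
run;

proc reg data=mi_out outest=reg_est covout noprint;  /* step 3: fit the model once per imputed data set */
  model y = x1 x2;
  by _Imputation_;
run;

proc mianalyze data=reg_est;  /* step 4: pool the m sets of estimates */
  modeleffects Intercept x1 x2;
run;

Other packages follow the same impute, analyze, pool pattern.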

Doing multiple imputation well, however, is not always quick or easy. First, it requires that the missing data be missing at random. Second, it requires a very good imputation model. Creating a good imputation model requires knowing your data very well and having variables that will predict missing values.


Statistical Software Access From Home

March 30th, 2020

Of all the stressors you’ve got right now, accessing your statistical software (you know, the one sitting on your office computer) from home shouldn’t be one of them.

We’ve gotten some updates from some statistical software companies on how they’re making it easier to access the software you have a license to or to extend a free trial while you’re working from home.

(more…)


Member Training: What’s the Best Statistical Package for You?

February 1st, 2019

Choosing statistical software is part of The Fundamentals of Statistical Skill, and it’s a necessary step before learning a second package (something we recommend to anyone progressing from Stage 2 to Stage 3 and beyond).

You have many choices for software to analyze your data: R, SAS, SPSS, and Stata, among others. They are all quite good, but each has its own unique strengths and weaknesses.

(more…)


The Secret to Importing Excel Spreadsheets into SAS

January 21st, 2019

My poor colleague was pulling her hair out in frustration today.

You know when you’re trying to do something quickly, and it’s supposed to be easy, only it’s not? And you try every solution you can think of and it still doesn’t work?

And even in the great age of the Internet, which is supposed to know all the things you don’t, you still can’t find the answer anywhere?

Cue hair-pulling.

Here’s what happened: She was trying to import an Excel spreadsheet into SAS, and it didn’t work.

Instead she got:

(more…)


Tricks for Using Word to Make Statistical Syntax Easier

March 13th, 2017

We’ve talked a lot around here about the reasons to use syntax — not only menus — in your statistical analyses.

Regardless of which software you use, the syntax file is pretty much always a text file. This is true for R, SPSS, SAS, Stata — just about all of them.

This is important because it means you can use an unlikely tool to help you code: Microsoft Word.

I know what you’re thinking. Word? Really?

Yep, it’s true. Essentially it’s because Word has much better search-and-replace options than your stat software’s editor.

Here are a couple features of Word’s search-and-replace that I use to help me code faster:

(more…)