I have recently worked with two clients who were running generalized linear mixed models in SPSS.

Both had repeated measures experiments with a binary outcome.

The details of the designs were quite different, of course. But both had pretty complicated combinations of within-subjects factors.

Fortunately, both clients are intelligent, have a good background in statistical modeling, and are willing to do the work to learn this. So in both cases, we made a lot of progress in just a couple of meetings.

I found it interesting, though, that both were getting stuck on the same subtle point. It’s the same point *I* was missing for a long time in my own learning of mixed models.

Once I finally got it, a huge light bulb turned on.

### That Subtle, Elusive, Important Difference between Repeated and Random Effects

There are *two different ways* of dealing with repeated measures in a mixed model.

These two ways have both theoretical and analytical implications, so you need to make deliberate choices about which to use.

One is through *modeling the random effects*. This is called G-side modeling because it’s about estimating parts of the G matrix: the covariance matrix of the random effects.

(Warning: not every software manual or textbook calls it G. D is also common.)

The other is through *modeling the multiple residuals for each subject*. This is called R-side modeling because it estimates the R matrix: the covariance matrix of the residuals for each subject. (Warning: this matrix is also often called Sigma.)
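Putting the two side by side helps: in the standard linear mixed model formulation, both matrices appear in the marginal covariance of the outcome.

```latex
\begin{aligned}
\mathbf{y} &= \mathbf{X}\boldsymbol{\beta} + \mathbf{Z}\mathbf{u} + \boldsymbol{\varepsilon},\\
\mathbf{u} &\sim N(\mathbf{0},\, \mathbf{G}), \qquad
\boldsymbol{\varepsilon} \sim N(\mathbf{0},\, \mathbf{R}),\\
\operatorname{Var}(\mathbf{y}) &= \mathbf{Z}\mathbf{G}\mathbf{Z}^{\top} + \mathbf{R}.
\end{aligned}
```

G-side modeling chooses a structure for G, the covariance of the random effects **u**; R-side modeling chooses a structure for R, the covariance of the residuals. Both feed into the same marginal covariance, which is one reason they are so easy to confuse.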

Mixed models for clustered designs *don’t have this distinction*. They do everything through modeling the random effects (which is hard enough on its own).

### Why understanding the distinction is so important and so hard to figure out

1. Your statistical software package doesn’t spell this out for you, so you need to understand what each command is doing.

Read the manual before clicking through menus.

A *general* rule of thumb is that any command that mentions “Repeated” is R-side modeling and is about residuals. Any command that mentions “Random” is G-side modeling and is about random effects.

But of course not all software uses this language.
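For example, in SPSS’s MIXED command the two sides show up as different subcommands. This is only a sketch with hypothetical variable names (`outcome`, `time`, `treatment`, `id`); in a simple design you would typically specify one subcommand or the other, not both:

```
* G-side: a random intercept per subject (estimates part of G).
MIXED outcome BY time treatment
  /FIXED=time treatment time*treatment
  /RANDOM=INTERCEPT | SUBJECT(id) COVTYPE(VC).

* R-side: correlated residuals within each subject (estimates R).
MIXED outcome BY time treatment
  /FIXED=time treatment time*treatment
  /REPEATED=time | SUBJECT(id) COVTYPE(AR1).
```

Notice the naming follows the rule of thumb: RANDOM is about random effects, REPEATED is about residuals.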

2. Most designs are simple enough that you can model one or the other *but not both*. If you specify both, you start to get warning messages about Hessians or iterations not converging.

Other designs have enough degrees of freedom at each level that you can (and need to) model both.

So if you have a repeated measures design but are handling it through random effects, it’s very likely that you *can’t* also model the repeated (R-side) effects. This is very counter-intuitive, but important.

3. You need to define a Subject and a covariance structure, whether you’re modeling G or R.

The subject can be obvious in simple designs, but isn’t always.

The covariance structure usually has to be chosen through a combination of logic and testing different models. Some covariance structures make sense for either G or R, but most only make sense in one or the other.
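As an example of a structure that belongs on one side only: a first-order autoregressive (AR1) structure is a natural R-side choice for equally spaced repeated measures, because it lets the residual correlation decay with the time lag. For a subject measured on four occasions it looks like this:

```latex
\mathbf{R}_i \;=\; \sigma^2
\begin{pmatrix}
1      & \rho   & \rho^2 & \rho^3 \\
\rho   & 1      & \rho   & \rho^2 \\
\rho^2 & \rho   & 1      & \rho   \\
\rho^3 & \rho^2 & \rho   & 1
\end{pmatrix}
```

The same structure rarely makes sense for G, whose dimensions correspond to random effects (intercepts, slopes), not to ordered time points.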

4. In a few simple designs you can get identical results from one specific random effects model and one specific repeated effects model. Once you understand how it all works, it’s pretty logical. But it makes the distinction harder to figure out.
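The classic example of that equivalence is a single random intercept per subject (G-side) versus compound symmetry residuals (R-side). The random intercept model implies, for two measurements $j \neq k$ on subject $i$:

```latex
\operatorname{Var}(y_{ij}) = \sigma^2_u + \sigma^2_\varepsilon,
\qquad
\operatorname{Cov}(y_{ij},\, y_{ik}) = \sigma^2_u ,
```

which is exactly the compound symmetry structure: equal variances and one common within-subject covariance. That is why the two specifications can return identical estimates, even though one is stated in terms of random effects and the other in terms of residuals.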

### Comments

I’ve been having problems at exactly this same point (and could not understand why). Thanks for shedding some light!

Would you have any videos that demonstrate how to perform and interpret generalized linear mixed models in SPSS? I need some help with GLMM for my clinical trial data.