It seems that many researchers need to learn multilevel and mixed models, and I have to say, it's not easy to do on your own.
I too went to graduate school before they were taught in classes. We did learn mixed models in the form of split-plot designs, but things have progressed quite a bit since then. So I too have had to learn them without the benefit of a class or a teacher, and I feel your pain. But I've struggled through and learned a lot, although I had the advantage of seeing many, many examples as I've worked with clients over the years, and of having knowledgeable colleagues to ponder and learn with. So if reading Singer makes you feel stupid, you're not alone.
I’ve also had the advantage of really good training in the general linear model. This came not only from numerous classes in graduate school, but also, once again, from seeing so many examples in consulting and having to explain the model so many times. (Teaching is when you learn best.)
Why is this important? Because multilevel models are just general linear models with extra sources of variation. It’s much harder to learn multilevel models if you’re still unsure about some of the concepts in linear regression. You want to be free to focus on figuring out what a random slope really means, not what a centered predictor means.
So here are 4 concepts in linear regression that you really, really should get clear about before you attempt to read the Singer article. It won’t make it easy, but it will make it easier.
1. What centering does to your variables: Intercepts are pretty important in multilevel models, so centering is often required to make intercepts meaningful.
2. Working with categorical and continuous predictors: You will want to use both dummy and effect coding in different situations. Likewise, you want to understand what it means to treat a variable as continuous versus categorical: what different information does each choice give you, and what does it mean? Even if you’re a regular ANOVA user, it may make sense to treat time as continuous, not categorical.
3. Interactions: I know you know that you have to leave in the lower-order terms for each interaction (right?). Beyond that, make sure you can interpret an interaction regardless of whether the variables in it are both continuous, both categorical, or one of each.
4. Polynomial terms: Random slopes can be hard enough to grasp (and keep straight from random intercepts). Random curvature is worse. Know thy Polynomials.
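To make concept 1 concrete, here is a minimal sketch, with made-up toy data, of what centering does. The slope is untouched; only the intercept changes, from "predicted y at x = 0" (often meaningless) to "predicted y at the average x":

```python
# Hypothetical toy data: a predictor x and an outcome y (no noise needed).
x = [2, 4, 6, 8]
y = [65, 70, 80, 85]

def simple_ols(x, y):
    """Closed-form simple regression: returns (intercept, slope)."""
    n = len(x)
    xbar = sum(x) / n
    ybar = sum(y) / n
    slope = (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
             / sum((xi - xbar) ** 2 for xi in x))
    return ybar - slope * xbar, slope

# Raw predictor: the intercept is the predicted y when x = 0,
# a value that may lie outside the data entirely.
b0_raw, b1_raw = simple_ols(x, y)

# Centered predictor: subtract the mean of x from each value.
xbar = sum(x) / len(x)
xc = [xi - xbar for xi in x]
b0_c, b1_c = simple_ols(xc, y)

# The slope is identical in both fits; the centered intercept is now
# the predicted y at the average x, which here equals the mean of y.
print(b1_raw, b1_c, b0_raw, b0_c)
```

This is why centering matters so much in multilevel models: random intercepts are only interpretable if "x = 0" is a meaningful place to be.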
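For concept 2, a sketch of what dummy versus effect coding buys you, using hypothetical group data. (The closed forms below hold for a balanced one-way design, where the regression coefficients reduce to differences among group means; in general you would get them from fitting the model.)

```python
# Hypothetical toy data: outcome scores in three groups (balanced design).
groups = {"control": [10, 12], "drug_a": [14, 16], "drug_b": [20, 22]}

means = {g: sum(v) / len(v) for g, v in groups.items()}
grand_mean = sum(means.values()) / len(means)  # valid because balanced

# Dummy coding (reference group = control):
#   intercept        = mean of the reference group
#   each coefficient = that group's mean MINUS the reference mean
dummy = {"intercept": means["control"],
         "drug_a": means["drug_a"] - means["control"],
         "drug_b": means["drug_b"] - means["control"]}

# Effect coding (control coded -1 on both columns):
#   intercept        = grand mean
#   each coefficient = that group's mean MINUS the grand mean
effect = {"intercept": grand_mean,
          "drug_a": means["drug_a"] - grand_mean,
          "drug_b": means["drug_b"] - grand_mean}
```

Same model fit, same predictions, but the coefficients answer different questions: "how far from control?" versus "how far from the overall average?" Knowing which question each coding answers is exactly what you need when multilevel software picks one for you.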
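For concept 3, a sketch of the most common multilevel-relevant interaction: one categorical and one continuous variable. The data below are constructed (no noise) so that group A follows y = 1 + 2x and group B follows y = 1 + 5x; the interaction coefficient in a dummy-coded model y ~ x + group + x:group is then exactly the difference between the two slopes.

```python
# Hypothetical data with a built-in group-by-x interaction:
# group A: y = 1 + 2x;  group B: y = 1 + 5x  (exact, no noise).
data = {"A": ([0, 1, 2, 3], [1, 3, 5, 7]),
        "B": ([0, 1, 2, 3], [1, 6, 11, 16])}

def slope(x, y):
    """Closed-form simple-regression slope."""
    n = len(x)
    xbar, ybar = sum(x) / n, sum(y) / n
    return (sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y))
            / sum((xi - xbar) ** 2 for xi in x))

slope_a = slope(*data["A"])
slope_b = slope(*data["B"])

# With A as the reference group, the x:group interaction coefficient
# is the slope in B minus the slope in A: "how much steeper is B?"
interaction = slope_b - slope_a
```

Reading an interaction as "the difference in slopes between groups" is the same mental move you will need for a cross-level interaction in a multilevel model.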
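And for concept 4, a sketch of the one fact about polynomial terms that trips people up, shown with a made-up quadratic: the coefficient on the linear term is not an overall slope, it is the slope of the curve at x = 0. Centering x moves that point to the mean, which also ties this back to concept 1.

```python
# Hypothetical quadratic trend: y = 2 + 3x + 0.5*x**2 (exact, no noise).
b0, b1, b2 = 2.0, 3.0, 0.5

def y(x):
    return b0 + b1 * x + b2 * x * x

# The instantaneous slope of the curve at any x is b1 + 2*b2*x,
# so b1 itself is only the slope at x = 0.
def slope_at(x):
    return b1 + 2 * b2 * x

# Rewriting the model around a centered predictor xc = x - m gives
# y = c0 + c1*xc + c2*xc**2 with c1 = b1 + 2*b2*m: the linear
# coefficient becomes the slope at the mean of x.
m = 4.0                      # suppose the mean of x is 4
c1 = b1 + 2 * b2 * m

# Sanity check: c1 matches a finite-difference slope of the curve at m.
h = 1e-6
numeric = (y(m + h) - y(m - h)) / (2 * h)
```

Once you can say in words what b1 and b2 each mean here, "random curvature" in a growth model stops being mysterious: it is just this b2 varying across subjects.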
And finally, understand how they all fit together. These 4 concepts really come down to understanding what the estimates in your model mean. How to interpret them. And that will come largely from practice, asking questions, and relearning the basics in the context of your data.
The advantage is that a solid grasp of these concepts will serve you well in learning any advanced modeling: nonlinear models, logistic regression, Cox regression, and so on.