Like some of the other terms in our list (level and beta), GLM has two different meanings.
It’s a little different from the others, though, because it’s an abbreviation for two distinct terms:
General Linear Model and Generalized Linear Model.
It’s extra confusing because their names are so similar on top of having the same abbreviation.
And, oh yeah, Generalized Linear Models are an extension of General Linear Models.
And neither should be confused with Generalized Linear Mixed Models, abbreviated GLMM.
Naturally. (more…)
Odds ratios are one of those concepts in statistics that are just really hard to wrap your head around. Although probability and odds both measure how likely it is that something will occur, probability is just so much easier to understand for most of us.
I’m not sure if it’s just a more intuitive concept, or if it’s something we’re taught so much earlier that it’s more ingrained. In either case, without a lot of practice, most people won’t have an immediate understanding of how likely something is if it’s communicated through odds.
So why not always use probability? (more…)
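To see the difference concretely, here’s a minimal sketch in Python. The probabilities are made up purely for illustration; the point is just the conversion from probability to odds, and from two sets of odds to an odds ratio.

```python
# Minimal sketch: probability vs. odds, and how an odds ratio is formed.
# The probabilities below are hypothetical, chosen only for illustration.

def odds(p):
    """Odds = probability of the event divided by probability of the non-event."""
    return p / (1 - p)

p_treatment = 0.60   # hypothetical probability of success in a treatment group
p_control = 0.25     # hypothetical probability of success in a control group

odds_treatment = odds(p_treatment)   # 0.60 / 0.40 = 1.5
odds_control = odds(p_control)       # 0.25 / 0.75 = 0.33

odds_ratio = odds_treatment / odds_control   # 1.5 / 0.33 = 4.5

print(f"Odds (treatment): {odds_treatment:.2f}")
print(f"Odds (control):   {odds_control:.2f}")
print(f"Odds ratio:       {odds_ratio:.2f}")
```

Notice that the ratio of the two probabilities is 0.60 / 0.25 = 2.4, while the odds ratio is 4.5. The two can diverge quite a bit, which is exactly why odds ratios trip people up.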
I am reviewing your notes from your workshop on assumptions. You have made it very clear how to analyze normality for regressions, but I could not find how to determine normality for ANOVAs. Do I check for normality for each independent variable separately? Where do I get the residuals? What plots do I run? Thank you!
I received this great question this morning from a past participant in my Assumptions of Linear Models workshop.
It’s one of those quick questions without a quick answer. Or rather, without a quick and useful answer. The quick answer is:
Do it exactly the same way. All of it.
The longer, useful answer is this: (more…)
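As a rough illustration of what “exactly the same way” looks like in practice, here is a minimal sketch in Python with statsmodels. The data are simulated and all variable names are hypothetical; the point is that a one-way ANOVA is fit as a linear model, the residuals come from that fitted model (not from each variable separately), and the normality checks are the same ones you’d use for regression residuals.

```python
# Minimal sketch: checking normality for a one-way ANOVA the same way as for regression.
# The data are simulated and all names are hypothetical.
import numpy as np
import pandas as pd
import scipy.stats as stats
import matplotlib.pyplot as plt
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
df = pd.DataFrame({
    "group": np.repeat(["A", "B", "C"], 30),
    "y": np.concatenate([
        rng.normal(10, 2, 30),
        rng.normal(12, 2, 30),
        rng.normal(15, 2, 30),
    ]),
})

# A one-way ANOVA is just a linear model with a categorical predictor,
# so the residuals come from the fitted model as a whole.
model = smf.ols("y ~ C(group)", data=df).fit()
residuals = model.resid

# The same diagnostics you'd run on regression residuals:
stats.probplot(residuals, dist="norm", plot=plt)   # Q-Q plot of the residuals
plt.show()
print(stats.shapiro(residuals))                    # a formal test, if you want one
```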
R² is such a lovely statistic, isn’t it? Unlike so many of the others, it makes sense: the percentage of variance in Y accounted for by a model.
I mean, you can actually understand that. So can your grandmother. And the clinical audience you’re writing the report for.
A big R² is always good and a small one is always bad, right?
Well, maybe. (more…)
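As a quick reminder of what that percentage actually is, here’s a minimal sketch with simulated data. The numbers are made up; the point is just the sums-of-squares decomposition behind R².

```python
# Minimal sketch: R-squared as the proportion of variance in y accounted for by the model.
# The data are simulated purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, 100)
y = 3 + 0.5 * x + rng.normal(0, 2, 100)    # a true linear trend plus plenty of noise

# Fit a straight line by least squares.
slope, intercept = np.polyfit(x, y, 1)
y_hat = intercept + slope * x

ss_res = np.sum((y - y_hat) ** 2)          # variation left unexplained by the model
ss_tot = np.sum((y - y.mean()) ** 2)       # total variation in y
r_squared = 1 - ss_res / ss_tot            # proportion of variance accounted for

print(f"R-squared: {r_squared:.3f}")
```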
Generalized linear models, linear mixed models, generalized linear mixed models, marginal models, GEE models. You’ve probably heard of more than one of them and you’ve probably also heard that each one is an extension of our old friend, the general linear model.
This is true, and they extend our old friend in different ways, particularly in regard to the measurement level of the dependent variable and the independence of the measurements. So while the names are similar (and confusing), the distinctions are important.
It’s important to note here that I am glossing over many, many details in order to give you a basic overview of some important distinctions. These are complicated models, but I hope this overview gives you a starting place from which to explore more. (more…)
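To make those distinctions a little more tangible, here is a rough sketch of how each model might be specified in Python with statsmodels. The data, variable names, and effect sizes are all simulated and hypothetical; the point is only how the two distinctions above, the measurement level of the dependent variable and the (in)dependence of the measurements, show up in the model specification.

```python
# Rough sketch: the same predictor x, modeled under different extensions of the
# general linear model. All data and names are simulated and hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# Toy repeated-measures data: 40 subjects, 4 measurements each.
rng = np.random.default_rng(1)
n_subjects, n_obs = 40, 4
df = pd.DataFrame({
    "subject": np.repeat(np.arange(n_subjects), n_obs),
    "x": rng.normal(size=n_subjects * n_obs),
})
subject_effect = np.repeat(rng.normal(scale=0.5, size=n_subjects), n_obs)
df["y_cont"] = 2 + 0.8 * df["x"] + subject_effect + rng.normal(size=len(df))
lin_pred = 0.8 * df["x"] + subject_effect
df["y_bin"] = (rng.uniform(size=len(df)) < 1 / (1 + np.exp(-lin_pred))).astype(int)

# General linear model: continuous outcome, observations treated as independent.
ols_fit = smf.ols("y_cont ~ x", data=df).fit()

# Generalized linear model: non-normal (here binary) outcome, still independent.
glm_fit = smf.glm("y_bin ~ x", data=df, family=sm.families.Binomial()).fit()

# Linear mixed model: continuous outcome, accounts for clustering within subjects.
lmm_fit = smf.mixedlm("y_cont ~ x", data=df, groups=df["subject"]).fit()

# GEE / marginal model: binary outcome with correlated measurements within subjects.
gee_fit = smf.gee("y_bin ~ x", groups="subject", data=df,
                  family=sm.families.Binomial(),
                  cov_struct=sm.cov_struct.Exchangeable()).fit()

print(ols_fit.params, glm_fit.params, lmm_fit.params, gee_fit.params, sep="\n\n")
```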
You may have noticed conflicting advice about whether to leave insignificant effects in a model or take them out in order to simplify the model.
One effect of leaving in insignificant predictors is on p-values: they use up precious degrees of freedom, which matters in small samples. But if your sample isn’t small, the effect is negligible.
The bigger effect is on interpretation, and really the conflicting advice comes down to whether it aids interpretation to leave them in. Models can get so cluttered that it’s hard to figure out what’s going on, and it makes sense to eliminate effects that aren’t serving a purpose. But even insignificant effects can have a purpose. (more…)
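Here is a small simulated sketch of the degrees-of-freedom point. All names and numbers are hypothetical; with a reasonably sized sample, dropping an insignificant predictor barely changes the rest of the model.

```python
# Minimal sketch: a model with and without an insignificant predictor.
# The data are simulated; x2 is unrelated to y by construction.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(7)
n = 200
df = pd.DataFrame({
    "x1": rng.normal(size=n),
    "x2": rng.normal(size=n),                 # no true effect on y
})
df["y"] = 1 + 0.5 * df["x1"] + rng.normal(size=n)

full = smf.ols("y ~ x1 + x2", data=df).fit()
reduced = smf.ols("y ~ x1", data=df).fit()

# With n = 200, the one extra degree of freedom spent on x2 barely moves
# the coefficient or p-value for x1.
print(full.summary().tables[1])
print(reduced.summary().tables[1])
print("Residual df:", full.df_resid, "vs", reduced.df_resid)
```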