Cross-over trials provide a very powerful approach for comparing two treatment conditions. Research subjects get both treatment conditions, which we will label arbitrarily as A and B.
(more…)
Estimated marginal means (EMMs)—sometimes called least-squares means—are a powerful way to interpret and visualize results from linear and mixed-effects models. Yet many researchers struggle to extract, understand, and plot them.
In this 60-minute hands-on tutorial, participants will learn how to compute, interpret, and visualize EMMs using only base R functions together with the emmeans, car, and lme4 packages. We will start with simple linear models and progress to mixed models with random effects, highlighting how to obtain EMMs, confidence intervals, pairwise contrasts, and publication-ready base R plots. The session emphasizes conceptual understanding and practical code you can adapt immediately to your own analyses.
Note: This training is an exclusive benefit to members of the Statistically Speaking Membership Program and part of the Stat’s Amore Trainings Series. Each Stat’s Amore Training is approximately 90 minutes long.
Manolo Romero Escobar is a seasoned statistical consultant and psychometrician with a passion for helping researchers.
Throughout his career, Manolo has worked extensively as a research and statistical consultant. He has served a diverse range of clients including health researchers, educational institutions, and government agencies. With a focus on linear mixed effects modeling, latent variable modeling, and scale development, Manolo brings a wealth of knowledge and experience to every project he undertakes.
Manolo is also proficient in statistical programming languages such as R, SPSS, and Mplus, and has experience with Python and SQL. He is passionate about leveraging technology as an educational and training tool, and he continuously enhances his skills to stay at the forefront of his field.
He holds a B.A. and Licentiate degree in Psychology from Universidad del Valle de Guatemala and a M.A. in Psychology (Area: Developmental and Cognitive Processes) from York University.
Just head over and sign up for Statistically Speaking. You'll get access to this training webinar, 130+ other stats trainings, a pathway to work through the trainings that you need — plus the expert guidance you need to build statistical skill with live Q&A sessions and an ask-a-mentor forum.
In April and May, we’re doing something new: including in membership the workshop Interpreting (Even Tricky) Regression Coefficients with Karen Grace-Martin.
We’ll be releasing the first 3 of 6 modules in April and modules 4-6 in May and holding a special Q&A with Karen at the end of each month.
If you’ve ever wanted to know how to interpret your results or set up your model to get the information you needed, you’ll love this workshop.
Although it’s at Stage 2 and focuses entirely on linear models, everything applies to all sorts of regression models — logistic, multilevel, count models. All of them.
Level is a statistical term that is confusing because it has multiple meanings in different contexts (much like alpha and beta).
There are three different uses of the term Level in statistics that mean completely different things. What makes this especially confusing is that all three of them can be used in the exact same analysis context.
I’ll show you an example of that at the end.
So when you’re talking to someone who is learning statistics or who happens to be thinking of that term in a different context, this gets especially confusing.
The most widespread of these is levels of measurement. Stanley Stevens came up with this taxonomy of assigning numerals to variables in the 1940s. You probably learned about them in your Intro Stats course: the nominal, ordinal, interval, and ratio levels.
Levels of measurement is really a measurement concept, not a statistical one. It refers to how much information, and what type of information, a variable contains. Does it indicate an unordered category, a quantity with a meaningful zero point, and so on?
So if you hear the following phrases, you’ll know that we’re using the term level to mean measurement level:
It is important in statistics because it has a big impact on which statistics are appropriate for any given variable. For example, you would not do the same test of association between two variables measured at a nominal level as you would between two variables measured at an interval level.
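As a concrete illustration of that point, here is a minimal sketch (with made-up data) showing two different tests of association: a chi-square test for two nominal variables and a Pearson correlation for two interval variables.

```python
# Hypothetical illustration: the appropriate test of association depends
# on the level of measurement. All data below are simulated.
import numpy as np
from scipy.stats import chi2_contingency, pearsonr

# Two nominal variables: a contingency table of counts
# (e.g., treatment group by pass/fail)
table = np.array([[30, 10],
                  [18, 22]])
chi2, p_nominal, dof, _ = chi2_contingency(table)
print(f"Nominal x nominal: chi-square = {chi2:.2f}, p = {p_nominal:.4f}")

# Two interval variables: a Pearson correlation
rng = np.random.default_rng(42)
x = rng.normal(size=100)
y = 0.5 * x + rng.normal(size=100)
r, p_interval = pearsonr(x, y)
print(f"Interval x interval: r = {r:.2f}, p = {p_interval:.4f}")
```

Running the chi-square test on interval data, or a correlation on unordered categories, would give meaningless answers, which is why the level of measurement matters so much.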
That said, levels of measurement aren’t the only information you need about a variable’s measurement. There is, of course, a lot more nuance.
Another common usage of the term level is within experimental design and analysis. And this is for the levels of a factor. Although Factor itself has multiple meanings in statistics, here we are talking about a categorical independent variable.
In experimental design, the predictor variables (also often called Independent Variables) are generally categorical and nominal. They represent different experimental conditions, like treatment and control conditions.
Each of these categorical conditions is called a level.
Here are a few examples:
Now, you may have noticed that some of these examples actually indicate a high or low level of something. I’m pretty sure that’s where this word usage came from. But you’ll see it used for all sorts of variables, even when they’re not high or low.
Although this use of level is very widespread, I try to avoid it personally. Instead I use the word "value" or "category," both of which are accurate but don't carry other meanings. That said, "level" is pretty entrenched in this context.
A completely different use of the term is in the context of multilevel models. Multilevel model is a term for some mixed models. (The terms multilevel model and mixed model are often used interchangeably, though mixed model is a bit more flexible.)
Multilevel models are used for multilevel (also called hierarchical or nested) data, which is where they get their name. The idea is that the units we’ve sampled from the population aren’t independent of each other. They’re clustered in such a way that their responses will be more similar to each other within a cluster.
The models themselves have two or more sources of random variation. A two level model has two sources of random variation and can have predictors at each level.
A common example is a model from a design where the response variable of interest is measured on students. It’s hard though, to sample students directly or to randomly assign them to treatments, since there is a natural clustering of students within schools.
So the resource-efficient way to do this research is to sample students within schools.
Predictors can be measured at the student level (e.g., gender, SES, age) or the school level (e.g., enrollment, percentage of students who go on to college). The dependent variable has variation from student to student (level 1) and from school to school (level 2).
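The students-within-schools design can be sketched as a two-level mixed model. Below is a minimal, hypothetical example using simulated data and `statsmodels`, with a level-1 predictor (SES), a level-2 predictor (enrollment), and a random intercept for school capturing the school-to-school variation.

```python
# A minimal sketch (simulated data) of a two-level model:
# students (level 1) nested within schools (level 2),
# fit as a mixed model with a random intercept for school.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_schools, n_students = 30, 20
n = n_schools * n_students

school = np.repeat(np.arange(n_schools), n_students)
enrollment = np.repeat(rng.normal(500, 100, n_schools), n_students)  # level-2 predictor
ses = rng.normal(size=n)                                             # level-1 predictor
school_effect = np.repeat(rng.normal(0, 2, n_schools), n_students)   # level-2 random variation
score = 50 + 3 * ses + 0.01 * enrollment + school_effect + rng.normal(0, 5, n)

df = pd.DataFrame({"score": score, "ses": ses,
                   "enrollment": enrollment, "school": school})

# groups= defines the clustering; the default is a random intercept per group
model = smf.mixedlm("score ~ ses + enrollment", df, groups=df["school"])
result = model.fit()
print(result.summary())
```

The two sources of random variation show up in the output as the residual (student-level) variance and the group (school-level) variance.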
We always count these levels from the bottom up. So if we have students clustered within classrooms, classrooms clustered within schools, and schools clustered within districts, we have:

Level 1: students
Level 2: classrooms
Level 3: schools
Level 4: districts
So this use of the term level describes the design of the study, not the measurement of the variables or the categories of the factors.
So this is the truly unfortunate part. There are situations where all three definitions of level are relevant within the same statistical analysis context.
I find this unfortunate because I think using the same word to mean completely different things just confuses people. But here it is:
Picture that study in which students are clustered within school (a two-level design). Each school is assigned to use one of three math curricula (the independent variable, which happens to be categorical).
So, the variable "math curriculum" is a factor with three levels (i.e., three categories).
Because those three categories of “math curriculum” are unordered, “math curriculum” has a nominal level of measurement.
And since “math curriculum” is assigned to each school, it is considered a level 2 variable in the two-level model.
See the rest of the Confusing Statistical Terms series.
First published December 12, 2008
Last Updated January 21, 2025
Binary logistic regression is one of the most useful regression models. It allows you to predict, classify, or understand explanatory relationships between a set of predictors and a binary outcome.
(more…)
You might be surprised to hear that not only can linear regression fit lines between a response variable Y and one or more predictor variables, X, it can fit curves too. There are many ways to do this, but the simplest is by adding a polynomial term.
So what is a polynomial term and how do you know you need one?
A linear regression model has a few key parameters. These include the intercept coefficient, the slope coefficient, and the residual variance.
That intercept defines the height of the regression line. It does so by measuring the height of the line at one specific point: when all X = 0.
The slope defines how much Y differs, on average, for each one unit difference in X. In other words, it measures the constant relationship between X and Y. Yes, there can be multiple Xs and each one has its own slope.
A polynomial term, such as a quadratic (squared) or cubic (cubed) term, turns a linear regression model into a curve.
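To make this concrete, here is a minimal sketch (with simulated data) of fitting a quadratic term by ordinary least squares. The model is still "linear" in its coefficients, so the usual linear regression machinery applies; the squared term is just another column in the design matrix.

```python
# A minimal sketch of adding a polynomial (quadratic) term so that
# linear regression fits a curve. Data are simulated for illustration.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 200)
y = 1.0 + 2.0 * x + 0.5 * x**2 + rng.normal(0, 0.5, x.size)  # true curve plus noise

# Design matrix: an intercept column, a linear term, and a squared term
X = np.column_stack([np.ones_like(x), x, x**2])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)
print("intercept, slope, quadratic:", np.round(coef, 2))
```

The fitted coefficients should land close to the true values (1, 2, 0.5), and the nonzero quadratic coefficient is what bends the fitted line into a curve.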