Latest Blog Posts

Spoiler alert: real data are seldom normally distributed. How does the shape of the distribution influence the estimate of the population mean and the resulting confidence interval? To figure this out, we randomly draw 100 observations 100 times from each of three distinct populations and plot the mean and corresponding 95% confidence interval of each sample.
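Here is a minimal sketch of that kind of simulation, assuming a normal, an exponential, and a uniform population (the specific distributions and parameters in the post may differ); instead of plotting, it simply reports how many of the 100 intervals cover the true mean.

```python
# Sketch only: distributions, parameters, and seed are illustrative assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Three populations, each parameterized so the true mean is 10
populations = {
    "normal": lambda n: rng.normal(loc=10, scale=2, size=n),
    "exponential": lambda n: rng.exponential(scale=10, size=n),
    "uniform": lambda n: rng.uniform(low=0, high=20, size=n),
}

n_samples, n_obs = 100, 100

for name, draw in populations.items():
    covered = 0
    for _ in range(n_samples):
        sample = draw(n_obs)
        mean = sample.mean()
        # 95% CI for the mean based on the t distribution
        half_width = stats.t.ppf(0.975, df=n_obs - 1) * sample.std(ddof=1) / np.sqrt(n_obs)
        if mean - half_width <= 10 <= mean + half_width:
            covered += 1
    print(f"{name}: {covered}/{n_samples} intervals contain the true mean")
```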

What does it mean for two variables to be correlated? Is that the same as, or different from, saying they’re associated or related? This is the kind of question that can feel silly, but shouldn’t. It’s just a reflection of the confusing terminology used in statistics. In this case, the technical statistical term looks like, but […]

Effect size statistics are required by most journals and committees these days, and for good reason. They communicate just how big the effects in your statistical results are, something p-values can’t do. But they’re only useful if you can choose the most appropriate one and interpret it. This can be hard […]
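As a hedged illustration, here is one common effect size, Cohen’s d for two independent groups; the post covers many effect size statistics, and the data below are hypothetical.

```python
# Sketch of Cohen's d: standardized mean difference using the pooled SD.
import numpy as np

def cohens_d(group1, group2):
    """Standardized mean difference between two independent groups."""
    n1, n2 = len(group1), len(group2)
    pooled_var = ((n1 - 1) * np.var(group1, ddof=1) +
                  (n2 - 1) * np.var(group2, ddof=1)) / (n1 + n2 - 2)
    return (np.mean(group1) - np.mean(group2)) / np.sqrt(pooled_var)

rng = np.random.default_rng(1)
treatment = rng.normal(52, 10, size=40)   # hypothetical treatment scores
control = rng.normal(47, 10, size=40)     # hypothetical control scores

# By Cohen's conventions, d near 0.2 / 0.5 / 0.8 is small / medium / large
print(f"Cohen's d = {cohens_d(treatment, control):.2f}")
```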

By Kim Love

When learning about linear models (that is, regression, ANOVA, and similar techniques), we are taught to calculate an R². The R² has the following useful properties: its range is limited to [0, 1], so we can easily judge how relatively large it is. It is standardized, meaning its value does not depend on the […]
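A minimal sketch of the R² described here, computed from a least-squares fit as 1 minus the ratio of residual to total sum of squares; the simulated data are purely illustrative.

```python
# Sketch: R-squared from a simple linear regression, always in [0, 1].
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 3 + 2 * x + rng.normal(0, 4, size=50)   # hypothetical linear relationship plus noise

slope, intercept = np.polyfit(x, y, deg=1)  # least-squares fit
y_hat = intercept + slope * x

ss_res = np.sum((y - y_hat) ** 2)        # residual sum of squares
ss_tot = np.sum((y - y.mean()) ** 2)     # total sum of squares
r_squared = 1 - ss_res / ss_tot
print(f"R^2 = {r_squared:.3f}")          # falls between 0 and 1
```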

The results of any statistical analysis should include the confidence intervals for estimated parameters. How confident are you that you can explain what they mean? Even those of us who have a solid understanding of confidence intervals can get tripped up by the wording. Let’s look at an example.
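For a concrete (hypothetical) example of the kind discussed here, this sketch computes a 95% confidence interval for a mean and states the careful interpretation in a comment; the data values are made up.

```python
# Sketch: a 95% confidence interval for a sample mean.
import numpy as np
from scipy import stats

scores = np.array([72, 85, 90, 68, 77, 81, 95, 70, 88, 79])  # hypothetical sample
mean = scores.mean()
sem = stats.sem(scores)  # standard error of the mean
lo, hi = stats.t.interval(0.95, df=len(scores) - 1, loc=mean, scale=sem)

# Careful wording: if we repeated this sampling procedure many times, about 95%
# of intervals constructed this way would contain the true population mean.
# It is not a 95% probability that this particular interval contains it.
print(f"mean = {mean:.1f}, 95% CI = ({lo:.1f}, {hi:.1f})")
```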

Whether or not you run experiments, there are elements of experimental design that affect how you need to analyze many types of studies. The most fundamental of these are replication, randomization, and blocking. These key design elements come up in studies under all sorts of names: trials, replicates, multi-level nesting, repeated measures. Any data set […]
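As a small sketch of one of those design elements, here is what randomizing treatment assignment separately within each block might look like; the block names, sizes, and treatments are hypothetical.

```python
# Sketch: balanced randomization of treatments within blocks.
import numpy as np

rng = np.random.default_rng(7)
treatments = ["control", "treatment"]
blocks = {"site_A": 6, "site_B": 6, "site_C": 6}  # hypothetical blocks, 6 units each

assignment = {}
for block, n_units in blocks.items():
    # Equal numbers of each treatment within the block, order shuffled
    labels = np.repeat(treatments, n_units // len(treatments))
    rng.shuffle(labels)
    assignment[block] = list(labels)

for block, labels in assignment.items():
    print(block, labels)
```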

The following statement might surprise you, but it’s true. To run a linear model, you don’t need an outcome variable Y that’s normally distributed. Instead, you need a dependent variable that is: continuous, unbounded, and measured on an interval or ratio scale. The normality assumption is about the errors in the model, which have the same […]
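A minimal sketch of that point: check normality on the model’s residuals rather than on the raw outcome. The use of statsmodels and the Shapiro-Wilk test here is an assumption, not something named in the post, and the data are simulated.

```python
# Sketch: normality checks on residuals vs. the raw outcome variable.
import numpy as np
import statsmodels.api as sm
from scipy import stats

rng = np.random.default_rng(3)
x = rng.uniform(0, 10, size=200)
y = 1 + 0.8 * x + rng.normal(0, 1, size=200)  # hypothetical continuous outcome

X = sm.add_constant(x)
model = sm.OLS(y, X).fit()

# The raw outcome y need not look normal; it's the residuals that matter
print("Shapiro-Wilk on raw y:     p =", round(stats.shapiro(y).pvalue, 3))
print("Shapiro-Wilk on residuals: p =", round(stats.shapiro(model.resid).pvalue, 3))
```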

by Christos Giannoulis

Many data sets contain well over a thousand variables. Such complexity, the speed of contemporary desktop computers, and the ease of use of statistical analysis packages can encourage ill-directed analysis. It is easy to generate a vast array of poor ‘results’ by throwing everything into your software and waiting to see what […]

Many of us love performing statistical analyses but hate writing them up in the Results section of the manuscript. We struggle with big-picture issues (What should I include? In what order?) as well as minutiae (Do tables have to be double-spaced?).

By Karen Grace-Martin

“Confounding variable” is one of those statistical terms that confuses a lot of people. Not because it represents a confusing concept, but because of how it’s used. (Well, it’s a bit of a confusing concept, but that’s not the worst part.) First, it has slightly different meanings to different types of researchers. […]