When I was in graduate school, stat professors would say “ANOVA is just a special case of linear regression.”  But they never explained why.

And I couldn’t figure it out.

The model notation is different.

The output looks different.

The vocabulary is different.

The focus of what we’re testing is completely different. How can they be the same model?
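One way to see the equivalence is to run both on the same data. The sketch below (simulated data, not from the article) fits a one-way ANOVA and a linear regression with dummy-coded groups, and shows they produce the same F statistic.

```python
# Illustrative sketch: one-way ANOVA and dummy-coded regression
# give the same F statistic on the same simulated data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
groups = [rng.normal(loc=m, scale=1.0, size=20) for m in (0.0, 0.5, 1.0)]

# The ANOVA route
f_anova, _ = stats.f_oneway(*groups)

# The regression route: intercept plus dummy variables for groups 2 and 3
y = np.concatenate(groups)
g = np.repeat([0, 1, 2], 20)
X = np.column_stack([np.ones_like(y),
                     (g == 1).astype(float),
                     (g == 2).astype(float)])

beta, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
ss_res = resid @ resid
ss_tot = ((y - y.mean()) ** 2).sum()
df_model, df_resid = 2, len(y) - 3
f_reg = ((ss_tot - ss_res) / df_model) / (ss_res / df_resid)

print(np.isclose(f_anova, f_reg))  # True: same model, same test
```

The dummy coding is doing the work: each group's mean becomes a combination of regression coefficients, so testing "all group means equal" is the same as testing "all dummy coefficients are zero."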

[click to continue…]


What Is Specification Error in Statistical Models?

When we think about model assumptions, we tend to focus on independence, normality, and constant variance. The other big assumption, which is harder to see or test, is that there is no specification error. The assumption of linearity is part of this, but it's actually a bigger assumption. So what is this assumption of no specification error? The basic idea is that when you choose a final model, you want one that accurately represents the real relationships among the variables. There are a few common ways of specifying a linear model inaccurately.
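One common form of specification error is leaving out a needed term. The sketch below (made-up data, not from the article) omits a squared term from the fitted model; the residuals, which should be pure noise, still track x², which is how a residual plot reveals the problem.

```python
# Illustrative sketch of specification error: the true relationship is
# quadratic, but we fit a straight line. The leftover residuals still
# correlate with x^2, so they are not the pure noise a correct model leaves.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-2, 2, 200)
y = 1.0 + 2.0 * x + 1.5 * x**2 + rng.normal(scale=0.5, size=200)

# Misspecified model: y ~ x only (no squared term)
X1 = np.column_stack([np.ones_like(x), x])
b1, *_ = np.linalg.lstsq(X1, y, rcond=None)
resid = y - X1 @ b1

# Residuals should be uncorrelated with any function of x; here they aren't
r = np.corrcoef(resid, x**2)[0, 1]
print(round(r, 2))  # strongly positive: the model is misspecified
```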

Read the full article →

What Is Regression to the Mean?

Have you ever heard that "two tall parents will have shorter children"?

This phenomenon, known as regression to the mean, has been used to explain everything from patterns in hereditary stature (as Galton first did in 1886) to why movie sequels or sophomore albums so often flop.

So just what is regression to the mean (RTM)?
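A quick simulation (with made-up standardized heights, not Galton's data) shows the effect: when parent and child heights are only imperfectly correlated, children of unusually tall parents are still tall on average, but closer to the population mean than their parents.

```python
# Simulated sketch of regression to the mean: parent and child heights
# (standardized) correlate at rho = 0.5, a hypothetical value.
import numpy as np

rng = np.random.default_rng(1)
n, rho = 100_000, 0.5
parent = rng.normal(size=n)
child = rho * parent + np.sqrt(1 - rho**2) * rng.normal(size=n)

tall = parent > 1.5                  # select very tall parents
print(parent[tall].mean())           # well above the population mean of 0
print(child[tall].mean())            # also above 0, but closer to the mean
```

With a correlation of 0.5, the children of the selected parents sit roughly half as far above the mean as their parents do, which is exactly the "regression" Galton observed.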

Read the full article →

April 2018 Member Webinar: Equivalence Tests and Non-Inferiority

Statistics is, to a large extent, a science of comparison. You are trying to test whether one group is bigger, faster, or smarter than another.
You do this by setting up a null hypothesis that your two groups have equal means or proportions and an alternative hypothesis that one group is “better” than the other. The test has interesting results only when the data you collect ends up rejecting the null hypothesis.
But there are times when the interesting research question you’re asking is not about whether one group is better than the other, but whether the two groups are equivalent.
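One standard way to test equivalence is two one-sided tests (TOST): you pick a margin within which a difference is considered negligible, then test against each edge of that interval. The sketch below uses made-up data and an arbitrary margin; the webinar's own examples aren't shown here.

```python
# Minimal sketch of equivalence testing via two one-sided tests (TOST).
# Data and the equivalence margin are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
a = rng.normal(10.0, 2.0, 500)
b = rng.normal(10.1, 2.0, 500)
margin = 1.0                         # difference we consider negligible

diff = a.mean() - b.mean()
se = np.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
df = len(a) + len(b) - 2             # simple pooled df for this sketch

# One-sided tests against each edge of the equivalence interval
p_lower = stats.t.sf((diff + margin) / se, df)   # H0: diff <= -margin
p_upper = stats.t.cdf((diff - margin) / se, df)  # H0: diff >= +margin
p_tost = max(p_lower, p_upper)
print(p_tost < 0.05)  # rejecting both one-sided nulls = equivalence
```

Note the reversal of roles: here rejecting the null is what lets you declare the groups equivalent, whereas a failed ordinary t-test never establishes equivalence on its own.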

Read the full article →

What Is an Exact Test?

Most of the p-values we calculate are based on the assumption that our test statistic follows some distribution. These distributions are generally a good way to calculate p-values as long as their assumptions are met, but they're not the only way. Rather than come up with a theoretical probability based on a distribution, exact tests calculate a p-value empirically. The simplest (and most common) exact test is Fisher's exact test for a 2×2 table. Remember calculating empirical probabilities in your intro stats course? All those red and white balls in urns? The calculation starts with the number of all possible outcomes.
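The enumeration behind Fisher's exact test can be sketched directly. With all margins of a 2×2 table fixed, each possible table has a hypergeometric probability, and the two-sided p-value sums the probabilities of every table at least as extreme as the one observed. The counts below are made up for illustration.

```python
# Sketch of the counting behind Fisher's exact test for a 2x2 table.
# The two-sided p-value sums hypergeometric probabilities of all tables
# whose probability is no larger than the observed table's.
from math import comb
from scipy import stats

table = [[8, 2],
         [1, 5]]
a, b = table[0]
c, d = table[1]
row1, col1, n = a + b, a + c, a + b + c + d

def table_prob(x):
    # P(top-left cell = x) with all margins fixed (hypergeometric)
    return comb(row1, x) * comb(n - row1, col1 - x) / comb(n, col1)

p_obs = table_prob(a)
lo, hi = max(0, col1 - (n - row1)), min(row1, col1)
p_exact = sum(table_prob(x) for x in range(lo, hi + 1)
              if table_prob(x) <= p_obs + 1e-12)

_, p_scipy = stats.fisher_exact(table)
print(abs(p_exact - p_scipy) < 1e-9)  # True: matches scipy's exact test
```

No distributional approximation is involved: the p-value comes entirely from counting tables, which is why the test stays valid even with tiny cell counts.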

Read the full article →

Poisson or Negative Binomial? Using Count Model Diagnostics to Select a Model

How do you choose between Poisson and negative binomial models for discrete count outcomes? One key criterion is the relative value of the variance to the mean after accounting for the effect of the predictors. A previous article discussed the concept of a variance that is larger than the model assumes: overdispersion. (Underdispersion is also […]
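The diagnostic idea can be sketched with simulated data. After a Poisson fit, the Pearson statistic divided by its degrees of freedom should be near 1; values well above 1 point toward a negative binomial model. The example below uses an intercept-only "fit" on deliberately overdispersed counts (numbers are made up).

```python
# Sketch of a count-model diagnostic: the Pearson dispersion statistic.
# Data are simulated from a negative binomial, so variance > mean and the
# Poisson assumption of equidispersion fails.
import numpy as np

rng = np.random.default_rng(3)

# Overdispersed counts: negative binomial with mean 4, variance 12
y = rng.negative_binomial(n=2, p=2 / (2 + 4), size=1000)

# Intercept-only "Poisson fit": the predicted mean is the sample mean
mu = y.mean()
pearson = ((y - mu) ** 2 / mu).sum()
dispersion = pearson / (len(y) - 1)
print(dispersion)  # well above 1 for these data, flagging overdispersion
```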

Read the full article →

Link Functions and Errors in Logistic Regression

I recently held a free webinar in our The Craft of Statistical Analysis program about Binary, Ordinal, and Nominal Logistic Regression. It was a record crowd and we didn’t get through everyone’s questions, so I’m answering some here on the site. They’re grouped by topic, and you will probably get more out of it if […]

Read the full article →

Getting Accurate Predicted Counts When There Are No Zeros in the Data

We previously examined why a linear regression and negative binomial regression were not viable models for predicting the expected length of stay in the hospital for people with the flu.  A linear regression model was not appropriate because our outcome variable, length of stay, was discrete and not continuous. A negative binomial model wasn’t the […]
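The article's own solution is cut off above, but a standard remedy when an outcome can never be zero (every admitted patient stays at least one day) is a zero-truncated count model. The sketch below shows the basic adjustment for a Poisson rate, with a hypothetical rate chosen for illustration.

```python
# Sketch of the zero-truncation adjustment for a Poisson count
# (a common approach when zeros are impossible; lam is hypothetical).
import numpy as np

lam = 1.2                        # hypothetical Poisson rate

# Conditioning on Y > 0 rescales every probability by 1 - P(Y = 0),
# which pushes the expected count above the untruncated rate.
p0 = np.exp(-lam)
mean_truncated = lam / (1 - p0)

print(round(lam, 2))             # untruncated mean
print(round(mean_truncated, 2))  # larger mean once zeros are ruled out
```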

Read the full article →

March 2018 Member Webinar: Using Transformations to Improve Your Linear Regression Model

Transformations don’t always help, but when they do, they can improve your linear regression model in several ways simultaneously.

They can help you better meet the linear regression assumptions of normality and homoscedasticity (i.e., equal variances). They also can help avoid some of the artifacts caused by boundary limits in your dependent variable — and sometimes even remove a difficult-to-interpret interaction.

In this webinar, we will review the assumptions of the linear regression model and explain when to consider a transformation of the dependent variable or independent variable.
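As a quick illustration (simulated data, not from the webinar), a log transformation can pull a strongly right-skewed outcome toward symmetry, which is one of the ways a transformation helps with the normality assumption.

```python
# Illustrative sketch: a log transform of a right-skewed outcome.
# The raw data are lognormal, so log(y) is exactly normal here.
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
y = rng.lognormal(mean=0.0, sigma=0.8, size=2000)   # right-skewed outcome

print(round(stats.skew(y), 2))          # strongly positive skew
print(round(stats.skew(np.log(y)), 2))  # near 0 after the log transform
```

Real data are rarely this cooperative, which is why the webinar's caveat matters: transformations don't always help, and they change the scale on which coefficients are interpreted.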

Read the full article →

The Problem with Linear Regression for Count Data: Predicting Length of Stay in Hospital

This year’s flu strain is particularly virulent. The number of people checking in at hospitals is rapidly increasing. Hospitals are desperate to know whether they have enough beds to handle those who need their help. You have been asked to analyze a previous year’s hospitalization length of stay by people with the flu who had […]

Read the full article →