The Analysis Factor Newsletter

Volume 13, Issue 3
October 2017

A Note From Karen


I hope your October is off to a good start. Here in New York we reached fall weather... and then went back to summer heat. Now the air has finally turned crisp for the beautiful season we have ahead!

This is an exciting quarter for The Analysis Factor. We are currently running two workshops, but then we are taking a break from workshops for the rest of the year. Instead, we will be working on putting together some new offerings for you. A big one involves our free Craft of Statistical Analysis webinar program... We'll tell you more once everything is ready.

We're also busy planning the winter 2018 calendar for webinars in our Statistically Speaking membership program. If you are a member and have suggestions for topics you'd like to see covered, please share them with us in the "Suggestions" section of the forum. Or give a thumbs-up to another member's suggestion with our new rating feature.

To answer a very common question about how to check assumptions, I wrote the article below. I hope you enjoy it!

Happy analyzing,
Karen


The Problem with Using Tests for Statistical Assumptions

Every statistical model and hypothesis test has assumptions.

And yes, if you’re going to use a statistical test, you need to check whether those assumptions are reasonable to whatever extent you can.

Some assumptions are easier to check than others. Some are so obviously reasonable that you don’t need to do much to check them most of the time. And some have no good way of being checked directly, so you have to use situational clues. 

There are so many nuances with assumptions as well: how serious a violation is depends on the details of your particular study and data set -- sample sizes, imbalance in the data across groups, whether the study is exploratory or confirmatory, and so on.

And here’s the kicker -- the simple rules your stats professor told you to use to test assumptions were absolutely sufficient when you were learning about tests and assumptions. But now that you’re doing real data analysis? It’s time to dig into the details.

Before You Begin

1.    Make sure you understand what the assumptions are for the statistical test you're using and what they mean. There is a lot of misinformation and vague information out there about assumptions.

2.    Don’t forget that when assumption violations are serious, they call into question all your results. This really is important.

Don’t rely on a single statistical test to decide if another test’s assumptions have been met.

There are many tests -- Levene’s test for homogeneity of variance, the Kolmogorov-Smirnov test for normality, Bartlett’s test of sphericity -- whose main use is to test the assumptions of another test.

I suspect they have other uses, but this is how I’ve generally seen them used.
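If you want to see what that looks like in practice, here is a minimal sketch of running two of these assumption tests. The article doesn't prescribe any software; Python with SciPy and the made-up data are my assumptions.

# A minimal sketch (not from the article) of running two common
# assumption tests in Python with SciPy, using made-up data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical outcome measurements for three groups.
group_a = rng.normal(loc=10, scale=2.0, size=30)
group_b = rng.normal(loc=12, scale=2.5, size=30)
group_c = rng.normal(loc=11, scale=2.2, size=30)

# Levene's test: the null hypothesis is that all group variances are equal.
lev_stat, lev_p = stats.levene(group_a, group_b, group_c)
print(f"Levene's test: W = {lev_stat:.3f}, p = {lev_p:.3f}")

# Kolmogorov-Smirnov test against a normal distribution with the sample
# mean and SD. Estimating those parameters from the same data makes this
# p-value approximate (the Lilliefors correction exists for exactly
# this situation).
sample = group_a
ks_stat, ks_p = stats.kstest(sample, "norm",
                             args=(sample.mean(), sample.std(ddof=1)))
print(f"K-S test vs. fitted normal: D = {ks_stat:.3f}, p = {ks_p:.3f}")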

These tests provide useful information about whether an assumption is being met. But each test is just one indication of whether an assumption is reasonably being met by your data.

Because, nuances.

Let’s use the example of Levene’s test of homogeneity of variance. It’s often used in ANOVA as the sole decision criterion. And software makes it so easy.

But basing your decision on one test is too simplistic for real research. Why?

1.    It relies too much on p-values, and therefore on sample size. For the same degree of difference among the variances, Levene’s test will produce a smaller p-value in a large sample than in a small one.

So it’s very likely that you’re overstating a problem with the assumption in large samples and understating it in small samples. You can’t ignore the actual size difference in the sample variances when making this decision.

So sure, look at Levene's p-value, but also look at the actual variances and how much bigger some are than others. (In other words, actually look at the effect size, not just the p-value. The sketch after this list shows one way to do that.)

2.    The ANOVA is generally considered robust to violations of this assumption when sample sizes across groups are equal. So even if Levene’s is significant, moderately different variances may not be a problem in balanced data sets.

Keppel (p. 98 of the 1992 edition) suggests a good rule of thumb: if sample sizes are equal, robustness should hold as long as the largest variance is no more than 9 times the smallest variance.

3.    This robustness goes away the more unbalanced the samples are. So you need to use judgment here, taking into account both the imbalance and the actual difference in variances.
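To make those three points concrete, here is a short sketch that reports Levene's p-value alongside the sample variances, their largest-to-smallest ratio (Keppel's rule of thumb), and the same comparison at two sample sizes to show how the p-value tracks n. The simulated data and the choice of Python/SciPy are my assumptions, not the article's.

# A sketch of gathering more than just Levene's p-value: the group
# variances themselves, their ratio (Keppel's rule of thumb), and a
# reminder that the p-value tracks sample size as well as the size of
# the violation. Simulated data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def levene_plus_context(n_per_group):
    # Same population SDs (2.0, 3.0, 2.0) regardless of sample size.
    g1 = rng.normal(0, 2.0, size=n_per_group)
    g2 = rng.normal(0, 3.0, size=n_per_group)
    g3 = rng.normal(0, 2.0, size=n_per_group)

    variances = [np.var(g, ddof=1) for g in (g1, g2, g3)]
    # Keppel's rule of thumb: with equal n, worry once this ratio passes ~9.
    ratio = max(variances) / min(variances)
    stat, p = stats.levene(g1, g2, g3)

    print(f"n per group = {n_per_group}")
    print(f"  sample variances: {[round(v, 2) for v in variances]}")
    print(f"  largest/smallest variance ratio: {ratio:.2f}")
    print(f"  Levene's test: W = {stat:.3f}, p = {p:.4f}")

# Same underlying variance difference, very different p-values.
levene_plus_context(15)
levene_plus_context(500)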

Gather Evidence Instead

Levene’s test is one piece of evidence you’ll use to make a decision.

In addition to that and Keppel's ratio rule of thumb, other pieces of evidence could include:

A graph of your data. Is there an obvious difference in spread across groups? 

A different test of the same assumption, if it exists. (Brown-Forsythe is another one for equal variances, but there are others.) See if the results match.
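Here is a sketch of what gathering that extra evidence might look like: a boxplot to eyeball the spread in each group, plus the Brown-Forsythe test, which is Levene's statistic computed around the group medians. Again, the data are made up, and Python with matplotlib and SciPy is my assumption rather than anything the article specifies.

# A sketch of two more pieces of evidence: a side-by-side boxplot to
# eyeball spread, and the Brown-Forsythe test (Levene's procedure
# centered on group medians) as a second formal check. Made-up data.
import numpy as np
import matplotlib.pyplot as plt
from scipy import stats

rng = np.random.default_rng(1)
groups = {
    "Control": rng.normal(50, 8, size=40),
    "Treatment A": rng.normal(55, 9, size=40),
    "Treatment B": rng.normal(53, 14, size=40),
}

# Visual check: do the boxes and whiskers differ noticeably in spread?
plt.boxplot(list(groups.values()))
plt.xticks([1, 2, 3], list(groups.keys()))
plt.ylabel("Outcome")
plt.title("Spread by group")
plt.show()

# Brown-Forsythe: Levene's test using the median instead of the mean.
bf_stat, bf_p = stats.levene(*groups.values(), center="median")
print(f"Brown-Forsythe: W = {bf_stat:.3f}, p = {bf_p:.3f}")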

Then consider each piece of evidence within the wider data context.

Now make a judgment, taking into account the sample sizes when interpreting p-values from any tests. 

I know it’s not comfortable to make judgments based on uncertain information.

(Experience helps here.)

But remember, in data analysis, it’s impossible not to. 

Just be transparent in your results section of your report about what you did and on what evidence you based your decision.



References and Further Reading:

Keppel, Geoffrey: Design and Analysis, 3rd Edition. Prentice-Hall.

Neter, Kutner, Nachtsheim, Wasserman: Applied Linear Regression Models. Irwin.

When Assumptions of ANCOVA Are Irrelevant

The Assumptions of Linear Models: Explicit and Implicit


Quick Links

The Analysis Factor Consulting

The Analysis Institute Workshops

Statistically Speaking Membership

More About Us


Who We Are

The Analysis Factor is your go-to source for expert training and mentorship in all things statistics. Our trusty team of top-of-the-line statistics experts is at the helm, ready to help anyone who gets their hands messy with data.

The Analysis Factor is the difference between knowing about statistics and knowing how to use statistics in data analysis.

Statistical analysis is an applied skill. And you have to learn how to use statistical tools within the context of your own data. We specialize in doing just that.

What We Do

At The Analysis Factor, we offer one-on-one consulting services, live and on-demand workshops, and monthly webinars complete with Q&A sessions. All with friendly faces and plucky personalities, to boot.

Our valuable resources and learning programs empower researchers to become confident, able, and skilled statistical practitioners.

We aim to make your journey through the real-world application of statistical analysis better, easier, and (dare we say) more fun.

Why We Do It

Karen Grace-Martin, our founder, spent 7 years as a statistical consultant at Cornell University. While there, she learned that being an excellent statistical advisor is not only about having the goods (i.e., the best statistical skills ever), but about understanding the real pressures and issues that researchers face.

Combine this understanding with fabulous customer service and the rare ability to communicate technical ideas in a way that each client understands and, well, you've got the best darn stats training around.

Here at The Analysis Factor, we're on a mission to make data analysis affordable and accessible for everyone.

Yep, that means you.

Learn more at theanalysisfactor.com.


Think the world needs a little more oomph with their data?

So do we. Forward this newsletter to a friend, colleague, or hey, even your mom. (Little known fact: Moms love what you love.)

And if you got this from a friend (or from your mom), sign up here to get your very own copy each month.


You got this email because you subscribed to The Analysis Factor's list community. (Smart move, if you ask us.)

If you've changed your mind, just click the link at the end of this email. We'll welcome you back with open arms any time.

Share the love. Forward this newsletter to friends, fans, and colleagues who might be interested. Your recommendation is how we grow.

Get this email from a friend, colleague, or secret admirer of all things statistics? Click here to subscribe.

Newsletter not displaying right? View it in your browser instead.
Need to change your email address? Check out the details below.
No longer wish to receive this newsletter? Sorry to see you go. Unsubscribe below.