Regression

Series on Confusing Statistical Terms

December 3rd, 2009

One of the biggest challenges in learning statistics and data analysis is learning the lingo.  It doesn’t help that half of the notation is in Greek (literally).

The terminology in statistics is particularly confusing because often the same word or symbol is used to mean completely different concepts.

I know it feels that way, but it really isn’t a master plot by statisticians to keep researchers feeling ignorant.

Really.

It’s just that a lot of the methods in statistics were created by statisticians working in different fields–economics, psychology, medicine, and yes, straight statistics.  Certain fields often have specific types of data that come up a lot and require specific statistical methods to analyze them.

Economics needs time series, psychology needs factor analysis.  Et cetera, et cetera.

But separate fields developing statistics in isolation has some ugly effects.

Sometimes different fields develop the same technique, but use different names or notation.

Other times different fields use the same name or notation on different techniques they developed.

And of course, there are pairs of terms with slightly different names, often used in similar contexts, but with different meanings. These are never used interchangeably, but they’re easy to confuse if you don’t use this stuff every day.

And sometimes, there are different terms for subtly different concepts, but people use them interchangeably.  (I am guilty of this myself).  It’s not a big deal if you understand those subtle differences.  But if you don’t, it’s a mess.

And it’s not just fields–it’s software, too.

SPSS uses different names for the exact same thing in different procedures.  In GLM, a continuous independent variable is called a Covariate.  In Regression, it’s called an Independent Variable.

Likewise, SAS has a Repeated statement in its GLM, Genmod, and Mixed procedures.  They all get at the same concept there (repeated measures), but they deal with it in drastically different ways.

So by the time the fields come together and realize they’re all doing the same thing, people in each field, or using each software procedure, are already used to their own terminology.  So we’re stuck with different versions of the same word or method.

So anyway, I am beginning a series of blog posts to help clear this up.  Hopefully it will be a good reference you can come back to when you get stuck.

We’ve expanded on this list with a member training, if you’re interested.

If you have good examples, please post them in the comments.  I’ll do my best to clear things up.


Why Statistics Terminology is Especially Confusing

Confusing Statistical Term #1: Independent Variable

Confusing Statistical Terms #2: Alpha and Beta

Confusing Statistical Term #3: Levels

Confusing Statistical Terms #4: Hierarchical Regression vs. Hierarchical Model

Confusing Statistical Term #5: Covariate

Confusing Statistical Term #6: Factor

Same Statistical Models, Different (and Confusing) Output Terms

Confusing Statistical Term #7: GLM

Confusing Statistical Term #8: Odds

Confusing Statistical Term #9: Multiple Regression Model and Multivariate Regression Model

Confusing Statistical Term #10: Mixed and Multilevel Models

Confusing Statistical Terms #11: Confounder

Six terms that mean something different statistically and colloquially

Confusing Statistical Term #13: MAR and MCAR Missing Data



3 Situations When it Makes Sense to Categorize a Continuous Predictor in a Regression Model

July 24th, 2009

In many research fields, a common practice is to categorize continuous predictor variables so they work in an ANOVA. This is often done with a median split, which divides the sample into two categories: the “high” values above the median and the “low” values below it.

Reasons Not to Categorize a Continuous Predictor

There are many reasons why this isn’t such a good idea: (more…)


On Puzzles, Statistics, Algorithms, and Understanding

July 1st, 2009

My 8 year-old son got a Rubik’s cube in his Christmas stocking this year.

I had gotten one as a birthday present when I was about 10.  It was at the height of the craze and I was so excited.

I distinctly remember bursting into tears when I discovered that my little sister had sneaked off to play with it and messed it up the day I got it.  I knew I would soon mess it up to an unsolvable point myself, but I was still relishing the fun of creating patterns in the 9 squares, then getting it back to 6 sides of single-colored perfection.  (I loved patterns even then.) (more…)


Continuous and Categorical Variables: The Trouble with Median Splits

February 16th, 2009

A Median Split is one method for turning a continuous variable into a categorical one.  Essentially, the idea is to find the median of the continuous variable.  Any value below the median is put in the category “Low,” and every value above it is labeled “High.”
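In code, a median split is only a couple of lines.  Here is a minimal sketch in Python with pandas and NumPy (my own illustration, not from the original post; the variable name “anxiety” and the values are made up):

import numpy as np
import pandas as pd

# Hypothetical continuous predictor
anxiety = pd.Series([12, 18, 25, 31, 7, 22, 40, 15])

# Values above the median are labeled "High", the rest "Low"
median = anxiety.median()
group = np.where(anxiety > median, "High", "Low")
print(pd.Series(group).value_counts())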

This is a very common practice in many social science fields in which researchers are trained in ANOVA but not Regression.  At least that was true when I was in grad school in psychology.  And yes, oh so many years ago, I used all the techniques I’m about to tell you not to use.

There are problems with median splits.  The first is purely logical.  When a continuum is categorized, every value above the median, for example, is considered equal.  Does it really make sense that a value just above the median is treated the same as values at the far end of the distribution?  And as different from values just below the median?  Not so much.

So one solution is to split the sample into three groups, not two, then drop the middle group.  This at least creates some separation between the two groups.  The obvious problem here, though, is that you’re losing a third of your sample.
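If you want to see what that three-group version looks like in practice, here is a small sketch using pandas (again with invented data; pd.qcut does the cutting into thirds):

import pandas as pd

anxiety = pd.Series([12, 18, 25, 31, 7, 22, 40, 15, 9, 28, 35, 20])

# Cut into tertiles, then keep only the "Low" and "High" thirds
tertile = pd.qcut(anxiety, 3, labels=["Low", "Middle", "High"])
kept = anxiety[tertile != "Middle"]
print(kept.size, "of", anxiety.size, "observations remain")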

The second problem with categorizing a continuous predictor, regardless of how you do it, is loss of power (Aiken & West, 1991).  It’s simply harder to find effects that are really there.

So why is it common practice?  Because categorizing continuous variables is the only way to stuff them into an ANOVA, which is the only statistical method researchers in many fields are trained to use.

Rather than force a method that isn’t quite appropriate, it would behoove researchers, and the quality of their research, to learn the general linear model and how ANOVA fits into it.  It’s really only a short leap from ANOVA to regression but a necessary one.  GLMs can include interactions among continuous and categorical predictors just as ANOVA does.
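To make that concrete, here is a hedged sketch of such a model using the statsmodels formula interface in Python.  The data frame, the column names (score, dose, group), and the values are all invented for illustration; the point is only that a continuous predictor (dose) and a categorical one (group) can sit in the same model, interaction included:

import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: dose is continuous, group is categorical
df = pd.DataFrame({
    "score": [3.1, 4.0, 5.2, 2.8, 3.9, 5.5, 4.4, 6.1],
    "dose":  [1.0, 2.0, 3.0, 1.0, 2.0, 3.0, 1.5, 2.5],
    "group": ["A", "A", "A", "B", "B", "B", "A", "B"],
})

# The * expands to main effects of dose and group plus their interaction
model = smf.ols("score ~ dose * group", data=df).fit()
print(model.summary())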

If the predictor is left continuous, the GLM fits a regression line to its effect.  Categorized, the model compares the group means.  It often happens that while the difference in means isn’t significant, the slope is.
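One way to convince yourself is a quick simulation.  The sketch below (my own illustration, not from the post) generates data with a modest linear effect, then analyzes it twice: once testing the slope with the predictor left continuous, once testing the difference between the median-split “High” and “Low” means.  Across repeated runs, the slope test typically reaches significance more often:

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 60
x = rng.normal(size=n)               # continuous predictor
y = 0.3 * x + rng.normal(size=n)     # modest true linear effect

# Kept continuous: test the slope of the regression line
slope_test = stats.linregress(x, y)

# Median split: compare the "High" and "Low" group means
high, low = y[x > np.median(x)], y[x <= np.median(x)]
mean_test = stats.ttest_ind(high, low)

print(f"slope p-value:           {slope_test.pvalue:.3f}")
print(f"mean-difference p-value: {mean_test.pvalue:.3f}")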

Reference: Aiken, L. S., & West, S. G. (1991). Multiple Regression: Testing and Interpreting Interactions. Sage.