Regression models

3 Mistakes Data Analysts Make in Testing Assumptions in GLM

September 1st, 2009

I know you know it–those assumptions in your regression or ANOVA model really are important.  If they’re not met adequately, all your p-values are inaccurate, wrong, useless.

But, and this is a big one, linear models are robust to departures from those assumptions.  Meaning, they don’t have to fit exactly for p-values to be accurate, right, and useful.

You’ve probably heard both of these contradictory statements in stats classes and a million other places, and they are the kinds of statements that drive you crazy.  Right?

I mean, do statisticians make this stuff up just to torture researchers? Or just to keep you feeling stupid?

No, they really don’t. (I promise!) And learning how far you can push that robustness isn’t so hard, with some training and a little practice. Over the years, I’ve found a few mistakes researchers commonly make because of one, or both, of these statements:

1.  They worry too much about the assumptions and over-test them. There are some nice statistical tests to determine if your assumptions are met.  And it’s so nice having a p-value, right?  Then it’s clear what you’re supposed to do, based on that golden rule of p<.05.

The only problem is that many of these tests ignore that robustness. In large samples they’ll tell you that every distribution is non-normal and heteroskedastic. They’re good tools, but to these hammers, every data set looks like a nail. Use the hammer when it’s needed, but don’t hammer everything.
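To make that concrete, here is a minimal sketch (Python with NumPy and SciPy; the simulated “residuals” are purely illustrative, not from the original post) of how a formal normality test behaves with a large sample:

```python
# Illustration of over-testing: a formal normality test can "reject" a
# departure so small that a linear model would be perfectly robust to it.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Residual-like values that are only mildly skewed (skewness around 0.3)
residuals = rng.gamma(shape=50, scale=1.0, size=5000)
residuals = residuals - residuals.mean()

w, p = stats.shapiro(residuals)
print(f"Shapiro-Wilk: W = {w:.4f}, p = {p:.2g}")
# With n = 5000, p is typically far below .05, flagging non-normality
# for a departure that a Q-Q plot would show is practically negligible.
```

The test is doing exactly what it was built to do; the trouble starts when its p-value is treated as the final word instead of asking how big the departure actually is.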

2. They assume everything is robust anyway, so they don’t test anything. It’s easy to do. And once again, it probably works out much of the time. Except when it doesn’t.

Yes, the GLM is robust to deviations from some of the assumptions.  But not all the way, and not all the assumptions.  You do have to check them.

3. They test the wrong assumptions. Look at any two regression books and they’ll give you a different set of assumptions.

This is partly because many of these “assumptions” do need to be checked, but they’re not really model assumptions; they’re data issues. And it’s partly because authors sometimes take the assumptions to their logical conclusions, trying to make them more intuitive for you. But sometimes that leads you to test something related, but not quite the right thing. It works out most of the time, but not always.

 


To Compare Regression Coefficients, Include an Interaction Term

August 14th, 2009

Just yesterday I got a call from a researcher who was reviewing a paper.  She didn’t think the authors had run their model correctly, but wanted to make sure.  The authors had run the same logistic regression model separately for each sex because they expected that the effects of the predictors were different for men and women.

On the surface, there is nothing wrong with this approach.  It’s completely legitimate to consider men and women as two separate populations and to model each one separately.

As often happens, the problem was not in the statistics, but in what they were trying to conclude from them. The authors went on to compare the two models, and specifically to compare the coefficients for the same predictors across the two models.

Uh-oh. Can’t do that.

If you’re just describing the values of the coefficients, fine.  But if you want to compare the coefficients AND draw conclusions about their differences, you need a p-value for the difference.

Luckily, this is easy to get.  Simply include an interaction term between Sex (male/female) and any predictor whose coefficient you want to compare.  If you want to compare all of them because you believe that all predictors have different effects for men and women, then include an interaction term between sex and each predictor.  If you have 6 predictors, that means 6 interaction terms.

In such a model, if Sex is a dummy variable (and it should be), two things happen:

1. The coefficient for each predictor becomes the coefficient for that variable ONLY for the reference group.

2. The interaction term between Sex and each predictor represents the DIFFERENCE in the coefficients between the reference group and the comparison group. If you want to know the coefficient for the comparison group, you have to add the coefficients for the predictor alone and that predictor’s interaction with Sex.

The beauty of this approach is that the p-value for each interaction term gives you a significance test for the difference in those coefficients.
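To make the setup concrete, here is a rough sketch in Python with statsmodels (the simulated data, the variable names, and the choice of library are my own illustration, not from the paper under review):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: a binary outcome, one continuous predictor (age), and sex,
# where the age effect differs by sex (simulated purely for illustration)
rng = np.random.default_rng(0)
n = 200
sex = rng.choice(["F", "M"], size=n)
age = rng.uniform(20, 70, size=n)
true_logit = -3 + 0.04 * age + np.where(sex == "M", 0.03, 0.0) * age
outcome = rng.binomial(1, 1 / (1 + np.exp(-true_logit)))
df = pd.DataFrame({"outcome": outcome, "age": age, "sex": sex})

# One model for everyone, with a Sex-by-age interaction.
# "age * C(sex)" expands to age + C(sex) + age:C(sex).
model = smf.logit("outcome ~ age * C(sex)", data=df).fit()
print(model.summary())

# In the output:
#   "age"             = the age coefficient for the reference group only (females here)
#   "age:C(sex)[T.M]" = the DIFFERENCE in the age coefficient for males vs. females;
#                       its p-value tests whether the two coefficients differ
```

The same idea extends to all six predictors: one interaction term per predictor, each with its own test of the difference.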

 


3 Situations When it Makes Sense to Categorize a Continuous Predictor in a Regression Model

July 24th, 2009

In many research fields, a common practice is to categorize continuous predictor variables so they can be used in an ANOVA. This is often done with a median split, which divides the sample into two categories: the “high” values above the median and the “low” values below it.
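For concreteness, this is what a median split looks like in code (a minimal sketch in Python/pandas; the variable is hypothetical):

```python
import pandas as pd

# A hypothetical continuous predictor
scores = pd.Series([12, 18, 25, 31, 40, 44, 52, 67], name="score")

# Median split: "high" above the median, "low" otherwise
median_split = scores.gt(scores.median()).map({True: "high", False: "low"})
print(pd.concat([scores, median_split.rename("split")], axis=1))
```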

Reasons Not to Categorize a Continuous Predictor

There are many reasons why this isn’t such a good idea: (more…)


Beyond Median Splits: Meaningful Cut Points

June 26th, 2009

I’ve talked a bit about the arbitrary nature of median splits and all the information they just throw away.

But I have found that as a data analyst, it is incredibly freeing to be able to choose whether to make a variable continuous or categorical and to make the switch easily.  Essentially, this means you need to be (more…)


Likert Scale Items as Predictor Variables in Regression

May 22nd, 2009

I was recently asked whether it’s okay to treat a Likert scale as continuous when it’s used as a predictor in a regression model. Here’s my reply. In the question, the researcher asked about logistic regression, but the same answer applies to all regression models.

1. There is a difference between a Likert scale item (a single 1-7 rating, e.g.) and a full Likert scale, which is composed of multiple items. If it is a full Likert scale, combining multiple items, go ahead and treat it as numerical. (more…)
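To illustrate that distinction, here is a small sketch in Python/pandas (the item names and values are made up): the full scale score is built from several items and can reasonably be treated as numeric.

```python
import pandas as pd

# Three hypothetical 1-7 Likert items from the same scale
items = pd.DataFrame({
    "item1": [4, 6, 2, 7, 5],
    "item2": [3, 7, 1, 6, 5],
    "item3": [5, 6, 2, 7, 4],
})

# The full scale score (here, the mean of the items) is the kind of variable
# that can reasonably go into a regression model as a numeric predictor
items["scale_score"] = items[["item1", "item2", "item3"]].mean(axis=1)
print(items)
```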


Why Logistic Regression for Binary Response?

May 5th, 2009

Logistic regression models can seem pretty overwhelming to the uninitiated.  Why not use a regular regression model?  Just turn Y into an indicator variable–Y=1 for success and Y=0 for failure.

For some good reasons.

1. It doesn’t make sense to model Y as a linear function of the parameters because Y has only two values. You just can’t make a line out of that (at least not one that fits the data well).

2. The predicted values can be any positive or negative number, not just 0 or 1.

3. The values of 0 and 1 are arbitrary. The important part is not to predict the numerical value of Y, but the probability that success or failure occurs, and the extent to which that probability depends on the predictor variables.
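A quick simulated sketch (Python with NumPy and statsmodels; not from the original post) of the second point: fit ordinary least squares to a 0/1 outcome and the predicted values spill outside the 0-1 range.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
x = np.linspace(-4, 4, 200)
p_true = 1 / (1 + np.exp(-2 * x))   # the true S-shaped relationship
y = rng.binomial(1, p_true)         # binary 0/1 outcome

X = sm.add_constant(x)
ols_fit = sm.OLS(y, X).fit()
preds = ols_fit.predict(X)
print(preds.min(), preds.max())     # typically below 0 and above 1 at the extremes
```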

So okay, you say. Why not use a simple transformation of Y, like the probability of success–the probability that Y=1?

Well, that doesn’t work so well either.

Why not?

1. The right-hand side of the equation can be any number, but the left-hand side can only range from 0 to 1.

2. It turns out the relationship is not linear, but rather follows an S-shaped (or sigmoidal) curve.

To obtain a linear relationship, we need to transform this response, Pr(success), as well.

As luck would have it, there are a few functions that:

1. are not restricted to values between 0 and 1

2. will form a linear relationship with our parameters

These functions include:

Arcsine

Probit

Logit

All three of these work just as well, but (believe it or not) the Logit function is the easiest to interpret.
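For reference, the logit is just the log of the odds: logit(p) = log(p / (1 − p)). A tiny sketch (plain NumPy, my own illustration) shows how it stretches probabilities onto the whole real line:

```python
import numpy as np

p = np.array([0.01, 0.25, 0.50, 0.75, 0.99])
logit = np.log(p / (1 - p))   # log-odds: unbounded, unlike p itself
print(logit)                  # approximately [-4.6, -1.1, 0.0, 1.1, 4.6]
```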

But as it turns out, you can’t just run the transformation and then do a regular linear regression on the transformed data. That would be way too easy, but it would also give inaccurate results. Logistic regression instead estimates the parameters by maximum likelihood, which gives better results–better meaning unbiased, with lower variances.
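As a closing sketch (Python with statsmodels and simulated data; a minimal illustration rather than a full analysis), this is what logistic regression does: it models the logit of Pr(Y=1) and estimates the parameters by maximum likelihood, so the fitted probabilities stay between 0 and 1.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
x = np.linspace(-4, 4, 200)
y = rng.binomial(1, 1 / (1 + np.exp(-2 * x)))   # binary outcome

X = sm.add_constant(x)
logit_fit = sm.Logit(y, X).fit()                # fit by maximum likelihood
print(logit_fit.params)                         # coefficients on the log-odds scale
print(logit_fit.predict(X).min(),
      logit_fit.predict(X).max())               # fitted probabilities stay in (0, 1)
```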