Logistic Regression

6 Types of Dependent Variables that will Never Meet the Linear Model Normality Assumption

September 17th, 2009

Linear models (both OLS regression and ANOVA) are quite robust to departures from the assumptions of normality and constant variance.  That means that even if the assumptions aren’t met perfectly, the resulting p-values will still be reasonably accurate.

But you need to check the assumptions anyway, because some departures are so far off that the p-values become inaccurate.  And in many cases there are remedial measures you can take to turn non-normal residuals into normal ones.

But sometimes you can’t.

Sometimes it’s because the dependent variable just isn’t appropriate for a linear model.  The (more…)


Why Logistic Regression for Binary Response?

May 5th, 2009

Logistic regression models can seem pretty overwhelming to the uninitiated.  Why not use a regular regression model?  Just turn Y into an indicator variable–Y=1 for success and Y=0 for failure.

For some good reasons.

1. It doesn’t make sense to model Y as a linear function of the parameters because Y has only two values.  You just can’t make a line out of that (at least not one that fits the data well).

2. The predicted values from a linear model can be any positive or negative number, not just 0 or 1.

3. The values of 0 and 1 are arbitrary.  The important part is not to predict the numerical value of Y, but the probability that success or failure occurs, and the extent to which that probability depends on the predictor variables.

So okay, you say.  Why not use a simple transformation of Y, like the probability of success–the probability that Y=1?

Well, that doesn’t work so well either.

Why not?

1. The right hand side of the equation can be any number, but the left hand side can only range from 0 to 1.

2. It turns out the relationship is not linear, but rather follows an S-shaped (or sigmoidal) curve.

To obtain a linear relationship, we need to transform this new response, Pr(success), as well.

As luck would have it, there are a few transformations of Pr(success) that:

1. are not restricted to values between 0 and 1

2. will form a linear relationship with our parameters

These functions include:

Arcsine

Probit

Logit

All three of these work about equally well, but (believe it or not) the logit is the easiest to interpret.
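For reference, writing p for Pr(success), the three transformations are:

Arcsine: arcsin(√p)

Probit: Φ⁻¹(p), the inverse of the standard normal cumulative distribution function

Logit: ln(p/(1−p)), the natural log of the odds of success

The logit wins on interpretation because it works directly in odds: a logit coefficient of, say, 0.5 means that each one-unit increase in that predictor multiplies the odds of success by e^0.5, or about 1.65.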

But as it turns out, you can’t just run the transformation and then do a regular linear regression on the transformed data.  That would be way too easy, but it would also give inaccurate results.  Logistic regression instead estimates the parameters by maximum likelihood, which gives better results–consistent estimates with lower variances.

 


Logistic Regression Models: Reversed odds ratios in SAS Proc Logistic–Use ‘Descending’

March 18th, 2009

If you’ve ever been puzzled by odds ratios in a logistic regression that seem backward, stop banging your head on the desk.

Odds are (pun intended) you ran your analysis in SAS Proc Logistic.

Proc logistic has a strange (I couldn’t say odd again) little default.  If your dependent variable Y is coded 0 and 1, SAS will model the probability of Y=0.  Most of us are trying to model the probability that Y=1.  So, yes, your results ARE backward, but only because SAS is testing a hypothesis opposite yours.

Luckily, SAS made the solution easy.  Simply add the ‘Descending’ option right in the PROC LOGISTIC statement.  For example:

PROC LOGISTIC DESCENDING;
MODEL Y = X1 X2;
RUN;

All of your parameter estimates (B) will reverse signs, although p-values will not be affected.
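A side note: if you’d rather not flip the sort order, SAS also lets you name the event level directly with the EVENT= response option on the MODEL statement, which does the same thing and reads a bit more clearly:

PROC LOGISTIC;
MODEL Y(EVENT='1') = X1 X2; /* model the probability that Y=1 */
RUN;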

 



When NOT to Center a Predictor Variable in Regression

February 9th, 2009

There are two reasons to center predictor variables in any type of regression analysis–linear, logistic, multilevel, etc.

1. To lessen the correlation between a multiplicative term (interaction or polynomial term) and its component variables (the ones that were multiplied)–see the sketch after this list.

2. To make interpretation of parameter estimates easier.
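As a minimal sketch of reason 1 (the dataset and variable names here are hypothetical), centering a predictor before building an interaction might look like this in SAS:

PROC MEANS DATA=mydata NOPRINT;
VAR X1;
OUTPUT OUT=xbar MEAN=x1mean;
RUN;

DATA centered;
IF _N_ = 1 THEN SET xbar; /* carry the one-row mean onto every record */
SET mydata;
X1_C = X1 - x1mean; /* centered predictor */
X1X2 = X1_C * X2; /* interaction built from the centered term */
RUN;

The interaction term X1X2 is now much less correlated with X1 than it would be had X1 been left uncentered.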

I was recently asked: when is centering NOT a good idea? (more…)


Proportions as Dependent Variable in Regression–Which Type of Model?

January 26th, 2009

When the dependent variable in a regression model is a proportion or a percentage, it can be tricky to decide on the appropriate way to model it.

The big problem with ordinary linear regression is that the model can predict values that aren’t possible–values below 0 or above 1.  But the other problem is that the relationship isn’t linear–it’s sigmoidal.  A sigmoidal curve looks like a flattened S: roughly linear in the middle, but flat at both ends.  So now what?

The simplest approach is to do a linear regression anyway.  This approach can be justified only in a few situations.

1. All your data fall in the middle, linear section of the curve.  This generally translates to all your data being between .2 and .8 (although I’ve heard that between .3 and .7 is better).  If this holds, you don’t have to worry about the two objections.  You do have a linear relationship, and you won’t get predicted values much beyond those values–certainly not beyond 0 or 1.

2. The model is really complicated and would be much harder to fit another way.  If you can assume linearity, it will be much easier to fit, say, a complicated mixed model or a structural equation model.  If it’s just a single multiple regression, however, you should look into one of the other methods.

A second approach is to treat the proportion as a binomial response–a count of successes out of a total number of trials–and run a logistic or probit regression.  This will only work if the proportion can be thought of that way and you have the data for both the number of successes and the total number of trials.  For example, the proportion of land area covered with a certain species of plant would be hard to think of this way, but the proportion of correct answers on a 20-question assessment fits naturally.
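PROC LOGISTIC, for one, accepts this events/trials form directly (the dataset and variable names here are hypothetical):

PROC LOGISTIC DATA=scores;
MODEL NCORRECT/NTOTAL = X1 X2; /* NCORRECT successes out of NTOTAL trials */
RUN;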

The third approach is to treat the proportion as a censored continuous variable.  The censoring means that you don’t have information below 0 or above 1.  For example, perhaps the plant would spread even more if it hadn’t run out of land.  If you take this approach, you would run the model as a two-limit tobit model (Long, 1997).  This approach works best if there isn’t an excessive amount of censoring (values of 0 and 1).
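In SAS, a two-limit tobit can be fit with PROC QLIM (part of SAS/ETS); here is a minimal sketch, again with hypothetical dataset and variable names:

PROC QLIM DATA=coverage;
MODEL PROP = X1 X2;
ENDOGENOUS PROP ~ CENSORED(LB=0 UB=1); /* censored at both limits */
RUN;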

Reference: Long, J. S. (1997). Regression Models for Categorical and Limited Dependent Variables. Thousand Oaks, CA: Sage Publications.

 


Logistic Regression Models for Multinomial and Ordinal Variables

January 14th, 2009

Multinomial Logistic Regression

The multinomial (a.k.a. polytomous) logistic regression model is a simple extension of the binomial logistic regression model.  It is used when the dependent variable has more than two nominal (unordered) categories.

Dummy coding of independent variables is quite common, and in multinomial logistic regression the dependent variable is dummy coded the same way, into multiple 1/0 variables.  Every category but one gets its own dummy variable, so if there are M categories, there will be M-1 dummy variables.  Each category’s dummy variable has a value of 1 for its category and 0 for all others.  The remaining category, the reference category, doesn’t need its own dummy variable because it is uniquely identified by all the other dummies being 0.
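For example, with three outcome categories A, B, and C, and C chosen as the reference:

Category         Dummy 1   Dummy 2
A                   1         0
B                   0         1
C (reference)       0         0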

The multinomial logistic regression then estimates a separate binary logistic regression model for each of those dummy variables.  The result is (more…)
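In SAS, one way to fit this model is PROC LOGISTIC with the generalized logit link; a minimal sketch, with hypothetical dataset and variable names (REF= names the reference category):

PROC LOGISTIC DATA=mydata;
MODEL Y(REF='C') = X1 X2 / LINK=GLOGIT; /* one logit per non-reference category */
RUN;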