Linear Regression

Continuous and Categorical Variables: The Trouble with Median Splits

February 16th, 2009

A median split is one method for turning a continuous variable into a categorical one.  Essentially, the idea is to find the median of the continuous variable.  Any value below the median is put in the category “Low,” and every value above it is labeled “High.”
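As a minimal sketch with hypothetical scores, the whole procedure comes down to this:

```python
from statistics import median

# Hypothetical continuous scores
scores = [3.1, 4.7, 5.0, 5.2, 6.8, 7.4, 8.9, 9.3]

med = median(scores)  # median of the continuous variable

# Label each value "Low" if below the median, "High" otherwise
groups = ["Low" if s < med else "High" for s in scores]

print(med)     # 6.0
print(groups)  # ['Low', 'Low', 'Low', 'Low', 'High', 'High', 'High', 'High']
```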

This is a very common practice in many social science fields in which researchers are trained in ANOVA but not Regression.  At least that was true when I was in grad school in psychology.  And yes, oh so many years ago, I used all these techniques I’m going to tell you not to.

There are problems with median splits.  The first is purely logical.  When a continuum is categorized, every value above the median, for example, is considered equal.  Does it really make sense that a value just above the median is considered the same as a value way out at the end of the distribution?  And different from values just below the median?  Not so much.

So one solution is to split the sample into three groups instead of two, then drop the middle group.  This at least creates some separation between the two remaining groups.  The obvious problem here, though, is that you lose a third of your sample.

The second problem with categorizing a continuous predictor, regardless of how you do it, is loss of power (Aiken & West, 1991).  It’s simply harder to find effects that are really there.




So why is it common practice?  Because categorizing continuous variables is the only way to stuff them into an ANOVA, which is the only statistical method researchers in many fields are trained to use.

Rather than force a method that isn’t quite appropriate, it would behoove researchers, and the quality of their research, to learn the general linear model and how ANOVA fits into it.  It’s really only a short leap from ANOVA to regression but a necessary one.  GLMs can include interactions among continuous and categorical predictors just as ANOVA does.

If the predictor is left continuous, the GLM fits a regression line to its effect.  If it’s categorized, the model compares group means.  It often happens that the difference in means isn’t significant while the slope is.
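A quick simulation makes the point.  This is a sketch with made-up data (a true slope of 0.3 plus noise): fit the slope to the continuous predictor, then median-split the same predictor and compare group means.

```python
import random
from statistics import mean, median

random.seed(1)

# Hypothetical data: a modest linear effect (true slope 0.3) plus noise
x = [random.uniform(0, 10) for _ in range(200)]
y = [0.3 * xi + random.gauss(0, 2) for xi in x]

# Left continuous: the ordinary least-squares slope, b = cov(x, y) / var(x)
mx, my = mean(x), mean(y)
b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
    sum((xi - mx) ** 2 for xi in x)

# Categorized: median-split the predictor, then compare group means
med = median(x)
low = [yi for xi, yi in zip(x, y) if xi < med]
high = [yi for xi, yi in zip(x, y) if xi >= med]
diff = mean(high) - mean(low)

print(round(b, 2))     # slope estimate, close to the true 0.3
print(round(diff, 2))  # mean difference; the per-unit effect is lost
```

The slope uses every value of x; the mean difference throws away everything except which side of the median each value fell on, which is exactly where the lost power goes.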

Reference: Aiken, L. S., &amp; West, S. G. (1991). Multiple Regression: Testing and Interpreting Interactions. Sage.

 


When NOT to Center a Predictor Variable in Regression

February 9th, 2009

There are two reasons to center predictor variables in any type of regression analysis: linear, logistic, multilevel, and so on.

1. To lessen the correlation between a multiplicative term (interaction or polynomial term) and its component variables (the ones that were multiplied).

2. To make interpretation of parameter estimates easier.
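The first reason is easy to see numerically.  Here’s a sketch with hypothetical data: a predictor whose values sit far from zero is nearly collinear with its own squared (polynomial) term, and centering first shrinks that correlation dramatically.

```python
from statistics import mean

def corr(a, b):
    # Pearson correlation computed from scratch
    ma, mb = mean(a), mean(b)
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = sum((x - ma) ** 2 for x in a)
    vb = sum((y - mb) ** 2 for y in b)
    return cov / (va * vb) ** 0.5

# Hypothetical predictor with values far from zero
x = [20, 22, 25, 27, 30, 31, 35, 38, 40, 44]

# Raw squared term is nearly collinear with x itself
raw_sq = [xi ** 2 for xi in x]

# Center first, then square
mx = mean(x)
xc = [xi - mx for xi in x]
centered_sq = [xi ** 2 for xi in xc]

print(round(corr(x, raw_sq), 3))        # very close to 1
print(round(corr(xc, centered_sq), 3))  # much smaller
```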

I was recently asked: when is centering NOT a good idea? (more…)


Proportions as Dependent Variable in Regression–Which Type of Model?

January 26th, 2009

When the dependent variable in a regression model is a proportion or a percentage, it can be tricky to decide on the appropriate way to model it.

The big problem with ordinary linear regression is that the model can predict values that aren’t possible: values below 0 or above 1.  The other problem is that the relationship isn’t linear; it’s sigmoidal.  A sigmoidal curve looks like a flattened S: linear in the middle, but flattened on the ends.  So now what?
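You can see the flattened-S shape numerically.  A small sketch with the standard logistic function: equal steps along the predictor produce big changes in the middle of the curve and tiny ones out in the tails.

```python
import math

def logistic(z):
    # Standard logistic (sigmoidal) function
    return 1 / (1 + math.exp(-z))

# In the middle, the curve is close to linear...
middle_step = logistic(0.5) - logistic(0.0)

# ...but on the ends it flattens out
tail_step = logistic(4.5) - logistic(4.0)

print(round(middle_step, 3))  # about 0.122
print(round(tail_step, 3))    # about 0.007
```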

The simplest approach is to do a linear regression anyway.  This approach can be justified only in a few situations.

1. All your data fall in the middle, linear section of the curve.  This generally translates to all your data being between .2 and .8 (although I’ve heard that between .3 and .7 is better).  If this holds, you don’t have to worry about either objection.  You do have a linear relationship, and you won’t get predicted values much beyond those values—certainly not beyond 0 or 1.

2. You have a really complicated model that would be much harder to fit another way.  If you can assume a linear model, it will be much easier to do, say, a complicated mixed model or a structural equation model.  If it’s just a single multiple regression, however, you should look into one of the other methods.

A second approach is to treat the proportion as a binary response and run a logistic or probit regression.  This will only work if the proportion can be thought of as a number of successes out of a number of trials, and you have the data for both.  For example, the proportion of land area covered with a certain species of plant would be hard to think of this way, but the proportion of correct answers on a 20-item assessment would work.
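To make the successes-out-of-trials framing concrete, here’s a sketch with hypothetical assessment data.  Each student’s score is a count of correct answers out of 20 items, and the empirical log-odds is the scale a logistic model works on:

```python
import math

# Hypothetical assessment data: correct answers out of 20 items per student
data = [(14, 20), (9, 20), (18, 20), (11, 20)]

rows = []
for successes, trials in data:
    p = successes / trials
    # Empirical log-odds: the scale a logistic model works on
    logit = math.log(p / (1 - p))
    rows.append((p, round(logit, 2)))

print(rows)  # [(0.7, 0.85), (0.45, -0.2), (0.9, 2.2), (0.55, 0.2)]
```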

The third approach is to treat the proportion as a censored continuous variable.  The censoring means that you don’t have information below 0 or above 1.  For example, perhaps the plant would spread even more if it hadn’t run out of land.  If you take this approach, you would run the model as a two-limit tobit model (Long, 1997).  This approach works best if there isn’t an excessive amount of censoring (values of 0 and 1).
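Before committing to a tobit model, it’s worth checking how much of the sample actually sits at the limits.  A quick sketch with hypothetical proportions:

```python
# Hypothetical proportions, some censored at the limits 0 and 1
props = [0.0, 0.15, 0.4, 0.62, 0.88, 1.0, 1.0, 0.33, 0.0, 0.71]

censored = sum(1 for p in props if p in (0.0, 1.0))
frac = censored / len(props)

print(censored, frac)  # 4 values at a limit: 40% censored
```

With this much censoring you’d want to think twice; with only a handful of limit values, the two-limit tobit is on firmer ground.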

Reference: Long, J. S. (1997). Regression Models for Categorical and Limited Dependent Variables. Sage.

 


The Exposure Variable in Poisson Regression Models

January 23rd, 2009

Poisson regression models and their extensions (Zero-Inflated Poisson, Negative Binomial regression, etc.) are used to model counts and rates. A few examples of count variables include:

– Number of words an eighteen month old can say

– Number of aggressive incidents performed by patients in an inpatient rehab center

Most count variables follow one of these distributions in the Poisson family. Poisson regression models allow researchers to examine the relationship between predictors and count outcome variables.
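When counts are collected over unequal exposures—say, patients observed for different numbers of months—the standard device is to include the log of the exposure as an offset, so the model describes a rate rather than a raw count.  A sketch with hypothetical records:

```python
import math

# Hypothetical counts with unequal exposure (months of observation)
records = [
    {"incidents": 6, "months": 12},
    {"incidents": 2, "months": 3},
    {"incidents": 9, "months": 20},
]

for r in records:
    # The offset enters the linear predictor with its coefficient fixed at 1,
    # so the Poisson model describes the rate rather than the raw count
    r["offset"] = math.log(r["months"])
    r["rate"] = r["incidents"] / r["months"]

rates = [round(r["rate"], 2) for r in records]
print(rates)  # [0.5, 0.67, 0.45]
```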

Using these regression models gives much more accurate parameter (more…)


Interpreting Interactions in Regression

January 19th, 2009

Adding interaction terms to a regression model has real benefits. It greatly expands your understanding of the relationships among the variables in the model. And you can test more specific hypotheses.  But interpreting interactions in regression takes understanding of what each coefficient is telling you.

The example from Interpreting Regression Coefficients was a model of the height of a shrub (Height) based on the amount of bacteria in the soil (Bacteria) and whether the shrub is located in partial or full sun (Sun). Height is measured in cm, Bacteria is measured in thousands per ml of soil, and Sun = 0 if the plant is in partial sun, and Sun = 1 if the plant is in full sun.
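With an interaction term, the model is Height = b0 + b1·Bacteria + b2·Sun + b3·(Bacteria × Sun).  Using hypothetical coefficients (the post’s actual estimates aren’t shown in this excerpt), you can see what the interaction does to the slope:

```python
# Hypothetical coefficients for:
# Height = b0 + b1*Bacteria + b2*Sun + b3*(Bacteria * Sun)
b0, b1, b2, b3 = 35.0, 2.0, 9.0, 1.5

def predicted_height(bacteria, sun):
    return b0 + b1 * bacteria + b2 * sun + b3 * bacteria * sun

# With the interaction, the slope for Bacteria depends on Sun:
# partial sun (Sun = 0): slope is b1; full sun (Sun = 1): slope is b1 + b3
print(predicted_height(5, 0))  # 45.0
print(predicted_height(5, 1))  # 61.5
```

That is the key to interpretation: b1 is not “the effect of Bacteria” overall, but the effect of Bacteria only when Sun = 0.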

(more…)


SPSS GLM: Choosing Fixed Factors and Covariates

December 30th, 2008

The beauty of the Univariate GLM procedure in SPSS is that it is so flexible.  You can use it to analyze regressions, ANOVAs, ANCOVAs with all sorts of interactions, dummy coding, etc.

The downside of this flexibility is that it’s often confusing to decide what to put where, and what it all means.

So here’s a quick breakdown.

The dependent variable, I hope, is pretty straightforward: put in your continuous dependent variable.

Fixed Factors are categorical independent variables.  It does not matter if the variable is (more…)