Categorical Predictor

Should I Specify a Model Predictor as Categorical or Continuous?

October 22nd, 2018

Predictor variables in statistical models can be treated as either continuous or categorical.

Usually, this is a very straightforward decision.

Categorical predictors, like treatment group, marital status, or highest educational degree, should be specified as categorical.

Likewise, continuous predictors, like age, systolic blood pressure, or percentage of ground cover, should be specified as continuous.

But some numerical predictors aren't continuous. These can make sense to treat as continuous in some situations and as categorical in others.
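Here is a minimal sketch of what that choice looks like in practice, assuming Python with pandas and statsmodels; the variable num_children and the data are invented for illustration:

```python
# A sketch only: "num_children" and the data are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "num_children": [0, 1, 2, 3, 0, 2, 1, 4, 3, 2],
    "income":       [52, 48, 61, 55, 50, 63, 47, 58, 60, 62],
})

# Entered as continuous: one slope, assuming a linear effect per additional child
continuous_fit = smf.ols("income ~ num_children", data=df).fit()

# Entered as categorical: one dummy per level, no linearity assumption
categorical_fit = smf.ols("income ~ C(num_children)", data=df).fit()

print(continuous_fit.params)
print(categorical_fit.params)
```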



The Impact of Removing the Constant from a Regression Model: The Categorical Case

December 9th, 2016

In a simple linear regression model, how the constant (a.k.a. intercept) is interpreted depends upon the type of predictor (independent) variable.

If the predictor is categorical and dummy-coded, the constant is the mean value of the outcome variable for the reference category only. If the predictor variable is continuous, the constant equals the predicted value of the outcome variable when the predictor variable equals zero.
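Here is a quick sketch that checks both interpretations numerically, assuming Python with pandas and statsmodels; the variables group, age, and y and their values are invented for illustration:

```python
# A sketch only: "group", "age", "y" and their values are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "group": ["control", "control", "control", "treated", "treated", "treated"],
    "age":   [20, 30, 40, 25, 35, 45],
    "y":     [10.0, 12.0, 14.0, 15.0, 17.0, 19.0],
})

# Dummy-coded categorical predictor: the constant is the mean of the
# reference category ("control", the first level alphabetically).
cat_fit = smf.ols("y ~ C(group)", data=df).fit()
print(cat_fit.params["Intercept"])                  # 12.0
print(df.loc[df.group == "control", "y"].mean())    # 12.0

# Continuous predictor: the constant is the predicted y when age equals zero.
cont_fit = smf.ols("y ~ age", data=df).fit()
print(cont_fit.params["Intercept"])
```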

Removing the Constant When the Predictor Is Categorical

When your predictor variable X is categorical, the results are logical. Let's look at an example.
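The post's own example isn't included in this excerpt. As a stand-in, here is a minimal, hypothetical sketch of the general idea in Python (statsmodels) with made-up data: dropping the constant from a dummy-coded model turns each coefficient into that group's mean.

```python
# A sketch only: the groups and values are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.DataFrame({
    "group": ["A", "A", "A", "B", "B", "B"],
    "y":     [10.0, 12.0, 14.0, 20.0, 22.0, 24.0],
})

# With the constant: Intercept = mean of A; C(group)[T.B] = mean(B) - mean(A)
with_constant = smf.ols("y ~ C(group)", data=df).fit()

# Without the constant ("- 1"): one coefficient per group, each equal to that group's mean
without_constant = smf.ols("y ~ C(group) - 1", data=df).fit()

print(with_constant.params)
print(without_constant.params)   # C(group)[A] = 12.0, C(group)[B] = 22.0
```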


Pros and Cons of Treating Ordinal Variables as Nominal or Continuous

July 1st, 2016

There are not a lot of statistical methods designed just for ordinal variables.

But that doesn't mean that you're stuck with few options.  There are more than you'd think.


Confusing Statistical Term #6: Factor

April 27th, 2012

Factor is confusing in much the same way as hierarchical and beta, because it too has different meanings in different contexts.  Factor might be a little worse, though, because its meanings are related.

In both meanings, a factor is a variable.  But what a factor means, and how it's used, is completely different in the two contexts.


3 Situations When it Makes Sense to Categorize a Continuous Predictor in a Regression Model

July 24th, 2009

In many research fields, a common practice is to categorize continuous predictor variables so they work in an ANOVA. This is often done with a median split, which divides the sample into two categories: the "high" values above the median and the "low" values below it.

Reasons Not to Categorize a Continuous Predictor

There are many reasons why this isn't such a good idea.


Continuous and Categorical Variables: The Trouble with Median Splits

February 16th, 2009

A median split is one method for turning a continuous variable into a categorical one.  Essentially, the idea is to find the median of the continuous variable.  Any value below the median is put in the category "Low" and every value above it is labeled "High."
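Here is a minimal sketch of a median split in Python with pandas; the variable anxiety and its values are made up for illustration:

```python
# A sketch only: "anxiety" and its values are hypothetical.
import pandas as pd

anxiety = pd.Series([3.1, 4.7, 5.0, 5.2, 6.8, 7.4, 8.0, 9.3])

median = anxiety.median()                               # 6.0 for these values
split = (anxiety > median).map({True: "High", False: "Low"})
print(split.value_counts())                             # 4 High, 4 Low
```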

This is a very common practice in many social science fields in which researchers are trained in ANOVA but not regression.  At least that was true when I was in grad school in psychology.  And yes, oh so many years ago, I used all the techniques I'm about to tell you not to use.

There are problems with median splits.  The first is purely logical.  When a continuum is categorized, every value above the median, for example, is treated as equal.  Does it really make sense that a value just above the median is considered the same as a value at the far end of the scale, yet different from a value just below the median?  Not so much.

So one solution is to split the sample into three groups, not two, then drop the middle group.  This at least creates some separation between the two groups.  The obvious problem here, though, is that you're losing a third of your sample.

The second problem with categorizing a continuous predictor, regardless of how you do it, is loss of power (Aiken & West, 1991).  It’s simply harder to find effects that are really there.

So why is it common practice?  Because categorizing continuous variables is the only way to stuff them into an ANOVA, which is the only statistical method researchers in many fields are trained to use.

Rather than force a method that isn't quite appropriate, it would behoove researchers, and improve the quality of their research, to learn the general linear model and how ANOVA fits into it.  It's really only a short leap from ANOVA to regression, but a necessary one.  GLMs can include interactions among continuous and categorical predictors just as ANOVA does.

If the predictor is left continuous, the GLM fits a regression line to its effect.  If it's categorized, the model compares the group means.  It often happens that while the difference in means isn't significant, the slope is.
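Here is a small simulation sketch, not from the original post, that compares the two approaches on the same made-up data (Python with numpy, pandas, and statsmodels); across repeated runs, the test of the slope typically gives the smaller p-value than the test of the split-group means:

```python
# A sketch only: the data are simulated and the effect size is made up.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 60
df = pd.DataFrame({"x": rng.normal(size=n)})
df["y"] = 0.3 * df["x"] + rng.normal(size=n)
df["x_split"] = np.where(df["x"] > df["x"].median(), "High", "Low")

slope_fit = smf.ols("y ~ x", data=df).fit()             # fits a regression line
means_fit = smf.ols("y ~ C(x_split)", data=df).fit()    # compares two group means

print(slope_fit.pvalues["x"])                  # test of the slope
print(means_fit.pvalues["C(x_split)[T.Low]"])  # test of the difference in means
```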

Reference: Aiken, L. S., & West, S. G. (1991). Multiple Regression: Testing and Interpreting Interactions. Newbury Park, CA: Sage.