In many research fields, particularly those that rely mostly on ANOVA, a common practice is to categorize continuous predictor variables so that they fit into an ANOVA framework. This is usually done with median splits: splitting the sample into two categories, the “high” values above the median and the “low” values below it. There are many reasons why this isn’t a good idea:

- the median varies from sample to sample, so the same category can mean different things in different samples
- all values on one side of the median are treated as equivalent (any variation within a category is ignored), yet two values right next to each other on either side of the median are treated as different
- the categorization is completely arbitrary. A “high” score isn’t necessarily high. If the scale is skewed, as many are, even a value near the low end of the scale can land in the “high” category.

But it can be very useful and legitimate to be able to choose whether to treat an independent variable as categorical or continuous. Knowing when it is appropriate and understanding how it affects interpretation of parameters allows the data analyst to find real results that might otherwise have been missed.

The main thing to understand is that the general linear model doesn’t really care if your predictor is continuous or categorical. But you, as the analyst, choose the information you want from the analysis based on the coding of the predictor. Numerical predictors are usually coded with the actual numerical values. Categorical variables are often coded with dummy variables—0 or 1.

Without getting into the details of coding schemes, when all the values of the predictor are 0 and 1, there is no real information about the distance between them. There’s only one fixed distance—the difference between 0 and 1. So the model returns a parameter estimate that only really gives information about the response—the difference in the means of the response for the two groups.
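To make this concrete, here is a minimal sketch (with made-up numbers) showing that the least-squares slope for a 0/1 dummy predictor is exactly the difference between the two group means:

```python
import numpy as np

# Made-up response values for a dummy-coded predictor:
# 0 = "low" group, 1 = "high" group
x = np.array([0, 0, 0, 0, 1, 1, 1, 1])
y = np.array([4.0, 5.0, 6.0, 5.0, 8.0, 9.0, 7.0, 8.0])

slope, intercept = np.polyfit(x, y, 1)

# With a 0/1 predictor, the fitted slope equals the difference
# in group means, since the only "distance" is from 0 to 1
diff_in_means = y[x == 1].mean() - y[x == 0].mean()
print(slope, diff_in_means)  # the two numbers agree
```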

But when a predictor has many numerical values, the model returns a parameter estimate that uses the information about the distances between the predictor values—the slope of a line. The slope uses both the horizontal distances—the variation in the predictor—and the vertical distances—the variation in the response.
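The slope formula makes this explicit: it is the covariance of predictor and response (the vertical spread paired with the horizontal spread) divided by the variance of the predictor. A small sketch with made-up values:

```python
import numpy as np

# Made-up continuous predictor with several distinct values
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.0, 4.0, 5.0, 4.0, 6.0])

# Least-squares slope: covariance of x and y over variance of x,
# so it uses the spacing of the predictor values, not just group means
slope = np.cov(x, y)[0, 1] / np.var(x, ddof=1)
print(slope)
```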

So when deciding whether to treat a predictor as continuous or categorical, what you are really deciding is how important it is to include the detailed information about the variation of values in the predictor. As I already mentioned, median splits create arbitrary groupings that eliminate all detail. Some other situations also throw away the detail, but in a way that is less arbitrary and that may give you all the information you need:

1. Sometimes the quantitative scale reflects meaningful qualitative differences. For example, a well-tested depression scale may have meaningful cut points that reflect “none”, “mild”, and “clinical” depression levels. Likewise, some quantitative variables have natural, meaningful qualitative cut points. One example would be age in a study of retirement planning. People who have reached retirement age, and with it eligibility for pensions, Social Security, and Medicare, are in a qualitatively different situation than those who have not.
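If you do have established cut points, recoding is a one-liner. The cut points below are hypothetical, purely for illustration:

```python
import numpy as np

# Hypothetical cut points for a depression scale: below 10 = "none",
# 10-19 = "mild", 20 and up = "clinical" (illustrative values only)
scores = np.array([4, 12, 25, 9, 18, 31])
labels = np.array(["none", "mild", "clinical"])
categories = labels[np.digitize(scores, bins=[10, 20])]
print(categories)  # one label per score
```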

2. When there are few values of the quantitative variable, fitting a line may not be appropriate. Mathematically, only two values are necessary to fit a line, but the line will be forced through both points, reproducing the two means exactly. So if that retirement study compares workers at age 35 to those at age 55, the line is unlikely to be a true representation of the real relationship. Comparing the means of these qualitatively different groups would be a better approach. Having three points is better than two, but the more, the better. If the study included ages between 35 and 55, fitting a line would actually give more information.
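You can verify the “forced through both points” behavior directly. With made-up scores at only two ages, the fitted line reproduces the two group means exactly, so the regression adds nothing beyond a comparison of means:

```python
import numpy as np

# Made-up retirement-planning scores at only two ages
age = np.array([35, 35, 35, 55, 55, 55])
score = np.array([2.0, 3.0, 4.0, 8.0, 9.0, 10.0])

slope, intercept = np.polyfit(age, score, 1)

# The fitted values at ages 35 and 55 match the group means
# (up to floating-point precision), so the line can't tell us
# anything the two means don't already say
print(intercept + slope * 35, score[age == 35].mean())
print(intercept + slope * 55, score[age == 55].mean())
```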

3. When the relationship is not linear. A predictor with as few as 5 or 6 values can be treated as continuous if those values follow a line. But if they don’t, it can be much more straightforward, and you can learn a lot more about the relationship between your variables, by looking at which means are higher than which others, rather than trying to fit a complicated non-linear model. In the same retirement-planning example, say you have data for workers aged 30, 35, 40, 45, 50, and 55. If the first four ages have similar mean amounts of retirement planning, but at age 50 the mean starts to go up, and at 55 it jumps much higher, one approach would be to fit some sort of exponential model or step function. Another approach would be to treat age as categorical and see at which age retirement planning increases. Although technically this approach doesn’t use all the information in the data, it answers some research questions more directly.
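With made-up data of this shape, the categorical view is just a table of group means, and the jump is visible by inspection:

```python
import numpy as np

# Made-up planning scores by age: flat through 45, rising at 50,
# jumping at 55 (numbers invented for illustration)
scores_by_age = {30: [2.1, 1.9], 35: [2.0, 2.2], 40: [2.1, 2.0],
                 45: [2.2, 1.9], 50: [3.5, 3.7], 55: [6.9, 7.1]}

# Treating age as categorical: compare the group means directly
means = {age: float(np.mean(vals)) for age, vals in scores_by_age.items()}
for age in sorted(means):
    print(age, means[age])
```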

So while you don’t want to categorize a variable just to get it to fit into the analysis you’re most comfortable with, you do want to do it if it will help you understand the real relationship between your variables.


Sometimes when I have at least a dozen levels of an ordinal factor for which I expect a monotonic relationship that might not be linear, I use something like Spearman’s rho instead of Pearson’s r, because Spearman’s rho does not assume linearity.