Dear %$firstname$%,
I'm happy to announce that this is the first issue of Volume 2. I launched the business last July with the first StatWise. I sent it out to a number of my previous clients from Cornell, and 22 signed up by the August issue. We now have over 500 members in our community!
I want you to know that I am so grateful to have the chance to guide you through your path to mastering data analysis. Most people I meet think I'm nuts for finding statistics so enjoyable; recoiling in horror and blank stares are not uncommon. While I admit that I do think of statistics as one big puzzle, the real reward for me is hearing the relief in your voices when the light bulb goes on and a previously frustrating analysis suddenly becomes clear. So thank you.
Many times over the past year it has felt like things were coming together incredibly slowly, but now that I look back, I realize my team and I have created a newsletter, a website with resources and articles, a blog (which has gotten almost 30,000 views!), and a monthly webinar series; worked with a number of clients in individual consulting; and offered three new workshops. I'm a bit tired thinking about it all. :-)
But now we begin year 2. This month we're in planning mode--planning new workshops for fall, updating resources on the website, streamlining consulting services, and creating some home-study classes. We're putting together a whole new website with all our training and learning services and products in one place. Keep an eye out for that by early fall.
This month we're talking a bit about categorical variables in the General Linear Model (regression and ANOVA models). Today's article is about the situations in which it is both helpful and legitimate to categorize a continuous independent variable. I hope you find it helpful. And this month's webinar in our Craft of Statistical Analysis series is on Dummy and Effect Coding. When I first announced it in the last webinar, I got a couple of "Hurray!"s, so I expect this will be a popular one.
Happy analyzing,
Karen
In many research fields, particularly those that mostly use ANOVA, a common practice is to categorize continuous predictor variables so they work in an ANOVA. This is often done with median splits—splitting the sample into two categories—the “high” values above the median and the “low” values below the median. There are many reasons why this isn’t such a good idea:
- the median varies from sample to sample, making the categories in different samples have different meanings
- all values on one side of the median are treated as equivalent (any variation within the category is ignored), yet two values right next to each other on opposite sides of the median are treated as different
- the categorization is completely arbitrary. A “high” score isn’t necessarily high. If the scale is skewed, as many are, even a value near the low end can end up in the “high” category (see the sketch below).
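To make that concrete, here is a minimal sketch in Python (with pandas) of what a median split does. The scores and the cutoff are entirely made up for illustration:

    # A median split on a small, made-up set of scores.
    import pandas as pd

    scores = pd.Series([12, 14, 19, 20, 21, 35, 48, 60])   # skewed toward the low end

    median = scores.median()   # 20.5 in this sample; a different sample would give a different cutoff
    groups = pd.cut(scores, bins=[-float("inf"), median, float("inf")],
                    labels=["low", "high"])

    print(pd.DataFrame({"score": scores, "group": groups}))
    # Scores 20 and 21 differ by one point but get different labels,
    # while 21 and 60 differ by 39 points yet share the "high" label.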
But it can be very useful and legitimate to be able to choose whether to treat an independent variable as categorical or continuous. Knowing when it is appropriate, and understanding how it affects the interpretation of the parameters, allows the data analyst to find real results that might otherwise have been missed.
The main thing to understand is that the general linear model doesn’t really care if your predictor is continuous or categorical. But you, as the analyst, choose the information you want from the analysis based on the coding of the predictor. Numerical predictors are usually coded with the actual numerical values. Categorical variables are often coded with dummy variables—0 or 1.
Without getting into the details of coding schemes, when all the values of the predictor are 0 and 1, there is no real information about the distance between them. There’s only one fixed distance—the difference between 0 and 1. So the model returns a parameter estimate that only really gives information about the response—the difference in the means of the response for the two groups.
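As a quick illustration (with made-up numbers, not from any real study), here is a sketch in Python using statsmodels showing that the coefficient on a 0/1 dummy is just the difference between the two group means:

    # With a dummy-coded (0/1) predictor, the slope is the difference in group means.
    import numpy as np
    import statsmodels.api as sm

    group = np.array([0, 0, 0, 0, 1, 1, 1, 1])                # dummy-coded predictor
    y = np.array([3.0, 4.0, 5.0, 4.0, 7.0, 8.0, 6.0, 7.0])    # response

    fit = sm.OLS(y, sm.add_constant(group)).fit()
    print(fit.params)                                   # intercept = mean of group 0; slope = 3.0
    print(y[group == 1].mean() - y[group == 0].mean())  # also 3.0: the difference in means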
But when a predictor has many numerical values, the model returns a parameter estimate that uses the information about the distances between the predictor values—the slope of a line. The slope uses both the horizontal distances—the variation in the predictor—and the vertical distances—the variation in the response.
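Here is the same kind of sketch (again, numbers made up for illustration) with a predictor that takes many values. The estimate is now a slope, and you can see how it combines the spread in the predictor with the spread in the response:

    # With a numerical predictor, the slope is cov(x, y) / var(x):
    # the joint spread of predictor and response, scaled by the spread of the predictor.
    import numpy as np
    import statsmodels.api as sm

    x = np.array([35.0, 40.0, 45.0, 50.0, 55.0])   # e.g., ages
    y = np.array([2.0, 3.5, 4.0, 6.0, 7.5])        # e.g., some outcome score

    slope = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
    fit = sm.OLS(y, sm.add_constant(x)).fit()

    print(slope, fit.params[1])   # the OLS slope matches cov(x, y) / var(x)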
So when deciding whether to treat a predictor as continuous or categorical, what you are really deciding is how important it is to include the detailed information about the variation of values in the predictor. As I already mentioned, median splits create arbitrary groupings that eliminate all detail. Some other situations also throw away the detail, but in a way that is less arbitrary and that may give you all the information you need:
1. Sometimes the quantitative scale reflects meaningful qualitative differences. For example, a well-tested depression scale may have meaningful cut points that reflect “non”, “mild”, and “clinical” depression levels. Likewise, some quantitative variables have natural, meaningful qualitative cut points. One example would be age in a study of retirement planning. People who have reached retirement age and become eligible for pensions, Social Security, and Medicare will have qualitatively different situations than those who have not.
2. When there are few values of the quantitative variable, fitting a line may not be appropriate. Mathematically, only two values are needed to fit a line, but the line will be forced through both points, reflecting the two group means exactly. So if that retirement study compares workers at age 35 to those at age 55, the line is unlikely to be a true representation of the real relationship. Comparing the means of these qualitatively different groups would be a better approach. Having 3 values is better than 2, but the more, the better. If the study included ages between 35 and 55, fitting a line would actually give more information.
3. When the relationship is not linear. As few as 5 or 6 values of the predictor can be treated as continuous if they follow a line. But if they don't, it can be much more straightforward, and you can learn a lot more about the relationship between your variables, by looking at which means are higher than which others, rather than trying to fit a complicated non-linear model. In the same example about retirement planning, say you have data for workers aged 30, 35, 40, 45, 50, and 55. If the first four ages have similar mean amounts of retirement planning, but at age 50 it starts to go up, and at 55 it jumps much higher, one approach would be to fit some sort of exponential model or step function. Another approach would be to treat age as categorical and see at which age retirement planning increased (see the sketch below). Although technically this approach doesn't use all the information in the data, it answers some research questions more directly.
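Here is a minimal sketch of that comparison in Python with statsmodels, using entirely simulated data that loosely follows the retirement example (the ages, means, and jump at 50 are all made up for illustration):

    # Simulated data: planning is flat from age 30 to 45, rises at 50, jumps at 55.
    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    rng = np.random.default_rng(0)
    ages = np.repeat([30, 35, 40, 45, 50, 55], 20)
    true_means = {30: 2, 35: 2, 40: 2, 45: 2, 50: 4, 55: 8}
    planning = np.array([true_means[a] for a in ages]) + rng.normal(0, 1, ages.size)
    df = pd.DataFrame({"age": ages, "planning": planning})

    linear = smf.ols("planning ~ age", data=df).fit()          # age as continuous: one slope
    categorical = smf.ols("planning ~ C(age)", data=df).fit()  # age as categorical: one mean per age

    print(linear.params)       # a single slope, averaging over the flat stretch and the jump
    print(categorical.params)  # differences from the age-30 mean, showing where the increase happens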
So while you don't want to categorize a variable just to get it to fit into the analysis you're most comfortable with, you do want to do it if it will help you understand the real relationship between your variables.
What is The Analysis Factor? The Analysis Factor is the difference between knowing about statistics and knowing how to use statistics. It acknowledges that statistical analysis is an applied skill, one that has to be learned in the context of a researcher's own data, and it supports that learning.
The Analysis Factor, the organization, offers statistical consulting, projects, resources, and learning programs that empower social science researchers to become confident, able, and skilled statistical practitioners. Our aim is to make your journey acquiring the applied skills of statistical analysis easier and more pleasant.
Karen Grace-Martin, the founder, spent seven years as a statistical consultant at Cornell University. While there, she learned that being a great statistical advisor is not only about having excellent statistical skills, but about understanding the pressures and issues researchers face, about fabulous customer service, and about communicating technical ideas at a level each client understands.
You can learn more about Karen Grace-Martin and The Analysis Factor at analysisfactor.com.