In my last blog post, I wrote about a mistake I once made when I didn’t realize the defaults for dummy coding were different in two SPSS procedures (Binary Logistic and GEE).
Ironically, at about the same time I wrote it, I was having a conversation on Twitter with Ann Maria de Mars. She was trying to figure out why her logistic regression model fit statistics were identical in SAS Proc Logistic and SPSS Binary Logistic, yet the coefficients in SAS were half those in SPSS.
It was ironic because I, of course, didn’t recognize it as the same issue and wasn’t much help.
But Ann Maria investigated and discovered that it came down to differences in the defaults for coding categorical predictors in SAS and SPSS. Her detailed and humorous explanation is here.
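To make the difference concrete, here is a minimal sketch in Python with statsmodels (the simulated data and variable names are mine, not from either post). It fits the same logistic regression twice: once with dummy (indicator) coding, as in SPSS Binary Logistic, and once with effect (deviation) coding, the default for a CLASS variable in SAS Proc Logistic.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: a binary outcome and one two-level predictor.
rng = np.random.default_rng(1)
group = rng.choice(["a", "b"], size=500)
p = np.where(group == "b", 0.6, 0.4)          # true event probabilities
df = pd.DataFrame({"group": group, "y": rng.binomial(1, p)})

# Dummy (indicator) coding, 0/1 -- SPSS Binary Logistic's default
m_dummy = smf.logit("y ~ C(group, Treatment)", data=df).fit(disp=0)

# Effect (deviation) coding, -1/+1 -- SAS Proc Logistic's default
m_effect = smf.logit("y ~ C(group, Sum)", data=df).fit(disp=0)

print(m_dummy.params)             # slope: log odds difference between groups
print(m_effect.params)            # slope: half that magnitude (sign can flip)
print(m_dummy.llf, m_effect.llf)  # log-likelihoods: identical model fit
```

The two codings are reparameterizations of the same model, which is why the fit statistics match while the coefficients differ by a factor of two.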
Some takeaways for you, the researcher and data analyst:
1. Give yourself a break if you hit a snag. Even very experienced data analysts, statisticians who understand what they’re doing, get stumped sometimes. Don’t ever think that performing data analysis is an IQ test. You’re bringing together many skills and complex tools.
2. Learn thy software. In my last post, I phrased it “Know thy software”, but this is where you get to know it. Snags are good opportunities to investigate the details of your software, just like Ann Maria did. If you can think of it as a challenge to figure out–a puzzle–it can actually be fun.
Make friends with your syntax manuals.
3. Get help when you need it. Statistical software packages *are* complex tools. You don't have to know everything to use them.
Ask colleagues. Call customer support. Call a stat consultant. That’s what they’re there for.
4. A great way to check your work is to run the same analysis two different ways. It's another reason to be able to use at least two stat software packages. I'm not suggesting you have to run every analysis twice. But when a result looks strange, or you want to double-check an important model, this is a good strategy.
It may be that your results aren’t telling you what you think they are.
One of the tricky parts about dummy coded (0/1) variables is keeping track of what’s a 0 and what’s a 1.
This is made particularly tricky because sometimes your software switches them on you.
Here’s one example in a question I received recently. The context was a Linear Mixed Model, but this can happen in other procedures as well.
I dummy code my categorical variables “0” or “1” but for some reason in the (more…)
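The full question is cut off above, but the general safeguard is worth stating: don't rely on the software's default coding; declare it yourself and verify it. A minimal Python sketch (hypothetical data and names):

```python
import pandas as pd

# Hypothetical data: an indicator you coded 0/1 yourself
df = pd.DataFrame({"treat": [0, 1, 1, 0, 1], "y": [3.2, 4.1, 5.0, 2.9, 4.4]})

# Pin the category order explicitly so a procedure can't silently
# pick a different reference level than the one you intended.
df["treat"] = pd.Categorical(df["treat"], categories=[0, 1])
print(df["treat"].cat.categories)  # confirms 0 is the first (reference) level

# In a patsy/statsmodels formula you can also name the reference outright:
#   y ~ C(treat, Treatment(reference=0))
```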
Model building (choosing predictors) is one of those skills in statistics that is difficult to teach. It's hard to lay out the steps, because at each one you have to evaluate the situation and decide what to do next.
If you’re running purely predictive models, and the relationships among the variables aren’t the focus, it’s much easier. Go ahead and run a stepwise regression model. Let the data give you the best prediction.
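For illustration, here's a minimal forward-selection sketch in Python with statsmodels (a simple stand-in for stepwise regression; the function and variable names are hypothetical):

```python
import numpy as np
import statsmodels.api as sm

def forward_select(df, response, candidates):
    """Greedy forward selection by AIC: add the predictor that most
    improves AIC, and stop when no addition improves it."""
    selected, remaining = [], list(candidates)
    best_aic = np.inf
    while remaining:
        scores = []
        for var in remaining:
            X = sm.add_constant(df[selected + [var]])
            scores.append((sm.OLS(df[response], X).fit().aic, var))
        aic, var = min(scores)
        if aic >= best_aic:
            break
        best_aic = aic
        selected.append(var)
        remaining.remove(var)
    return selected
```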
But if the point is to answer a research question that describes relationships, you’re going to have to get your hands dirty.
It’s easy to say “use theory” or “test your research question” but that ignores a lot of practical issues. Like the fact that you may have 10 different variables that all measure the same theoretical construct, and it’s not clear which one to use. (more…)
Can I use SPSS MIXED models for (a) ordinal logistic regression, and (b) multinomial logistic regression?
Every once in a while I get emailed a question that I think others will find helpful. This is definitely one of them.
My answer:
No.
(And by the way, this is all true in SAS as well. I'll include the SAS versions in parentheses.) (more…)
If you’ve compared two textbooks on linear models, chances are, you’ve seen two different lists of assumptions.
I’ve spent a lot of time trying to get to the bottom of this, and I think it comes down to a few things.
1. There are four assumptions that are explicitly stated along with the model, and some authors stop there.
2. Some authors are writing for introductory classes and, rightfully, don't want to confuse students with too many abstract, and sometimes untestable, (more…)
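For reference, the four explicitly stated assumptions usually come bundled into the model statement itself. In standard notation (a sketch; texts vary in how they label these):

```latex
\[
Y_i = \beta_0 + \beta_1 X_{i1} + \cdots + \beta_p X_{ip} + \varepsilon_i,
\qquad \varepsilon_i \overset{iid}{\sim} N(0, \sigma^2)
\]
```

That single line about the errors carries (1) linearity of the mean, (2) independence of the errors, (3) constant variance, and (4) normality.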
Censored data are inherent in any analysis, like Event History or Survival Analysis, in which the outcome measures the Time to Event (TTE). Censoring occurs when we don't observe the event for an individual during the time we follow them.
Despite the name, the “survival” event could be any event for which you'd like to describe the mean or median TTE. To take the censoring into account, though, you need to make sure your data are set up correctly.
Here is a simple example, for a data set that measures days after surgery until an (more…)
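The full example is cut off above, but the core of the setup is always the same: one column with the follow-up time and one event indicator, where 1 means the event occurred and 0 means the observation was censored. A minimal sketch in Python (hypothetical data; uses the lifelines package):

```python
import pandas as pd
from lifelines import KaplanMeierFitter

# Hypothetical data: days after surgery until the event, plus an
# indicator (1 = event observed, 0 = censored before the event).
df = pd.DataFrame({
    "days":  [5, 12, 30, 30, 45, 60],
    "event": [1,  1,  0,  1,  0,  1],
})

kmf = KaplanMeierFitter()
kmf.fit(durations=df["days"], event_observed=df["event"])
print(kmf.median_survival_time_)  # median TTE, with censoring accounted for
```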