One of the tricky parts about dummy coded (0/1) variables is keeping track of what’s a 0 and what’s a 1.
This is made particularly tricky because sometimes your software switches them on you.
Here’s one example in a question I received recently. The context was a Linear Mixed Model, but this can happen in other procedures as well.
I dummy code my categorical variables “0” or “1” but for some reason in the (more…)
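The issue above can be illustrated with a small sketch. This is a hypothetical Python example, not the reader's actual data: the point is that which level you assign to 0 (the reference category) determines the sign and interpretation of the coefficient, and some software silently re-sorts levels for you.

```python
# Hypothetical two-level grouping variable.
groups = ["treatment", "control", "treatment", "control"]

# Coding A: control = 0 (reference), treatment = 1.
code_a = [1 if g == "treatment" else 0 for g in groups]

# Coding B: treatment = 0 (reference), control = 1.
# Same information, but a regression coefficient on this
# variable would flip sign relative to Coding A.
code_b = [1 - x for x in code_a]
```

If your software reports a coefficient with an unexpected sign, checking which level it treated as the reference is often the explanation.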
Model building (choosing predictors) is one of those skills in statistics that is difficult to teach. It’s hard to lay out the steps, because at each step, you have to evaluate the situation and make decisions on the next step.
If you’re running purely predictive models, and the relationships among the variables aren’t the focus, it’s much easier. Go ahead and run a stepwise regression model. Let the data give you the best prediction.
But if the point is to answer a research question that describes relationships, you’re going to have to get your hands dirty.
It’s easy to say “use theory” or “test your research question” but that ignores a lot of practical issues. Like the fact that you may have 10 different variables that all measure the same theoretical construct, and it’s not clear which one to use. (more…)
Can I use SPSS MIXED models for (a) ordinal logistic regression, and (b) multinomial logistic regression?
Every once in a while I get emailed a question that I think others will find helpful. This is definitely one of them.
My answer:
No.
(And by the way, this is all true in SAS as well. I’ll include the SAS versions in parentheses). (more…)
If you’ve compared two textbooks on linear models, chances are, you’ve seen two different lists of assumptions.
I’ve spent a lot of time trying to get to the bottom of this, and I think it comes down to a few things.
1. There are four assumptions that are explicitly stated along with the model, and some authors stop there.
2. Some authors are writing for introductory classes and, rightly so, don’t want to confuse students with too many abstract, and sometimes untestable, (more…)
Censored data are inherent in any analysis, like Event History or Survival Analysis, in which the outcome measures the Time to Event (TTE). Censoring occurs when the event doesn’t occur for an observed individual during the time we observe them.
Despite the name, the “survival” event could be any event for which you would like to describe the mean or median TTE. To take the censoring into account, though, you need to make sure your data are set up correctly.
Here is a simple example, for a data set that measures days after surgery until an (more…)
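A common way to set up such data is one row per subject, with a follow-up time and an event indicator. This is a minimal sketch with made-up values, not the data set from the example above:

```python
# Hypothetical time-to-event data: one row per subject.
# event = 1 means the event was observed at `days`;
# event = 0 means the subject was censored at `days`
# (followed that long with no event).
subjects = [
    {"id": 1, "days": 30, "event": 1},  # event observed on day 30
    {"id": 2, "days": 45, "event": 0},  # censored after 45 days
    {"id": 3, "days": 12, "event": 1},  # event observed on day 12
]

# Only uncensored rows give exact event times; censored rows still
# carry the information that the event time exceeds `days`.
observed_times = [s["days"] for s in subjects if s["event"] == 1]
```

Most survival procedures (SAS PROC LIFETEST, SPSS KM, etc.) expect exactly this pair of variables: a time and a status indicator.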
Time to event analyses (aka, Survival Analysis and Event History Analysis) are used often within medical, sales and epidemiological research. Some examples of time-to-event analysis are measuring the median time to death after being diagnosed with a heart condition, comparing male and female time to purchase after being given a coupon and estimating time to infection after exposure to a disease.
Survival time has two components that must be clearly defined: a beginning point and an endpoint that is reached either when the event occurs or when the follow-up time has ended.
One basic concept needed to understand time-to-event (TTE) analysis is censoring.
In simple TTE, you should have two types of observations:
1. The event occurred, and we are able to measure when it occurred OR
2. The event did NOT occur during the time we observed the individual, and we only know the total number of days in which it didn’t occur. (CENSORED).
Again you have two groups: one where the time to event is known exactly, and one where it is not. For the latter group, we only know that the event of interest did not occur during a certain amount of follow-up time. We don’t know whether it would have occurred had we observed the individual longer. But knowing that it didn’t occur for that long tells us something about the risk of the event for that person.
For example, let the time-to-event be a person’s age at onset of cancer. If you stop following someone after age 65, you may know that the person did NOT have cancer at age 65, but you do not have any information after that age.
You know that their age of getting cancer is greater than 65. But you do not know if they will never get cancer or if they’ll get it at age 66, only that they have a “survival” time greater than 65 years. They are censored because we did not gather information on that subject after age 65.
So one cause of censoring is merely that we can’t follow people forever. At some point you have to end your study, and not all people will have experienced the event.
But another common cause is that people are lost to follow-up during a study. This is called random censoring. It occurs when follow-up ends for reasons that are not under the control of the investigator.
In survival analysis, censored observations contribute to the total number at risk up to the time that they ceased to be followed. One advantage here is that the length of time that an individual is followed does not have to be equal for everyone. All observations could have different amounts of follow-up time, and the analysis can take that into account.
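This idea can be sketched with a hand-rolled Kaplan-Meier estimator on a few made-up records (an illustration only; in practice you would use your software's survival procedure). Notice that the censored subjects stay in the risk set right up to their censoring times, even though follow-up lengths differ:

```python
# Toy data: (time, event), where event = 1 is an observed event
# and event = 0 is a censored observation.
records = [(2, 1), (3, 0), (4, 1), (5, 0), (6, 1)]

records.sort()               # process in time order
n_at_risk = len(records)     # everyone starts in the risk set
survival = 1.0
curve = []                   # (time, estimated survival) at event times

for time, event in records:
    if event == 1:
        # Survival drops only at observed event times.
        survival *= (n_at_risk - 1) / n_at_risk
        curve.append((time, survival))
    # Both events and censorings remove one subject from the risk set,
    # so a censored subject contributes up to its censoring time.
    n_at_risk -= 1
```

The censored records at times 3 and 5 never produce a drop in the curve themselves, but they shrink the denominator for the later event at time 4 and 6, which is how the estimate accounts for unequal follow-up.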