There are not a lot of statistical methods designed just for ordinal variables. (There are a few, though.)
But that doesn’t mean that you’re stuck with few options. There are more than you’d think. (more…)
I recently received a great question in a comment about whether the assumptions of normality, constant variance, and independence in linear models are about the errors, εᵢ, or the response variable, Yᵢ.
The asker had a situation where Y, the response, was not normally distributed, but the residuals were.
Quick Answer: It’s just the errors.
In fact, if you look at any (good) statistics textbook on linear models, you’ll see the assumptions stated right below the model: (more…)
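For reference, a typical textbook statement of the simple linear model and its error assumptions looks like this (a generic sketch, not a quote from any particular text):

```latex
% A standard statement of the simple linear regression model and its
% assumptions (generic textbook form, not quoted from a specific source):
\[
Y_i = \beta_0 + \beta_1 X_i + \varepsilon_i,
\qquad \varepsilon_i \overset{\text{iid}}{\sim} N(0, \sigma^2)
\]
% Normality, constant variance, and independence all refer to the errors
% \varepsilon_i; nothing here constrains the marginal distribution of Y_i.
```

Each Yᵢ is then conditionally normal given Xᵢ, which is why the raw distribution of Y can look quite non-normal even when the residuals behave perfectly well.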
A well-fitting regression model results in predicted values close to the observed data values. The mean model, which uses the mean for every predicted value, generally would be used if there were no useful predictor variables. The fit of a proposed regression model should therefore be better than the fit of the mean model. But how do you measure that model fit?
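One standard answer, sketched below with made-up toy data: compare the proposed model’s sum of squared errors to the mean model’s. R² is exactly that proportional reduction in error.

```python
# Illustrative sketch (toy data, assumed for this example): comparing a
# regression model's fit to the mean model via sums of squared errors.
import numpy as np

# toy data, invented purely for illustration
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

# mean model: predict the mean of y for every observation
sse_mean = np.sum((y - y.mean()) ** 2)    # total sum of squares (SST)

# proposed model: ordinary least squares line
b1, b0 = np.polyfit(x, y, 1)              # slope, intercept
y_hat = b0 + b1 * x
sse_model = np.sum((y - y_hat) ** 2)      # residual sum of squares (SSE)

# R-squared: proportional reduction in error relative to the mean model
r_squared = 1 - sse_model / sse_mean
print(f"SST={sse_mean:.3f}, SSE={sse_model:.3f}, R^2={r_squared:.3f}")
```

An R² near 1 means the model’s squared errors are a small fraction of the mean model’s; an R² near 0 means the model does little better than predicting the mean every time.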
When is it important to use adjusted R-squared instead of R-squared?
R², the Coefficient of Determination, is one of the most useful and intuitive statistics we have in linear regression.
It tells you how well the model predicts the outcome and has some nice properties. But it also has one big drawback.
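The usual drawback: R² can never decrease when you add a predictor, even a useless one. Adjusted R² counters this with a penalty based on the number of predictors k and the sample size n (the standard definition, shown below):

```latex
% Adjusted R-squared: shrinks R-squared by a penalty that grows with the
% number of predictors k for a fixed sample size n (standard definition).
\[
R^2_{\text{adj}} = 1 - \left(1 - R^2\right)\frac{n - 1}{n - k - 1}
\]
```

Unlike R², this quantity falls when a new predictor adds less explanatory power than its cost in degrees of freedom, which is what makes it useful for comparing models of different sizes.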
What are the assumptions of linear models? If you compare the lists of assumptions in any two sources, most of the time they’re not the same.
(more…)