Assumptions of Linear Models are about Residuals, not the Response Variable

I recently received a great question in a comment about whether the assumptions of normality, constant variance, and independence in linear models are about the residuals or the response variable.

The asker had a situation where Y, the response, was not normally distributed, but the residuals were.

Quick Answer:  It’s just the residuals.

In fact, if you look at any (good) statistics textbook on linear models, you’ll see the assumptions stated right below the model:

ε ~ i.i.d. N(0, σ²)

That ε is the residual term (and it ought to have an i subscript, one for each individual). The i.i.d. means the residuals are independent and identically distributed. They all have the same distribution, which is defined right afterward.

You’ll notice there is nothing similar about Y.  ε’s distribution is influenced by Y’s, which is why Y has to be continuous, unbounded, and measured on an interval or ratio scale.

But Y’s distribution is also influenced by the X’s.  ε’s isn’t.  That’s why you can get a normal distribution for ε, but lopsided, chunky, or just plain weird-looking Y.
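
To see this in action, here’s a minimal simulation sketch in Python (using numpy and scipy; the model, coefficients, and sample size are invented for illustration). A skewed X plus normal errors produces a skewed Y, yet the residuals come out normal:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Skewed predictor + normal errors => skewed Y, normal residuals
n = 1000
x = rng.exponential(scale=2.0, size=n)        # lopsided X
eps = rng.normal(loc=0.0, scale=1.0, size=n)  # normal error term
y = 1 + 4 * x + eps                           # Y inherits X's skew

print(stats.skew(y))    # strongly positive: Y looks nothing like a normal
print(stats.skew(eps))  # near 0: the error term is symmetric

# Fit OLS and check normality where it matters: the residuals
X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat
print(stats.shapiro(resid))  # large p-value: no evidence against normality
```

Run a normality test on y here and it will reject badly; run it on resid and it won’t. The assumption lives in the second check, not the first.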

 



Comments

  1. Tobia says

    Hi Karen. Thanks for the post.

    I just have one question to make sure I got your point. If I cannot reject the null of normally distributed residuals (for example, running an Anderson-Darling test), does this imply that p-values in an OLS regression are right and reliable, even though the Xs and Ys are not normal?

    Cheers,
    Tobia

  2. Bruce Weaver says

    Hello Karen. Thanks for this nice post on an issue that often confuses people. I have two minor comments. First, I think you meant to say interval OR ratio scale in the second to last paragraph. Second, I think it is useful (at least for more advanced users of statistics) to point out the important distinction between errors and residuals, as in this Wikipedia page:

    http://en.wikipedia.org/wiki/Errors_and_residuals_in_statistics

    The i.i.d. N(0, σ²) assumption applies to the errors, not the residuals. For example, if you give me n-1 of the residuals from your regression model, I can work out the last one, because they must sum to 0. So the residuals are not truly independent. The unobservable errors, on the other hand, can be truly independent.

    Once again, thanks for a great post.

    Cheers,
    Bruce

    • Yashwanth says

      Thanks Bruce. The answer had me confused about errors versus residuals. This comment restored my faith in my understanding.
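
To make Bruce’s distinction concrete, here’s a small numpy sketch (the data are made up for illustration): the errors are generated as genuinely independent draws, but once you fit OLS with an intercept, the fitted residuals must sum to zero, so any n-1 of them determine the last one:

```python
import numpy as np

rng = np.random.default_rng(0)

n = 50
x = rng.normal(size=n)
errors = rng.normal(size=n)  # unobservable errors: truly independent draws
y = 2 + 3 * x + errors

# OLS with an intercept
X = np.column_stack([np.ones(n), x])
beta_hat, *_ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta_hat     # observable residuals

print(resid.sum())                   # ~0: the intercept forces this constraint
print(resid[-1], -resid[:-1].sum())  # the last residual is pinned down by the rest
```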

  3. Kevin says

    Hi Karen,

    Since Y = E(Y) + ε, and E(Y) is a constant (a function of the X’s and betas), this should imply that the variance, independence, and distributional assumptions on ε apply to Y as well. Am I right to say this?

    • Karen says

      Hi Kevin,

      One small change that makes all the difference: Y = E(Y|X) + ε. If every individual had the same value of X, then yes, the distribution of Y would match that of ε. Since X values generally differ, the Y’s are affected by the X’s but the residuals aren’t.

      The distribution of Y|X is the same as the distribution of ε, but the distribution of Y isn’t necessarily. I’ve seen many data sets where Y is skewed, but ε is normal.

      Karen
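
Karen’s distinction between the conditional and marginal distributions is easy to check by simulation. In this hedged sketch (invented numbers again), the marginal Y is skewed, but the Y values within a thin slice of X look just like ε:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

n = 200_000
x = rng.exponential(scale=2.0, size=n)
eps = rng.normal(size=n)
y = 1 + 4 * x + eps

# Marginal Y: skewed, because it mixes many different values of E(Y|X)
print(stats.skew(y))

# Conditional Y given X near 3: a thin slice of X, so Y ~ E(Y|X=3) + eps
in_slice = np.abs(x - 3.0) < 0.01
print(in_slice.sum(), stats.skew(y[in_slice]))  # skew near 0: Y|X looks like eps
```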

