No, degrees of freedom is not “having one foot out the door”!
Definitions are rarely very good at explaining the meaning of something. At least not in statistics. Degrees of freedom: “the number of independent values or quantities which can be assigned to a statistical distribution”.
This is no exception.
In your typical statistical work, chances are you have already used quantiles such as the median, 25th or 75th percentiles as descriptive statistics.
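As a quick refresher, here is how those quantiles are computed with NumPy (the income numbers are hypothetical; `np.percentile` uses linear interpolation by default, one of several common quantile estimators):

```python
import numpy as np

# Hypothetical sample of incomes (in $1000s)
incomes = np.array([28, 31, 35, 40, 42, 47, 55, 63, 90])

median = np.percentile(incomes, 50)  # same as np.median(incomes)
q25 = np.percentile(incomes, 25)     # 25th percentile (first quartile)
q75 = np.percentile(incomes, 75)     # 75th percentile (third quartile)

print(median, q25, q75)  # 42.0 35.0 55.0
```

The interquartile range is then simply `q75 - q25`, here 20.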
But did you know quantiles are also valuable in regression, where they can answer a broader set of research questions than standard linear regression?
In standard linear regression, the focus is on estimating the mean of a response variable given a set of predictor variables.
In quantile regression, we can go beyond the mean of the response variable. Instead, we can understand how predictor variables predict (1) the entire distribution of the response variable or (2) one or more relevant features (e.g., center, spread, shape) of that distribution.
For example, quantile regression can help us understand not only how age predicts the mean or median income, but also how age predicts the 75th or 25th percentile of the income distribution.
Or we can see how the inter-quartile range — the width between the 75th and 25th percentile — is affected by age. Perhaps the range becomes wider as age increases, signaling that an increase in age is associated with an increase in income variability.
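The income example above can be sketched with simulated data and statsmodels' `quantreg` (the variable names `age` and `income` and all numbers are invented for illustration). Because the simulated income spread grows with age, the fitted slope at the 75th percentile comes out steeper than at the 25th:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500
age = rng.uniform(20, 60, n)
# Simulated income: the mean rises with age, and so does the spread,
# so the upper and lower quantiles fan out as age increases.
income = 20000 + 500 * age + 100 * age * rng.normal(size=n)
df = pd.DataFrame({"age": age, "income": income})

# Fit separate quantile regressions at the 25th, 50th, and 75th percentiles
fits = {q: smf.quantreg("income ~ age", df).fit(q=q) for q in (0.25, 0.50, 0.75)}
for q, res in fits.items():
    print(f"q={q}: slope = {res.params['age']:.1f}")
```

Comparing the slopes across quantiles is exactly the interquartile-range question: a 75th-percentile slope larger than the 25th-percentile slope means income variability widens with age.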
In this webinar, we will help you become familiar with the power and versatility of quantile regression by discussing topics such as:
- Quantiles – a brief review of their computation, interpretation and uses;
- Distinction between conditional and unconditional quantiles;
- Formulation and estimation of conditional quantile regression models;
- Interpretation of results produced by conditional quantile regression models;
- Graphical displays for visualizing the results of conditional quantile regression models;
- Inference and prediction for conditional quantile regression models;
- Software options for fitting quantile regression models.
Join us for this webinar to learn how quantile regression can expand the scope of the research questions you can address with your data.
Note: This training is an exclusive benefit to members of the Statistically Speaking Membership Program and part of the Stat’s Amore Trainings Series. Each Stat’s Amore Training is approximately 90 minutes long.
While regression models carry a number of distributional assumptions, they make none at all about the distribution of the predictor (i.e., independent) variables.
That’s because regression models are directional. In a correlation, there is no direction: Y and X are interchangeable. If you switched them, you’d get the same correlation coefficient.
But regression is inherently a model about the outcome variable. What predicts its value and how well? The nature of how predictors relate to it…
Here’s a little reminder for those of you checking assumptions in regression and ANOVA:
The assumptions of normality and homogeneity of variance for linear models are not about Y, the dependent variable. (If you think I’m either stupid, crazy, or just plain nit-picking, read on. This distinction really is important.)
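A quick simulated illustration of the distinction (variable names and numbers are mine): with a binary predictor and a large group difference, Y itself is strongly bimodal and fails a normality test, yet the residuals, which are what the normality assumption actually concerns, are perfectly well behaved:

```python
import numpy as np
from scipy.stats import shapiro

rng = np.random.default_rng(0)
n = 200
x = rng.integers(0, 2, n)          # binary predictor (e.g., two groups)
y = 10 * x + rng.normal(0, 1, n)   # big group difference -> bimodal Y

# Fit the two-group linear model: the predicted value is each group's mean
resid = y - np.where(x == 1, y[x == 1].mean(), y[x == 0].mean())

stat_y, p_y = shapiro(y)           # Shapiro-Wilk on raw Y: rejects normality
stat_r, p_resid = shapiro(resid)   # Shapiro-Wilk on residuals: no problem

print(p_y, p_resid)
```

Testing normality on Y here would wrongly flag a model whose errors are exactly normal.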
I often hear concern about the non-normal distributions of independent variables in regression models, and I am here to ease your mind.
There are NO assumptions in any linear model about the distribution of the independent variables. Yes, you only get meaningful parameter estimates from nominal (unordered categories) or numerical (continuous or discrete) independent variables. But no, the model makes no assumptions about them. They do not need to be normally distributed or continuous.
It is useful, however, to understand the distribution of predictor variables to find influential outliers or concentrated values. A highly skewed independent variable may be made more symmetric with a transformation.
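For instance, a log transformation on a simulated right-skewed predictor (a minimal sketch using SciPy's sample skewness; the data are invented):

```python
import numpy as np
from scipy.stats import skew

rng = np.random.default_rng(1)
x = rng.lognormal(mean=0, sigma=1, size=1000)  # strongly right-skewed predictor

print(skew(x))          # large positive skewness
print(skew(np.log(x)))  # near zero: the log transform restores symmetry
```

Again, this is for diagnostics and interpretability, not to satisfy any model assumption.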