The Analysis Factor

Statistical Consulting, Resources, and Statistics Workshops for Researchers

Regression models

When Linear Models Don’t Fit Your Data, Now What?

by Karen Grace-Martin 29 Comments

When your dependent variable is not continuous, unbounded, and measured on an interval or ratio scale, linear models don’t fit. The data simply will not meet the assumptions of linear models. But there’s good news: other models exist for many types of dependent variables.

Today I’m going to go into more detail about 6 common types of dependent variables that are discrete, bounded, or measured on a nominal or ordinal scale (some are all of these), and the models that work for them instead.

[Read more…] about When Linear Models Don’t Fit Your Data, Now What?

Tagged With: binary variable, categorical variable, Censored, dependent variable, Discrete Counts, Multinomial, ordinal variable, Poisson Regression, Proportion, Proportional Odds Model, regression models, Truncated, Zero Inflated

Related Posts

  • 6 Types of Dependent Variables that will Never Meet the Linear Model Normality Assumption
  • Member Training: Types of Regression Models and When to Use Them
  • When to Check Model Assumptions
  • Proportions as Dependent Variable in Regression–Which Type of Model?

What Is Specification Error in Statistical Models?

by Karen Grace-Martin Leave a Comment

When we think about model assumptions, we tend to focus on assumptions like independence, normality, and constant variance. The other big assumption, which is harder to see or test, is that there is no specification error. The assumption of linearity is part of this, but it’s actually a bigger assumption.

What is this assumption of no specification error? [Read more…] about What Is Specification Error in Statistical Models?
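One way to see specification error concretely is with simulated data: below is a minimal sketch (hypothetical values throughout) where the true relationship is quadratic but the first model omits the quadratic term. The misspecified model’s residuals then track x² instead of scattering randomly.

```python
import numpy as np

# Hypothetical simulated data: the true relationship is quadratic.
rng = np.random.default_rng(0)
x = np.linspace(-3, 3, 100)
y = 1 + 2 * x + 1.5 * x**2 + rng.normal(0, 0.5, size=x.size)

# Misspecified model: y ~ b0 + b1*x (quadratic term omitted)
X_lin = np.column_stack([np.ones_like(x), x])
beta_lin, *_ = np.linalg.lstsq(X_lin, y, rcond=None)
resid_lin = y - X_lin @ beta_lin

# Correctly specified model: y ~ b0 + b1*x + b2*x^2
X_quad = np.column_stack([np.ones_like(x), x, x**2])
beta_quad, *_ = np.linalg.lstsq(X_quad, y, rcond=None)
resid_quad = y - X_quad @ beta_quad

# The misspecified model's residuals correlate strongly with x^2;
# the correct model's residuals do not.
print(np.corrcoef(resid_lin, x**2)[0, 1])   # close to 1
print(np.corrcoef(resid_quad, x**2)[0, 1])  # essentially 0
```

This residual pattern is one of the few visible symptoms of specification error, which is why plotting residuals against predictors is worth the effort.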

Tagged With: curvilinear effect, interaction, Model Building, predictors, specification error, statistical model, transformation

Related Posts

  • Member Training: Model Building Approaches
  • Differences in Model Building Between Explanatory and Predictive Models
  • Spotlight Analysis for Interpreting Interactions
  • Five Common Relationships Among Three Variables in a Statistical Model

Measures of Model Fit for Linear Regression Models

by Karen Grace-Martin 37 Comments

A well-fitting regression model results in predicted values close to the observed data values.

The mean model, which uses the mean for every predicted value, generally would be used if there were no useful predictor variables. The fit of a proposed regression model should therefore be better than the fit of the mean model. [Read more…] about Measures of Model Fit for Linear Regression Models
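The comparison to the mean model can be sketched numerically: R-squared is exactly the proportional reduction in squared error the regression achieves over the mean model. The data here are simulated for illustration.

```python
import numpy as np

# Hypothetical simulated data
rng = np.random.default_rng(1)
x = rng.normal(size=200)
y = 3 + 2 * x + rng.normal(size=200)

# Mean model: predict the mean of y for every observation
sse_mean = np.sum((y - y.mean()) ** 2)   # total sum of squares

# Proposed model: least-squares fit of y on x
X = np.column_stack([np.ones_like(x), x])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
sse_reg = np.sum((y - X @ beta) ** 2)    # residual sum of squares

# R-squared: proportional reduction in error over the mean model
r_squared = 1 - sse_reg / sse_mean
print(round(r_squared, 3))
```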

Tagged With: F test, Model Fit, R-squared, regression models, RMSE

Related Posts

  • The Difference Between R-squared and Adjusted R-squared
  • Simplifying a Categorical Predictor in Regression Models
  • Eight Ways to Detect Multicollinearity
  • Why ANOVA is Really a Linear Regression, Despite the Difference in Notation

Linear Regression Analysis – 3 Common Causes of Multicollinearity and What to Do About Them

by Karen Grace-Martin 1 Comment

Multicollinearity in regression is one of those issues that strikes fear into the hearts of researchers. You’ve heard about its dangers in statistics classes, and colleagues and journal reviewers question your results because of it. But there are really only a few causes of multicollinearity. Let’s explore them.

Multicollinearity is simply redundancy in the information contained in predictor variables. If the redundancy is moderate, [Read more…] about Linear Regression Analysis – 3 Common Causes of Multicollinearity and What to Do About Them
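That redundancy can be quantified with the variance inflation factor (VIF). Below is a minimal sketch on simulated data where one predictor is nearly a copy of another; the VIF is the usual 1/(1 − R²) from regressing each predictor on the rest.

```python
import numpy as np

# Hypothetical simulated predictors: x2 is nearly a copy of x1
rng = np.random.default_rng(2)
n = 500
x1 = rng.normal(size=n)
x2 = 0.95 * x1 + 0.1 * rng.normal(size=n)

def vif(target, others):
    """VIF = 1 / (1 - R^2), where R^2 comes from regressing one
    predictor on all the other predictors."""
    X = np.column_stack([np.ones(len(target))] + others)
    beta, *_ = np.linalg.lstsq(X, target, rcond=None)
    resid = target - X @ beta
    r2 = 1 - resid.var() / target.var()
    return 1 / (1 - r2)

print(vif(x1, [x2]))  # far above the common rule-of-thumb cutoff of 10
```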

Tagged With: dummy coding, interpreting regression coefficients, Multicollinearity, principal component analysis

Related Posts

  • Making Dummy Codes Easy to Keep Track of
  • Simplifying a Categorical Predictor in Regression Models
  • A Visual Description of Multicollinearity
  • Your Questions Answered from the Interpreting Regression Coefficients Webinar

The Difference Between R-squared and Adjusted R-squared

by Karen Grace-Martin 2 Comments

One of the most useful and intuitive statistics we have in linear regression is the Coefficient of Determination: R²

It tells you how well the model predicts the outcome and has some nice properties. [Read more…] about The Difference Between R-squared and Adjusted R-squared
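The adjustment the title refers to is a small formula: adjusted R² discounts R² for the number of predictors. A minimal sketch, with hypothetical values of n (observations) and p (predictors):

```python
# Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1)
def adjusted_r_squared(r2, n, p):
    """Discount R^2 for the number of predictors p, given n observations."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# The same R^2 of 0.50 is discounted more as predictors pile up:
print(round(adjusted_r_squared(0.50, n=100, p=1), 3))   # 0.495
print(round(adjusted_r_squared(0.50, n=100, p=20), 3))  # 0.373
```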

Tagged With: Adjusted R-squared, Coefficient of determination, linear regression, Multiple Regression, R-squared

Related Posts

  • Confusing Statistical Term #9: Multiple Regression Model and Multivariate Regression Model
  • Member Training: Preparing to Use (and Interpret) a Linear Regression Model
  • A Visual Description of Multicollinearity
  • Measures of Model Fit for Linear Regression Models

Interpreting Regression Coefficients

by Karen Grace-Martin 31 Comments

Updated 12/20/2021

Despite its popularity, interpreting regression coefficients of any but the simplest models is sometimes, well… difficult.

So let’s interpret the coefficients in a model with two predictors: a continuous and a categorical variable.  The example here is a linear regression model. But this works the same way for interpreting coefficients from any regression model without interactions.

A linear regression model with two predictor variables results in the following equation:

Yi = B0 + B1*X1i + B2*X2i + ei.

The variables in the model are:

  • Y, the response variable;
  • X1, the first predictor variable;
  • X2, the second predictor variable; and
  • e, the residual error, which is an unmeasured variable.

The parameters in the model are:

  • B0, the Y-intercept;
  • B1, the first regression coefficient; and
  • B2, the second regression coefficient.

One example would be a model of the height of a shrub (Y) based on the amount of bacteria in the soil (X1) and whether the plant is located in partial or full sun (X2).

Height is measured in cm. Bacteria is measured in thousands per ml of soil. And type of sun = 0 if the plant is in partial sun, and type of sun = 1 if the plant is in full sun.

Let’s say it turned out that the regression equation was estimated as follows:

Y = 42 + 2.3*X1 + 11*X2
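The estimated equation can be written as a small helper function, which makes the interpretations below easy to check directly (bacteria in thousands per ml; sun coded 0 for partial, 1 for full):

```python
# The estimated regression equation from the shrub example
def predicted_height(bacteria, sun):
    """Predicted shrub height in cm: Y = 42 + 2.3*X1 + 11*X2."""
    return 42 + 2.3 * bacteria + 11 * sun

print(predicted_height(0, 0))  # 42.0 (the intercept)
print(predicted_height(5, 1))  # 64.5
```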

Interpreting the Intercept

B0, the Y-intercept, can be interpreted as the value you would predict for Y if both X1 = 0 and X2 = 0.

We would expect an average height of 42 cm for shrubs in partial sun with no bacteria in the soil. However, this is only a meaningful interpretation if it is reasonable that both X1 and X2 can be 0, and if the data set actually included values for X1 and X2 that were near 0.

If either of these conditions is not met, then B0 really has no meaningful interpretation. It just anchors the regression line in the right place. In our case, it is easy to see that X2 sometimes is 0, but if X1, our bacteria level, never comes close to 0, then our intercept has no real interpretation.

Interpreting Coefficients of Continuous Predictor Variables

Since X1 is a continuous variable, B1 represents the difference in the predicted value of Y for each one-unit difference in X1, if X2 remains constant.

This means that if X1 differed by one unit (and X2 did not differ), Y would differ by B1 units, on average.

In our example, shrubs with a 5000/ml bacteria count would, on average, be 2.3 cm taller than those with a 4000/ml bacteria count. Likewise, shrubs with 4000/ml bacteria would be 2.3 cm taller than those with 3000/ml, as long as they were in the same type of sun.

(Don’t forget that since the measurement unit for bacteria count is 1000 per ml of soil, 1000 bacteria represent one unit of X1).
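This “same difference at any level” property can be checked directly from the estimated equation:

```python
# The estimated equation from the shrub example
def predicted_height(bacteria, sun):
    return 42 + 2.3 * bacteria + 11 * sun

# A one-unit difference in X1 (1000 bacteria/ml) predicts the same
# 2.3 cm difference in height at either sun level:
print(round(predicted_height(5, 0) - predicted_height(4, 0), 2))  # 2.3
print(round(predicted_height(5, 1) - predicted_height(4, 1), 2))  # 2.3
```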

Interpreting Coefficients of Categorical Predictor Variables

Similarly, B2 is interpreted as the difference in the predicted value of Y for each one-unit difference in X2 if X1 remains constant. However, since X2 is a categorical variable coded as 0 or 1, a one-unit difference represents switching from one category to the other.

B2 is then the average difference in Y between the category for which X2 = 0 (the reference group) and the category for which X2 = 1 (the comparison group).

So compared to shrubs that were in partial sun, we would expect shrubs in full sun to be 11 cm taller, on average, at the same level of soil bacteria.
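Again, checking this against the estimated equation:

```python
# The estimated equation from the shrub example
def predicted_height(bacteria, sun):
    return 42 + 2.3 * bacteria + 11 * sun

# Switching sun from 0 (partial) to 1 (full) at the same bacteria
# level changes the predicted height by exactly B2 = 11 cm:
print(round(predicted_height(3, 1) - predicted_height(3, 0), 2))  # 11.0
```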

Interpreting Coefficients when Predictor Variables are Correlated

Don’t forget that each coefficient is influenced by the other variables in a regression model. Because predictor variables are nearly always associated, two or more variables may explain some of the same variation in Y.

Therefore, each coefficient does not measure the total effect on Y of its corresponding variable. It would, if it were the only predictor variable in the model or if the predictors were independent of each other.

Rather, each coefficient represents the additional effect of adding that variable to the model, if the effects of all other variables in the model are already accounted for.

This means that adding or removing variables from the model will change the coefficients. This is not a problem, as long as you understand why and interpret accordingly.
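A simulated sketch of this shift (hypothetical data, pure numpy least squares): because x1 and x2 are correlated, x1’s coefficient changes noticeably when x2 enters the model.

```python
import numpy as np

# Hypothetical simulated data with correlated predictors
rng = np.random.default_rng(3)
n = 1000
x1 = rng.normal(size=n)
x2 = 0.6 * x1 + 0.8 * rng.normal(size=n)     # correlated with x1
y = 1 + 2 * x1 + 3 * x2 + rng.normal(size=n)  # true coefficient on x1 is 2

def ols_coefs(y, *cols):
    """Least-squares coefficients, intercept first."""
    X = np.column_stack([np.ones(len(y)), *cols])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta

b_alone = ols_coefs(y, x1)[1]     # x1 alone: absorbs part of x2's effect
b_both = ols_coefs(y, x1, x2)[1]  # x1 with x2: close to its true value, 2
print(round(b_alone, 2), round(b_both, 2))
```

Neither coefficient is “wrong”; they answer different questions, which is exactly why understanding what is in the model matters for interpretation.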

Interpreting Other Specific Coefficients

I’ve given you the basics here. But interpretation gets a bit trickier for more complicated models, for example, when the model contains quadratic or interaction terms. There are also ways to rescale predictor variables to make interpretation easier.

So here is some more reading about interpreting specific types of coefficients for different types of models:

  • Interpreting the Intercept
  • Removing the Intercept when X is Continuous or Categorical
  • Interpreting Interactions in Regression
  • How Changing the Scale of X affects Interpreting its Regression Coefficient
  • Interpreting Coefficients with a Centered Predictor


Tagged With: categorical predictor, continuous predictor, Intercept, interpreting regression coefficients, linear regression

Related Posts

  • Centering a Covariate to Improve Interpretability
  • Using Marginal Means to Explain an Interaction to a Non-Statistical Audience
  • Member Training: Segmented Regression
  • Should You Always Center a Predictor on the Mean?


Copyright © 2008–2022 The Analysis Factor, LLC. All rights reserved.