The Analysis Factor

Statistical Consulting, Resources, and Statistics Workshops for Researchers


Interpreting Regression Coefficients

November Member Training: Preparing to Use (and Interpret) a Linear Regression Model

by TAF Support

You think a linear regression might be an appropriate statistical analysis for your data, but you’re not entirely sure. What should you check before running your model to find out?

[Read more…] about November Member Training: Preparing to Use (and Interpret) a Linear Regression Model

Tagged With: Bivariate Statistics, histogram, interpreting regression coefficients, linear regression, Multiple Regression, scatterplot, Univariate statistics

Related Posts

  • Member Training: Using Transformations to Improve Your Linear Regression Model
  • Member Training: Segmented Regression
  • The General Linear Model, Analysis of Covariance, and How ANOVA and Linear Regression Really are the Same Model Wearing Different Clothes
  • Interpreting Lower Order Coefficients When the Model Contains an Interaction

Simplifying a Categorical Predictor in Regression Models

by Jeff Meyer

One of the many decisions you have to make when model building is which form each predictor variable should take. One specific version of this decision is whether to combine categories of a categorical predictor.

The greater the number of parameter estimates in a model, the greater the number of observations needed to keep statistical power constant. The parameter estimates in a linear [Read more…] about Simplifying a Categorical Predictor in Regression Models
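The parameter-count tradeoff can be sketched with dummy coding. In this minimal illustration (the predictor, its five levels, and the collapsed grouping are all hypothetical), a categorical predictor with c categories costs c − 1 parameter estimates, so collapsing five levels to three saves two:

```python
import numpy as np

# Hypothetical five-level categorical predictor.
region = np.array(["NE", "NW", "SE", "SW", "Mid"] * 20)

def dummy_code(cats):
    """Dummy-code a categorical array, dropping the first level as the reference."""
    levels = sorted(set(cats))
    X = np.column_stack([(cats == lv).astype(float) for lv in levels[1:]])
    return X, levels[0]

X5, ref5 = dummy_code(region)
print(X5.shape[1])  # 4 parameter estimates (plus the intercept) for five categories

# Collapse to three broader groups (an assumed, purely illustrative mapping).
collapse = {"NE": "East", "SE": "East", "NW": "West", "SW": "West", "Mid": "Mid"}
region3 = np.array([collapse[r] for r in region])

X3, ref3 = dummy_code(region3)
print(X3.shape[1])  # 2 parameter estimates: two fewer, so more power per observation
```

Whether the collapsed categories are similar enough to combine is a substantive decision about your research question, not just a statistical one.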

Tagged With: categorical predictor, interpreting regression coefficients, Model Building, pairwise, R-squared

Related Posts

  • What It Really Means to Take an Interaction Out of a Model
  • Differences in Model Building Between Explanatory and Predictive Models
  • A Strategy for Converting a Continuous to a Categorical Predictor
  • Should I Specify a Model Predictor as Categorical or Continuous?

Your Questions Answered from the Interpreting Regression Coefficients Webinar

by Karen Grace-Martin

Last week I had the pleasure of teaching a webinar on Interpreting Regression Coefficients. We walked through the output of a somewhat tricky regression model—it included two dummy-coded categorical variables, a covariate, and a few interactions.

As always seems to happen, our audience asked an amazing number of great questions. (Seriously, I’ve had multiple guest instructors compliment me on our audience and their thoughtful questions.)

We had so many that although I spent about 40 minutes answering [Read more…] about Your Questions Answered from the Interpreting Regression Coefficients Webinar

Tagged With: dummy coding, dummy variable, effect size, eta-square, Interpreting Interactions, interpreting regression coefficients, Reference Group, spotlight analysis, statistical significance

Related Posts

  • Using Marginal Means to Explain an Interaction to a Non-Statistical Audience
  • Interpreting Interactions in Linear Regression: When SPSS and Stata Disagree, Which is Right?
  • About Dummy Variables in SPSS Analysis
  • Interpreting (Even Tricky) Regression Coefficients – A Quiz

Using Marginal Means to Explain an Interaction to a Non-Statistical Audience

by Jeff Meyer

Even with a few years of experience, interpreting the coefficients of interactions in a regression table can take some time to figure out. Trying to explain these coefficients to a group of non-statistically inclined people is a daunting task.

For example, say you are going to speak to a group of dieticians. They are interested [Read more…] about Using Marginal Means to Explain an Interaction to a Non-Statistical Audience
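As a sketch of the marginal-means idea (the diet and exercise factors and every number below are invented for illustration), you can report the mean outcome in each cell, or averaged across the other factor, rather than the interaction coefficients themselves. An applied audience can read means directly:

```python
import numpy as np

# Toy data: outcome depends on diet (0/1), exercise (0/1), and their interaction.
diet = np.array([0, 0, 1, 1] * 25)
exercise = np.array([0, 1, 0, 1] * 25)
rng = np.random.default_rng(2)
y = 5 + 2 * diet + 1 * exercise + 3 * diet * exercise + rng.normal(0, 1, 100)

# Cell means: the mean outcome at each diet-by-exercise combination.
cell = {(d, e): y[(diet == d) & (exercise == e)].mean()
        for d in (0, 1) for e in (0, 1)}

# Marginal means for diet: average the cell means over the exercise levels.
mm_diet = [np.mean([cell[(d, 0)], cell[(d, 1)]]) for d in (0, 1)]
print(mm_diet)  # two means, far easier to present than an interaction coefficient
```

With a real fitted model, you would compute these as predicted values from the model (estimated marginal means), but the presentation idea is the same: means, not coefficients.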

Tagged With: Interpreting Interactions, Interpreting Intercepts, interpreting regression coefficients, linear regression, marginal means

Related Posts

  • Your Questions Answered from the Interpreting Regression Coefficients Webinar
  • Understanding Interactions Between Categorical and Continuous Variables in Linear Regression
  • Clarifications on Interpreting Interactions in Regression
  • Interpreting Lower Order Coefficients When the Model Contains an Interaction

Interpreting Interactions in Linear Regression: When SPSS and Stata Disagree, Which is Right?

by Jeff Meyer

Sometimes what is most tricky about understanding your regression output is knowing exactly what your software is presenting to you.

Here’s a great example of two model results from SPSS and Stata that look completely different but, in reality, agree.

The Model

I ran a linear model regressing “physical composite score” on education and “mental composite score”.

The outcome variable, physical composite score, is a measurement of one’s physical well-being. The predictor “education” is categorical with four categories. The other predictor, mental composite score, is continuous and measures one’s mental well-being.

I am interested in determining whether the association between physical composite score and mental composite score is different among the four levels of education. To determine this I included an interaction between mental composite score and education.
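The structure of this model can be sketched with simulated data (every variable name and number below is invented for illustration, not the post’s actual data). A four-category predictor interacted with a continuous one yields a slope for the reference category plus three slope differences:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 400
edu = rng.integers(0, 4, n)               # four education categories, coded 0..3
mcs = rng.normal(50, 10, n)               # mental composite score (continuous)
true_slopes = np.array([0.2, 0.4, 0.6, 0.8])  # assumed slope per category
pcs = 20 + true_slopes[edu] * mcs + rng.normal(0, 2, n)  # physical composite score

# Design matrix: intercept, three dummies (edu 0 is reference),
# the continuous predictor, and three dummy-by-continuous interaction terms.
D = np.column_stack([(edu == k).astype(float) for k in (1, 2, 3)])
X = np.column_stack([np.ones(n), D, mcs, D * mcs[:, None]])
beta, *_ = np.linalg.lstsq(X, pcs, rcond=None)

# Reference-category slope, then each other category's slope
# (reference slope plus the interaction coefficient).
print(beta[4], beta[4] + beta[5], beta[4] + beta[6], beta[4] + beta[7])
```

The interaction coefficients here are *differences* in slope from the reference category, and how those differences versus the per-category slopes are displayed is exactly where software output conventions can diverge.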

The SPSS Regression Output

Here is the result of the regression using SPSS:

[Read more…] about Interpreting Interactions in Linear Regression: When SPSS and Stata Disagree, Which is Right?

Tagged With: dummy coding, Interactions in Regression, Interpreting Interactions, interpreting regression coefficients, slopes

Related Posts

  • Your Questions Answered from the Interpreting Regression Coefficients Webinar
  • SPSS GLM or Regression? When to use each
  • Dummy Coding in SPSS GLM–More on Fixed Factors, Covariates, and Reference Groups, Part 1
  • Interpreting Lower Order Coefficients When the Model Contains an Interaction

The General Linear Model, Analysis of Covariance, and How ANOVA and Linear Regression Really are the Same Model Wearing Different Clothes

by Karen Grace-Martin

Just recently, a client got some feedback from a committee member that the Analysis of Covariance (ANCOVA) model she ran did not meet all the assumptions.

Specifically, the assumption in question is that the covariate has to be uncorrelated with the independent variable.

This committee member is, in the strictest sense of how analysis of covariance is used, correct.

And yet, they over-applied that assumption to an inappropriate situation.

ANCOVA for Experimental Data

Analysis of Covariance was developed for experimental situations and some of the assumptions and definitions of ANCOVA apply only to those experimental situations.

The key feature of those situations is that the independent variables are categorical and manipulated, not observed.

The covariate, which is continuous and observed, is considered a nuisance variable. There are no research questions about how this covariate itself affects or relates to the dependent variable.

The only hypothesis tests of interest are about the independent variables, controlling for the effects of the nuisance covariate.

A typical example is a study to compare the math scores of students who were enrolled in three different learning programs at the end of the school year.

The key independent variable here is the learning program. Students need to be randomly assigned to one of the three programs.

The only research question is about whether the math scores differed on average among the three programs. It is useful to control for a covariate like IQ scores, but we are not really interested in the relationship between IQ and math scores.

So in this example, in order to conclude that the learning program affected math scores, it is indeed important that IQ scores, the covariate, are unrelated to which learning program the students were assigned to.

You could not make that causal interpretation if it turns out that the IQ scores were generally higher in one learning program than the others.

So this assumption of ANCOVA is very important in this specific type of study in which we are trying to make a specific type of inference.

ANCOVA for Other Data

But that’s really just one application of a linear model with one categorical and one continuous predictor. The research question of interest doesn’t have to be about the causal effect of the categorical predictor, and the covariate doesn’t have to be a nuisance variable.

A regression model with one continuous and one dummy-coded variable is the same model (actually, you’d need two dummy variables to cover the three categories, but that’s another story).

The focus of that model may differ–perhaps the main research question is about the continuous predictor.

But it’s the same mathematical model.

The software will run it the same way. YOU may focus on different parts of the output or select different options, but it’s the same model.

And that’s where the model names can get in the way of understanding the relationships among your variables. The model itself doesn’t care if the categorical variable was manipulated. It doesn’t care if the categorical independent variable and the continuous covariate are mildly correlated.

If those ANCOVA assumptions aren’t met, it does not change the analysis at all. It only affects how parameter estimates are interpreted and the kinds of conclusions you can draw.

In fact, those assumptions really aren’t about the model. They’re about the design. It’s the design that affects the conclusions. It doesn’t matter if a covariate is a nuisance variable or an interesting phenomenon to the model. That’s a design issue.

The General Linear Model

So what do you do instead of labeling models? Just call them a General Linear Model. It’s hard to think of regression and ANOVA as the same model because the equations look so different. But it turns out the equations aren’t as different as they look.

[Image: regression and ANOVA model equations]
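A standard way to write the two equations (shown here for two predictors and, matching the j and k subscripts discussed later, a two-factor ANOVA; this is a typical rendering, not necessarily the exact image):

```latex
% Linear regression with two predictors
Y_i = \beta_0 + \beta_1 X_{1i} + \beta_2 X_{2i} + \varepsilon_i

% Two-way ANOVA: grand mean plus factor effects
Y_{ijk} = \mu + \alpha_j + \beta_k + \varepsilon_{ijk}
```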

If you look at the two models, first you may notice some similarities.

  • Both are modeling Y, an outcome.
  • Both have a “fixed” portion on the right with some parameters to estimate–this portion estimates the mean values of Y at the different values of X.
  • Both equations have a residual, which is the random part of the model. It is the variation in Y that is not affected by the Xs.

But wait a minute, Karen, are you nuts? There are no Xs in the ANOVA model!

Actually, there are. They’re just implicit.

Since the Xs are categorical, they take on only a few values, which indicate the category a case is in. Those j and k subscripts? They’re really just indicating the values of X.

(And for the record, I think a couple Xs are a lot easier to keep track of than all those subscripts.  Ever have to calculate an ANOVA model by hand?  Just sayin’.)
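The equivalence can be checked directly in a small simulation (toy data, three groups, all numbers invented): a regression with two dummy variables reproduces the ANOVA cell means exactly.

```python
import numpy as np

rng = np.random.default_rng(1)
# Three groups of 30 observations with different true means.
y = np.concatenate([rng.normal(m, 1, 30) for m in (10.0, 12.0, 15.0)])
group = np.repeat([0, 1, 2], 30)

# Regression form: intercept plus two dummy variables (group 0 is the reference).
X = np.column_stack([np.ones(90),
                     (group == 1).astype(float),
                     (group == 2).astype(float)])
b, *_ = np.linalg.lstsq(X, y, rcond=None)

# ANOVA form: the cell means.
means = [y[group == k].mean() for k in (0, 1, 2)]

# The intercept is the reference-group mean; each dummy coefficient is a
# mean difference, so the fitted values are exactly the cell means.
print(np.allclose([b[0], b[0] + b[1], b[0] + b[2]], means))  # True
```

Same data, same fitted values, same model; only the parameterization (and the name) differs.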

So instead of trying to come up with the right label for a model, focus instead on understanding (and describing in your paper) the measurement scales of your variables, if and how much they’re related, and how that affects the conclusions.

In my client’s situation, it was not a problem that the continuous and the categorical variables were mildly correlated. The data were not experimental and she was not trying to draw causal conclusions about only the categorical predictor.

So instead, she could simply call this ANCOVA model a multiple regression.

Tagged With: analysis of covariance, ancova, ANOVA, causal effect, General Linear Model, interpreting regression coefficients, linear regression

Related Posts

  • 3 Reasons Psychology Researchers should Learn Regression
  • SPSS GLM: Choosing Fixed Factors and Covariates
  • When Assumptions of ANCOVA are Irrelevant
  • Why ANOVA and Linear Regression are the Same Analysis


Copyright © 2008–2021 The Analysis Factor, LLC. All rights reserved.
877-272-8096   Contact Us
