Regression models

Using Predicted Means to Understand Our Models

January 14th, 2019

The expression “can’t see the forest for the trees” often comes to mind when reviewing a statistical analysis. We get so involved in reporting “statistically significant” results and p-values that we fail to explore the big picture of our results.

It’s understandable that this can happen. We have a hypothesis to test. We go through a multi-step process to create the best model fit possible. Too often, the next and last step is to report which predictors are statistically significant and include their effect sizes.

(more…)
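
To make the idea concrete, here is a minimal Python sketch (not from the article; all variable names and data are made up) that fits a simple regression with statsmodels and then computes predicted means at a few chosen values of the predictors:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data with hypothetical variables: exam score predicted by
# hours of study and a two-level group
rng = np.random.default_rng(42)
n = 200
df = pd.DataFrame({
    "hours_study": rng.uniform(0, 10, n),
    "group": rng.choice(["control", "treatment"], n),
})
df["exam_score"] = (60 + 2.5 * df["hours_study"]
                    + 5 * (df["group"] == "treatment")
                    + rng.normal(0, 5, n))

model = smf.ols("exam_score ~ hours_study + group", data=df).fit()

# Predicted means: the model's fitted outcome at specific, meaningful
# combinations of predictor values, rather than just the coefficients
grid = pd.DataFrame({
    "hours_study": [2, 5, 8] * 2,
    "group": ["control"] * 3 + ["treatment"] * 3,
})
grid["predicted_mean"] = model.predict(grid)
print(grid)
```

Reporting a small table like this alongside the coefficients is one way to show readers the big-picture pattern the model implies.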


Member Training: Model Building Approaches

January 1st, 2019

There is a bit of art and experience to model building. You need to build a model to answer your research question, but how do you build a statistical model when there are no instructions in the box?

Should you start with all your predictors or look at each one separately? Do you always take out non-significant variables and do you always leave in significant ones?

(more…)


How to Understand a Risk Ratio of Less than 1

December 26th, 2018

When a model has a binary outcome, one common effect size is a risk ratio. As a reminder, a risk ratio is simply a ratio of two probabilities. (The risk ratio is also called relative risk.)

Risk ratios are a bit trickier to interpret when they are less than one. 

A predictor variable with a risk ratio of less than one is often labeled a “protective factor” (at least in epidemiology). This can be confusing because, in our typical understanding of those terms, it makes no sense for a risk to be protective.

So how can a RISK be protective? (more…)
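
To make the arithmetic concrete, here is a tiny sketch with made-up numbers (not from the post): a risk ratio below one simply means the probability of the outcome is lower in the group with the factor than in the comparison group.

```python
# Hypothetical probabilities of the outcome in each group
risk_with_factor = 0.05     # e.g., 5% of people with the factor have the outcome
risk_without_factor = 0.10  # 10% of people without it have the outcome

risk_ratio = risk_with_factor / risk_without_factor
print(risk_ratio)  # 0.5: the outcome is half as likely when the factor is present
```

The factor is “protective” only in the sense that the outcome is less likely when it is present, not in the sense that the risk itself protects anyone.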


Removing the Intercept from a Regression Model When X Is Continuous

December 17th, 2018

In a recent article, we reviewed the impact of removing the intercept from a regression model when the predictor variable is categorical. This month we’re going to talk about removing the intercept when the predictor variable is continuous.

Spoiler alert: You should never remove the intercept when a predictor variable is continuous.

Here’s why. (more…)
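
The full explanation is in the article itself, but one way to see the problem (a rough sketch with simulated data, not necessarily the article's own demonstration) is to fit the same data with and without an intercept. Dropping the intercept forces the regression line through the origin, so when the true intercept is far from zero the slope gets badly distorted:

```python
import numpy as np
import statsmodels.api as sm

# Simulated data whose true relationship is y = 50 + 2x
rng = np.random.default_rng(0)
x = rng.uniform(10, 20, 100)
y = 50 + 2 * x + rng.normal(0, 3, 100)

with_intercept = sm.OLS(y, sm.add_constant(x)).fit()
no_intercept = sm.OLS(y, x).fit()  # forces the fitted line through the origin

print(with_intercept.params)  # close to [50, 2]
print(no_intercept.params)    # slope around 5, nowhere near the true value of 2
```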


Statistical Models for Truncated and Censored Data

November 12th, 2018

by Jeff Meyer

As mentioned in a previous post, there is an important difference between truncated and censored data.

Truncated data eliminates observations from an analysis based on a maximum and/or minimum value for a variable.

Censored data has limits on the maximum and/or minimum value for a variable but includes all observations in the analysis.

As a result, the models for analysis of these data are different. (more…)
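
As a quick illustration of the distinction (a hypothetical variable, not data from the post), truncation drops observations beyond the limit entirely, while censoring keeps every observation but records values beyond the limit at the limit:

```python
import pandas as pd

# Hypothetical income values (in $1,000s) with an upper limit of 100
df = pd.DataFrame({"income": [35, 62, 88, 120, 150]})
limit = 100

# Truncated: observations above the limit are removed from the analysis
truncated = df[df["income"] <= limit]                        # 3 rows remain

# Censored: all observations stay, but values above the limit are capped at it
censored = df.assign(income=df["income"].clip(upper=limit))  # 5 rows, two recorded as 100

print(len(truncated), len(censored))  # 3 5
```

Because the two datasets carry different information about the limit, they call for different models, which is what the full post goes on to cover.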


Your Questions Answered from the Interpreting Regression Coefficients Webinar

November 5th, 2018

Last week I had the pleasure of teaching a webinar on Interpreting Regression Coefficients. We walked through the output of a somewhat tricky regression model—it included two dummy-coded categorical variables, a covariate, and a few interactions.

As always seems to happen, our audience asked an amazing number of great questions. (Seriously, I’ve had multiple guest instructors compliment me on our audience and their thoughtful questions.)

We had so many that although I spent about 40 minutes answering (more…)
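
For readers who want to picture the kind of model we walked through, here is a rough sketch in Python with statsmodels (the variable names and data are made up; this is not the webinar's actual model): two dummy-coded categorical predictors, a continuous covariate, and interactions.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated data with hypothetical variables, just to mirror the model's structure
rng = np.random.default_rng(1)
n = 300
df = pd.DataFrame({
    "treatment": rng.choice(["control", "drug"], n),
    "sex": rng.choice(["female", "male"], n),
    "age": rng.uniform(20, 70, n),
})
df["outcome"] = 10 + 0.1 * df["age"] + rng.normal(0, 2, n)

# Two dummy-coded categorical variables, a continuous covariate, and interactions
model = smf.ols("outcome ~ C(treatment) * C(sex) + C(treatment) * age", data=df).fit()
print(model.params)  # each coefficient is interpreted relative to the reference categories
```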