
Differences in Model Building Between Explanatory and Predictive Models

October 8th, 2018

Suppose you are asked to create a model that will predict who will drop out of a program your organization offers. You decide to use a binary logistic regression because your outcome has two values: “0” for not dropping out and “1” for dropping out.

Most of us were trained in building models for the purpose of understanding and explaining the relationships between an outcome and a set of predictors. But model building works differently for purely predictive models. Where do we go from here? (more…)


The Difference Between Link Functions and Data Transformations

September 24th, 2018

Generalized linear models (and generalized linear mixed models) are called generalized linear because they connect a model’s outcome to its predictors in a linear way. The function used to make this connection is called a link function. “Link function” sounds like an exotic term, but link functions are actually much simpler than they sound.

For example, Poisson regression (commonly used for outcomes that are counts) makes use of a natural log link function as follows:
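$$\ln(\mu_y) = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_k x_k$$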

Clearly, there is not a direct linear relationship between the x variables and the average count, but there is a “sort of linear” relationship happening: a function of the mean of y is related to a linear combination of x variables. In other words, the linear model has been generalized to a broader class of situations.

This can lead to confusion, though, because on the surface it looks very similar to what happens when we transform the dependent variable in a linear model, like a linear regression.

The key thing to understand is that the natural log link function is a function of the mean of y, not the y values themselves.

Transformations of Y

Below is a linear model equation where the original dependent variable, y, has been natural log transformed. That is, the natural log has been taken of each individual value of y, and that is being used as the dependent variable.
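$$\ln(y_i) = \beta_0 + \beta_1 x_{1i} + \beta_2 x_{2i} + \dots + \beta_k x_{ki} + \varepsilon_i$$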

The linear model with the log transformation is providing an equation for an individual value of ln(y). We could also write it as follows, where we are modeling the mean of ln(y) (note the error term is no longer present):
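$$\mu_{\ln(y)} = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_k x_k$$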

This makes the difference a bit clearer. When we transform the data in a linear model, we are no longer claiming that y is normally distributed around a mean, given the x values. We are claiming that our new outcome variable, ln(y_i), is normally distributed.

In fact, we often make this transformation specifically because the values of y do not appear to be normally distributed around their average.

In the case of the Poisson model, however, the link function does not change the distribution of the actual observations; they are still Poisson distributed. Instead, the link function defines the relationship of the x variables directly to the mean of the Poisson distributed y. The individual observations then vary around this expected value accordingly.

The mean of the log is not the log of the mean

As you may know, if you have used this kind of data transformation in a linear model before, you cannot simply exponentiate the mean of ln(y) to get back the mean of y. Doing so gives you the geometric mean of y, not its arithmetic mean.
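A quick numeric sketch of that gap (the values are made up purely for illustration):

```python
import numpy as np

# Three made-up positive values
y = np.array([1.0, 10.0, 100.0])

print(np.exp(np.log(y).mean()))  # 10.0 -- exp of the mean log (the geometric mean)
print(y.mean())                  # 37.0 -- the arithmetic mean of y itself
```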

You might be surprised to know, though, that you can do this with a link function. If you have specific values of your x variables, you can calculate the predicted average count, μ_y, based on those x values by inverting the natural log:
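$$\mu_y = e^{\beta_0 + \beta_1 x_1 + \beta_2 x_2 + \dots + \beta_k x_k}$$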

This ability to back-transform means (and regression coefficients) to a more intuitive scale is part of what makes generalized linear models so useful.
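As a minimal sketch of both points in software (assuming Python with statsmodels; the simulated data and coefficient values are invented for illustration):

```python
import numpy as np
import statsmodels.api as sm

# Simulate count data whose log mean is linear in x
rng = np.random.default_rng(42)
x = rng.normal(size=200)
y = rng.poisson(np.exp(0.5 + 0.8 * x))

# Poisson regression; the log link is the default for the Poisson family
X = sm.add_constant(x)
fit = sm.GLM(y, X, family=sm.families.Poisson()).fit()

# Predicted mean count at x = 1: predict() applies the inverse link (exp)
print(fit.predict([[1.0, 1.0]]))
# The same back-transformation by hand: exp(b0 + b1 * 1)
print(np.exp(fit.params @ [1.0, 1.0]))
```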


Go to the next article or see the full series on Easy-to-Confuse Statistical Concepts


The Four Stages of Statistical Skill

September 21st, 2018

At The Analysis Factor, we are on a mission to help researchers improve their statistical skills so they can do amazing research.

We all tend to think of “Statistical Analysis” as one big skill, but it’s not.

Over the years of training, coaching, and mentoring data analysts at all stages, I’ve realized there are four fundamental stages of statistical skill:

Stage 1: The Fundamentals

Stage 2: Linear Models

Stage 3: Extensions of Linear Models

Stage 4: Advanced Models

There is also a stage beyond these where the mathematical statisticians dwell. But that stage is required for such a tiny fraction of data analysis projects that we’re going to ignore it for now.

If you try to master the skill of “statistical analysis” as a whole, it’s going to be overwhelming.

And honestly, you’ll never finish. It’s too big of a field.

But if you can work through these stages, you’ll find you can learn and do just about any statistical analysis you need to. (more…)


Understanding Random Effects in Mixed Models

September 17th, 2018

In fixed-effects models (e.g., regression, ANOVA, generalized linear models), there is only one source of random variability: the random sample we take in order to measure our variables.

It may be patients in a health facility, for whom we take various measures of their medical history to estimate their probability of recovery. Or it may be individual students in a school system, whose demographic information we use to predict their grade point averages.

(more…)


What is the Purpose of a Generalized Linear Mixed Model?

September 10th, 2018

If you are new to using generalized linear mixed effects models, or if you have heard of them but never used them, you might be wondering about the purpose of a GLMM.

Mixed effects models are useful when we have data with more than one source of random variability. For example, an outcome may be measured more than once on the same person (repeated measures taken over time).

When we do that we have to account for both within-person and across-person variability. A single measure of residual variance can’t account for both.
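As a minimal sketch of this structure (assuming Python with statsmodels; note this uses the linear mixed model, the Gaussian special case of a GLMM, and the data are simulated purely for illustration):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated repeated measures: 50 people, 4 time points each
rng = np.random.default_rng(1)
person = np.repeat(np.arange(50), 4)
time = np.tile(np.arange(4), 50)
person_effect = rng.normal(0, 2, 50)[person]   # across-person variability
noise = rng.normal(0, 1, 200)                  # within-person variability
y = 10 + 0.5 * time + person_effect + noise

df = pd.DataFrame({"y": y, "time": time, "person": person})

# Random-intercept model: one variance component for people, another for
# the residuals, instead of a single residual variance
result = smf.mixedlm("y ~ time", df, groups=df["person"]).fit()
print(result.summary())
```

A GLMM extends this same setup to non-normal outcomes (binary, count, and so on) through a link function.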

(more…)


The Proportional Hazard Assumption in Cox Regression

August 20th, 2018

by Steve Simon, PhD

The Cox regression model has a fairly minimal set of assumptions, but how do you check those assumptions and what happens if those assumptions are not satisfied?

Non-proportional hazards

The proportional hazards assumption is so important to Cox regression that we often include it in the name (the Cox proportional hazards model). What it essentially means is that the ratio of the hazards for any two individuals is constant over time. They’re proportional. It involves logarithms and it’s a strange concept, so in this article, we’re going to show you how to tell if you don’t have it.

There are several graphical methods for spotting this violation, but the simplest is an examination of the Kaplan-Meier curves.

If the curves cross, as shown below, then you have a problem.

Likewise, if one curve levels off while the other drops to zero, you have a problem.

Figure 2. Kaplan-Meier curve with only one curve leveling off
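A sketch of how you might draw these curves (assuming Python with the lifelines library and its bundled Waltons example data; the article itself doesn’t prescribe any particular software):

```python
import matplotlib.pyplot as plt
from lifelines import KaplanMeierFitter
from lifelines.datasets import load_waltons

df = load_waltons()  # columns: T (duration), E (event observed), group

# One Kaplan-Meier curve per group, overlaid on the same axes
ax = plt.subplot(111)
kmf = KaplanMeierFitter()
for name, grouped in df.groupby("group"):
    kmf.fit(grouped["T"], event_observed=grouped["E"], label=name)
    kmf.plot_survival_function(ax=ax)
plt.show()
```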

You can think of non-proportional hazards as an interaction of your independent variable with time. It means that you have to do more work in interpreting your model. If you ignore this problem, you may also experience a serious loss in power.

If you have evidence of non-proportional hazards, don’t despair. There are several fairly simple modifications to the Cox regression model that will work for you.
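One of those modifications is stratification, which gives each stratum its own baseline hazard. A sketch with lifelines and its bundled Rossi recidivism data (the choice of wexp as the stratification variable is just for illustration):

```python
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()  # columns include week (duration), arrest (event), covariates

cph = CoxPHFitter().fit(df, duration_col="week", event_col="arrest")
# Schoenfeld-residual-based test of proportional hazards, with printed advice
cph.check_assumptions(df, p_value_threshold=0.05)

# One simple fix: stratify on an offending categorical covariate so each
# stratum gets its own baseline hazard
cph_strat = CoxPHFitter().fit(df, duration_col="week", event_col="arrest",
                              strata=["wexp"])
```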

Nonlinear covariate relationships

The Cox model assumes that each variable makes a linear contribution to the model, but sometimes the relationship may be more complex.

You can diagnose this problem graphically using residual plots. The residuals in a Cox regression model are not as simple to compute as those in linear regression, but you look for the same sorts of patterns.

If you have a nonlinear relationship, you have several options that parallel your choices in a linear regression model.
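As a sketch, again assuming lifelines: martingale residuals plotted against a covariate should show no systematic trend, so curvature suggests the linear form is wrong (the covariate age here is an arbitrary choice):

```python
import matplotlib.pyplot as plt
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()
cph = CoxPHFitter().fit(df, duration_col="week", event_col="arrest")

# Martingale residuals; a smooth trend or curvature against a covariate
# suggests its relationship to the log hazard is not linear
resid = cph.compute_residuals(df, kind="martingale")["martingale"]
plt.scatter(df.loc[resid.index, "age"], resid, alpha=0.5)
plt.xlabel("age")
plt.ylabel("martingale residual")
plt.show()
```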

Lack of independence

Lack of independence is not something that you have to wait to diagnose until your data are collected. Often it is something you are aware of from the start, because certain features of the design, such as centers in a multi-center study, are likely to produce correlated outcomes. These are the same issues that hound you in a linear regression model in a multi-center study.

There are several ways to account for lack of independence, but this is one problem you don’t want to ignore. An invalid model will ruin all your confidence intervals and p-values.
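As one sketch of such an accounting, lifelines can compute cluster-robust (sandwich) standard errors; the center variable below is fabricated purely for illustration, since the bundled Rossi data have no real clustering:

```python
import numpy as np
from lifelines import CoxPHFitter
from lifelines.datasets import load_rossi

df = load_rossi()
# Hypothetical clustering: pretend subjects came from 10 centers
df["center"] = np.random.default_rng(0).integers(0, 10, len(df))

# cluster_col requests robust standard errors that tolerate correlated
# outcomes within a center
cph = CoxPHFitter().fit(df, duration_col="week", event_col="arrest",
                        cluster_col="center")
print(cph.summary[["coef", "se(coef)", "p"]])
```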