Regression models

The Exposure Variable in Poisson Regression Models

January 23rd, 2009

Poisson Regression Models and their extensions (Zero-Inflated Poisson, Negative Binomial Regression, etc.) are used to model counts and rates.  A few examples of count variables include:

– Number of words an eighteen month old can say

– Number of aggressive incidents committed by patients in an inpatient rehab center

Most count variables follow one of the distributions in the Poisson family.  Poisson regression models allow researchers to examine the relationship between predictors and count outcome variables.
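As a quick sketch of what this looks like in code (Python's statsmodels with made-up data, not a tool named in this post): when you model a rate, the exposure variable enters as a log-offset, which fixes its coefficient at 1.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(42)

# Hypothetical data: event counts observed over varying exposures
n = 200
x = rng.normal(size=n)                 # a single predictor
exposure = rng.uniform(1, 10, size=n)  # e.g., months of observation
rate = np.exp(0.5 + 0.8 * x)           # true rate per unit exposure
y = rng.poisson(rate * exposure)       # observed counts

# log(exposure) enters as an offset, so the fitted model
# describes the event rate per unit of exposure
X = sm.add_constant(x)
model = sm.GLM(y, X, family=sm.families.Poisson(),
               offset=np.log(exposure))
print(model.fit().summary())
```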

Using these regression models gives much more accurate parameter (more…)


Interpreting Interactions in Regression

January 19th, 2009

Adding interaction terms to a regression model has real benefits.  It greatly expands your understanding of the relationships among the variables in the model.  And you can test more specific hypotheses.  But interpreting interactions in regression takes an understanding of what each coefficient is telling you.

The example from Interpreting Regression Coefficients was a model of the height of a shrub (Height) based on the amount of bacteria in the soil (Bacteria) and whether the shrub is located in partial or full sun (Sun).  Height is measured in cm, Bacteria is measured in thousands per ml of soil, and Sun = 0 if the plant is in partial sun and Sun = 1 if it is in full sun.
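To make those coefficients concrete, here is a minimal sketch in Python (statsmodels, with simulated data standing in for the shrub example; the coefficient values are invented):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)

# Simulated stand-in for the shrub data
n = 100
df = pd.DataFrame({
    "Bacteria": rng.uniform(0, 10, size=n),  # thousands per ml of soil
    "Sun": rng.integers(0, 2, size=n),       # 0 = partial, 1 = full sun
})
df["Height"] = (42 + 2.0 * df["Bacteria"] + 9 * df["Sun"]
                + 1.5 * df["Bacteria"] * df["Sun"]  # interaction
                + rng.normal(0, 2, size=n))

# 'Bacteria * Sun' expands to Bacteria + Sun + Bacteria:Sun
fit = smf.ols("Height ~ Bacteria * Sun", data=df).fit()
print(fit.params)
```

With this coding, the Bacteria coefficient is the slope in partial sun (Sun = 0), and the Bacteria:Sun coefficient is how much that slope changes in full sun.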

(more…)


Logistic Regression Models for Multinomial and Ordinal Variables

January 14th, 2009

Multinomial Logistic Regression

The multinomial (a.k.a. polytomous) logistic regression model is a simple extension of the binomial logistic regression model.  It is used when the dependent variable has more than two nominal (unordered) categories.

Dummy coding of independent variables is quite common.  In multinomial logistic regression the dependent variable is dummy coded into multiple 1/0 variables.  There is a variable for every category but one, so if there are M categories, there will be M-1 dummy variables.  Each category’s dummy variable has a value of 1 for its category and 0 for all others.  The one remaining category, the reference category, doesn’t need its own dummy variable, as it is uniquely identified by all the other variables being 0.
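To see that coding concretely, here is a quick sketch in Python with pandas (the categories are invented for illustration):

```python
import pandas as pd

# Hypothetical outcome with M = 3 categories
outcome = pd.Series(["bus", "car", "car", "bike", "bus", "bike"])

# drop_first leaves M - 1 = 2 dummy variables; the dropped
# category ("bike", alphabetically first) becomes the reference
dummies = pd.get_dummies(outcome, drop_first=True).astype(int)
print(dummies)
#    bus  car
# 0    1    0
# 1    0    1
# 2    0    1
# 3    0    0
# 4    1    0
# 5    0    0
```

An observation in the reference category gets a 0 in every dummy column, which is exactly what identifies it.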

The multinomial logistic regression then estimates a separate binary logistic regression model for each of those dummy variables.  The result is (more…)
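A minimal fitting sketch (Python's statsmodels rather than any package named here, with made-up data):

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)

# Hypothetical data: an unordered outcome with 3 categories
# (0 serves as the reference) and two continuous predictors
n = 300
X = sm.add_constant(rng.normal(size=(n, 2)))
y = rng.integers(0, 3, size=n)

# MNLogit reports one set of coefficients per non-reference
# category, each contrasting that category with the reference
fit = sm.MNLogit(y, X).fit(disp=False)
print(fit.summary())
```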


Poisson Regression Analysis for Count Data

December 31st, 2008

There are many dependent variables that, no matter how many transformations you try, you cannot get to be normally distributed.  The most common culprits are count variables: variables that measure the count or rate of some event in a sample.  Some examples I’ve seen from a variety of disciplines are:

Number of eggs in a clutch that hatch
Number of domestic violence incidents in a month
Number of times juveniles needed to be restrained during tenure at a correctional facility
Number of infected plants per transect

A common quality of these variables is that 0 is the mode–the most common value.  1 is the next most common, 2 the next, and so on.  In variables with low expected counts (number of cars in a household, number of degrees earned), (more…)
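You can see that shape with a quick simulation (a Python sketch; the mean of 0.8 is just an arbitrary low expected count):

```python
import numpy as np

rng = np.random.default_rng(0)

# Poisson counts with a low expected value
counts = rng.poisson(lam=0.8, size=10_000)
for value, freq in enumerate(np.bincount(counts)):
    print(value, freq)
# 0 is the most common value, then 1, then 2, and so on
```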


SPSS GLM: Choosing Fixed Factors and Covariates

December 30th, 2008

The beauty of the Univariate GLM procedure in SPSS is that it is so flexible.  You can use it to analyze regressions, ANOVAs, ANCOVAs with all sorts of interactions, dummy coding, etc.

The downside of this flexibility is that it’s often confusing to know what to put where and what it all means.

So here’s a quick breakdown.

The dependent variable, I hope, is pretty straightforward: put in your continuous dependent variable.

Fixed Factors are categorical independent variables.  It does not matter if the variable is (more…)
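For readers who think in code rather than dialog boxes, here is a rough analogue of the same setup (a sketch in Python's statsmodels, not SPSS, with invented variable names): a Fixed Factor corresponds to a categorical term and a Covariate to a continuous one.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(3)

# Hypothetical data: 'group' stands in for a Fixed Factor,
# 'age' for a Covariate
n = 90
df = pd.DataFrame({
    "group": rng.choice(["a", "b", "c"], size=n),
    "age": rng.uniform(20, 60, size=n),
})
df["score"] = 50 + 0.3 * df["age"] + rng.normal(0, 5, size=n)

# C() dummy codes group as a categorical factor; age enters as-is
fit = smf.ols("score ~ C(group) + age", data=df).fit()
print(fit.summary())
```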


Centering for Multicollinearity Between Main Effects and Quadratic Terms

December 10th, 2008

One of the most common causes of multicollinearity is when predictor variables are multiplied to create an interaction term or quadratic and higher-order terms (X squared, X cubed, etc.).

Why does this happen?  When all the X values are positive, higher values produce higher products and lower values produce lower products, so the product variable is highly correlated with the component variable.  I’ll work through a very simple example to clarify.  (If the values are instead all negative, the same thing happens, but the correlation is negative.)

In a small sample, say you have the following values of a predictor variable X, sorted in ascending order:

2, 4, 4, 5, 6, 7, 7, 8, 8, 8

It is clear to you that the relationship between X and Y is not linear, but curved, so you add a quadratic term, X squared (X2), to the model.  The values of X squared are:

4, 16, 16, 25, 36, 49, 49, 64, 64, 64

The correlation between X and X2 is .987–almost perfect.

Plot of X vs. X squared

To remedy this, you simply center X at its mean.  The mean of X is 5.9, so to center X, create a new variable XCen = X - 5.9.

These are the values of XCen:

-3.90, -1.90, -1.90, -.90, .10, 1.10, 1.10, 2.10, 2.10, 2.10

Now, the values of XCen squared are:

15.21, 3.61, 3.61, .81, .01, 1.21, 1.21, 4.41, 4.41, 4.41

The correlation between XCen and XCen2 is -.54–still not 0, but much more manageable.  Definitely low enough not to cause severe multicollinearity.  This works because the low end of the scale now has large absolute values, so its square becomes large.
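The whole walkthrough is easy to reproduce (a quick sketch in Python with numpy):

```python
import numpy as np

x = np.array([2, 4, 4, 5, 6, 7, 7, 8, 8, 8], dtype=float)

# Raw X vs. X squared: nearly perfect correlation
print(np.corrcoef(x, x**2)[0, 1])          # approx. 0.987

# Center X at its mean (5.9), then square
x_cen = x - x.mean()
print(np.corrcoef(x_cen, x_cen**2)[0, 1])  # approx. -0.54
```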

The scatterplot between XCen and XCen2 is:

Plot of Centered X vs. Centered X squared

If the values of X had been symmetric around their mean, this would be a perfectly balanced parabola, and the correlation would be exactly 0.
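And if the values of X are symmetric around their mean, the correlation really does vanish (same sketch, symmetric values):

```python
import numpy as np

# Values symmetric around their mean of 5
x = np.array([1, 2, 3, 4, 5, 6, 7, 8, 9], dtype=float)
x_cen = x - x.mean()
print(np.corrcoef(x_cen, x_cen**2)[0, 1])  # 0, up to floating point
```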

Tonight is my free teletraining on Multicollinearity, where we will talk more about it.  Register to join me tonight or to get the recording after the call.