
Respect Your Data

February 13th, 2009

The steps you take to analyze data are just as important as the statistics you use. Mistakes and frustration in statistical analysis come as much, if not more, from poor process than from using the wrong statistical method.

Benjamin Earnhart of the University of Iowa has written a short (and humorous) article entitled “Respect Your Data” (requires LinkedIn account) that describes 23 practical steps that data analysts must take. This article was published in the newsletter of the American Statistical Association and has since been expanded and annotated.

 


Statistical Consulting 101: 4 Questions you Need to Answer to Choose a Statistical Method

February 11th, 2009

One of the most common situations in which researchers get stuck with statistics is choosing which statistical methodology is appropriate to analyze their data. If you start by asking the following four questions, you will be able to narrow things down considerably.

Even if you don’t know the implications of your answers, answering these questions will clarify issues for you. It will help you decide what information to seek, and it will make any conversations you have with statistical advisors more efficient and useful.

1. What is your research question?


When NOT to Center a Predictor Variable in Regression

February 9th, 2009

There are two reasons to center predictor variables in any type of regression analysis–linear, logistic, multilevel, etc.

1. To lessen the correlation between a multiplicative term (interaction or polynomial term) and its component variables (the ones that were multiplied).

2. To make interpretation of parameter estimates easier.
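Reason 1 is easy to see with a quick simulation. Below is a minimal sketch (toy data, hypothetical variable names) showing that a product term built from raw variables is strongly correlated with its components, while centering each component first lessens that correlation considerably:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(10, 2, 500)   # predictor with a nonzero mean
z = rng.normal(5, 1, 500)    # second predictor, also off-center

# The raw product term is highly correlated with its components
raw_corr = np.corrcoef(x, x * z)[0, 1]

# Centering each component before multiplying lessens that correlation
xc, zc = x - x.mean(), z - z.mean()
centered_corr = np.corrcoef(xc, xc * zc)[0, 1]

print(raw_corr, centered_corr)
```

With these toy values the raw correlation is large while the centered one is near zero, which is exactly why centering helps when you include interaction or polynomial terms.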

I was recently asked: when is centering NOT a good idea?


Order affects Regression Parameter Estimates in SPSS GLM

February 6th, 2009

I just discovered something in SPSS GLM that I never knew.

When you have an interaction in the model, the order you put terms into the Model statement affects which parameters SPSS gives you.

The default in SPSS is to automatically create interaction terms among all the categorical predictors.  But if you want fewer than all those interactions, or if you want to put in an interaction involving a continuous variable, you need to choose Model–>Custom Model.

In the specific example of an interaction between a categorical and continuous variable, to interpret this interaction you need to output Regression Coefficients. Do this by choosing  Options–>Regression Parameter Estimates.

If you put the main effects into the model first, followed by interactions, you will find the usual output–the regression coefficient (column B) for the continuous variable is the slope for the reference group.  The coefficients for the interaction terms in the other categories tell you the difference between the slope for that category and the slope for the reference group.  The interaction coefficient for the reference group itself is 0.

What I was surprised to find is that if the interactions are put into the model first, you don’t get that.

Instead, the coefficient for each category’s interaction term is the actual slope for that group, NOT the difference.

This is actually quite useful–it can save a bit of calculating and now you have a p-value for whether each slope is different from 0.  However, it also means you have to be cautious and make sure you realize what each parameter estimate is actually estimating.
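SPSS syntax aside, the same two parameterizations can be sketched with plain least squares in any software. Here is a minimal Python illustration (toy data, hypothetical group slopes) of how the same model, coded two ways, yields either a reference slope plus differences or the actual per-group slopes:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 300
group = rng.choice(3, n)            # categorical predictor: groups 0, 1, 2
x = rng.normal(size=n)              # continuous predictor
true_slopes = np.array([1.0, 2.0, 3.0])
y = true_slopes[group] * x + rng.normal(scale=0.1, size=n)

d1 = (group == 1).astype(float)     # dummy codes; group 0 is the reference
d2 = (group == 2).astype(float)

# Parameterization 1: x main effect plus interaction dummies.
# b1[3] is the slope for the reference group; b1[4] and b1[5] are
# the *differences* between each group's slope and the reference slope.
X1 = np.column_stack([np.ones(n), d1, d2, x, d1 * x, d2 * x])
b1, *_ = np.linalg.lstsq(X1, y, rcond=None)

# Parameterization 2: one slope column per group, no shared x column.
# b2[3], b2[4], b2[5] are each group's *actual* slope.
X2 = np.column_stack([np.ones(n), d1, d2,
                      (group == 0) * x, d1 * x, d2 * x])
b2, *_ = np.linalg.lstsq(X2, y, rcond=None)

print(b1[3:])   # approx [1.0, 1.0, 2.0]: reference slope, then differences
print(b2[3:])   # approx [1.0, 2.0, 3.0]: per-group slopes
```

Both design matrices span the same model space and give identical fitted values; only the meaning (and the p-value) of each coefficient changes, which is the same thing term order does in SPSS GLM.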

 


Seven Ways to Make up Data: Common Methods to Imputing Missing Data

February 4th, 2009

There are many ways to approach missing data. The most common, I believe, is to ignore it. But making no choice means that your statistical software is choosing for you.

Most of the time, your software is choosing listwise deletion. Listwise deletion may or may not be a bad choice, depending on why and how much data are missing.

Another common approach among those who are paying attention is imputation. Imputation simply means replacing the missing values with an estimate, then analyzing the full data set as if the imputed values were actual observed values.
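As a minimal sketch of both choices (toy data, hypothetical column names; mean imputation here is just the simplest possible estimate, not a recommendation):

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({
    "age":    [23, 35, np.nan, 41, 29],
    "income": [48_000, np.nan, 52_000, 61_000, 55_000],
})

# Listwise deletion: any row with a missing value is dropped entirely
complete_cases = df.dropna()

# Mean imputation: replace each missing value with its column mean,
# then analyze the filled-in data as if it were fully observed
imputed = df.fillna(df.mean())

print(len(complete_cases))          # 3 of the 5 rows survive deletion
print(imputed.isna().sum().sum())   # 0 missing values remain
```

Notice that deletion cost two of five rows here even though only two single cells were missing–that is the kind of silent loss your software imposes when you make no choice.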

How do you choose that estimate?  The following are common methods:


Proportions as Dependent Variable in Regression–Which Type of Model?

January 26th, 2009

When the dependent variable in a regression model is a proportion or a percentage, it can be tricky to decide on the appropriate way to model it.

The big problem with ordinary linear regression is that the model can predict values that aren’t possible–values below 0 or above 1.  But the other problem is that the relationship isn’t linear–it’s sigmoidal.  A sigmoidal curve looks like a flattened S–linear in the middle, but flattened on the ends.  So now what?

The simplest approach is to do a linear regression anyway.  This approach can be justified only in a few situations.

1. All your data fall in the middle, linear section of the curve.  This generally translates to all your data being between .2 and .8 (although I’ve heard that between .3 and .7 is better).  If this holds, you don’t have to worry about either objection: you do have a linear relationship, and you won’t get predicted values much beyond that range–certainly not beyond 0 or 1.

2. The model is really complicated and would be much harder to fit another way.  If you can assume linearity, it will be much easier to do, say, a complicated mixed model or a structural equation model.  If it’s just a single multiple regression, however, you should look into one of the other methods.

A second approach is to treat the proportion as a binary response and run a logistic or probit regression.  This will only work if the proportion can be thought of as a number of successes out of a number of trials, and you have the data for both.  For example, the proportion of land area covered with a certain species of plant would be hard to think of this way, but the proportion of correct answers on a 20-question assessment would.

The third approach is to treat the proportion as a censored continuous variable.  The censoring means that you don’t have information below 0 or above 1.  For example, perhaps the plant would have spread even more if it hadn’t run out of land.  If you take this approach, you would run the model as a two-limit tobit model (Long, 1997).  This approach works best if there isn’t an excessive amount of censoring (values of 0 and 1).

Reference: Long, J.S. (1997). Regression Models for Categorical and Limited Dependent Variables. Sage Publishing.