
A Great (and Free) Resource for SPSS Syntax: the Command Syntax Reference

October 22nd, 2009

I find SPSS manuals, as a rule, marginally useful. Sure, they may tell you which options are available for Statistic X, but not what those options mean or when to use them.

I still use them, of course, but only when I have no other options.

There is one exception, though, and that is the Command Syntax Reference. This is the manual that explains all the SPSS Syntax commands.

SPSS started as a syntax-only program. I first learned SPSS before Windows existed. I don’t think you could even get it for a PC back then. We had to use the College’s VAX mainframe computer. This was back in the days when you had to go pick up your printouts down the hall in the computing center. But no cards. I’m not THAT old.

Anyway, I think in those days SPSS must have put a lot of resources into really good manual writing. So the Command Syntax Reference, which was the entire manual, rocked. It still does, since for the most part, the syntax doesn’t change that much with new versions.

The great thing about it is that it’s now available right inside SPSS. When you click on Help, instead of Search, choose Command Syntax Reference. It lists every possible option and explains what it means and when and how to use it. It’s an extremely handy resource, it comes free with SPSS, and you don’t have to spend hours searching the internet for an answer.

The only hard part is that it’s organized by command, and the command names aren’t always intuitive. So if you don’t know that the syntax equivalent of the Univariate GLM menu is “UNIANOVA,” you’ll have a hard time finding it.

This is another good time to use the Paste button. Just use the menus to create some semblance of the analysis you want to do and hit Paste. You’ll get the basic command, which you can now look up and refine.
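
For instance, pasting a simple two-way ANOVA from the Univariate GLM menus produces something along these lines (the variable names here are invented just for illustration):

UNIANOVA score BY sex treatment
  /METHOD=SSTYPE(3)
  /INTERCEPT=INCLUDE
  /CRITERIA=ALPHA(.05)
  /DESIGN=sex treatment sex*treatment.

Now you know the command is UNIANOVA, so you can look it up in the Command Syntax Reference and see what every subcommand and option does.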

Want to learn more? If you’re just getting started with data analysis in SPSS, or would like a thorough refresher, please join us in our online workshop Introduction to Data Analysis in SPSS.

 


5 Reasons to use SPSS Syntax

October 7th, 2009

You don’t rely on only SPSS menus to run your analysis, right?  (Please, please tell me you don’t).

There’s really nothing wrong with using the menus.  It’s a great way to get started using SPSS and it saves you the hassle of remembering all that code.

But there are some really, really good reasons to use the syntax as well.

 

1. Efficiency

If you’re figuring out the best model and have to refine which predictors to include, running the same descriptive statistics on a  bunch of variables, or defining the missing values for all 286 variable in the data set, you’re essentially running the same analysis over and over.

Picking your way through the menus gets old fast.  In syntax, you just copy and paste, then change or add variable names.

A trick I use is to run through the menus for one variable, paste the code, then add the other 285. You can even copy the names out of the Variable View and paste them into the code. Very easy.
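
As a sketch of what that looks like (with made-up variable names and a made-up missing-value code), the pasted command for one variable, then extended to several, might be:

MISSING VALUES q1 (99).
MISSING VALUES q1 q2 q3 income age (99).

And if the variables sit next to each other in the data set, the TO keyword (e.g., q1 TO q285) saves even more typing.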

2. Memory

I know that while you’re immersed in your data analysis, you can’t imagine ever forgetting a single step you did.

But you will.  And sooner than you think.

Syntax gives you a “paper” trail of what you did, so you don’t have to remember. If you’re in a regulated industry, you know why you need this trail. But anyone who needs to defend their research needs it.

3. Communication

When your advisor, coauthor, colleague, statistical consultant, or Reviewer #2 asks you which options you used in your analysis or exactly how you recoded that variable, you can clearly communicate it by showing the syntax.  Much harder to explain with menu options.

When I hold a workshop or run an analysis for a client, I always use syntax.  I send it to them to peruse, tweak, adapt, or admire.  It’s really the only way for me to show them exactly what I did and how to do it.

If your client, advisor, or colleague doesn’t know how to read the syntax, that’s okay. Because you have a clear record of what you did, you can explain it.

4. Efficiency again

When the data set gets updated, or a reviewer (or your advisor, coauthor, colleague, or statistical consultant) asks you to add another predictor to a model, it’s a simple matter to edit and rerun a syntax program.

In menus, you have to start all over. Hopefully you’ll remember exactly which options you chose last time and/or exactly how you made every small decision in your data analysis (see #2: Memory).

5. Control

There are some SPSS options that are available in syntax, but not in the menus.

And others that just aren’t what they seem in the menus.

The menus for the Mixed procedure are about the most unintuitive I’ve ever seen.  But the syntax for Mixed is really logical and straightforward.  And it’s very much like the GLM syntax (UNIANOVA), so if you’re familiar with GLM, learning Mixed is a simple extension.
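
To give you a flavor of that similarity, here’s a sketch with invented variable names: the same fixed effects in each procedure, plus a random intercept in Mixed for, say, students nested in schools.

UNIANOVA score BY treatment WITH pretest
  /DESIGN=treatment pretest.

MIXED score BY treatment WITH pretest
  /FIXED=treatment pretest
  /RANDOM=INTERCEPT | SUBJECT(school)
  /PRINT=SOLUTION.

The FIXED subcommand plays the same role as GLM’s DESIGN; the main new piece is the RANDOM subcommand.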

Bonus Reason to use SPSS Syntax: Cleanliness

Luckily, SPSS makes it exceedingly easy to create syntax.  If you’re more comfortable with menus, run it in menus the first time, then hit PASTE instead of OK.  SPSS will automatically create the syntax for you, which you can alter at will.  So you don’t have to remember every programming convention.

When refining a model, I often set it up once in the menus and paste the syntax.  Then I alter the syntax to find the best-fitting model.

At this point, the output is a mess, filled with so many models I can barely keep them straight.  Once I’ve figured out the model that fits best, I delete the entire output, then rerun the syntax for only the best model.  Nice, clean output.
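
As a rough sketch of that workflow (variable names invented again), I might paste the full model once, then edit copies of it:

REGRESSION
  /DEPENDENT y
  /METHOD=ENTER x1 x2 x3 x4.

* Refined model: drop x3 and x4 after comparing fit.
REGRESSION
  /DEPENDENT y
  /METHOD=ENTER x1 x2.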

The Take-away: Reproducibility

What this all really comes down to is your ability to confidently, easily, and accurately reproduce your analysis. When you rely on menus, you are relying on your own memory to reproduce it. There are too many decisions, too many judgment calls, and too many places to make easy mistakes without noticing them to ever rely totally on your memory.

The tools are there to make this easy. Use them.

 


6 Types of Dependent Variables that will Never Meet the Linear Model Normality Assumption

September 17th, 2009

A linear model (both OLS regression and ANOVA) is quite robust to departures from the assumptions of normality and constant variance.  That means that even if the assumptions aren’t met perfectly, the resulting p-values will still be reasonable estimates.

But you need to check the assumptions anyway, because some departures are so far off that the p-values become inaccurate.  And in many cases there are remedial measures you can take to turn non-normal residuals into normal ones.

But sometimes you can’t.

Sometimes it’s because the dependent variable just isn’t appropriate for a linear model.  The (more…)


3 Mistakes Data Analysts Make in Testing Assumptions in GLM

September 1st, 2009

I know you know it: those assumptions in your regression or ANOVA model really are important.  If they’re not met adequately, all your p-values are inaccurate, wrong, useless.

But, and this is a big one, linear models are robust to departures from those assumptions.  Meaning, they don’t have to fit exactly for p-values to be accurate, right, and useful.

You’ve probably heard both of these contradictory statements in stats classes and a million other places, and they are the kinds of statements that drive you crazy.  Right?

I mean, do statisticians make this stuff up just to torture researchers? Or just to keep you feeling stupid?

No, they really don’t.   (I promise!)  And learning how far you can push those robust assumptions isn’t so hard, with some training and a little practice.  Over the years, I’ve found a few mistakes researchers commonly make because of one, or both, of these statements:

1.  They worry too much about the assumptions and over-test them. There are some nice statistical tests to determine if your assumptions are met.  And it’s so nice having a p-value, right?  Then it’s clear what you’re supposed to do, based on that golden rule of p<.05.

The only problem is that many of these tests ignore that robustness.  They find that every distribution is non-normal and heteroskedastic.  They’re good tools, but to these hammers, every data set looks like a nail.  You want to use the hammer when it’s needed, but you don’t want to hammer everything.

2. They assume everything is robust anyway, so they don’t test anything. It’s easy to do.  And once again, it probably works out much of the time.  Except when it doesn’t.

Yes, the GLM is robust to deviations from some of the assumptions.  But not all the way, and not all the assumptions.  You do have to check them.

3. They test the wrong assumptions. Look at any two regression books and they’ll each give you a different set of assumptions.

This is partially because many of these “assumptions” do need to be checked, but they’re not really model assumptions; they’re data issues.  And it’s also partially because sometimes the assumptions have been taken to their logical conclusions.  The textbook author is trying to make them more intuitive for you.  But sometimes that just leads you to test a related, but wrong, thing.  It works out most of the time, but not always.

 


Quick-R: A guide for SPSS, SAS, and Stata Users

August 20th, 2009

If you are an SPSS, SAS, or Stata user who finds yourself needing to use R (I mean, it’s free), I just found this great website: http://statmethods.net/index.html.

 


To Compare Regression Coefficients, Include an Interaction Term

August 14th, 2009

Just yesterday I got a call from a researcher who was reviewing a paper.  She didn’t think the authors had run their model correctly, but wanted to make sure.  The authors had run the same logistic regression model separately for each sex because they expected that the effects of the predictors were different for men and women.

On the surface, there is nothing wrong with this approach.  It’s completely legitimate to consider men and women as two separate populations and to model each one separately.

As often happens, the problem was not in the statistics, but what they were trying to conclude from them.   The authors went on to compare the two models, and specifically compare the coefficients for the same predictors across the two models.

Uh-oh. Can’t do that.

If you’re just describing the values of the coefficients, fine.  But if you want to compare the coefficients AND draw conclusions about their differences, you need a p-value for the difference.

Luckily, this is easy to get.  Simply include an interaction term between Sex (male/female) and any predictor whose coefficient you want to compare.  If you want to compare all of them because you believe that all predictors have different effects for men and women, then include an interaction term between sex and each predictor.  If you have 6 predictors, that means 6 interaction terms.

In such a model, if Sex is a dummy variable (and it should be), two things happen:

1. the coefficient for each predictor becomes the coefficient for that variable ONLY for the reference group.

2. the interaction term between sex and each predictor represents the DIFFERENCE in the coefficients between the reference group and the comparison group.  If you want to know the coefficient for the comparison group, you have to add the coefficients for the predictor alone and that predictor’s interaction with Sex.

The beauty of this approach is that the p-value for each interaction term gives you a significance test for the difference in those coefficients.
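
In SPSS, a sketch of such a model (with invented predictor names) might look like this. The LOGISTIC REGRESSION procedure writes interaction terms with the BY keyword:

LOGISTIC REGRESSION VARIABLES outcome
  /METHOD=ENTER sex age bmi sex BY age sex BY bmi
  /CATEGORICAL=sex.

Here the coefficients for age and bmi apply to the reference sex, and the p-value on each BY term tests whether that coefficient differs for the other sex.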