SAS

Ten Ways Learning a Statistical Software Package is Like Learning a New Language

January 31st, 2014

Someone recently asked me if they need to learn R.  In responding, it struck me that this is another way that learning a stat package is like learning a new language.

The metaphor is extremely helpful for deciding when and how to learn a new stat package, and for keeping you going when the going gets rough. (more…)


Opposite Results in Ordinal Logistic Regression, Part 2

July 22nd, 2013

I received the following email from a reader after sending out the last article: Opposite Results in Ordinal Logistic Regression—Solving a Statistical Mystery.

And I agreed I’d answer it here in case anyone else was confused.

Karen’s explanations always make the bulb light up in my brain, but not this time.

With either output:
The odds of 1 vs. > 1 are exp[-2.635] = 0.07, i.e., unlikely to be 1, much more likely (14.3x) to be > 1.
The odds of ≤ 2 vs. > 2 are exp[-0.812] = 0.44, i.e., somewhat unlikely to be ≤ 2, more likely (2.3x) to be > 2.

SAS – using the usual regression equation:
If NAES increases by 1, these odds become (more…)
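To see where those numbers come from, here is a quick sketch in R (my illustration, built only from the intercepts quoted above; the variable names are mine):

    # Cumulative intercepts from the reader's output
    b1 <- -2.635   # 1 vs. > 1
    b2 <- -0.812   # <= 2 vs. > 2

    exp(b1)        # 0.072: odds of being 1 rather than > 1
    exp(b2)        # 0.444: odds of being <= 2 rather than > 2

    1 / exp(b1)    # 13.9x in favor of > 1 (the quoted 14.3x comes from inverting the rounded 0.07)
    1 / exp(b2)    # 2.3x in favor of > 2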


Opposite Results in Ordinal Logistic Regression—Solving a Statistical Mystery

July 5th, 2013

A number of years ago when I was still working in the consulting office at Cornell, someone came in asking for help interpreting their ordinal logistic regression results.

The client was surprised because all the coefficients were backwards from what they expected, and they wanted to make sure they were interpreting them correctly.

It looked like the researcher had done everything correctly, but the results were definitely bizarre. They were using SPSS and the manual wasn’t clarifying anything for me, so I did the logical thing: I ran it in another software program. I wanted to make sure the problem was with interpretation, and not in some strange default or (more…)


Two Recommended Solutions for Missing Data: Multiple Imputation and Maximum Likelihood

September 10th, 2012

Two methods for dealing with missing data, vast improvements over traditional approaches, have become available in mainstream statistical software in the last few years.

Both of the methods discussed here require that the data are missing at random: conditional on the observed data, the probability that a value is missing is unrelated to the value itself. If this assumption holds, the resulting estimates (i.e., regression coefficients and standard errors) will be unbiased, without the loss of power that comes from dropping incomplete cases.

The first method is Multiple Imputation (MI). Just like the old-fashioned imputation (more…)
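To give a sense of how little code MI takes in modern software, here is a minimal sketch in R using the mice package and its built-in nhanes example data (my example, not from the article):

    library(mice)

    # Create 5 imputed data sets; nhanes ships with mice and has missing values
    imp <- mice(nhanes, m = 5, seed = 123, printFlag = FALSE)

    # Fit the analysis model in each imputed data set
    fits <- with(imp, lm(chl ~ bmi + age))

    # Pool estimates and standard errors across imputations (Rubin's rules)
    summary(pool(fits))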


An Easy Way to Reverse Code Scale Items

June 29th, 2012

Before you run a Cronbach’s alpha or factor analysis on scale items, it’s generally a good idea to reverse code items that are negatively worded so that a high value indicates the same type of response on every item.

So, for example, let's say you have 20 items, each on a 1 to 7 scale. For most items, a 7 may indicate a positive attitude toward some issue, but for a few items, a 1 indicates a positive attitude.

I want to show you a very quick and easy way to reverse code them using a single command line. This works in any software. (more…)
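The usual one-liner (my sketch; the full post may phrase it differently) is to subtract each score from the scale maximum plus one, so on a 1-to-7 scale you subtract from 8 and a 7 becomes a 1. In R, for example:

    # Hypothetical negatively worded items q3 and q8, each scored 1-7
    survey <- data.frame(q3 = c(1, 4, 7), q8 = c(2, 5, 6))
    survey$q3_r <- 8 - survey$q3   # 1 -> 7, 4 -> 4, 7 -> 1
    survey$q8_r <- 8 - survey$q8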


Dummy Coding Software Defaults Mess With All of Us

July 15th, 2011

In my last blog post, I wrote about a mistake I once made when I didn’t realize the defaults for dummy coding were different in two SPSS procedures (Binary Logistic and GEE).

Ironically, about the same time I wrote it, I was having a conversation with Ann Maria de Mars on Twitter.  She was trying to figure out why her logistic regression model fit results were identical in SAS Proc Logistic and SPSS Binary Logistic, but the coefficients in SAS were half those of SPSS.

It was ironic because I, of course, didn’t recognize it as the same issue and wasn’t much help.

But Ann Maria investigated and discovered that it came down to differences between the SAS and SPSS defaults for coding categorical predictors.  Her detailed and humorous explanation is here.
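You can see the same effect she found without touching SAS or SPSS. Here is a sketch in R with simulated data (not her model): fitting one logistic regression with dummy (0/1) coding and again with effect (-1/+1) coding, which is what SAS Proc Logistic uses by default for CLASS variables, gives identical fit, but the binary predictor's coefficient is half the size and opposite in sign.

    set.seed(42)
    d <- data.frame(
      y     = rbinom(200, 1, 0.5),
      group = factor(sample(c("a", "b"), 200, replace = TRUE))
    )

    # Dummy (0/1) coding: R's default, like SPSS's indicator coding
    fit_dummy <- glm(y ~ group, data = d, family = binomial)

    # Effect (-1/+1) coding, like the SAS Proc Logistic default
    fit_effect <- glm(y ~ group, data = d, family = binomial,
                      contrasts = list(group = contr.sum))

    all.equal(deviance(fit_dummy), deviance(fit_effect))  # TRUE: identical fit
    coef(fit_dummy)["groupb"]        # dummy-coded slope
    -2 * coef(fit_effect)["group1"]  # effect-coded slope, rescaled: matches above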

Some takeaways for you, the researcher and data analyst:

1. Give yourself a break if you hit a snag.  Even very experienced data analysts, statisticians who understand what they’re doing, get stumped sometimes.  Don’t ever think that performing data analysis is an IQ test.  You’re bringing together many skills and complex tools.

2. Learn thy software.  In my last post, I phrased it “Know thy software”, but this is where you get to know it.  Snags are good opportunities to investigate the details of your software, just like Ann Maria did.  If you can think of it as a challenge to figure out–a puzzle–it can actually be fun.

Make friends with your syntax manuals.

3. Get help when you need it. Statistical software packages *are* complex tools. You don't have to know everything to use them.

Ask colleagues.  Call customer support. Call a stat consultant.  That’s what they’re there for.

4. A great way to check your work is to run your test two different ways.  It’s another reason to be able to use at least two stat software packages.  I’m not suggesting you have to run every analysis twice.  But when a result looks strange, or you want to double-check a specific important model, this can be a good strategy for testing things out.

It may be that your results aren’t telling you what you think they are.

 
