Every statistical software procedure that dummy codes predictor variables uses a default for choosing the reference category.
This default is usually the category that comes first or last alphabetically.
That may or may not be the best category to use, but fortunately you’re not stuck with the defaults.
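For instance, here is a minimal sketch in R of overriding that default (the variable and its levels are made up for illustration; other packages have equivalent options):

# Made-up example: region will be dummy coded in a regression, and by
# default the reference category is the first level alphabetically ("east").
region <- factor(c("west", "east", "south", "west", "east"))
levels(region)    # "east" "south" "west"

# Pick a different reference category explicitly with relevel():
region <- relevel(region, ref = "west")
levels(region)    # "west" "east" "south": "west" is now the reference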
So if you do get to choose, which one should you choose? (more…)
by Maike Rahn, PhD
In the previous posts I wrote about the basics of running a factor analysis. Real-life factor analysis can become complicated, though. Here are some of the more common problems researchers encounter, along with some possible solutions:
- The factor loadings in your confirmatory factor analysis are only |0.5| or less.
Solution: lower the cut-offs for your factor loadings, provided that lower factor loadings are expected and accepted in your field (see the sketch after this list for one way to check standardized loadings in R).
- Your confirmatory factor analysis does not show the hypothesized number of factors.
Solution 1: you were not able to validate the factor structure in your sample; your analysis with this sample did not work out.
Solution 2: your factor analysis has just become exploratory. Something is going on with your sample that is different from the samples used in other studies. Find out what it is.
- A few key variables in your confirmatory factor analysis do not behave as expected and/or load on the wrong factor.
Solution: the good news is that you found the hypothesized factors. The bad news is (more…)
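If you want to see these loadings and factors for yourself, here is a minimal sketch of a confirmatory factor analysis in R. It assumes the lavaan package and uses lavaan's built-in HolzingerSwineford1939 example data; the two-factor model is purely illustrative, not a recommendation.

library(lavaan)

# Hypothesized two-factor structure (illustrative only; the x1-x6 items
# come with lavaan's HolzingerSwineford1939 data).
model <- '
  visual  =~ x1 + x2 + x3
  textual =~ x4 + x5 + x6
'

fit <- cfa(model, data = HolzingerSwineford1939)

# The standardized loadings are what you compare against your field's
# cutoff (e.g., |0.4| or |0.5|), and the output shows which factor each
# variable loads on.
summary(fit, standardized = TRUE, fit.measures = TRUE)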
Inter Rater Reliability is one of those statistics I seem to need just seldom enough that I forget all the details and have to look it up every time.
Luckily, there are a few really great websites by experts that explain it (and related concepts) well, in language that is accessible to non-statisticians.
So rather than reinvent the wheel and write about it myself, I'm going to refer you to those sites:
If you know of any others, please share in the comments. I’ll be happy to add to the list.
Recently I gave a webinar, The Steps to Running Any Statistical Model. A few hundred people attended live. We held a Q&A session at the end, but as you can imagine, we didn't have time to get through all the questions.
This is the first in a series of written answers to some of those questions. I’ve tried to sort them by the step each is about.
A written list of the steps is available here.
If you missed the webinar, you can view the video here. It’s free.
Questions about Step 1. Write out research questions in theoretical and operational terms
Q: In research designs that use secondary data, have you found that this type of data source affects the research question? That is, should one have a strong understanding of the data to ensure their theoretical concept can be operationalized to fit the data? My research question changes the more I learn.
Yes. There’s no point in asking research questions that the data you have available can’t answer.
So the order of the steps would have to change—you may have to start with a vague idea of the type of research question you want to ask, but only refine it after doing some descriptive statistics, or even running an initial model.
Q: How soon in the process should one start with the first group of steps?
You want to at least start thinking about them as you’re doing the lit review and formulating your research questions.
Think about how you could measure variables, and which ones are likely to be collinear or to have a lot of missing data. Think about the kind of model you'd have to run for each research question.
Think of a scenario where the same research question could be operationalized such that the dependent variable is measured either as a continuous variable or as ordered categories. An easy example is income, measured either in actual dollars or in income categories.
By all means, if people can answer the question with a real and accurate number, your analysis will be much, much easier. In many situations, they can’t. They won’t know, remember, or tell you their exact income. If so, you may have to use categories to prevent missing data. But these are things to think about early.
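As a small illustration of the two operationalizations, here is a sketch in R with made-up income values, showing how exact income can be collapsed into ordered categories:

# Made-up incomes in dollars, purely for illustration.
income <- c(18000, 42000, 67000, 125000, 31000)

# The same variable operationalized as ordered categories.
income_cat <- cut(income,
                  breaks = c(0, 25000, 50000, 100000, Inf),
                  labels = c("under 25k", "25-50k", "50-100k", "over 100k"),
                  ordered_result = TRUE)
income_cat
# under 25k  25-50k  50-100k  over 100k  25-50k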
Q: Where in the process do you use existing lit/results to shape the research question and modeling?
I would start by putting the literature review before Step 1. You'll use it to decide on a theoretical research question, as well as ways to operationalize it.
But it will help you in other places as well. For example, it helps the sample size calculations to have variance estimates from other studies. Other studies may give you an idea of which variables are likely to have missing data or too little variation to include as predictors. They may change your exploratory factor analysis in Step 7 to a confirmatory one.
In fact, just about every step can benefit from a good literature review.
If you missed the webinar, you can view the video here. It’s free.
by Maike Rahn, PhD
When are factor loadings not strong enough?
Once you run a factor analysis and think you have some usable results, it’s time to eliminate variables that are not “strong” enough. They are usually the ones with low factor loadings, although additional criteria should be considered before taking out a variable.
As a rule of thumb, your variable should have a rotated factor loading of at least |0.4| (meaning ≥ +.4 or ≤ –.4) onto one of the factors in order to be considered important. (more…)
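For example, here is a minimal sketch in R of applying that cutoff when printing rotated loadings. It uses the built-in mtcars data and an exploratory factanal() purely for illustration; your own analysis and cutoff may differ.

# A small exploratory factor analysis; varimax rotation is the default.
vars <- mtcars[, c("mpg", "disp", "hp", "drat", "wt", "qsec")]
fa <- factanal(vars, factors = 2, rotation = "varimax")

# Suppress loadings below |0.4| so only the "important" ones are printed.
print(fa$loadings, cutoff = 0.4)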

We were recently fortunate to host a free Craft of Statistical Analysis webinar with guest presenter David Lillis. As usual, we had hundreds of attendees and didn't get through all the questions. So David has graciously agreed to answer some of them here.
If you missed the live webinar, you can download the recording here: Ten Data Analysis Tips in R.
Q: Is M = structure(list(…), class = "data.frame") the same as M = data.frame(…)? Is there some reason to prefer structure(list(…), …) over M = data.frame(…)?
A: They are not the same. The structure(…) syntax is a shorthand text representation of a data set. If you have a data set called M, the command dput(M) prints that representation, and you can reconstitute the data set later as follows: M <- structure(…). Try it for yourselves on a rectangular dataset. For example, start off with (more…)
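Here is a minimal sketch of that round trip, with a made-up data frame:

# Any small rectangular data set will do; this one is made up.
M <- data.frame(id = 1:3, score = c(2.5, 3.1, 4.0))

dput(M)
# Prints something like:
# structure(list(id = 1:3, score = c(2.5, 3.1, 4)),
#           class = "data.frame", row.names = c(NA, -3L))

# Paste that output into a script to reconstitute the same data frame later:
M2 <- structure(list(id = 1:3, score = c(2.5, 3.1, 4)),
                class = "data.frame", row.names = c(NA, -3L))

identical(M, M2)    # should be TRUE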