Study design

Stratified Sampling for Oversampling Small Sub-Populations

June 11th, 2012 by Ritu Narayan

Sampling is a critical issue in any research study design. Most of us have grappled with balancing costs, time, and, of course, statistical power when deciding on our sampling strategies.

How do we know when to use a simple random sample, a stratified sample, or a cluster sample? Let’s talk about stratified sampling here and one research scenario where it is useful.

One Scenario for Stratified Sampling

Suppose you are studying minority groups and their behavior, say Yiddish speakers in the U.S. and their voting. Yiddish speakers are a small subset of the U.S. population, just 0.6%. (more…)
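The logic of oversampling a small stratum can be sketched in a few lines. This is a minimal illustration with a made-up population and made-up sampling rates (not recommendations): sample each stratum at its own rate so the rare group yields enough cases, then keep inverse-probability weights so population-level estimates stay unbiased.

```python
import random

random.seed(0)

# Hypothetical population: a 0.6% minority stratum and everyone else.
population = [("yiddish", i) for i in range(60)] + [("other", i) for i in range(9940)]

# A simple random sample of 500 would catch only ~3 minority members.
# Instead, sample each stratum at its own rate, oversampling the small one.
fractions = {"yiddish": 0.5, "other": 0.02}  # illustrative rates only

sample = []
for stratum, rate in fractions.items():
    members = [p for p in population if p[0] == stratum]
    sample.extend(random.sample(members, round(len(members) * rate)))

# Inverse-probability weights (1/rate) let you recover
# population-level estimates from the deliberately skewed sample.
weights = {s: 1 / r for s, r in fractions.items()}

print(len([p for p in sample if p[0] == "yiddish"]))  # 30 minority cases sampled
print(weights)
```

The key point is the pair of numbers at the end: the oversampled stratum contributes enough observations for subgroup analysis, and the weights undo the oversampling whenever you estimate something for the whole population.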


When To Fight For Your Analysis and When To Jump Through Hoops

February 14th, 2012

In the world of data analysis, there’s not always one clearly appropriate analysis for every research question.

There are so many issues to take into account.  They include the research question to be answered, the measurement of the variables, the study design, data limitations and issues, the audience, practical constraints like software availability, and the purpose of the data analysis.

So what do you do when a reviewer rejects your choice of data analysis? This reviewer could be your boss, your dissertation committee, a co-author, or a journal reviewer or editor.

What do you do?

There are ultimately only two choices: You can redo the analysis their way. Or you can fight for your analysis. How do you choose?

The one absolute in this choice is that you have to honor the integrity of your data analysis and yourself.

Do not be persuaded to do an analysis that will produce inaccurate or misleading results, especially when readers will actually make decisions based on these results. (If no one will ever read your report, this is less crucial).

But even within that absolute, there are often choices. Keep in mind the two goals in data analysis:

  1. The analysis needs to accurately reflect the limits of the design and the data, while still answering the research question.
  2. The analysis needs to communicate the results to the audience.

When to fight for your analysis

So first and foremost, if your reviewer is asking you to do an analysis that does not appropriately take into account the design or the variables, you need to fight.

For example, a few years ago I worked with a researcher who had a study with repeated measurements on the same individuals. It had a small sample size and an unequal number of observations on each individual.

It was clear that to take into account the design and the unbalanced data, the appropriate analysis was a linear mixed model.

The researcher’s co-author questioned the use of the linear mixed model, mainly because he wasn’t familiar with it. He thought the researcher was attempting something fishy. His suggestion was to use an ad hoc technique of averaging over the multiple observations for each subject.

This was a situation where fighting was worth it.

Unnecessarily simplifying the analysis to please people who were unfamiliar with an appropriate method was not an option. The simpler model would have violated assumptions.
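The contrast between the two approaches can be made concrete. The sketch below uses simulated data (hypothetical values throughout) to show the structure of the situation: unbalanced repeated measures per subject, fit with a random-intercept linear mixed model via statsmodels rather than by averaging away the repeated observations.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)

# Simulated unbalanced repeated-measures data: each subject
# contributes a different number of observations (hypothetical values).
rows = []
for subj in range(12):
    subj_intercept = rng.normal(0, 2)       # subject-level random effect
    for _ in range(rng.integers(2, 6)):     # unequal observations per subject
        x = rng.uniform(0, 10)
        y = 1.5 * x + subj_intercept + rng.normal(0, 1)
        rows.append({"subject": subj, "x": x, "y": y})
df = pd.DataFrame(rows)

# A linear mixed model with a random intercept per subject accounts for
# the correlation among repeated measurements and handles the
# unbalanced design without discarding or averaging observations.
model = smf.mixedlm("y ~ x", df, groups=df["subject"]).fit()
print(model.summary())
```

Averaging over each subject’s observations, by contrast, throws away the within-subject information and gives every subject equal weight regardless of how many measurements they contributed.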

This was particularly important because the research was being submitted to a high-level journal.

So it was the researcher’s job to educate not only his coauthor, but the readers, in the form of explaining the analysis and its advantages, with citations, right in the paper.

When to jump through hoops

In contrast, sometimes the reviewer is not really asking for a completely different analysis. They just want a different way of running the same analysis or reporting different specific statistics.

For example, a simple confirmatory factor analysis can be run in standard statistical software like SAS, SPSS, or Stata using a factor analysis command. Or it can be run in structural equation modeling software like Amos or MPlus, or with an SEM command in standard software.

The analysis is essentially the same, but the two types of software will report different statistics.

If your committee members are familiar with structural equation modeling, they probably want to see the type of statistics that structural equation modeling software will report. Running it this way has advantages.

These include overall model fit statistics like RMSEA or model chi-squares.
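Those two statistics are closely related. As a minimal sketch, here is one common formula for RMSEA computed from a model chi-square; the numbers plugged in are purely illustrative, not from any real model.

```python
from math import sqrt

def rmsea(chi2: float, df: int, n: int) -> float:
    """One common RMSEA formula: sqrt(max(chi2 - df, 0) / (df * (n - 1))),
    where n is the sample size. Values below roughly 0.06 are often read
    as good fit, though cutoffs vary by source."""
    return sqrt(max(chi2 - df, 0.0) / (df * (n - 1)))

# Illustrative values: chi-square of 85.2 on 40 df with n = 300.
print(round(rmsea(chi2=85.2, df=40, n=300), 3))  # 0.061
```

So a reviewer who asks for RMSEA is often asking for a repackaging of information the chi-square already contains, scaled for model complexity and sample size.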

This is a situation where it may be easier, and produces no ill-effects, to jump through the hoop.

Running the analysis in the software they prefer won’t violate any assumptions or produce inaccurate results. This assumes you have access to that software and know how to use it.

If the reviewer can stop your research in its tracks, it may be worth it to rerun the analysis to get the statistics they want to see reported.

You do have to decide whether the cost of jumping through the hoop, in terms of time, money, and emotional energy, is worth it.

If the request is relatively minor, it usually is. If it’s a matter of rerunning every analysis you’ve done to indulge a committee member’s pickiness, it may be worth standing up for yourself and your analysis.

When you can’t talk to the reviewer

When you’re dealing with anonymous reviewers, the situation can get sticky.  After all, you cannot ask them to clarify their concerns. And you have limited opportunities to explain the reasons for choosing your analysis.

It may be harder to discern if they are being overly picky, don’t understand the statistics themselves, or have a valid point.

If you choose to stand up for yourself, be well armed. Research the issue until you are absolutely confident in your approach (or until you’re convinced that you were missing something).

A few hours in the library or talking with a trusted expert is never a wasted investment. Compare that to running an unpublishable analysis to please a committee member or coauthor.

Often, the problem is actually not in the analysis you did, but in the way you explained it. It’s your job to explain why the analysis is appropriate and, if it’s unfamiliar to readers, what it does.

Rewrite that section, making it very clear. Ask colleagues to review it. Cite other research that uses or explains that statistical method.

Whatever you choose, be confident that you made the right decision, then move on.



Cohort and Case-Control Studies: Pros and Cons

June 7th, 2010

Two designs commonly used in epidemiology are the cohort and case-control studies. Both study causal relationships between a risk factor and a disease. What is the difference between these two designs? And when should you opt for the one or the other?

Cohort studies

Cohort studies begin with a group of people (a cohort) free of disease. The people in the cohort are grouped by whether or not they are exposed to a potential cause of disease. The whole cohort is followed over time to see if (more…)
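The cohort comparison described above, following exposed and unexposed groups and comparing how often disease develops in each, can be sketched with hypothetical counts. Because a cohort design observes incidence directly, the relative risk is a natural effect measure.

```python
# Hypothetical cohort: follow exposed and unexposed groups over time
# and count new cases of disease in each (illustrative numbers only).
exposed_cases, exposed_total = 30, 1000
unexposed_cases, unexposed_total = 10, 1000

risk_exposed = exposed_cases / exposed_total        # incidence among exposed
risk_unexposed = unexposed_cases / unexposed_total  # incidence among unexposed

# Relative risk: how many times more likely disease is among the exposed.
relative_risk = risk_exposed / risk_unexposed
print(round(relative_risk, 2))  # 3.0
```

A case-control study, which starts from diseased and disease-free people rather than from exposure groups, cannot observe incidence this way, which is one of the design trade-offs the post goes on to weigh.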


Great Resources for Your Literature Review

April 30th, 2010 by Ursula Saqui, Ph.D.

This is the second post of a two-part series on the overall process of doing a literature review.  Part one discussed the benefits of doing a literature review, how to get started, and knowing when to stop.

You have made a commitment to do a literature review, have the purpose defined, and are ready to get started.

Where do you find your resources?

If you are not in academia, do not have access to a top-notch library, or do not receive the industry publications of interest, you may need to get creative if you do not want to pay for each article. (In a pinch, I have paid up to $36 for an article, which can add up if you are conducting a comprehensive literature review!)

Here is where the internet and other community resources can be your best friends.

Still stuck?  Hire someone who knows how to do a good literature review and has access to quality resources.

On a budget?  Hire a student who has access to an academic library.  Many times students can get credit for working on research and business projects through internships or experiential learning programs. This situation is a win-win.  You get the information you need and the student gets academic credit along with exposure to new ideas and topics.

About the Author: With expertise in human behavior and research, Ursula Saqui, Ph.D. gives clarity and direction to her clients’ projects, which inevitably lead to better results and strategies. She is the founder of Saqui Research.



The Literature Review: The Foundation of Any Successful Research Project

April 23rd, 2010 by Ursula Saqui, Ph.D.

This post is the first of a two-part series on the overall process of doing a literature review.  Part two covers where to find your resources.

Would you build your house without a foundation?  Of course not!  However, many people skip the first step of any empirically based project: conducting a literature review.  Like the foundation of your house, the literature review is the foundation of your project.

Having a strong literature review gives structure to your research method and informs your statistical analysis.  If your literature review is weak or non-existent, (more…)


Statistical Consulting 101: 4 Questions You Need to Answer to Choose a Statistical Method

February 11th, 2009

One of the most common situations in which researchers get stuck with statistics is choosing which statistical methodology is appropriate to analyze their data. If you start by asking the following four questions, you will be able to narrow things down considerably.

Even if you don’t know the implications of your answers, answering these questions will clarify issues for you. It will help you decide what information to seek, and it will make any conversations you have with statistical advisors more efficient and useful.

1. What is your research question? (more…)