The 13 Steps for Statistical Modeling in any Regression or ANOVA

by Karen


No matter what statistical model you’re running, you need to go through the same 13 steps.  The order and the specifics of how you do each step will differ depending on the data and the type of model you use.

These 13 steps are in 3 major parts.  Most people think of only Part 3 as  modeling.  However, if you do all 3 parts, and think of them all as part of the analysis, the modeling process will be faster, easier, and make more sense.

Part 1: Define and Design

In the first 4 steps, the object is clarity. You want to make everything as clear as possible to yourself. The clearer things are at this point, the smoother everything will be.

1. Write out research questions in theoretical and operational terms

A lot of times, when researchers are confused about the right statistical method to use, the real problem is they haven’t defined their research questions.  They have a general idea of the relationship they want to test, but it’s a bit vague.  You need to be very specific.

For each research question, write it down in both theoretical and operational terms.

2. Design the study or define the design

Depending on whether you are collecting your own data or doing secondary data analysis, you need a clear idea of the design.  Design issues are about randomization and sampling:

•    Nested and Crossed Factors
•    Potential confounders and control variables
•    Longitudinal or repeated measurements on a study unit
•    Sampling: simple random sample or stratification or clustering

3. Choose the variables for answering research questions and determine their level of measurement

Every model has to take into account both the design and the level of measurement of the variables.

Level of measurement, remember, is whether a variable is nominal, ordinal, or interval.  Within interval, you also need to know if variables are discrete counts or continuous.

It’s absolutely vital that you know the level of measurement of each response and predictor variable, because they determine both the type of information you can get from your model and the family of models that is appropriate.
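To make that concrete, here is a rough Python sketch of how the response variable's level of measurement points you toward a family of models. The mapping is deliberately simplified and illustrative, not exhaustive:

```python
# Illustrative sketch: the level of measurement of the response variable
# suggests a family of models. (A simplified mapping, not a complete one.)
MODEL_FAMILY = {
    "nominal": "multinomial logistic regression (binary logistic if 2 categories)",
    "ordinal": "ordinal logistic (proportional odds) regression",
    "count": "Poisson or negative binomial regression",
    "continuous": "linear regression / ANOVA",
}

def suggest_family(level_of_measurement: str) -> str:
    """Return a candidate model family for a response variable."""
    return MODEL_FAMILY[level_of_measurement]

print(suggest_family("count"))  # Poisson or negative binomial regression
```

The real choice also depends on the design (step 2), but this is the first fork in the road.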

4. Write an analysis plan

Write your best guess for the statistical method that will answer the research question, taking into account the design and the type of data.

It does not have to be final at this point—it just needs to be a reasonable approximation.

5. Calculate sample size estimations

This is the point at which you should calculate your sample sizes–before you collect data and after you have an analysis plan.  You need to know which statistical tests you will use as a basis for the estimates.

And there really is no point in running post-hoc power analyses–they don’t tell you anything.
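For example, here is a minimal sketch of an a priori sample size calculation for comparing two group means, using the normal approximation. The effect size, alpha, and power values are just illustrative:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate n per group for a two-sample comparison of means,
    via the normal approximation: n = 2 * ((z_{1-a/2} + z_{power}) / d)^2."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Medium standardized effect (d = 0.5), 80% power, two-sided alpha = .05:
print(n_per_group(0.5))  # roughly 63 per group
```

The point of doing this now: the formula depends on the test in your analysis plan, which is why step 5 has to come after step 4.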

Part 2: Prepare and explore

6. Collect, code, enter, and clean data 

The parts that are most directly applicable to modeling are entering data and creating new variables.

For data entry, the analysis plan you wrote will determine how to enter variables.  For example, if you will be doing a linear mixed model, you will want the data in long format.
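As an illustration (with made-up variable names), reshaping one row per subject into the long format a mixed model expects might look like this in Python:

```python
# Wide format: one row per subject, one column per measurement occasion.
wide = [
    {"id": 1, "time1": 5.0, "time2": 6.1, "time3": 7.2},
    {"id": 2, "time1": 4.3, "time2": 4.9, "time3": 5.5},
]

# Long format: one row per subject-occasion, as a mixed model expects.
long_rows = [
    {"id": row["id"], "time": t, "y": row[f"time{t}"]}
    for row in wide
    for t in (1, 2, 3)
]

print(long_rows[0])  # {'id': 1, 'time': 1, 'y': 5.0}
```

In practice you would use your software's reshape tools, but the logic is the same: each repeated measurement becomes its own row.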

7. Create new variables

This step can be quite time consuming, and often takes longer than you expect.  It’s pretty rare for every variable you’ll need for analysis to be collected in exactly the right form.  Create indices, categorize, reverse code, do whatever you need to get variables into their final form, including running principal components or factor analysis.
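For example, here is a quick Python sketch of reverse coding a negatively worded Likert item and building a simple mean index. The items and scale are hypothetical:

```python
# Hypothetical 5-point Likert items; item q2 is negatively worded.
responses = [{"q1": 4, "q2": 2, "q3": 5}, {"q1": 2, "q2": 4, "q3": 1}]

SCALE_MAX = 5

for r in responses:
    r["q2_rev"] = SCALE_MAX + 1 - r["q2"]               # reverse code: 1<->5, 2<->4
    r["index"] = (r["q1"] + r["q2_rev"] + r["q3"]) / 3  # simple mean index

print(responses[0]["q2_rev"], responses[0]["index"])  # 4 4.33...
```

Small transformations like this are where recording errors hide, so it pays to spot-check a few cases by hand.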

8. Run Univariate and Bivariate Statistics

You need to know what you’re working with.  Check the distributions of the variables you intend to use, as well as bivariate relationships among all variables that might go into the model.

You may find something here that leads you back to step 7 or even step 4.   You might have to do some data manipulation or deal with missing data.

More commonly, it will alert you to issues that will become clear in later steps.  The earlier you are aware of issues, the better you can deal with them.  But even if you don’t discover the issue until later, it won’t throw you for a loop if you have a good understanding of your variables.

9. Run an initial model

Once you know what you’re working with, run the model listed in your analysis plan.  In all likelihood, this will not be the final model.

But it should be in the right family of models for the types of variables, the design, and to answer the research questions.  You need to have this model to have something to explore and refine.

Part 3: Refine the model

10. Refine predictors and check model fit

If you are doing a truly exploratory analysis, or if the point of the model is pure prediction, you can use some sort of stepwise approach to determine the best predictors.

If the analysis is to test hypotheses or answer theoretical research questions, this part will be more about refinement.  You can
• Test, and possibly drop, interactions and quadratic terms, or explore other types of non-linearity
• Drop nonsignificant control variables
• Do hierarchical modeling to see the effects of predictors added alone or in blocks.
• Check for overdispersion
• Test the best specification of random effects

11. Test assumptions

Because you already investigated the right family of models in Part 1,  thoroughly investigated your variables in Step 8, and correctly specified your model in Step 10, you should not have big surprises here.  Rather, this step will be about confirming, checking, and refining.  But what you learn here can send you back to any of those steps for further refinement.

12. Check for and resolve data issues

Steps 11 and 12 are often done together, or perhaps back and forth.  This is where you check for data issues that can affect the model, but are not exactly assumptions.  Data issues are about the data, not the model, though they occur within the context of the model.  These include:

• Multicollinearity
• Outliers and influential points
• Missing data
• Truncation and censoring

Once again, data issues don’t appear until you have chosen variables and put them in the model.

13. Interpret Results

Now, finally, interpret the results.

You may not notice data issues or misspecified predictors until you interpret the coefficients.  Then you find something like a super high standard error or a coefficient with a sign opposite what you expected, sending you back to previous steps.

Comments

tesfaw May 3, 2017 at 4:43 am

Hi Karen,
When the study variables don’t satisfy the research question, which methods can be used, and how can the study be improved?


tesfaw May 3, 2017 at 4:26 am

Regarding the topics above, on testing assumptions and interpreting results:
1. Test the hypothesis (i.e., either retain the null hypothesis H0 or reject it in favor of the alternative H1).
2. Set the alpha value (fixed in advance, often guided by the literature review).
3. Choose the test statistic, which depends on the study: if the predictors are quantitative, regression models can be used (a t-test for small samples with unknown variance, a z-test for large samples with known variance, with ANOVA used for interpretation); otherwise, other statistical models can be used, such as logistic regression when the independent variables are qualitative, design of experiments, biostatistics, etc.
4. Compare the tabulated and calculated values (an important method for deciding whether to reject or retain the null hypothesis).
5. Based on the above, interpret the values and draw conclusions: if the study rejects the null hypothesis, we can act on that result; otherwise, if the null hypothesis is retained, no action is taken.


SAMPATH P February 20, 2017 at 10:48 am

Nice to read all the comments and answers. I need to write an article on cancer data, so let me gather some ideas.


Srinivas Chalamalla October 28, 2016 at 12:46 pm

I thank you very much for your valuable information.


farah October 28, 2016 at 12:29 pm

Thank you so much for this nice explanation.


alen owen June 9, 2016 at 7:48 am

We would like to know at what stage you should check the normality of the data, given that this is an assumption that needs to hold to conduct a regression test. Otherwise, I like the piece. It is very informative.


Karen June 17, 2016 at 9:58 am

Hi Alen,

Actually, I have written an article about that. You may find it helpful: When to Check Model Assumptions


PIYUSH KUMAR MISHRA May 16, 2016 at 1:03 am

Hi Karen,
I have monthly data on patient deaths, including cause of death and manner of death. I want to perform an ARIMA analysis; please advise.


PIYUSH KUMAR MISHRA May 11, 2016 at 1:26 am

Nice explanation.


Priti April 12, 2016 at 5:04 pm

Hi Karen,
I want to build a model for the number of days to recover money. The predictor variables are all nominal; I made dummy variables for them too.
Will a negative binomial distribution be appropriate?
With one key variable, demandsent (4 levels), the outcome variable shows variance much higher than the mean at all levels.
The outcome variable ranges from 0 to 356 in the historical data; actually there is no upper bound, but days less than 0 are not of interest.


umar Muhd March 15, 2016 at 5:08 am

Thank you so much for this summarized and wonderful explanation. It is indeed educative.


Kaye January 19, 2016 at 7:10 am

Dear Karen,

I really enjoyed your threads and am amazed by the way you explain things so that they’re understandable even to a beginner like me. I came across your website by accident while looking for chi-square vs. logistic regression explanations. Thank you! A quick question: the place where I work is currently using a tool to categorize feeding difficulties in children, and it has never been validated. BUT it has been widely used worldwide (~8 countries), with studies proving its efficacy. Is it feasible to do research using this tool without going through a validation study? Thanks very much!



Ian February 15, 2015 at 7:05 am

Hi Karen,

Thanks for sharing insightful information. I’ve never collected primary data, and I have difficulties understanding it, as I am pretty skeptical about its validity. Nevertheless, is it possible to run a regression using primary data from a questionnaire that you personally designed?



Nick September 23, 2014 at 10:05 pm

Hi Karen, I find these resources so useful, thanks for sharing! I have a quick question: I’m currently working on a project using the Quadratic general rotary unitized design approach. Do you have any literature or links about this method? I have only been able to find limited resources about it.

Thanks in advance.

Nick Tayari
PHD Candidate.


Amar April 15, 2013 at 3:08 am

Thanks for such a good post.
This is a really very good explanation. One thing I would like to know about is the false prediction rate in the case of a classification model. How can we reduce the false prediction rate even after going through all the steps (variable selection, data cleaning, multicollinearity tests) of model improvement?


Karen April 19, 2013 at 2:43 pm

Hi Amar,

That’s a really big question to answer in detail. But here’s the short answer. Two ways:

1. Improve your model by including better predictors.
2. Change your classification cutoff. This won’t necessarily change the overall false prediction rate, but it will decrease false positives (or false negatives, depending on which way you move it), at the expense of increasing the other. It depends on which error is worse.


