A Science News article from July 2014 was titled “Scientists’ grasp of confidence intervals doesn’t inspire confidence.” Perhaps that is why only 11% of the articles in the 10 leading psychology journals in 2006 reported confidence intervals in their statistical analyses.
How important is it to be able to create and interpret confidence intervals?
The American Psychological Association Publication Manual, which sets the editorial standards for over 1,000 journals in the behavioral, life, and social sciences, has begun emphasizing parameter estimation and de-emphasizing Null Hypothesis Significance Testing (NHST).
Its most recent edition, the sixth, published in 2009, states “estimates of appropriate effect sizes and confidence intervals are the minimum expectations” for published research.
In this webinar, we’ll clear up the ambiguity about what exactly a confidence interval is and how to interpret one in both table and graph formats. We will also explore how confidence intervals are calculated for continuous and dichotomous outcome variables in various types of samples, and see the impact sample size has on the width of the interval. We’ll discuss related concepts like equivalence testing.
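As a small taste of the calculations covered, here is a minimal sketch (not webinar material, just an illustration) of approximate 95% confidence intervals for a mean (continuous outcome) and a proportion (dichotomous outcome), using the familiar normal-approximation formulas. It also shows how the width depends on sample size: quadrupling n halves the width.

```python
import math

def mean_ci(data, z=1.96):
    """Approximate 95% CI for a mean: xbar +/- z * s / sqrt(n)."""
    n = len(data)
    mean = sum(data) / n
    var = sum((x - mean) ** 2 for x in data) / (n - 1)  # sample variance
    se = math.sqrt(var / n)                             # standard error
    return mean - z * se, mean + z * se

def proportion_ci(successes, n, z=1.96):
    """Wald 95% CI for a proportion: phat +/- z * sqrt(phat(1-phat)/n)."""
    p = successes / n
    se = math.sqrt(p * (1 - p) / n)
    return p - z * se, p + z * se

# Same observed proportion (30%), but four times the sample size:
lo1, hi1 = proportion_ci(30, 100)
lo2, hi2 = proportion_ci(120, 400)
print(f"n=100 width: {hi1 - lo1:.3f}")  # wider
print(f"n=400 width: {hi2 - lo2:.3f}")  # exactly half as wide
```

Because the standard error shrinks with the square root of n, the n=400 interval is exactly half the width of the n=100 interval here.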
By the end of the webinar, we anticipate your grasp of confidence intervals will inspire confidence.
Note: This training is an exclusive benefit to members of the Statistically Speaking Membership Program and part of the Stat’s Amore Trainings Series. Each Stat’s Amore Training is approximately 90 minutes long.
Standardized regression coefficients remove the units of measurement of the predictor and outcome variables. They are sometimes called betas, but I don’t like to use that term because too many other, often related, concepts are also called beta.
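To make the idea concrete, here is a minimal sketch (an illustration with made-up data, not part of the training) of the textbook relationship for simple regression: the standardized coefficient equals the unstandardized slope rescaled by the ratio of standard deviations, which is the same value you get by standardizing both variables before fitting.

```python
import math

def sd(v):
    """Sample standard deviation."""
    m = sum(v) / len(v)
    return math.sqrt(sum((x - m) ** 2 for x in v) / (len(v) - 1))

def slope(x, y):
    """OLS slope for a simple regression of y on x."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sum((a - mx) ** 2 for a in x)
    return num / den

# Hypothetical data for illustration only
x = [1.0, 2.0, 3.0, 4.0, 5.0]
y = [2.1, 3.9, 6.2, 7.8, 10.1]

b = slope(x, y)            # unstandardized: units of y per unit of x
beta = b * sd(x) / sd(y)   # standardized: SDs of y per SD of x

# Standardizing both variables first gives the same coefficient:
zx = [(a - sum(x) / len(x)) / sd(x) for a in x]
zy = [(a - sum(y) / len(y)) / sd(y) for a in y]
assert abs(slope(zx, zy) - beta) < 1e-9
```

In the one-predictor case this standardized coefficient is simply the correlation between x and y; with multiple predictors the same rescaling applies coefficient by coefficient.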
There are many good reasons to report them: