Member Training: Matrix Algebra for Data Analysts: A Primer

If you’ve been doing data analysis for any length of time, you’ve certainly come across the terms, concepts, and processes of matrix algebra. Not just matrices, but:

  • Matrix addition and multiplication
  • Traces and determinants
  • Eigenvalues and eigenvectors
  • Inverting and transposing
  • Positive and negative definiteness

These mathematical ideas are at the heart of calculating everything from regression models to factor analysis; from multivariate statistics to multilevel models.
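To make the terms above concrete, here is a minimal sketch in Python with NumPy (the library and matrix values are illustrative assumptions, not part of the webinar):

```python
# A quick tour of the matrix algebra terms above, using a small symmetric matrix.
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
B = np.array([[0.0, 1.0],
              [1.0, 0.0]])

# Matrix addition and multiplication
S = A + B          # element-wise sum
P = A @ B          # matrix product (rows of A times columns of B)

# Trace (sum of the diagonal) and determinant
tr = np.trace(A)        # 2 + 3 = 5
det = np.linalg.det(A)  # 2*3 - 1*1 = 5

# Eigenvalues and eigenvectors
eigvals, eigvecs = np.linalg.eig(A)

# Inverting and transposing
A_inv = np.linalg.inv(A)  # A @ A_inv gives the identity matrix
A_T = A.T                 # rows become columns

# A symmetric matrix is positive definite when all its eigenvalues are positive
is_pos_def = bool(np.all(eigvals > 0))
```

Here `A` is positive definite, which is exactly the property that makes quantities like variances and covariance matrices behave well in the analyses below.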

Data analysts can certainly get away without ever understanding matrix algebra. But there are times when even a basic understanding of how matrix algebra works, and what it has to do with data, can help your analyses make a little more sense.
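For example, the regression coefficients your statistical software reports come straight out of a matrix formula: the ordinary least squares estimates solve the normal equations, b = (X'X)⁻¹X'y. A minimal sketch with simulated data (the variable names and true coefficients here are illustrative assumptions):

```python
# Illustration: OLS regression estimates computed directly from the
# matrix formula b = (X'X)^{-1} X'y, using simulated data.
import numpy as np

rng = np.random.default_rng(0)
n = 100
x = rng.normal(size=n)
y = 1.0 + 2.0 * x + rng.normal(scale=0.5, size=n)  # true intercept 1, slope 2

# Design matrix: a column of ones (the intercept) plus the predictor
X = np.column_stack([np.ones(n), x])

# Transpose, multiply, invert -- the same operations listed above
beta_hat = np.linalg.inv(X.T @ X) @ X.T @ y
```

The estimates in `beta_hat` match what a least squares routine returns; in practice software uses numerically safer solvers than an explicit inverse, but the matrix algebra is the same.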

In this webinar we introduce some matrix algebra terms and methods that directly apply to the statistical analyses you’re already doing.


Note: This training is an exclusive benefit to members of the Statistically Speaking Membership Program and part of the Stat’s Amore Trainings Series. Each Stat’s Amore Training is approximately 90 minutes long.
Not a Member? Join!

About the Instructor

Karen Grace-Martin helps statistics practitioners gain an intuitive understanding of how statistics is applied to real data in research studies.

She has guided and trained researchers through their statistical analysis for over 15 years as a statistical consultant at Cornell University and through The Analysis Factor. She has master’s degrees in both applied statistics and social psychology and is an expert in SPSS and SAS.

Not a Member Yet?

It’s never too early to set yourself up for successful analysis with support and training from expert statisticians. Just head over and sign up for Statistically Speaking. You'll get access to this training webinar, 100+ other stats trainings, and a pathway to work through the trainings you need, plus expert guidance to build statistical skill with live Q&A sessions and an ask-a-mentor forum.

Reader Interactions

Comments

  1. Teresa Aloba Le says

    I find myself missing out on what I think would be valuable opportunities to attend your courses. The biggest hindrance for me is the time zone difference. I’m based in Melbourne, Australia.

  2. erling a bringeland says

    I much enjoyed your explanations of reversed signs on regression coefficients in ordinal regression, depending on the statistical package used. Recently I made an unexpected discovery using SPSS. I did an ordinal regression with three categories (response = 3, stable disease = 2, progression = 1) in the dependent variable. However, the test of parallel lines was significant (p = 0.039). For some reason I recoded the dependent variable to (progression = 3, stable disease = 2, and response = 1). As expected, all that happened was that the ORs were flipped (the betas shifting signs), the significance level of each beta was the same, the Nagelkerke R-squared the same, etc. BUT, the test of the parallel lines assumption came out highly non-significant, as ideally wished for (p = 0.9).
    Is there any way to logically explain this difference: one way of coding seemingly leading to an invalid model, the other way not?


Leave a Reply

Your email address will not be published.

Please note that, due to the large number of comments submitted, any questions on problems related to a personal study/project will not be answered. We suggest joining Statistically Speaking, where you have access to a private forum and more resources 24/7.