Interpretable Machine Learning – June 2026
Event Phone: 1-610-715-0115
Upcoming Dates
- June 9: Interpretable Machine Learning, 10:30 AM-3:00 PM
Cancellation Policy: If you cancel your registration two weeks or more before the course is scheduled to begin, you are entitled to receive your choice of either a credit for a future seminar (which can be applied toward any of our courses) or a refund of the registration fee (minus a processing fee of $50).
In the unlikely event that Statistical Horizons LLC must cancel a seminar, we will do our best to inform you as soon as possible of the cancellation. You would then have the option of receiving a full refund of the seminar fee or a credit towards another seminar. In no event shall Statistical Horizons LLC be liable for any incidental or consequential damages that you may incur because of the cancellation.
A 4-Day Livestream Seminar Taught by Adam D. Rennhoff, Ph.D.
Machine learning models routinely outperform traditional statistical models in predictive accuracy, yet their complexity can make them difficult to understand and communicate. For many applied researchers, this lack of transparency can limit the adoption of powerful predictive tools.
This course offers a practical and conceptually clear introduction to interpretable machine learning. You will learn how to understand, explain, and trust complex machine learning models using modern tools that naturally connect to familiar statistical concepts, such as marginal effects, uncertainty quantification, and variable importance. The course emphasizes both global interpretation (how variables influence predictions on average) and local interpretation (why a specific prediction was made).
Hands-on demonstrations will be conducted in R, with equivalent Python code provided where feasible. No prior experience with machine learning beyond basic modeling knowledge is required.
After attending this seminar, you will be able to…
- Understand why machine learning models often outperform traditional models and why interpretability is essential.
- Differentiate between intrinsically interpretable models and black-box models.
- Use global interpretation tools such as partial dependence plots (PDPs), ALE plots, feature importance, and surrogate models.
- Use local interpretation tools such as individual conditional expectation (ICE) curves, LIME, anchors, and SHAP values.
- Interpret direction, magnitude, and heterogeneity of effects using model-agnostic tools.
- Apply bootstrapped uncertainty methods to assess statistical confidence in machine learning interpretations.
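To give a flavor of the global interpretation tools covered above, the sketch below computes a partial dependence curve by hand. This is not course material; it is a minimal illustration of the general idea, using a made-up toy model and NumPy rather than the R tools demonstrated in the seminar. A partial dependence plot varies one feature across a grid while averaging the model's predictions over the observed values of the other features.

```python
import numpy as np

# Hypothetical black-box model (stand-in for a fitted ML model):
# the prediction depends nonlinearly on feature 0 and linearly on feature 1.
def model(X):
    return np.sin(X[:, 0]) + 0.5 * X[:, 1]

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=(200, 2))  # toy dataset with two features

# Partial dependence of the prediction on feature 0:
# for each grid value, force every row's feature 0 to that value,
# predict, and average over the empirical distribution of the other feature.
grid = np.linspace(-3, 3, 25)
pdp = []
for g in grid:
    X_mod = X.copy()
    X_mod[:, 0] = g
    pdp.append(model(X_mod).mean())
pdp = np.array(pdp)

# The averaged curve recovers the sin shape of feature 0's effect,
# shifted by a constant (the mean contribution of feature 1).
```

The same loop without the final averaging step yields one curve per observation, which is exactly an individual conditional expectation (ICE) plot; the PDP is the pointwise mean of the ICE curves.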
Venue: Livestream Seminar