MCDS Seminar Series: Interpretability vs. Explainability in Machine Learning (VIDEO RECORDING AND SLIDES AVAILABLE)


More Information

mcds@unimelb.edu.au

Our second virtual seminar in the 2020 series was held on 26 June. A video recording and accompanying slides from the webinar can be viewed below.

Our centre was pleased to host Cynthia Rudin, professor of computer science, electrical and computer engineering, and statistical science at Duke University. Previously, Professor Rudin held positions at MIT, Columbia, and NYU. Her degrees are from the University at Buffalo and Princeton University. She is a three-time winner of the INFORMS Innovative Applications in Analytics Award, was named one of the "Top 40 Under 40" by Poets and Quants in 2015, and was named one of the 12 most impressive professors at MIT by Businessinsider.com in 2015. She has served on committees for INFORMS, the National Academies, the American Statistical Association, DARPA, the NIJ, and AAAI. She is a fellow of both the American Statistical Association and the Institute of Mathematical Statistics. She is a Thomas Langford Lecturer at Duke University for 2019-2020. Professor Rudin's website: https://users.cs.duke.edu/~cynthia/

Seminar Title: Interpretability vs. Explainability in Machine Learning

Webinar slides: Download

Abstract: With the widespread use of machine learning, there have been serious societal consequences from using black box models for high-stakes decisions, including flawed bail and parole decisions in criminal justice. Explanations for black box models are not reliable and can be misleading. If we use interpretable machine learning models, they come with their own explanations, which are faithful to what the model actually computes.
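
To make the distinction concrete, here is a minimal sketch (not taken from the talk): a post-hoc surrogate tree that only approximates a black box's predictions, versus the same small tree trained directly on the data, whose printed structure is the model itself. The dataset and model classes are arbitrary stand-ins.

```python
# Hedged sketch: post-hoc surrogate "explanation" of a black box vs. an
# inherently interpretable model. Illustrative only; not code from the talk.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = make_classification(n_samples=2000, n_features=6, random_state=0)

# Black box: accurate, but its internal logic is opaque.
black_box = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Post-hoc explanation: a surrogate tree fit to the black box's *predictions*.
# It approximates the black box and can disagree with it on individual cases.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0)
surrogate.fit(X, black_box.predict(X))
print("Surrogate fidelity:", (surrogate.predict(X) == black_box.predict(X)).mean())

# Inherently interpretable model: the same small tree trained on the data itself.
# Its printed structure *is* the model, so the explanation is exact by construction.
interpretable = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(interpretable, feature_names=[f"x{i}" for i in range(6)]))
```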

In this talk, discussion will cover some of the reasons that black boxes with explanations can go wrong, whereas using inherently interpretable models would not have these same problems. An example will be given of an explanation of a black box model going wrong, namely ProPublica's analysis of the COMPAS model used in the criminal justice system: ProPublica's explanation of the black box model COMPAS was flawed because it relied on incorrect assumptions to identify the race variable as important. Luckily, in recidivism prediction applications, black box models are not needed, because inherently interpretable models exist that are just as accurate as COMPAS.
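
As a deliberately simplified illustration of the general pitfall (synthetic data, not the COMPAS data, and not ProPublica's actual methodology), the sketch below shows how an outside analysis of a black box's outputs can attribute importance to a variable the box never uses, when that variable is correlated with an input the analysis fails to account for.

```python
# Toy illustration with synthetic data: a black box that never sees a sensitive
# attribute can still appear to depend on it, if an outside analysis omits a
# correlated variable the box actually uses.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000
group = rng.integers(0, 2, n)                # sensitive attribute, never seen by the box
priors = rng.poisson(1.0 + 1.5 * group)      # correlated with the sensitive attribute
age = rng.normal(35 - 3 * group, 8)

# The "black box" decides using only priors and age.
decision = ((priors >= 2) | (age < 25)).astype(int)

# The outside analysis cannot observe priors, so it regresses the decisions
# only on the variables it does have.
X_obs = np.column_stack([group, age])
analysis = LogisticRegression().fit(X_obs, decision)
print(dict(zip(["group", "age"], np.round(analysis.coef_[0], 2))))
# The coefficient on "group" comes out clearly positive even though the black box
# never used it: "group" stands in for the omitted, correlated variable.
```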

Examples of interpretable models in healthcare will also be given. One of these models, the 2HELPS2B score, is actually used in intensive care units in hospitals; most machine learning models cannot be used when the stakes are so high.
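
For readers unfamiliar with clinical scoring systems, the sketch below shows the general shape of such a model: a handful of binary findings, each worth a small number of points, with the total mapped to a risk estimate that can be checked by hand. The feature names, point values, and risk table are placeholders, not the actual 2HELPS2B items.

```python
# Minimal sketch of a point-based scoring system of the kind the talk refers to.
# Feature names, points, and risk values below are hypothetical placeholders.
SCORECARD = {                 # binary findings -> points
    "feature_a_present": 1,
    "feature_b_present": 1,
    "feature_c_present": 2,
}
RISK_BY_TOTAL = {0: 0.05, 1: 0.12, 2: 0.27, 3: 0.50, 4: 0.73}  # hypothetical calibration

def score(patient: dict) -> tuple[int, float]:
    """Return (total points, estimated risk) for a patient's binary findings."""
    total = sum(points for name, points in SCORECARD.items() if patient.get(name))
    return total, RISK_BY_TOTAL[total]

print(score({"feature_a_present": True, "feature_c_present": True}))  # -> (3, 0.5)
```

Because the whole model fits on an index card, a clinician can verify every step of a prediction, which is what makes such models usable when the stakes are this high.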

Finally, there will be a discussion of two long-term projects Professor Rudin's lab is working on: optimal sparse decision trees and interpretable neural networks.
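
As a rough illustration of what "optimal sparse decision trees" optimize, the sketch below evaluates candidate trees under an objective of the form training error plus a per-leaf penalty. A greedy scikit-learn tree stands in for the exact search used in that line of research, and the penalty value and dataset are arbitrary; this is not the lab's code.

```python
# Hedged sketch: trade accuracy against tree size via an error + lambda * (#leaves)
# objective, using greedy trees as stand-ins for an exact optimal-tree search.
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=3000, n_features=10, random_state=1)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=1)

LAMBDA = 0.005  # sparsity penalty per leaf (hypothetical value)

def regularized_objective(tree, X, y, lam=LAMBDA):
    """Training error plus lam * (number of leaves); smaller is better."""
    error = 1.0 - tree.score(X, y)
    return error + lam * tree.get_n_leaves()

# Compare candidate trees of increasing size under the sparsity-penalized objective.
for max_leaves in (2, 4, 8, 16, 32):
    tree = DecisionTreeClassifier(max_leaf_nodes=max_leaves, random_state=1).fit(X_tr, y_tr)
    print(max_leaves, round(regularized_objective(tree, X_tr, y_tr), 4),
          "test acc:", round(tree.score(X_te, y_te), 3))
```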