O’Reilly Media, October 2019. — 62 p. — ISBN: 9781098115470.
Innovation and competition are driving analysts and data scientists toward increasingly complex predictive modeling and machine learning algorithms. This complexity makes these models accurate, but can also make their predictions difficult to understand. When accuracy outpaces interpretability, human trust suffers, affecting business adoption, model validation efforts, and regulatory oversight.
In the updated edition of this ebook, Patrick Hall and Navdeep Gill from H2O.ai introduce the idea of machine learning interpretability and examine a set of machine learning techniques, algorithms, and models that help data scientists improve the accuracy of their predictive models while maintaining a high degree of interpretability. While some industries, such as banking, insurance, and healthcare, require model transparency, machine learning practitioners in almost any vertical will likely benefit from incorporating the interpretable models and the debugging, explanation, and fairness approaches discussed here into their workflows.
This second edition discusses new, exact model explanation techniques and de-emphasizes the trade-off between accuracy and interpretability. It also includes up-to-date information on cutting-edge interpretability techniques and new figures illustrating the concepts of trust and understanding in machine learning models.