Springer, 2023. — 483 p.
This book is a comprehensive curation, exposition, and illustrative discussion of recent research tools for the interpretability of Deep Learning models, with a focus on neural network architectures. In addition, it includes several case studies from application-oriented articles in Computer Vision, optics, and other Machine Learning-related fields. The book can serve both as a monograph on interpretability in Deep Learning covering the most recent topics and as a textbook for graduate students. Scientists with research, development, and application responsibilities will benefit from its systematic exposition.
Current graduate courses on Deep Learning, Machine Learning, and neural networks lack teaching and learning material on interpretability and explainability. This is mainly because the Machine Learning community has historically focused on predictive performance, whereas interpretability is an emergent topic. It is nonetheless gaining traction as an increasingly relevant subject, with books, lecture notes, new courses, and perspectives being published. However, because these works address Machine Learning in general, the question of interpretability in Deep Learning, which is now used ubiquitously across a large variety of Machine Learning applications, has not yet been treated in sufficient depth. This textbook is therefore one of the first dedicated to the topic. It may also foster the creation of specialized graduate courses, for which a need is perceived but the lack of organized material has been a prominent obstacle.