Curriculum

  1. Book Preview
     1. Interpretability and Explainability in AI Using Python (free preview)
  2. Introduction (included in full purchase)
  3. Chapter 1: Interpreting Interpretable Machine Learning (included in full purchase)
  4. Chapter 2: Model Types and Interpretability Techniques (included in full purchase)
  5. Chapter 3: Interpretability Taxonomy and Techniques (included in full purchase)
  6. Chapter 4: Feature Effects Analysis with Plots (included in full purchase)
  7. Chapter 5: Post-Hoc Methods (included in full purchase)
  8. Chapter 6: Anchors and Counterfactuals (included in full purchase)
  9. Chapter 7: Interpretability in Neural Networks (included in full purchase)
  10. Chapter 8: Explainable Neural Networks (included in full purchase)
  11. Chapter 9: Explainability in Transformers and Large Language Models (included in full purchase)
  12. Chapter 10: Explainability and Responsible AI (included in full purchase)
  13. Index (included in full purchase)

About the course

Interpretability in AI/ML refers to the ability to understand and explain how a model arrives at its predictions. It ensures that humans can follow the model's reasoning, making it easier to debug, validate, and trust.

Interpretability and Explainability in AI Using Python takes you on a structured journey through interpretability and explainability techniques for both white-box and black-box models. You'll start with foundational concepts in interpretable machine learning, exploring different model types and their transparency levels. As you progress, you'll dive into post-hoc methods, feature effects analysis, anchors, and counterfactuals, powerful tools for decoding complex models. The book also covers explainability in deep learning, including neural networks, Transformers, and Large Language Models (LLMs), equipping you with strategies to uncover decision-making patterns in AI systems.

Through hands-on Python examples, you'll learn how to apply these techniques in real-world scenarios. By the end, you'll be well-versed in choosing the right interpretability methods, implementing them efficiently, and ensuring AI models align with ethical and regulatory standards, giving you a competitive edge in the evolving AI landscape.
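To give a flavor of the post-hoc techniques the book covers, here is a minimal sketch of permutation importance, one of the simplest model-agnostic explainability methods: shuffle one feature at a time and measure how much the model's error grows. The toy data and stand-in model below are illustrative assumptions, not examples from the book, and a real workflow would use a fitted model and a held-out set.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: y depends strongly on feature 0, weakly on feature 1,
# and not at all on feature 2.
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

def model(X):
    # Stand-in for any fitted black-box predictor.
    return 3.0 * X[:, 0] + 0.5 * X[:, 1]

def mse(y_true, y_pred):
    return float(np.mean((y_true - y_pred) ** 2))

def permutation_importance(model, X, y, n_repeats=10):
    """Importance of each feature = average increase in MSE
    when that feature's column is randomly shuffled."""
    baseline = mse(y, model(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        increases = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])
            increases.append(mse(y, model(X_perm)) - baseline)
        importances[j] = np.mean(increases)
    return importances

imp = permutation_importance(model, X, y)
# Feature 0 should dominate; feature 2 should score near zero.
```

Because the method only needs predictions, it works unchanged on any black-box model, which is exactly the appeal of the post-hoc techniques surveyed in Chapters 5 and 6.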

About the Author

Aruna Chakkirala is a seasoned technical leader who currently serves as an AI Solutions Architect at Microsoft. She was instrumental in the early adoption of Generative AI and constantly strives to keep pace with the evolving domain. As a Data Scientist, she has built supervised and unsupervised models to address cybersecurity problems, and she holds a patent for her pioneering work in community detection for DNS querying. Her technical expertise spans multiple domains, including Networks, Security, Cloud, Big Data, and AI. She believes that the success of real-world AI applications increasingly depends on well-defined architectures that span all the domains involved. Her current interests include Generative AI, applications of LLMs and SLMs, Causality, Mechanistic Interpretability, and Explainability tools.