License: https://github.com/solegalli/machine-learning-interpretability/blob/master/LICENSE
Sponsorship: https://www.trainindata.com/

Machine Learning Interpretability - Code Repository

Code repository for the online course Machine Learning Interpretability

Course launch: 30th November, 2023

Actively maintained.

Table of Contents

  1. Machine Learning Interpretability

    1. Interpretability in the context of Machine Learning
    2. Local vs Global Interpretability
    3. Intrinsically explainable models
    4. Post-hoc explainability methods
    5. Challenges to interpretability
    6. How to make models more explainable
  2. Intrinsically Explainable Models

    1. Linear and Logistic Regression
    2. Decision trees
    3. Random forests
    4. Gradient boosting machines
    5. Global and local interpretation
  3. Post-hoc methods - Global explainability

    1. Permutation Feature Importance
    2. Partial dependency plots
    3. Accumulated local effects
  4. Post-hoc methods - Local explainability

    1. LIME
    2. SHAP
    3. Individual conditional expectation
  5. Featuring the following Python interpretability libraries

    1. Scikit-learn
    2. treeinterpreter
    3. Eli5
    4. Dalex
    5. Alibi
    6. pdpbox
    7. Lime
    8. Shap
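As a taster of the global post-hoc methods listed above, here is a minimal sketch of permutation feature importance using scikit-learn. The dataset, model, and hyperparameters are illustrative choices, not taken from the course notebooks:

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(n_estimators=50, random_state=0)
model.fit(X_train, y_train)

# Shuffle each feature on the held-out set and measure the drop in score;
# features whose permutation hurts the score most matter most to the model.
result = permutation_importance(
    model, X_test, y_test, n_repeats=5, random_state=0
)

# Show the five most important features.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{X.columns[i]}: {result.importances_mean[i]:.3f}")
```

Because the importance is computed against held-out data, this estimate reflects how much the model relies on each feature for generalisation, not just for fitting the training set.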
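Partial dependence and individual conditional expectation curves, two further methods covered in the course, can likewise be computed with scikit-learn alone. The model, dataset, and the choice of the "bmi" feature below are illustrative stand-ins:

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.inspection import partial_dependence

X, y = load_diabetes(return_X_y=True, as_frame=True)
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# kind="average" returns the partial dependence (PDP): the prediction
# averaged over the data as the chosen feature varies over a grid.
pdp = partial_dependence(model, X, features=["bmi"], kind="average")

# kind="individual" returns ICE curves: one prediction curve per sample,
# which can reveal heterogeneous effects that the average hides.
ice = partial_dependence(model, X, features=["bmi"], kind="individual")

print(pdp["average"].shape)     # a single averaged curve over the grid
print(ice["individual"].shape)  # one curve per observation
```

PartialDependenceDisplay in the same module renders both kinds of curve directly from a fitted estimator.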
