# InterpretML — Interpretable Machine Learning by Microsoft

> InterpretML provides glass-box models like Explainable Boosting Machines and black-box explainers in a unified API, helping data scientists understand why models make specific predictions.

## Install

```bash
pip install interpret
```

## Quick Use

```python
from interpret import show
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2)

# Train a glass-box EBM; it behaves as a drop-in scikit-learn estimator
ebm = ExplainableBoostingClassifier()
ebm.fit(X_train, y_train)
print(f"Accuracy: {ebm.score(X_test, y_test):.3f}")

# Render the global explanation as an interactive dashboard
ebm_global = ebm.explain_global()
show(ebm_global)
```

## Introduction

InterpretML is an open-source library from Microsoft Research that unifies interpretable (glass-box) models and explainability techniques for black-box models under a single API. Its flagship model, the Explainable Boosting Machine (EBM), achieves accuracy comparable to gradient boosting while remaining fully interpretable.

## What InterpretML Does

- Trains glass-box models such as EBM, linear models, and decision trees
- Explains any black-box model with SHAP, LIME, and partial dependence
- Provides interactive visualizations of feature importances and effects
- Supports both classification and regression tasks
- Enables side-by-side comparison of multiple explanations

## Architecture Overview

InterpretML defines an Explainer interface with explain_global and explain_local methods. Glass-box models implement both training and explanation natively. Black-box explainers wrap external models and produce explanations via perturbation or gradient methods. All explanations are rendered as interactive Plotly dashboards.

## Self-Hosting & Configuration

- Install via pip; the package includes all glass-box models
- Use ExplainableBoostingClassifier or ExplainableBoostingRegressor as drop-in sklearn estimators
- Set max_bins and interactions to control EBM complexity
- For black-box explanations, wrap any predict function with ShapKernel or LimeTabular (see the sketch after this list)
- Launch the interactive dashboard with show() in a Jupyter environment
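To make the black-box bullet concrete, here is a minimal sketch that wraps a fitted model with interpret's LimeTabular explainer. The RandomForestClassifier, the dataset split, and the random seeds are illustrative choices rather than part of the original text, and the LimeTabular constructor arguments have changed across interpret releases, so check the signature for your installed version.

```python
from interpret import show
from interpret.blackbox import LimeTabular
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

# Any fitted model with predict/predict_proba can be explained
rf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Wrap the black-box model; recent interpret versions accept (model, data),
# while older ones used predict_fn= and data= keyword arguments
lime = LimeTabular(rf, X_train)

# Local explanations for a handful of test rows, rendered in the dashboard
lime_local = lime.explain_local(X_test[:5], y_test[:5])
show(lime_local)
```

Swapping LimeTabular for ShapKernel follows the same pattern, which is the point of the unified Explainer interface described above.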
## Key Features

- EBM matches gradient boosting accuracy with full interpretability
- Unified API across glass-box and black-box explanation methods
- Interactive Plotly-based dashboards for exploring feature effects
- Pairwise interaction detection built into the EBM training loop
- Differential privacy support for training on sensitive data

## Comparison with Similar Tools

- **SHAP** — focuses on Shapley value explanations for any model; InterpretML includes SHAP plus its own glass-box models
- **LIME** — local explanation technique; InterpretML integrates LIME alongside other methods
- **Alibi** — strong on counterfactual and anchor explanations; InterpretML focuses on feature-level interpretation
- **Captum** — PyTorch-specific attribution methods; InterpretML is framework-agnostic

## FAQ

**Q: What is an Explainable Boosting Machine?**
A: EBM is a generalized additive model with pairwise interactions, trained via cyclic gradient boosting. Each feature's effect is learned independently, making the model fully inspectable.

**Q: Does EBM work with large datasets?**
A: Yes. EBM scales well and supports parallel training via the n_jobs parameter.

**Q: Can I use InterpretML with deep learning models?**
A: Yes. The black-box explainers work with any model that exposes a predict or predict_proba method.

**Q: Is InterpretML production-ready?**
A: Yes. EBM models can be serialized with joblib or pickle and served like any scikit-learn model; a minimal sketch follows below.
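As a follow-up to the production question, here is a minimal persistence sketch. The training snippet and the file name ebm_model.joblib are illustrative assumptions; the serialization itself uses joblib exactly as for any scikit-learn estimator, per the FAQ answer above.

```python
import joblib
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer

X, y = load_breast_cancer(return_X_y=True)
ebm = ExplainableBoostingClassifier().fit(X, y)

# Persist the fitted EBM the same way as any scikit-learn estimator
joblib.dump(ebm, "ebm_model.joblib")

# Later, e.g. inside a serving process, reload and predict
restored = joblib.load("ebm_model.joblib")
print(restored.predict_proba(X[:3]))
```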