Introduction
InterpretML is an open-source library from Microsoft Research that unifies glass-box models (inherently interpretable) and black-box explanation techniques under a single API. Its flagship model, the Explainable Boosting Machine (EBM), achieves accuracy comparable to gradient boosting while remaining fully interpretable.
What InterpretML Does
- Trains glass-box models such as EBMs, linear models, and decision trees (see the quick-start sketch after this list)
- Explains any black-box model with SHAP, LIME, and partial dependence plots
- Provides interactive visualizations of feature importances and effects
- Supports both classification and regression tasks
- Enables comparison of multiple explanations side by side
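For orientation, here is a minimal quick-start sketch; the dataset and train/test split are illustrative choices, not part of the InterpretML documentation:

```python
# Minimal glass-box quick-start: train an EBM with the scikit-learn API.
from interpret.glassbox import ExplainableBoostingClassifier
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

ebm = ExplainableBoostingClassifier()  # sensible defaults out of the box
ebm.fit(X_train, y_train)
print(ebm.score(X_test, y_test))  # accuracy, via the sklearn estimator API
```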
Architecture Overview
InterpretML defines a common Explainer interface with explain_global and explain_local methods. Glass-box models implement both training and explanation natively, while black-box explainers wrap external models and produce explanations via perturbation-based methods such as kernel SHAP and LIME. All explanations are rendered as interactive Plotly dashboards; the sketch below shows the interface on a fitted EBM.
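This sketch continues the quick-start snippet above; show opens the Plotly dashboard when run in a notebook:

```python
from interpret import show

# Global explanation: overall feature importances and shape functions.
show(ebm.explain_global())

# Local explanation: per-prediction contributions for a few test rows.
show(ebm.explain_local(X_test[:5], y_test[:5]))
```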
Installation & Configuration
- Install via pip with pip install interpret; the package includes all glass-box models
- Use ExplainableBoostingClassifier or ExplainableBoostingRegressor as drop-in scikit-learn estimators
- Set max_bins and interactions to control EBM complexity (see the configuration sketch after this list)
- For black-box explanations, wrap any predict function with ShapKernel or LimeTabular
- Launch the interactive dashboard with show() in a Jupyter environment
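A configuration sketch, continuing the quick-start snippet from above; max_bins and interactions are real EBM parameters, but the values here are arbitrary examples:

```python
from interpret.glassbox import ExplainableBoostingClassifier
from interpret import show

# Fewer bins give smoother, simpler shape functions; `interactions`
# caps the number of pairwise terms EBM may add during training.
ebm = ExplainableBoostingClassifier(max_bins=128, interactions=5)
ebm.fit(X_train, y_train)

show(ebm.explain_global())  # interactive dashboard, inline in Jupyter
```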
Key Features
- EBM delivers accuracy comparable to gradient boosting while remaining fully interpretable
- Unified API across glass-box and black-box explanation methods
- Interactive Plotly-based dashboards for exploring feature effects
- Pairwise interaction detection built into the EBM training loop
- Differential privacy support for training on sensitive data (see the sketch after this list)
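As a sketch of the privacy feature, continuing the earlier snippets: the differentially private EBM variants live in interpret.privacy, and the epsilon/delta budget below is an arbitrary example, not a recommendation:

```python
from interpret.privacy import DPExplainableBoostingClassifier

# Trains an EBM under an (epsilon, delta) differential-privacy budget.
# Budget values are illustrative; choose ones appropriate to your data.
dp_ebm = DPExplainableBoostingClassifier(epsilon=1.0, delta=1e-6)
dp_ebm.fit(X_train, y_train)
```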
Comparison with Similar Tools
- SHAP — focuses on Shapley value explanations for any model; InterpretML includes SHAP plus its own glass-box models
- LIME — local explanation technique; InterpretML integrates LIME alongside other methods
- Alibi — strong on counterfactual and anchor explanations; InterpretML focuses on feature-level interpretation
- Captum — PyTorch-specific attribution methods; InterpretML is framework-agnostic
FAQ
Q: What is an Explainable Boosting Machine? A: EBM is a generalized additive model (GAM) with optional pairwise interactions, trained via cyclic gradient boosting that updates one feature at a time. Each feature's contribution is captured in its own shape function, so the fitted model can be inspected term by term.
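In standard GAM notation, the fitted model has the form g(E[y]) = β₀ + Σ fᵢ(xᵢ) + Σ fᵢⱼ(xᵢ, xⱼ), where g is a link function, each fᵢ is a per-feature shape function, and each fᵢⱼ is a pairwise interaction term; inspecting the model amounts to plotting these functions.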
Q: Does EBM work with large datasets? A: Yes. EBM scales well and supports parallel training via the n_jobs parameter.
Q: Can I use InterpretML with deep learning models? A: Yes. The black-box explainers work with any model that has a predict or predict_proba method.
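A sketch of wrapping an outside model, continuing the earlier snippets; the random forest stands in for any black box, and the explainer constructor details can vary slightly across interpret versions:

```python
from interpret.blackbox import LimeTabular
from interpret import show
from sklearn.ensemble import RandomForestClassifier

# Any fitted model exposing predict/predict_proba can be wrapped.
rf = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# LimeTabular perturbs rows around each instance to fit local surrogates.
lime = LimeTabular(rf, X_train)
show(lime.explain_local(X_test[:5], y_test[:5]))
```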
Q: Is InterpretML production-ready? A: Yes. EBM models can be serialized with joblib or pickle and served like any scikit-learn model.
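A serialization sketch, reusing the fitted ebm from the earlier snippets; the file name is an arbitrary example:

```python
import joblib

joblib.dump(ebm, "ebm.joblib")        # persist the fitted model
restored = joblib.load("ebm.joblib")  # reload for serving
print(restored.predict(X_test[:5]))   # behaves like any sklearn estimator
```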