Comments (1)
Hi @gatihe -- Models tend to "think" differently, and if their performance is similar it is difficult to say which one better represents the underlying generative function. I'm not aware of a way to do this, at least. Perhaps @richcaruana has more thoughts on it.
The main benefit you get from using an EBM is that the EBM's global explanations are an exact and complete representation of the model itself, so you aren't limited to the approximate explanations you would need for a black-box model like a random forest. EBMs make no guarantees, however, about how well they match the underlying generative function. If the only thing you need is a feature importance metric, then I don't think the exactness of the explanation is a critical aspect.
There are also multiple ways to measure feature importance, so that's another thing to consider in your scenario. We offer the mean absolute score and the max-min score within the interpret package, but you can also calculate other alternatives yourself, such as the change in AUC when each feature is removed. Each of these feature importance metrics tells you something different about your model and data.
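As an illustration of the "change in AUC" alternative mentioned above, here is a minimal sketch using scikit-learn's `permutation_importance`, which approximates removing a feature by shuffling it and measuring the resulting AUC drop. A `GradientBoostingClassifier` on synthetic data is used as a stand-in model (an assumption for the sake of a self-contained example); the same call works with an `ExplainableBoostingClassifier` from interpret, since EBMs follow the scikit-learn estimator API.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic binary-classification data: 3 of 6 features are informative.
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Stand-in model; an interpret ExplainableBoostingClassifier works the same way.
clf = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Mean drop in held-out AUC when each feature is shuffled, averaged over repeats.
result = permutation_importance(clf, X_test, y_test, scoring="roc_auc",
                                n_repeats=10, random_state=0)
for i in np.argsort(result.importances_mean)[::-1]:
    print(f"feature {i}: mean AUC drop {result.importances_mean[i]:.3f}")
```

Note that a permutation-based AUC drop measures importance relative to this dataset and model, so it can rank features differently from the EBM's own mean-absolute-score importances; comparing the two rankings is itself informative.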