Comments (4)
Does your model really have 2000 iterations with substantial improvements up until the end? Could you reduce 0.8 to something like 0.05? Maybe the 1600 iterations being selected always capture all of the relevant models (which wouldn't surprise me).
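One way to check this is a quick spread comparison; a minimal sketch on my part, reusing the fitted gbm, X0, and total_iterations from the example further down in this thread:

# Compare the spread of shuffled predictions for a large vs. a small
# fraction of the trees. If alpha = 0.8 always captures the trees that
# matter, only the smaller alpha should produce noticeable variation.
for alpha in (0.8, 0.05):
    preds = [
        gbm.booster_.shuffle_models().predict(X0, num_iteration=int(alpha * total_iterations))[0]
        for _ in range(100)
    ]
    print(alpha, min(preds), max(preds))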
Thanks for your answer @tiagoleonmelo. I tried a lot of things, working on a toy dataset first for binary classification, then for regression (house prices). I also tried predicting with a single tree at a time after the shuffle, with very few leaves, etc. Every combination gives me the same outcome. Could you please share your result on a toy dataset, just so I can understand whether I'm missing something?
I just tried to reproduce what I initially posted and reached the same conclusion as you: all of the runs produced the same score.
Apparently I had a typo in my post: it should be num_iteration, not num_iterations (I have already edited the post so it doesn't mislead anyone in the future).
You should be able to get different predictions if you run this:
import lightgbm as lgb
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.33, random_state=42)

gbm = lgb.LGBMClassifier(
    boosting_type='gbdt',
    n_estimators=200,
    objective='binary',
    tree_learner='serial',
)
gbm.fit(X_train, y_train)

N = 100      # number of shuffled predictions to collect
alpha = 0.8  # fraction of trees used per prediction
total_iterations = gbm.booster_.current_iteration()

# Predict repeatedly on a single test row, shuffling the tree order each
# time and using only the first alpha * total_iterations trees.
X0 = X_test[0, :].reshape((1, X_test.shape[1]))

preds = []
for _ in range(N):
    pred = gbm.booster_.shuffle_models().predict(X0, num_iteration=int(alpha * total_iterations))
    preds.append(pred[0])
print(preds)
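From preds you can then read off an empirical interval. A minimal sketch (the 2.5/97.5 percentile cut for a 95% interval and the use of the full-model prediction as the point estimate are my assumptions, not something the snippet above prescribes):

import numpy as np

# Empirical 95% interval from the N shuffled-subset predictions above.
lower, upper = np.percentile(preds, [2.5, 97.5])
point = gbm.booster_.predict(X0)[0]  # full-model prediction as the point estimate
print(f"{lower:.4f} <= {point:.4f} <= {upper:.4f}")

Note that the point estimate uses all trees while each sample uses a random 80% subset, so the point can land near, or even outside, the empirical band.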
It works.
I've been working with a custom dataset that exhibits significant class imbalance, with around 90% of instances labeled as 0 and only 10% as 1. When attempting to compute a 95% confidence interval for predictions, I've noticed that the lower bound tends to be close to the actual prediction, while the upper bound tends to be disproportionately large for each prediction. Additionally, it appears that the magnitude of the lower and upper bounds correlates with the magnitude of the prediction itself:
LB | Pred | UB |
---|---|---|
0.012769 | 0.009298 | 0.137781 |
0.014389 | 0.010908 | 0.148293 |
0.024899 | 0.020048 | 0.208131 |
0.035270 | 0.031333 | 0.284767 |
0.052851 | 0.049575 | 0.355912 |
0.081938 | 0.081448 | 0.448467 |
0.101303 | 0.105715 | 0.510233 |
0.124052 | 0.138253 | 0.568761 |
So there seems to be a consistent trend: the model exhibits a relatively uniform level of certainty across predictions, with the interval width simply scaling with the prediction itself. In other words, there are no instances where the model displays high certainty (tight intervals) for some predictions while showing more uncertainty (wide intervals) for others.
Regarding the shuffle_models method, I'm uncertain how it works and whether it offers any guarantees similar to those provided by conformal prediction methods. Perhaps there is an issue either with my code or with the dataset itself. I'd greatly appreciate your insights and advice on how to address this. Thank you!
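As far as I know, shuffle_models just permutes the order of the trees so that truncated predictions use a random subset; it does not come with a coverage guarantee the way conformal methods do. For comparison, a minimal split-conformal sketch (my own illustration, not LightGBM functionality; it assumes a held-out calibration split X_cal, y_cal with 0/1 labels and a fitted classifier gbm):

import numpy as np

# Split conformal prediction: calibrate the interval width on held-out data.
# Hypothetical names: X_cal / y_cal are a calibration split the model never saw.
p_cal = gbm.predict_proba(X_cal)[:, 1]         # predicted P(y=1) on calibration rows
scores = np.abs(y_cal - p_cal)                 # nonconformity: absolute residual
n = len(scores)
level = min(np.ceil((n + 1) * 0.95) / n, 1.0)  # finite-sample corrected quantile level
q = np.quantile(scores, level)

p_test = gbm.predict_proba(X_test)[:, 1]
lower = np.clip(p_test - q, 0.0, 1.0)          # clip to the valid probability range
upper = np.clip(p_test + q, 0.0, 1.0)
# The width is constant (2 * q) for every row, i.e. not adaptive, but
# coverage of at least 95% holds marginally by construction.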
Related Issues (20)
- [python-package] NumPy 2.0 support
- LightGBM failed to test lightgbm.exe on MSVC
- Lightgbm trains much slower than catboost.
- Any suggestions for predicting all values to be 0?
- [python-package] How to refit a classifier?
- Can not predict with multithread?
- [ci] CUDA 11.8 wheel (gcc) CI jobs failing: 'libomp.so.5: no such file or directory'
- [GPU] lightgbm.basic.LightGBMError: Check failed: (best_split_info.left_count) > (0), lightgbm.basic.LightGBMError: Check failed: (best_split_info.right_count) > (0)
- [docs] add recommendations on memory management / reducing memory usage
- Clarification on Early Stopping Behavior with Multiple eval_set in LightGBM
- Windows 7 runtime error
- [python-package] reset_parameter() segfaults when passing an unrecognized parameter
- Inquiry about Release Date for GPU Training Bug "bin size 257" FIX
- [ci] [R-package] Add a CI job testing the R package on arm64 macOS
- Tests fail to open the file examples/binary_classification/binary.test
- [c++] Segmentation fault when use parallel_tree_learner method when number of categories of one feature larger than 28 (default max_cat_to_onehot-4).
- Error while using "LightGMB" on Fabric
- [ci] enforce 'shellcheck' checks
- LightGBM\include\LightGBM/utils/common.h(33,10): fatal error C1083: Cannot open include file: 'fast_double_parser.h': No such file or directory