Comments (6)
Hi @Lyttonkeepfoing,
If you are looking to train ConfidNet, you should be using selfconfid_learner.py, as shown in the example config file
https://github.com/valeoai/ConfidNet/blob/master/confidnet/confs/selfconfid-classif.yaml
default_learner.py can be used for computing the MCP baseline or for comparing against the 'golden' topline TCP. Since the true class probability (TCP) is not supposed to be available at training time, it should be considered only as this golden topline, which has nearly perfect results; this also explains your observation that it achieves a low AURC.
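To make the MCP/TCP distinction concrete, here is a minimal sketch (the function names and data are illustrative, not taken from the repository) of how the two confidence scores are computed from softmax outputs:

```python
import numpy as np

def mcp_scores(probs):
    # Maximum Class Probability: confidence = highest softmax value,
    # available at test time without labels.
    return probs.max(axis=1)

def tcp_scores(probs, labels):
    # True Class Probability: softmax value of the *true* class.
    # Needs ground-truth labels, hence usable only as a topline, not a method.
    return probs[np.arange(len(labels)), labels]

probs = np.array([[0.7, 0.2, 0.1],
                  [0.6, 0.3, 0.1]])
labels = np.array([0, 1])
print(mcp_scores(probs))          # [0.7 0.6]
print(tcp_scores(probs, labels))  # [0.7 0.3]
```

On the second sample the classifier is wrong (predicts class 0, truth is class 1), and TCP (0.3) flags this much more clearly than MCP (0.6), which is why TCP ranks errors almost perfectly.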
Best,
Charles
from confidnet.
Thanks for responding! But I still have a question: why not upload selfconfid-cifar10-classif.yaml and selfconfid-cifar100-classif.yaml files directly? And what are the parameter settings for CIFAR-100 and the other datasets? The optimizer and learning rate are different from CIFAR-10, and I can't get accurate results if the parameters are different. I think this is nice work, and we want to make it a baseline for us. Your reply is really important to us!
The training of ConfidNet is similar regardless of the considered dataset: 500 epochs with the Adam optimizer and a learning rate of 10e-4, with dropout and the same data augmentation used in classification training. The best model can be selected based on AUPR-Error on the validation set.
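As a hedged sketch of that selection criterion (variable names are hypothetical; this uses scikit-learn's `average_precision_score` rather than the repository's own metric code): AUPR-Error treats the classifier's errors as the positive class and measures how well low confidence ranks them first.

```python
import numpy as np
from sklearn.metrics import average_precision_score

def aupr_error(confidences, errors):
    # Errors are the "positive" class; low confidence should rank them first,
    # so we score each sample with its negated confidence.
    return average_precision_score(errors, -confidences)

# Toy validation set: confidence per sample, and 1 where the classifier erred.
confidences = np.array([0.9, 0.8, 0.3, 0.2])
errors      = np.array([0,   0,   1,   1  ])
print(aupr_error(confidences, errors))  # 1.0: every error ranked below every correct sample
```

Checkpoint selection would then keep the epoch whose validation AUPR-Error is highest.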
Hence, in selfconfid-classif.yaml, you should only adapt the data block and the augmentations entry. This is why there is only one file, whereas there are different config files for classification. Note also that a lot of the improvement with ConfidNet comes from the second phase, fine-tuning the whole network.
Wishing you the best of luck with your project,
Thanks so much! I'm sorry, I have one more question:

```python
if "fpr_at_95tpr" in self.metrics:
    for i, delta in enumerate(
        np.arange(
            self.proba_pred.min(),
            self.proba_pred.max(),
            (self.proba_pred.max() - self.proba_pred.min()) / 10000,
        )
    ):
        tpr = len(self.proba_pred[(self.accurate == 1) & (self.proba_pred >= delta)]) / len(
            self.proba_pred[(self.accurate == 1)]
        )
        print(tpr, "tpr")
        if i % 100 == 0:
            print(f"Threshold:\t {delta:.6f}")
            print(f"TPR: \t\t {tpr:.4%}")
            print("------")
        if 0.9505 >= tpr >= 0.9495:
            print(f"Nearest threshold 95% TPR value: {tpr:.6f}")
            print(f"Threshold 95% TPR value: {delta:.6f}")
            fpr = len(
                self.proba_pred[(self.errors == 1) & (self.proba_pred >= delta)]
            ) / len(self.proba_pred[(self.errors == 1)])
            scores["fpr_at_95tpr"] = {"value": fpr, "string": f"{fpr:05.2%}"}
            break
```
Here, sometimes the value of tpr never falls inside the window, so the condition is never satisfied, scores["fpr_at_95tpr"] is never set, and I get a KeyError. Do you know what the problem is? When I train on CIFAR-10 everything is normal, but when I train on CIFAR-100 the tpr is really weird, e.g.: 0.0025906735751295338 tpr
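One possible workaround, sketched here as a standalone function with hypothetical names (this is not the repository's code): rather than requiring the sweep to land inside the narrow [0.9495, 0.9505] window, which the grid can skip when the confidence values cluster, take the largest threshold whose TPR is still at least 95%. The key is then always set.

```python
import numpy as np

def fpr_at_95tpr(proba_pred, accurate, errors):
    # Sweep distinct confidence values from high to low; TPR grows as the
    # threshold drops, so the first threshold with TPR >= 0.95 is the
    # largest one satisfying the constraint.
    for delta in np.sort(np.unique(proba_pred))[::-1]:
        tpr = np.mean(proba_pred[accurate == 1] >= delta)
        if tpr >= 0.95:
            return np.mean(proba_pred[errors == 1] >= delta)
    return 1.0  # degenerate fallback: accept every sample

proba_pred = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.3])
accurate   = np.array([1,   1,   1,   1,   0,   0  ])
errors     = 1 - accurate
print(fpr_at_95tpr(proba_pred, accurate, errors))  # 0.0
```

In this toy example the threshold settles at 0.6 (TPR = 1.0), and no error scores that high, so the FPR is 0.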
I think the key is this stride: (self.proba_pred.max() - self.proba_pred.min()) / 100000
Is there a solution where I don't need to change the "100000"?
Yes, you can edit the stride here; 100000 was a good trade-off between running time and accuracy in computing the TPR.
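A stride-free alternative, as a small hedged sketch (not code from the repository): sweep over every distinct predicted confidence instead of an arithmetic grid. The TPR only changes at these values, so no magic constant tied to the score range is needed.

```python
import numpy as np

# The TPR is a step function of the threshold, changing only at observed
# confidence values, so these are the only thresholds worth testing.
proba_pred = np.array([0.05, 0.10, 0.10, 0.80, 0.90, 0.95])
thresholds = np.unique(proba_pred)  # sorted distinct scores
print(len(thresholds))  # 5 candidate thresholds, regardless of the score range
```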