Comments (15)
Can you also give some insight into how the evaluation metrics in the code correspond to the ones reported in the paper? I am also a little confused about why overall accuracy is reported for the closed setting while the F-measure is reported for the open setting.
from openlongtailrecognition-oltr.
Hello @JasAva, besides the randomness of each training session, I think the version of PyTorch might also cause trouble sometimes. In addition, the learning rates we published may be a little different from the ones we used for the experiments; sometimes the numbers get mixed up. We are very sorry about this. As for the F-measure, we follow this paper: https://arxiv.org/abs/1511.06233 , please check it out. Thank you very much.
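For reference, the open-set F-measure in that line of work is essentially a macro-averaged F-score over the known classes, where predicting any known class on an open-set sample counts as a false positive. A minimal sketch (the `-1` label for unknowns and the function name are this sketch's own conventions, not necessarily the repo's):

```python
def openset_f_measure(preds, labels, unknown_label=-1):
    """Macro F-measure over known classes for open-set evaluation.

    Open-set (unknown) samples carry `unknown_label`; predicting a
    known class for them counts as a false positive for that class.
    """
    known = sorted(c for c in set(labels) if c != unknown_label)
    f_scores = []
    for c in known:
        tp = sum(p == c and l == c for p, l in zip(preds, labels))
        fp = sum(p == c and l != c for p, l in zip(preds, labels))
        fn = sum(p != c and l == c for p, l in zip(preds, labels))
        if tp == 0:
            f_scores.append(0.0)
        else:
            prec, rec = tp / (tp + fp), tp / (tp + fn)
            f_scores.append(2 * prec * rec / (prec + rec))
    # Average over known classes only; unknowns enter via fp/fn.
    return sum(f_scores) / len(f_scores)
```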
Hi @zhmiao, thanks for answering. I also think this might be caused by the learning rate.
Moreover, can you provide the trained models for stage 1 and stage 2? I'd like to benchmark the reported results.
Hi @zhmiao, I have the same problem: I can't reproduce your results with this version of the code. Could you provide the learning rates you used for the feature network and the classifier network, respectively? Thanks a lot.
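Separate learning rates for the feature and classifier networks are usually expressed as PyTorch optimizer parameter groups. A minimal sketch with stand-in modules and placeholder values (the actual OLTR networks and learning rates may differ; see the repo configs):

```python
import torch
import torch.nn as nn

# Stand-ins for the feature extractor and classifier networks.
feat_net = nn.Linear(16, 8)
clf_net = nn.Linear(8, 4)

# One optimizer, two parameter groups with separate learning rates.
# The 0.01 / 0.1 values here are illustrative placeholders only.
optimizer = torch.optim.SGD([
    {"params": feat_net.parameters(), "lr": 0.01},
    {"params": clf_net.parameters(), "lr": 0.1},
], momentum=0.9)
```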
@JasAva @jchhuang Yes, we will publish our pretrained models, maybe later this weekend or early next week. We will notify you as soon as they are published. Thanks.
@zhmiao Thanks for your quick reply. Could you also provide the detailed hyperparameters? I think many researchers would like to repeat the experiments themselves. Thanks.
@JasAva Have you reproduced the results claimed in the paper? Could you share some insights with me?
@JasAva @jchhuang We found some bugs in the currently published code, somewhere in MetaEmbeddingClassifier. They were caused by renaming variables to be consistent with the paper during the code release process. We will fix the bug as soon as possible. Thanks.
@JasAva @jchhuang We posted reimplemented ImageNet-LT weights using the current config; the numbers are very close to what we reported. We are reimplementing Places right now and will keep you updated. Thanks.
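For anyone benchmarking the released weights, the basic round trip is `torch.save` / `load_state_dict`. A minimal sketch with a stand-in model and an in-memory buffer (the real OLTR checkpoints bundle several networks, so the actual keys and file layout may differ):

```python
import io

import torch
import torch.nn as nn

# Stand-in model; the released checkpoints wrap multiple networks.
model = nn.Linear(4, 2)

# Round-trip the weights through an in-memory checkpoint; with a
# released file you would pass its path to torch.load instead.
buffer = io.BytesIO()
torch.save(model.state_dict(), buffer)
buffer.seek(0)

reloaded = nn.Linear(4, 2)
reloaded.load_state_dict(torch.load(buffer, map_location="cpu"))
```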
@zhmiao Thanks for updating the models. Just curious: there seem to be no changes in the code itself (you mentioned there is a bug somewhere in MetaEmbeddingClassifier?). Were the reimplemented models obtained using the current release?
@zhmiao Thanks for updating. However, the procedure for producing the results claimed in your paper would be more appreciated than just posting retrained model weights, because people may suspect that the performance of the retrained weights benefits from a larger dataset rather than from the algorithm itself.
@JasAva Yes, we have gone through the code; it seems that there was a bug in the evaluation functions rather than in the classifier.
@zhmiao Hi, is the bug you mentioned in the evaluation functions the replacement of `>>` and `<<` with `>>+` and `<<=` in the `shot_acc()` function?
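For context, a `shot_acc()`-style function bins classes by training-set frequency and averages per-class accuracy within each bin, so an off-by-one in a comparison operator moves boundary classes between bins. A sketch of the idea (the function body and the many/low thresholds here are illustrative, not the repo's exact code):

```python
def shot_acc(preds, labels, train_counts, many_thr=100, low_thr=20):
    """Mean per-class accuracy for many/median/low-shot class bins.

    train_counts[c] is the number of training samples for class c.
    The >100 (many) and <20 (low) thresholds follow a common split
    and may differ from the repo's exact boundaries.
    """
    correct, total = {}, {}
    for p, l in zip(preds, labels):
        correct[l] = correct.get(l, 0) + (p == l)
        total[l] = total.get(l, 0) + 1
    many, median, low = [], [], []
    for c in total:
        acc = correct[c] / total[c]
        if train_counts[c] > many_thr:    # a '>=' here would shift
            many.append(acc)              # boundary classes between bins
        elif train_counts[c] < low_thr:
            low.append(acc)
        else:
            median.append(acc)
    mean = lambda xs: sum(xs) / len(xs) if xs else float("nan")
    return mean(many), mean(median), mean(low)
```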
@JasAva @jchhuang @drcege Hello! Sorry for the late reply! As described in #50 (comment) , we finally debugged the published code, and the current open-set performance is:
============
Phase: test
Evaluation_accuracy_micro_top1: 0.361
Averaged F-measure: 0.501
Many_shot_accuracy_top1: 0.442
Median_shot_accuracy_top1: 0.352
Low_shot_accuracy_top1: 0.175
============
This is higher than what we reported in the paper. We updated some of the modules with the clone() method and set use_fc to False in the first stage. These changes lead to the proper results. Please give it a try. Thank you very much again.
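The kind of aliasing problem that clone() guards against: an in-place op on a tensor that shares storage with another silently mutates both. A minimal illustration (not the actual module code):

```python
import torch

x = torch.ones(3)

# Without clone(), an in-place op on an alias mutates x as well.
y = x
y += 1
# x is now tensor([2., 2., 2.]) -- changed through the alias.

# clone() gives an independent copy, so the original is untouched.
x = torch.ones(3)
z = x.clone()
z += 1
# x is still tensor([1., 1., 1.]); only z changed.
```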
For Places, the current config won't work either. The reason we could not get the reported results is that we forgot that in the first stage we actually did not freeze the weights; we only froze them in the second stage. We will update the corresponding code as soon as possible.
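Freezing only part of a model in PyTorch comes down to turning off requires_grad for the relevant parameters. A sketch of the second-stage setup with stand-in modules (the actual OLTR networks differ):

```python
import torch.nn as nn

backbone = nn.Linear(16, 8)   # stand-in for the feature network
classifier = nn.Linear(8, 4)  # stand-in for the classifier

# Second-stage-style freezing: stop gradients for the backbone only,
# so the optimizer updates just the classifier parameters.
for p in backbone.parameters():
    p.requires_grad = False
```

In the first stage, per the comment above, no such freezing is applied and both networks train.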
@JasAva @jchhuang @drcege Hello, we have updated configuration files for Places. Currently, the reproduced results are a little better than reported. Please check out the updates. Thanks!