Comments (21)
If you are using my code and config files without any changes, you should test the fifth checkpoint, or change "max_epoch" in "metatune.data" from 2000 to 1000 for 5-shot detection.
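For anyone making that change in a script rather than by hand, here is a minimal sketch, assuming metatune.data uses simple key=value lines (the file path and the "max_epoch" key come from the comment above; the exact file format is an assumption):

```python
# Sketch: lower max_epoch in a Darknet-style .data file,
# assuming one "key=value" pair per line.
def set_max_epoch(path, value):
    with open(path) as f:
        lines = f.readlines()
    with open(path, "w") as f:
        for line in lines:
            key = line.split("=")[0].strip()
            if key == "max_epoch":
                f.write("max_epoch=%d\n" % value)
            else:
                f.write(line)

# e.g. set_max_epoch("cfg/metatune.data", 1000)
```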
Hi,
This is the way I tested before:
(1) python valid_ensemble.py cfg/metatune.data cfg/darknet_dynamic.cfg cfg/reweighting_net.cfg backup/metatunetest1_novel0_neg0/000010.weights
(2) python scripts/voc_eval.py results/metatunetest1_novel0_neg0/ene000010/comp4_det_test_
I changed 000010.weights to 000005.weights, but the result is still like this.
from fewshot_detection.
Did you fine-tune with both novel & base classes or just novel classes?
Hi, I have a question: what does this weight file mean? Is it the result of base training?
Hi, I trained according to your steps.
What does this weight file mean?
Got the same problem as @bitwangdan. The final evaluation results are not the same as those @bitwangdan reproduced here, but they are similar.
I have also tried changing 000010.weights to 00000x.weights, and the evaluation results stay the same.
@bitwangdan @XinyiYS
Hi, can I ask how to install PyTorch 0.3.1 with Python 2.7?
The error says I can't install numpy with Python 2.7, and the Python version needs to be >=3.5 when I install torch and torchvision.
Did you have this problem when running the source code?
Thanks.
Got the same problem as @bitwangdan. The final evaluation results are not the same as those @bitwangdan reproduced here, but they are similar.
I have also tried changing 000010.weights to 00000x.weights, and the evaluation results stay the same.
You should carefully analyze the eval code: delete the annts.pkl file before evaluating, so the evaluation result reflects the current run.
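A small sketch of clearing that cache before re-running voc_eval.py; the annts.pkl name comes from the comment above, while its location is an assumption you would adjust for your setup:

```python
import os

# Sketch: remove the cached annotation pickle so the eval script
# rebuilds it from the current annotations instead of a stale cache.
def clear_annotation_cache(cache_path):
    if os.path.exists(cache_path):
        os.remove(cache_path)
        print("removed %s" % cache_path)
    else:
        print("no cache at %s" % cache_path)

# e.g. clear_annotation_cache("results/annts.pkl") before running voc_eval.py
```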
Got the same problem as @bitwangdan. The final evaluation results are not the same as those @bitwangdan reproduced here, but they are similar.
I have also tried changing 000010.weights to 00000x.weights, and the evaluation results stay the same.
Excuse me, is your training setting exactly the same as the author's?
Thanks!
Yes, I followed the configurations and settings of the repo exactly. The results after base training were on par with the author's, but I cannot get results after few-shot fine-tuning that are as good as the author's.
On a side note about GitHub: why, all of a sudden, am I reading some sort of Mandarin translation of my previous comment? lol
When I train for 1500 epochs on two 1080Ti GPUs, the best base-class evaluation result is as shown,
and I don't know why the base result is poor.
Did you train the base model with four GPUs?
Thanks for your reply!
Is this after base training alone, or after base training AND fine-tuning?
Yeah, I carried out the base training with four GPUs.
This is after base training alone.
Not sure if using 2 GPUs instead of 4 would affect the effectiveness of training. The authors are probably best equipped to answer your question.
Naively, I think you might try loading the last weights file of the model and continuing base training for some more epochs to see if the evaluation results improve. In my opinion, the base training should not cause much trouble, since it mostly relies on the YOLOv2/darknet backbone and is essentially training a normal supervised object detection model with a well-tested architecture (YOLOv2).
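For the resume-from-last-checkpoint suggestion, a sketch of picking the newest .weights file from a backup directory; the zero-padded numbering (e.g. 000010.weights) follows the commands earlier in the thread, but is otherwise an assumption:

```python
import os
import re

# Sketch: find the highest-numbered checkpoint (e.g. 000010.weights)
# in a backup directory, to resume base training from it.
def latest_checkpoint(backup_dir):
    best, best_num = None, -1
    for name in os.listdir(backup_dir):
        m = re.match(r"^(\d+)\.weights$", name)
        if m and int(m.group(1)) > best_num:
            best_num = int(m.group(1))
            best = os.path.join(backup_dir, name)
    return best

# e.g. latest_checkpoint("backup/metatunetest1_novel0_neg0")
```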
Thanks for your reply; I will continue to train the base model.
I shared my trained model (only after base training) here: #19
Maybe you could use the weights directly instead of trying to figure out how to get the base training to work.
Thanks for your suggestion. I mainly use this method for other work and to improve performance. In short, thank you very much!
Hello, what is the configuration of your server environment? Following the author's configuration, I trained for 150 epochs, but my detection results on the base classes are much worse than yours.
Thanks for your reply!
Thank you very much for your code; it is very interesting. But I have a question.
This is the result of my base training.
This is the result of my few-shot tuning.
The mAP of the novel classes is better, but the base classes are worse. Is there something wrong?
Hello, how many epochs did you train on the base classes to get this result?
Thanks~
Hello, what is the configuration of your server environment? Following the author's configuration, I trained for 150 epochs, but my detection results on the base classes are much worse than yours.
I followed the author's config and environment pretty much exactly, so I am not sure why training on different servers gives quite different results. Perhaps check the paths to your downloaded datasets to make sure there are no hidden bugs.
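As a sanity check of the dataset paths, a sketch assuming the train/valid lists are plain text files with one image path per line (typical for Darknet-style setups; the exact format here is an assumption):

```python
import os

# Sketch: report every image path in a Darknet-style list file
# that does not exist on disk, to catch hidden dataset-path bugs.
def check_image_list(list_file):
    missing = []
    with open(list_file) as f:
        for line in f:
            path = line.strip()
            if path and not os.path.exists(path):
                missing.append(path)
    return missing

# e.g. check_image_list("voc_train.txt") should return an empty list
```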
Hi, have the novel classes been used to train the base model?
Related Issues (20)
- Can you share the t-SNE code and explain how to use it? Thanks
- Which version of cuda is needed
- RuntimeError: The size of tensor a (3) must match the size of tensor b (864) at non-singleton dimension 3 HOT 5
- Strange Code in Dataset.py
- Strange about the meta-learning?
- few-shot training issues
- TypeError: conv2d() received an invalid combination of arguments HOT 6
- map all get 0 HOT 5
- Question about the paper
- Request for learnet module
- Inconsistencies in COCO splits HOT 1
- How long does the training step take? HOT 1
- TypeError: conv2d() received an invalid combination of arguments HOT 1
- ValueError: Expected input batch_size (20) to match target batch_size (8).
- ???
- How to duplicate the results?
- TypeError: only integer tensors of a single element can be converted to an index
- AttributeError:"Easydict" object has no attribute "data"
- After training completes, inference takes only 3 ms, but the box filter takes hundreds of milliseconds. Why is that? HOT 5
- expected tensor [64 x 32 x 3 x 3] and src [1776] to have the same number of elements, but got 18432 and 1776 elements respectively HOT 1