Comments (15)
Multiple versions of my RetinaNet also reached an mAP of about 80% on HRSC2016, which is consistent with your results. Thank you again for your experiments and nice work! 😆
from s2anet.
S2ANet reaches an mAP higher than 89%, while RetinaNet obtains only 56%.
from s2anet.
HRSC2016 is very sensitive to the hyperparameters, e.g., lr, schedules, warmup_iters.
My experiments on HRSC2016 with RetinaNet also get poor performance. But I think it will improve with careful tuning of the hyperparameters.
You can validate the AP with a different lr (e.g., 0.05 for 4 GPUs), a different schedule (e.g., 24 epochs), fewer warmup_iters, and so on.
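The overrides suggested above can be sketched as an mmdetection-style config fragment. This is a sketch, not the repo's actual config: the key names are assumed from mmdetection 1.x conventions (which s2anet builds on), and only the values (lr 0.05 for 4 GPUs, 24 epochs, fewer warmup iterations) come from the comment:

```python
# Hypothetical config overrides for HRSC2016 tuning. Key names follow
# mmdetection 1.x conventions (an assumption); values are from the comment.
optimizer = dict(type='SGD', lr=0.05, momentum=0.9, weight_decay=0.0001)  # 0.05 for 4 GPUs

lr_config = dict(
    policy='step',
    warmup='linear',
    warmup_iters=100,      # fewer warmup iterations than the common default of 500
    warmup_ratio=1.0 / 3,
    step=[16, 22],         # lr decay points sized for a 24-epoch schedule
)

total_epochs = 24          # longer (2x) schedule instead of 1x
```

With config-file-based training, fragments like this are typically merged over the base config rather than written from scratch.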
from s2anet.
> HRSC2016 is very sensitive to the hyperparameters, e.g., lr, schedules, warmup_iters.
> My experiments on HRSC2016 with RetinaNet also get poor performance. But I think it will improve with careful tuning of the hyperparameters.
> You can validate the AP with a different lr (e.g., 0.05 for 4 GPUs), a different schedule (e.g., 24 epochs), fewer warmup_iters, and so on.
But it works well on DOTA... It's amazing that with the same config file it reached an mAP higher than 70%.
from s2anet.
Thanks for your code. I cannot reproduce the HRSC2016 results reported in your paper; what may be wrong?
I have changed the lr to 0.001, and the other settings are the same.
@csuhan @ming71
from s2anet.
When I keep the learning rate at 0.01, the loss becomes too large, like this:
2020-10-26 10:45:04,367 - INFO - Epoch [7][200/219] lr: 0.01000, eta: 0:24:59, time: 0.249, data_time: 0.002, memory: 2323, loss_fam_cls: 0.8456, loss_fam_bbox: 0.9042, loss_odm_cls: 0.3194, loss_odm_bbox: 58701399372.6484, loss: 58701399374.7175
from s2anet.
I don't know. I achieved an mAP of 89+% with the original s2anet settings.
from s2anet.
But the result in the original paper is 90.17. @ming71
from s2anet.
> But the result in the original paper is 90.17. @ming71

Note that I only ran about 20 epochs to reach that performance, not the whole schedule.
from s2anet.
I have run 36 epochs. Is the lr in your config 0.01? When I trained the model with a 0.01 learning rate, the loss became too large to converge. @ming71
from s2anet.
> I have run 36 epochs. Is the lr in your config 0.01? When I trained the model with a 0.01 learning rate, the loss became too large to converge. @ming71

I trained via dist_train.sh with 4 2080 Ti GPUs and everything is OK; maybe the problem is on your side.
from s2anet.
I trained with a single GPU. @ming71
from s2anet.
Refer to https://github.com/csuhan/s2anet/blob/master/docs/GETTING_STARTED.md#train-a-model
4 GPUs * 2 img/GPU → lr = 0.01
1 GPU * 2 img/GPU → lr = 0.0025
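The scaling above follows the linear scaling rule: the learning rate is proportional to the effective batch size. A minimal sketch, assuming the 4 GPUs * 2 img/GPU → 0.01 reference point quoted from the docs:

```python
def scaled_lr(num_gpus, imgs_per_gpu, base_lr=0.01, base_batch=8):
    """Linear LR scaling: lr grows in proportion to the effective batch size.

    base_lr / base_batch encode the reference setting from the docs:
    4 GPUs * 2 img/GPU (batch 8) -> lr 0.01.
    """
    return base_lr * (num_gpus * imgs_per_gpu) / base_batch

print(scaled_lr(4, 2))  # 0.01   (reference multi-GPU setting)
print(scaled_lr(1, 2))  # 0.0025 (single-GPU setting from this thread)
```

So a single-GPU run should lower the lr to 0.0025 rather than keeping the 4-GPU value of 0.01, which matches the loss explosion reported above.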
from s2anet.
> When I keep the learning rate at 0.01, the loss becomes too large, like this:
> 2020-10-26 10:45:04,367 - INFO - Epoch [7][200/219] lr: 0.01000, eta: 0:24:59, time: 0.249, data_time: 0.002, memory: 2323, loss_fam_cls: 0.8456, loss_fam_bbox: 0.9042, loss_odm_cls: 0.3194, loss_odm_bbox: 58701399372.6484, loss: 58701399374.7175

Hello, how did you solve this problem? I met the same problem: when the lr was 0.01, the loss was too large to train. I reduced the lr even to 0.00001, but eventually it still became large, even NaN.
from s2anet.
Hi @ming71, I know why the mAP of RetinaNet in my codebase is so low. First, a 3x schedule is too short for RetinaNet. Second, horizontal anchors make the angle prediction hard to converge.
Therefore, I changed some settings:
(1) longer training schedule: 6x RetinaNet reaches 73% mAP;
(2) more anchor angles: with anchor_angles=[0., PI/3, PI/6, PI/2], the mAP is about 81%.
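Change (2) can be sketched as a config fragment. Only the four anchor_angles values come from the comment above; the generator type and the other keys are assumptions in mmdetection style, not the repo's actual config:

```python
import math

PI = math.pi

# Hypothetical rotated-anchor settings. Only anchor_angles is taken from the
# comment; 'RotatedAnchorGenerator' and the remaining keys are illustrative.
anchor_generator = dict(
    type='RotatedAnchorGenerator',               # assumed generator name
    scales=[4],
    ratios=[0.5, 1.0, 2.0],
    anchor_angles=[0., PI / 3, PI / 6, PI / 2],  # anchors at 0, 60, 30, 90 degrees
    strides=[8, 16, 32, 64, 128],                # one stride per FPN level
)
```

The idea is that with anchors pre-tilted at several angles, each anchor only has to regress a small angular offset, which is easier to converge than predicting arbitrary rotations from horizontal anchors.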
from s2anet.
Related Issues (20)
- RuntimeError: expected device cuda:1 but got device cuda:0
- What do the trainval_s2anet.pkl and test_s2anet.pkl files generated by data preprocessing contain?
- mAP stays at 0 after changing the backbone
- Data format when computing rotated IoU
- A beginner asking for help!
- IndexError: too many indices for array: array is 1-dimensional, but 2 were indexed
- About setting the initial learning rate
- ModuleNotFoundError: No module named 'DOTA_devkit'
- How to generate pickle files after training? Thank you
- ImportError: cannot import name 'ConvModule' from 'mmcv.cnn'
- Welcome update to OpenMMLab 2.0
- Why use train and validation datasets both for training?
- About the ARF operator
- Experimental effect
- setup.py
- python setup.py develop reports an error
- Prepare datasets
- Prepare datasets
- python tools/train.py configs/dota/s2anet_r50_fpn_1x_dota.py
- Running python tools/test.py: after generating results_after_nms, Start evaluation fails with a file-not-found error