Result on HRSC2016 (s2anet, 15 comments, closed)

csuhan commented on July 21, 2024
Result on HRSC2016

from s2anet.

Comments (15)

ming71 commented on July 21, 2024

Multiple versions of my RetinaNet also reached an mAP of about 80% on HRSC2016, which is consistent with your results. Thank you again for your experiments and nice work! 😆

ming71 commented on July 21, 2024

S2ANet reaches an mAP higher than 89%, while RetinaNet obtains only 56%.

csuhan commented on July 21, 2024

HRSC2016 is very sensitive to hyperparameters, e.g., lr, schedules, warmup_iters.
My experiments on HRSC2016 with RetinaNet get poor performance as well, but I think it will improve with careful hyperparameter tuning.
You can validate the AP with a different lr (e.g., 0.05 for 4 GPUs), longer schedules (e.g., 24 epochs), fewer warmup_iters, and so on.
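The suggestions above can be sketched as an mmdetection-1.x-style config fragment (the framework s2anet builds on). The field names follow mmdetection conventions, but the values are illustrative examples from this discussion, not verified HRSC2016 settings:

```python
# Hypothetical config fragment in mmdetection 1.x style. Values below are
# examples from the discussion, not verified settings for s2anet on HRSC2016.
optimizer = dict(type='SGD', lr=0.05, momentum=0.9, weight_decay=0.0001)  # 0.05 for 4 GPUs
lr_config = dict(
    policy='step',
    warmup='linear',
    warmup_iters=200,       # fewer warmup iterations, as suggested above
    warmup_ratio=1.0 / 3,
    step=[16, 22])          # lr decay points for a 24-epoch schedule
total_epochs = 24           # longer schedule, e.g. 24 epochs
```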

ming71 commented on July 21, 2024

> HRSC2016 is very sensitive to hyperparameters, e.g., lr, schedules, warmup_iters.
> My experiments on HRSC2016 with RetinaNet get poor performance as well, but I think it will improve with careful hyperparameter tuning.
> You can validate the AP with a different lr (e.g., 0.05 for 4 GPUs), longer schedules (e.g., 24 epochs), fewer warmup_iters, and so on.

But it works well on DOTA... it's amazing that with the same config file, it reached an mAP higher than 70%.

Fly-dream12 commented on July 21, 2024

Thanks for your code. I cannot reproduce the same results on HRSC2016 as reported in your paper; what might be wrong?
I have changed the lr to 0.001, and the other settings are the same.
@csuhan @ming71

Fly-dream12 commented on July 21, 2024

When I keep the learning rate at 0.01, the loss becomes too large, like this:
2020-10-26 10:45:04,367 - INFO - Epoch [7][200/219] lr: 0.01000, eta: 0:24:59, time: 0.249, data_time: 0.002, memory: 2323, loss_fam_cls: 0.8456, loss_fam_bbox: 0.9042, loss_odm_cls: 0.3194, loss_odm_bbox: 58701399372.6484, loss: 58701399374.7175
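When a single regression loss term explodes like `loss_odm_bbox` does here, one common mitigation in mmdetection-based codebases (besides lowering the lr, as discussed below) is gradient clipping via `optimizer_config`. A hedged sketch, since the thread does not confirm which setting s2anet ships with:

```python
# Sketch: clip gradients by global L2 norm so one bad batch cannot blow up
# the bbox regression loss. max_norm=35 is a value commonly seen in
# mmdetection configs; treat it as an illustrative starting point, not a
# tested fix for this issue.
optimizer_config = dict(grad_clip=dict(max_norm=35, norm_type=2))
```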

ming71 commented on July 21, 2024

I don't know. I achieved an mAP of 89+% with the original s2anet settings.

Fly-dream12 commented on July 21, 2024

But the result in the original paper is 90.17. @ming71

ming71 commented on July 21, 2024

> But the result in the original paper is 90.17. @ming71

Note that I only ran about 20 epochs to achieve that performance, not the whole schedule.

Fly-dream12 commented on July 21, 2024

I have run 36 epochs. What is the lr in your config, 0.01? When I trained the model with a 0.01 learning rate, the loss became too large to converge. @ming71

ming71 commented on July 21, 2024

> I have run 36 epochs. What is the lr in your config, 0.01? When I trained the model with a 0.01 learning rate, the loss became too large to converge. @ming71

I trained via dist_train.sh with 4 2080 Ti GPUs and everything was OK; maybe the problem is on your side.

Fly-dream12 commented on July 21, 2024

I trained with a single GPU. @ming71

csuhan commented on July 21, 2024

Refer to https://github.com/csuhan/s2anet/blob/master/docs/GETTING_STARTED.md#train-a-model
4 GPUs * 2 img/GPU -> lr = 0.01
1 GPU * 2 img/GPU -> lr = 0.0025
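The two settings above follow the linear scaling rule (lr proportional to total batch size). A minimal sketch that reproduces both numbers, assuming the 1 GPU x 2 img/GPU -> 0.0025 pairing as the baseline:

```python
def scaled_lr(gpus, imgs_per_gpu=2, base_lr=0.0025, base_batch=2):
    """Linear scaling rule: lr grows in proportion to the total batch size.

    Baseline assumed here: batch size 2 (1 GPU x 2 img/GPU) -> lr 0.0025.
    """
    return base_lr * (gpus * imgs_per_gpu) / base_batch

print(scaled_lr(gpus=4))  # 4 GPUs x 2 img/GPU -> 0.01
print(scaled_lr(gpus=1))  # 1 GPU  x 2 img/GPU -> 0.0025
```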

Sp2-Hybrid commented on July 21, 2024

> When I keep the learning rate at 0.01, the loss becomes too large, like this:
> 2020-10-26 10:45:04,367 - INFO - Epoch [7][200/219] lr: 0.01000, eta: 0:24:59, time: 0.249, data_time: 0.002, memory: 2323, loss_fam_cls: 0.8456, loss_fam_bbox: 0.9042, loss_odm_cls: 0.3194, loss_odm_bbox: 58701399372.6484, loss: 58701399374.7175

Hello, how did you solve this problem? I met the same problem: when the lr was 0.01, the loss was too large to train. I reduced the lr even to 0.00001, but eventually it still became large, even NaN.

csuhan commented on July 21, 2024

Hi @ming71, I found the reason why the mAP of RetinaNet in my codebase is so low. First, the 3x schedule is too short for RetinaNet. Second, horizontal anchors make the angle prediction hard to converge.

Therefore, I changed some settings:
(1) longer training schedule: with 6x, RetinaNet reaches 73% mAP.
(2) more anchor angles: with anchor_angles=[0., PI/3, PI/6, PI/2], the mAP is about 81%.
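The anchor_angles change in (2) can be illustrated as follows. Only the angle list comes from the comment above; the 3 scales x 3 ratios base set is an assumption matching common RetinaNet heads, used here just to show how the anchor count grows:

```python
import math

PI = math.pi
# Angle set from the comment: anchors at 0, 60, 30, and 90 degrees.
anchor_angles = [0., PI / 3, PI / 6, PI / 2]

# Each base anchor (scale x ratio pair) is replicated at every angle, so a
# 4-angle set quadruples the per-location anchor count versus horizontal-only.
num_base_anchors = 3 * 3  # e.g. 3 scales x 3 ratios (illustrative assumption)
anchors_per_location = num_base_anchors * len(anchor_angles)
print(anchors_per_location)  # 36
```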
