Comments (7)
Hi,
Thank you for your interest in our paper.
For `torch.optim.AdamW`, you have to use `weight_decay=0.1`.
In the AdamW paper, the weight decay is decoupled, i.e. `w = (1 - weight_decay) * w`. The PyTorch implementation, however, is `w = (1 - lr * weight_decay) * w`:
https://github.com/pytorch/pytorch/blob/b31f58de6fa8bbda5353b3c77d9be4914399724d/torch/optim/adamw.py#L73
This makes it easy to apply a learning rate scheduler to the weight decay as well, but it requires rescaling the parameter. In the paper we followed the notation of the AdamW paper, so `lr=1e-3, weight_decay=0.1` in PyTorch corresponds to a weight decay of 1e-4 (= 1e-3 × 0.1) in paper notation.
You can find a similar setting in the NovoGrad paper: https://arxiv.org/pdf/1905.11286.pdf
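For concreteness, here is a minimal sketch of that conversion (the helper name is made up for illustration and is not part of the paper's code):

```python
import torch

def pytorch_weight_decay(paper_wd: float, lr: float) -> float:
    """Convert a weight decay in AdamW-paper notation, w = (1 - wd) * w,
    to the value torch.optim.AdamW expects, w = (1 - lr * wd) * w."""
    return paper_wd / lr

lr = 1e-3
model = torch.nn.Linear(10, 10)  # stand-in model for illustration

optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=lr,
    weight_decay=pytorch_weight_decay(1e-4, lr),  # 1e-4 / 1e-3 = 0.1
)
```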
`5e-3` is correct:
`torch.optim.AdamW(param, lr=2e-3, weight_decay=5e-3)`
It is 1e-5 (= 2e-3 × 5e-3) in paper notation.
Sorry, I missed MobileNetV2.
We used `lr=2e-3, wd=5e-3, batch_size=1024` for MobileNetV2.
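Assuming a torchvision model for illustration (the authors' training code may differ in other details), that setting looks roughly like:

```python
import torch
import torchvision

model = torchvision.models.mobilenet_v2()
optimizer = torch.optim.AdamW(
    model.parameters(),
    lr=2e-3,
    weight_decay=5e-3,  # i.e. 1e-5 in the paper's (AdamW) notation
)
# batch_size=1024 would be set on the DataLoader.
```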
Thank you for the quick response.
I'm still a little confused about MobileNet. For MobileNetV2, is the PyTorch parameter for weight decay `2.5` or `5e-3`?
Thx
I see, Thx !!!
Sorry, I missed the `wd_ratio` and `delta` in AdamP. I know that AdamW and AdamP have the same hyperparameters, except for `wd_ratio` and `delta`.
For MobileNetV2 and AdamP in Table 2, is the hyperparameter setting
`AdamP(params, lr=2e-3, betas=(0.9, 0.999), eps=1e-8, weight_decay=5e-3, delta=0.1, wd_ratio=0.1, nesterov=True)`?
You are correct, but we didn't use Nesterov, for a fair comparison with AdamW. I think `nesterov=True` will give better performance.
However, if you want the same setting, then:
`AdamP(params, lr=2e-3, betas=(0.9, 0.999), eps=1e-8, weight_decay=5e-3, delta=0.1, wd_ratio=0.1, nesterov=False)`
with `epochs=150` and `label_smoothing=0.1`.
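Putting the thread together, a minimal end-to-end sketch of this setting might look like the following (torchvision, the `adamp` pip package, and the `label_smoothing` argument of `nn.CrossEntropyLoss`, available in PyTorch >= 1.10, are assumptions here; the authors' actual training script may differ):

```python
import torch.nn as nn
import torchvision
from adamp import AdamP  # pip install adamp

model = torchvision.models.mobilenet_v2()
optimizer = AdamP(
    model.parameters(),
    lr=2e-3,
    betas=(0.9, 0.999),
    eps=1e-8,
    weight_decay=5e-3,
    delta=0.1,
    wd_ratio=0.1,
    nesterov=False,  # the paper's comparison against AdamW uses nesterov=False
)
criterion = nn.CrossEntropyLoss(label_smoothing=0.1)
# Train for 150 epochs with batch_size=1024, per the settings in this thread.
```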