Comments (17)
Hi @bolianchen:
- PC_DARTS_cifar was searched on CIFAR-10.
- Yes, it is the genotype from the last epoch.
Hi @yuhuixu1993,
I want to reproduce your model (3.6M params), so I just ran `python train_search.py` to search for a model on CIFAR-10. But in the last epoch I got a model with 4.5M params. May I ask for your original training settings?
@whwu95, NAS does not find the same architecture in every search run; most runs differ. However, 4.5M is still strange. Would you please share the genotype?
Hi @yuhuixu1993,
Thanks very much for the quick reply!
@yuhuixu1993 Sure, here is the genotype from that run:

```python
Genotype(normal=[('sep_conv_5x5', 1), ('sep_conv_5x5', 0), ('sep_conv_5x5', 2), ('sep_conv_5x5', 1), ('sep_conv_5x5', 3), ('sep_conv_5x5', 1), ('max_pool_3x3', 4), ('sep_conv_5x5', 2)], normal_concat=range(2, 6), reduce=[('max_pool_3x3', 1), ('sep_conv_5x5', 0), ('max_pool_3x3', 2), ('max_pool_3x3', 1), ('sep_conv_5x5', 3), ('sep_conv_3x3', 2), ('sep_conv_3x3', 3), ('dil_conv_3x3', 2)], reduce_concat=range(2, 6))
```
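For anyone following along: a quick way to sanity-check a genotype's parameter count without a full training run. This is a minimal sketch, assuming the DARTS-style `model.py` and `utils.py` shipped with this repo, and the usual CIFAR-10 evaluation settings (36 initial channels, 20 layers), which are not confirmed in this thread.

```python
# Minimal sketch (not from this thread): estimate a searched genotype's
# parameter count, assuming the DARTS-style model.py and utils.py in this
# repo. init_channels=36 and layers=20 are the usual CIFAR-10 eval defaults.
from genotypes import Genotype
import utils
from model import NetworkCIFAR as Network

genotype = Genotype(
    normal=[('sep_conv_5x5', 1), ('sep_conv_5x5', 0), ('sep_conv_5x5', 2),
            ('sep_conv_5x5', 1), ('sep_conv_5x5', 3), ('sep_conv_5x5', 1),
            ('max_pool_3x3', 4), ('sep_conv_5x5', 2)],
    normal_concat=range(2, 6),
    reduce=[('max_pool_3x3', 1), ('sep_conv_5x5', 0), ('max_pool_3x3', 2),
            ('max_pool_3x3', 1), ('sep_conv_5x5', 3), ('sep_conv_3x3', 2),
            ('sep_conv_3x3', 3), ('dil_conv_3x3', 2)],
    reduce_concat=range(2, 6))

model = Network(36, 10, 20, True, genotype)  # C, num_classes, layers, auxiliary
print('param size = %.2f MB' % utils.count_parameters_in_MB(model))
```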
@whwu95, the default settings are my training settings. Did you change any hyperparameters? Did you try more search runs, and does it happen every time?
@yuhuixu1993 Yes, I haven't changed anything and it happens all the time.
@yuhuixu1993 I noticed that you changed `--learning_rate_min` from 0.001 to 0, so I tried again and got a model with 4.81M params.
In fact, I first tried running `python train_search.py` on a 1080 Ti and a 2080 Ti, but both ran out of memory (default batch size 256). So I tried again on a Titan X with batch size 256 (using 11.5 GB of memory), and got the results above.
@whwu95, I ran the released code again on a 1080 Ti without any OOM problem. What is your running environment? On a 1080 Ti it is better to use PyTorch 0.3. I will check the results again after the run finishes.
@yuhuixu1993 Thank you for your reply! My environment is PyTorch 1.0. Anyway, I will try again on a Titan X to avoid the OOM problem. Thank you again; waiting for your results.
@whwu95, hi, it works well under this environment; I recommend using PyTorch 0.3 and a 1080 Ti. Besides, the hyperparameter that sets the epoch at which the architecture parameters start training can somewhat control the parameter count of the searched architectures.
@yuhuixu1993 Hi, I moved my environment to PyTorch 0.3 and Python 2.7 and ran `python train_search.py` again. However, I still get a model with 4.6M params... It's really strange. Attached is my log file; could you show me yours? Thanks a lot.
log.txt
@whwu95, from your log file I notice that your architecture began to change in the first epoch. Please check: we only start updating the architecture parameters at the 15th epoch! (train_search.py, line 156 in a2fa00a.)
log.txt
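For readers without the repo open, the gating referenced above looks roughly like the following. Names mirror train_search.py, but treat this as an illustrative sketch rather than the exact code at that line.

```python
# Illustrative sketch of the warm-up gating near line 156 of a2fa00a in
# train_search.py (not the exact code): the architect step is skipped for the
# first 15 epochs, so the supernet weights train alone before the
# architecture parameters start to move.
def train(train_queue, valid_queue, model, architect, criterion, optimizer, lr, epoch):
    for step, (input, target) in enumerate(train_queue):
        input, target = input.cuda(), target.cuda()
        if epoch >= 15:  # architecture parameters frozen during warm-up
            input_search, target_search = next(iter(valid_queue))
            architect.step(input, target, input_search.cuda(), target_search.cuda(),
                           lr, optimizer, unrolled=False)
        optimizer.zero_grad()
        logits = model(input)
        loss = criterion(logits, target)
        loss.backward()
        optimizer.step()
```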
@yuhuixu1993 Hi, I solved the problem. Yesterday I tried to port the code to PyTorch 1.0, so in model_search.py I replaced

```python
self.alphas_normal = Variable(1e-3*torch.randn(k, num_ops).cuda(), requires_grad=True)
self.alphas_reduce = Variable(1e-3*torch.randn(k, num_ops).cuda(), requires_grad=True)
self.betas_normal = Variable(1e-3*torch.randn(k).cuda(), requires_grad=True)
self.betas_reduce = Variable(1e-3*torch.randn(k).cuda(), requires_grad=True)
```

with

```python
self.alphas_normal = nn.Parameter(1e-3*torch.randn(k, num_ops))
self.alphas_reduce = nn.Parameter(1e-3*torch.randn(k, num_ops))
self.betas_normal = nn.Parameter(1e-3*torch.randn(k))
self.betas_reduce = nn.Parameter(1e-3*torch.randn(k))
```

Because parameters defined with nn.Parameter are automatically included in model.parameters(), this change made the weight optimizer update the architecture parameters from the first epoch instead of the 15th.
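For PyTorch >= 1.0 users hitting the same trap, one way to keep the nn.Parameter version while preserving the original behavior is to exclude the architecture parameters from the weight optimizer. A minimal sketch, assuming model.arch_parameters() returns the four alpha/beta tensors as in this repo and that args carries the usual train_search.py flags:

```python
# Minimal sketch, not the released code: keep nn.Parameter under
# PyTorch >= 1.0, but hand the weight optimizer only the non-architecture
# parameters, so the alphas/betas are updated solely by the architect (and
# only after the 15-epoch warm-up). Assumes model.arch_parameters() exists
# as in this repo and args carries the usual train_search.py flags.
import torch

arch_param_ids = {id(p) for p in model.arch_parameters()}
weight_params = [p for p in model.parameters() if id(p) not in arch_param_ids]

optimizer = torch.optim.SGD(
    weight_params,                       # architecture parameters excluded
    args.learning_rate,
    momentum=args.momentum,
    weight_decay=args.weight_decay)

arch_optimizer = torch.optim.Adam(
    model.arch_parameters(),             # alphas/betas only
    lr=args.arch_learning_rate,
    betas=(0.5, 0.999),
    weight_decay=args.arch_weight_decay)
```

With this split, porting from Variable to nn.Parameter no longer changes which epoch the architecture starts learning.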
I met the same issue on multi-GPU (Variable vs nn.Parameter). The solution is the following (see the sketch after this list):
- Don't call _initialize_alphas() inside the model's __init__() (so that at initialization time, self.alphas_normal etc. are not yet in model.parameters()).
- Create the optimizer for the main model in train_search.
- Call model._initialize_alphas() to declare self.alphas_normal etc. (they are declared, but not used by that optimizer).
- Declare Architect() and the optimizer specific to alphas_normal, alphas_reduce, etc. via self.model.arch_parameters(). Don't forget to set requires_grad = True on them:

```python
for p in self.model.arch_parameters():
    p.requires_grad = True
```

@yuhuixu1993 if you don't know how to do it, feel free to contact me. Thanks a lot for this project.
When will your paper be published?
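Putting those steps together, a rough outline of the initialization order; names follow this repo (model_search.Network, architect.Architect, the argparse flags), but this is a sketch, not the released code:

```python
# Outline of the multi-GPU initialization order described in the list above;
# a sketch under the stated assumptions, not the released code.
import torch
import torch.nn as nn

model = Network(args.init_channels, CIFAR_CLASSES, args.layers, criterion)

# 1) Create the weight optimizer BEFORE the alphas exist, so it only ever
#    sees the network weights.
optimizer = torch.optim.SGD(model.parameters(), args.learning_rate,
                            momentum=args.momentum,
                            weight_decay=args.weight_decay)

# 2) Now declare the architecture parameters: they become attributes of the
#    model but were never handed to `optimizer`.
model._initialize_alphas()
model = nn.DataParallel(model).cuda()

# 3) Make sure the alphas still receive gradients, then let the architect
#    build its own optimizer over arch_parameters() only.
for p in model.module.arch_parameters():
    p.requires_grad = True
architect = Architect(model.module, args)
```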
@whwu95 Regarding your Variable vs nn.Parameter fix above: so did you reproduce the result in PyTorch 1.0 in the end?
Hi @OValery16, thanks for your kind contribution; I will check whether it really works. Our paper has recently been accepted to ICLR 2020. Thanks!