lightaime / sgas
SGAS: Sequential Greedy Architecture Search (CVPR'2020) https://www.deepgcns.org/auto/sgas
License: MIT License
Could you tell me whether the code was written from scratch or modified from an existing codebase? If it is based on existing code, could you tell me which paper's code it comes from?
Thanks!
I think your idea is excellent and more natural than sampling-based approaches such as SNAS. However, one problem with DARTS is that the entropy of the operation distribution stays high: the maximum softmax value is only about 0.3. So why does "Selection Certainty" work? It is common for the parameters of two operations to be very close while the certainty is still large, simply because the other operations' parameters are small.
In such cases "Selection Certainty" may not work, and since almost all edges suffer from this problem, it may make the search harder.
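For reference, an entropy-based certainty (rather than a raw max-softmax value) behaves differently in exactly the two-close-operations case described above. This is my own reconstruction for illustration, not the authors' code: a normalized-entropy certainty for a single edge's operation weights.

```python
import math

def selection_certainty(alpha):
    """Sketch of an entropy-based selection certainty for one edge:
    1 minus the normalized entropy of the softmax over the edge's
    operation weights. Reconstruction for illustration only, not
    the authors' exact implementation."""
    m = max(alpha)                      # shift for numerical stability
    exps = [math.exp(a - m) for a in alpha]
    total = sum(exps)
    probs = [e / total for e in exps]
    entropy = -sum(p * math.log(p) for p in probs)
    return 1.0 - entropy / math.log(len(alpha))

# Two near-tied top operations keep the entropy high, so certainty
# stays low even though the remaining weights are small:
low = selection_certainty([1.0, 1.0, -2.0, -2.0, -2.0])
# One clearly dominant operation yields high certainty:
high = selection_certainty([5.0, -2.0, -2.0, -2.0, -2.0])
print(low, high)
```

Under this formulation, two near-tied operations with small remaining weights still produce low certainty, which is the scenario the question worries about.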
sgas/gcn/gcn_point/model_search.py
Lines 55 to 64 in 6faff35
Hi, @guochengqian
In line 60, I think 'continue' is more suitable than 'pass'. With 'pass', line 63 is not skipped, so 'o_list' will additionally append the operator 'o' in the last iteration. Do you agree?
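The difference in isolation (a hypothetical loop, not the repository's code): 'pass' is a no-op and the rest of the loop body still runs, while 'continue' skips it.

```python
# Hypothetical operation list, standing in for the loop in
# model_search.py. 'pass' falls through to the append; 'continue'
# skips the rest of the body for the matched item.
ops = ['conv', 'none', 'pool']

with_pass, with_continue = [], []
for o in ops:
    if o == 'none':
        pass              # no-op: 'none' is still appended below
    with_pass.append(o)

for o in ops:
    if o == 'none':
        continue          # skips the append for 'none'
    with_continue.append(o)

print(with_pass)          # ['conv', 'none', 'pool']
print(with_continue)      # ['conv', 'pool']
```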
Which deep learning framework did you use? TensorFlow, PyTorch, or another one?
And when will you update your code?
Looking forward to your reply.
I made it work and achieved higher accuracy than the existing result.
If needed, contact me.
Hi, when I run main_ppi.py, something goes wrong with drop_path.
Traceback (most recent call last):
  File "main_ppi.py", line 163, in <module>
    train()
  File "main_ppi.py", line 56, in train
    train_acc, train_obj, class_acc = train_step(model, criterion, optimizer)
  File "main_ppi.py", line 77, in train_step
    logits, logits_aux = model(features, edge_index)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/drive/My Drive/model.py", line 98, in forward
    s0, s1 = s1, cell(s0, s1, edge_index, self.drop_path_prob)
  File "/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py", line 532, in __call__
    result = self.forward(*input, **kwargs)
  File "/content/drive/My Drive/model.py", line 42, in forward
    h1 = drop_path(h1, drop_prob)
  File "/content/drive/My Drive/NAS-GCN-SAR/tools/utils.py", line 137, in drop_path
    x.mul_(mask)
RuntimeError: output with shape [46659, 32] doesn't match the broadcast shape [46659, 1, 46659, 32]
If I assign 0 to --drop_path_prob, it works well.
Can you help me? Thank you very much!
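Judging from the shapes in the error, a likely cause is that the CNN-style drop_path builds a mask shaped for 4-D image batches, which broadcasts badly against 2-D [num_nodes, channels] node features; in torch the fix would be to build the mask with shape (x.size(0), 1). A pure-Python sketch of the idea, with lists of rows standing in for tensors:

```python
import random

def drop_path(x, drop_prob):
    """Sketch: drop whole rows of a 2-D feature matrix with
    probability drop_prob, rescaling survivors by 1/keep_prob.
    x is a list of rows standing in for an [N, C] tensor. The key
    point is that the keep-mask has exactly one entry per row of x,
    never a larger broadcast shape."""
    if drop_prob <= 0.0:
        return x
    keep_prob = 1.0 - drop_prob
    out = []
    for row in x:
        keep = 1.0 if random.random() < keep_prob else 0.0
        out.append([v * keep / keep_prob for v in row])
    return out
```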
I searched with SGAS Cri2, which returned the architecture:
Genotype(normal=[('skip_connect', 0), ('gat', 1), ('conv_1x1', 1), ('conv_1x1', 2), ('sage', 2), ('semi_gcn', 3)], normal_concat=range(1, 5))
Then I trained the compact model by stacking this cell. However, this architecture does not perform well:
Finish! best_test_overall_acc 0.897893 test_class_acc_when_best 0.840035
This result is far from the 93.07 reported in the paper.
How can I reproduce the reported result? Should I conduct more searches with different random seeds?
Hi @lightaime, thanks for your excellent work and quick reply. I've solved the OSError problem, but a new one appears that confuses me a lot when I run gcn_graph/train_search.py:
  File "train_search.py", line 313, in <module>
    main()
  File "train_search.py", line 177, in main
    train_dataset = GeoData.PPI(os.path.join(args.data, 'ppi'), split='train')
  File "/home/tie.xu/anaconda3/envs/sgas/lib/python3.6/site-packages/torch_geometric/datasets/ppi.py", line 55, in __init__
    self.data, self.slices = torch.load(self.processed_paths[0])
  File "/home/tie.xu/anaconda3/envs/sgas/lib/python3.6/site-packages/torch/serialization.py", line 529, in load
    return _legacy_load(opened_file, map_location, pickle_module, **pickle_load_args)
  File "/home/tie.xu/anaconda3/envs/sgas/lib/python3.6/site-packages/torch/serialization.py", line 702, in _legacy_load
    result = unpickler.load()
ModuleNotFoundError: No module named 'torch_geometric.data.storage'
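This usually means the cached processed/ files were pickled by a different torch_geometric version than the one now installed. A sketch of the usual workaround, deleting the cache so the dataset is re-processed on the next run (the 'data/ppi' path is an assumption; point it at whatever args.data resolves to):

```python
import os
import shutil

# Hypothetical path: replace 'data/ppi' with the directory that
# args.data points at. Removing processed/ forces torch_geometric
# to rebuild the dataset with the currently installed version
# instead of unpickling objects from a mismatched one.
processed_dir = os.path.join('data', 'ppi', 'processed')
shutil.rmtree(processed_dir, ignore_errors=True)
```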
Best!
Is it 0.2 or 0.3?
The paper says 0.3, but the code uses 0.2?
I followed env_install step by step, but when I run "sgas/gcn_point/train_search", I get this error:
RuntimeError: No such operator torch_cluster::random_walk
Can you help me? Thanks!
I followed env_install step by step, but when I run "sgas/gcn_point/main_modelnet.py", I get this error:
RuntimeError: cuDNN error: CUDNN_STATUS_NOT_SUPPORTED. This error may appear if you passed in a non-contiguous input.
Can you help me? Thanks!
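A common trigger for this cuDNN error is feeding a non-contiguous view (e.g. the result of a transpose) into a conv layer; calling .contiguous() on the tensor right before the op usually fixes it. A NumPy sketch of the same idea, with np.ascontiguousarray playing the role of torch's .contiguous():

```python
import numpy as np

# A transposed array is a view over non-contiguous memory; cuDNN
# rejects such inputs. np.ascontiguousarray (torch: .contiguous())
# copies it into contiguous layout without changing the values.
x = np.arange(12, dtype=np.float32).reshape(3, 4).T
print(x.flags['C_CONTIGUOUS'])            # False: transposed view
y = np.ascontiguousarray(x)
print(y.flags['C_CONTIGUOUS'])            # True: fresh contiguous copy
```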
Hi,
Nice work.
When training the searched model on ModelNet40 (using the provided command, on a single Tesla V100), an error occurs. What batch size did you use? And the other parameters?
Thanks.
It seems that "Sequential Greedy Architecture Search" is the same as the progressive method in P-DARTS. Have you measured how the greedy method affects the Kendall coefficient? Why not show the Kendall coefficients of all DARTS-based methods?
I hit this problem when running cnn/train_search.py; I would appreciate any help. It usually appears after some epochs (28 for me):
Traceback (most recent call last):
  File "/home/tie.xu/anaconda3/envs/sgas/lib/python3.6/multiprocessing/util.py", line 262, in _run_finalizers
    finalizer()
  File "/home/tie.xu/anaconda3/envs/sgas/lib/python3.6/multiprocessing/util.py", line 186, in __call__
    res = self._callback(*self._args, **self._kwargs)
  File "/home/tie.xu/anaconda3/envs/sgas/lib/python3.6/shutil.py", line 486, in rmtree
    _rmtree_safe_fd(fd, path, onerror)
  File "/home/tie.xu/anaconda3/envs/sgas/lib/python3.6/shutil.py", line 444, in _rmtree_safe_fd
    onerror(os.unlink, fullname, sys.exc_info())
  File "/home/tie.xu/anaconda3/envs/sgas/lib/python3.6/shutil.py", line 442, in _rmtree_safe_fd
    os.unlink(name, dir_fd=topfd)
OSError: [Errno 16] Device or resource busy: '.nfs000000001835000600002bd7'
This message appears many times when I run the code!
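Those .nfs… files are NFS "silly renames" of files that some worker process still has open, so the multiprocessing finalizer's shutil.rmtree hits EBUSY during cleanup. A sketch of a tolerant cleanup (the directory name is a hypothetical stand-in for the temp directory being torn down):

```python
import shutil

# ignore_errors=True lets cleanup proceed past the busy .nfsXXXX
# entries; NFS reaps them once the open file handles close.
# 'scratch_dir' is a hypothetical stand-in for the temp directory
# the multiprocessing module is tearing down.
shutil.rmtree('scratch_dir', ignore_errors=True)
```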