acbull / LADIES
Code for NeurIPS'19 "Layer-Dependent Importance Sampling for Training Deep and Large Graph Convolutional Networks"
Hi, it seems the BibTeX citation in your README.md is not for LADIES.
Hi all, could you release the code used to calculate the memory usage reported in Table 3?
In line #L517, the nodes of the lowest layer are treated as the input nodes for the GCN. This assumes the lowest layer contains all the nodes in the sampled sub-graph, which is not always true.
For example,
layer 1: 4 -> 2, 5 -> 2
layer 2: 2 -> 1, 3 -> 1
layer 3: 1 -> 0
Here the lowest layer contains nodes (4, 5, 2), the middle layer contains nodes (2, 3, 1), and the top layer contains nodes (1, 0). In your code, the features for nodes 1 and 3 are lost.
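The example above can be sketched in a few lines; `layer_edges` and the helper names are illustrative, not from the repo:

```python
# Hypothetical sketch of the reported issue: per-layer sampled edges
# (child -> parent) from the example above.
layer_edges = [
    {(4, 2), (5, 2)},   # layer 1 (lowest)
    {(2, 1), (3, 1)},   # layer 2
    {(1, 0)},           # layer 3 (top)
]

# The nodes whose raw features each layer actually needs as input are
# the *sources* of that layer's edges.
needed_per_layer = [{src for src, _ in edges} for edges in layer_edges]

# Treating only the lowest layer's nodes as "input nodes" covers {4, 5, 2}
# but misses source nodes that first appear in higher layers.
lowest_layer_nodes = {n for edge in layer_edges[0] for n in edge}   # {4, 5, 2}
all_source_nodes = set().union(*needed_per_layer)                   # {4, 5, 2, 3, 1}
missing = all_source_nodes - lowest_layer_nodes
print(sorted(missing))  # → [1, 3], the nodes whose features are lost
```

This reproduces the claim in the issue: with self-loops, nodes 1 and 3 need their own features at the intermediate layers, but they never appear in the lowest layer's node set.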
Dear authors,
Thanks for sharing the code. After trying to run it, I found multiple entry-level bugs; at the moment the main script is not even executable. Could you help resolve these issues?
For example:
After I fixed all the issues in the code, the algorithm does not converge on any of the datasets. The training error goes to 0 after 2 epochs, but the validation error keeps increasing, the F1 score always fluctuates around 0.7, and the first iteration has the lowest validation loss, which is very confusing.
Could you please fix the errors in the code? It may mislead the community. Thank you very much!
YH
Could you please upload your paper first? Thanks.
When I validate this model on the Reddit dataset, it always runs out of GPU memory; the validation is conducted on a machine with a Tesla V100-PCIE GPU (32 GB memory). This is inconsistent with the results shown in Table 3 of your paper. The detailed error is as follows:
Traceback (most recent call last):
File "pytorch_ladies.py", line 321, in <module>
output = best_model.forward(feat_data[input_nodes], adjs)[output_nodes]
File "pytorch_ladies.py", line 91, in forward
x = self.encoder(feat, adjs)
File "/root/anaconda3/envs/LADIES/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "pytorch_ladies.py", line 81, in forward
x = self.dropout(self.gcs[idx](x, adjs[idx]))
File "/root/anaconda3/envs/LADIES/lib/python3.6/site-packages/torch/nn/modules/module.py", line 489, in __call__
result = self.forward(*input, **kwargs)
File "pytorch_ladies.py", line 62, in forward
return F.elu(torch.spmm(adj, out))
RuntimeError: CUDA out of memory. Tried to allocate 1.71 GiB (GPU 0; 31.75 GiB total capacity; 28.63 GiB already allocated; 453.50 MiB free; 1.63 GiB cached)
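This OOM comes from running validation as one full-graph forward pass. One generic workaround (not the repo's code) is to evaluate in mini-batches, moving only one batch to the GPU at a time. A minimal sketch under that assumption; `model`, `features`, and `labels` are stand-ins, and the toy model here is dense rather than a GCN:

```python
import torch

@torch.no_grad()  # no gradients needed at validation, saves memory
def evaluate_in_batches(model, features, labels, batch_size=1024, device="cpu"):
    """Accuracy over `features`, computed one mini-batch at a time."""
    model.eval()
    correct = 0
    for start in range(0, features.size(0), batch_size):
        batch = features[start:start + batch_size].to(device)
        logits = model(batch)                 # forward only this batch
        preds = logits.argmax(dim=1).cpu()
        correct += (preds == labels[start:start + batch_size]).sum().item()
    return correct / features.size(0)

# Smoke test with a linear stand-in "model" on random data.
torch.manual_seed(0)
model = torch.nn.Linear(16, 4)
features = torch.randn(100, 16)
labels = torch.randint(0, 4, (100,))
acc = evaluate_in_batches(model, features, labels, batch_size=32)
assert 0.0 <= acc <= 1.0
```

For a sampling-based GCN like LADIES, the analogous fix would be to sample validation mini-batches the same way as training batches instead of feeding the whole graph through `torch.spmm` at once.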
Hi,
I was trying to reproduce the same set of results on the same datasets with your code. Although the mean and variance of the accuracy over 10 experiments almost match, the total time and batch number do not. I also wonder why these two results differ so much between batch sizes 64 and 512. Could you please upload the final version of the code you used to compute these results, or give some insight into how I can reproduce them?
Thanks!
Hi, I am trying to reproduce the results of LADIES on the Reddit dataset. Where can I download the Reddit dataset, and where is its preprocessing function? Thank you in advance for your help.
What's SuGCN for? I'm sorry, I can't quite follow this part of the code. I don't really understand why SuGCN was written as a separate module; could you explain?
Hi, bro,
in your code,
fastgcn_sampler:
adj = row_norm(U[:, after_nodes].multiply(1/p[after_nodes]))
However, others say we should both row-select and column-select the Laplacian matrix (U).
More details can see: https://github.com/khanhhhh/ladies/blob/khanh/solution_random_block/sampler.py
I don't understand the difference; could you explain more?
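Whether the two formulas disagree depends on whether `U` has already been row-sliced to the previous layer's nodes before the column selection. A minimal scipy sketch under that assumption; the variable names mirror the sampler's, but the matrix is random illustration:

```python
import numpy as np
import scipy.sparse as sp

# Toy normalized Laplacian / adjacency matrix.
lap_matrix = sp.random(8, 8, density=0.5, random_state=0, format="csr")

previous_nodes = np.array([0, 2, 5])   # nodes of the upper layer
after_nodes = np.array([1, 3, 4, 6])   # sampled nodes of the lower layer

# Variant A (as in the snippet above): column-select only, but on a U
# that was already restricted to previous_nodes' rows.
U = lap_matrix[previous_nodes, :]
adj_a = U[:, after_nodes]

# Variant B (what the linked sampler does): row- and column-select
# lap_matrix directly.
adj_b = lap_matrix[previous_nodes, :][:, after_nodes]

# If U was formed from previous_nodes' rows, the two are identical.
assert (adj_a != adj_b).nnz == 0
print(adj_a.shape)  # → (3, 4)
```

So the two formulations only differ when `U` is the full Laplacian: in that case the column-only selection keeps rows for nodes outside the upper layer, and the row selection is genuinely needed.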
Hi, bro,
Thanks for making the code public. However, there are many bugs, and the code can't be run directly. Also, could you please report the detailed hyperparameters you used for the results in Table 3 on the four datasets? I tuned a lot but couldn't reproduce the performance. Thanks very much!