
Comments (12)

yyou1996 commented on August 27, 2024

Thanks for your interest in our work.

I need your help @yongduosui.


yongduosui commented on August 27, 2024

Thanks! I have fixed the bug.


googlebaba commented on August 27, 2024

Great! Another question: is `aug_mask_list()` used for attribute masking? From the code, it masks all attributes of the selected nodes, but your paper describes it as "we might only mask the pixel value part rather than all analog to image completion", which suggests that only partial attributes are masked. Which one is correct?


yongduosui commented on August 27, 2024

In our paper, Table 1 says "Semantic robustness against losing partial attributes per node". It means that we mask all attributes of the selected nodes; "partial attributes" refers to part of the whole graph's attributes, not part of a single node's attributes.


googlebaba commented on August 27, 2024

Well, it still seems confusing. To my understanding, the method masks all attributes of the selected nodes in order to be robust against losing partial attributes. Why would masking all of a node's attributes make the model robust against losing partial attributes? Why not use the same masking strategy, i.e., masking partial attributes?
Moreover, Appendix C states that "node attributes of superpixel graphs contain information of pixel value and location, and we might only mask the pixel value part rather than all analog to image completion." I take this to mean that you mask partial attributes, which differs from the code and confuses me a lot.
I hope for your reply. Thank you.


yyou1996 commented on August 27, 2024

@googlebaba This is a great point! After double-checking, you are totally correct that there is a mismatch between the text description and the implementation of Attribute Masking. It results from the needless words "per node" in "Semantic robustness against losing partial attributes per node". Our original idea was to mask the attributes of a subset of nodes and recover them from their context (neighboring nodes), as described in the paragraph on Attribute Masking below Table 1:

Attribute masking. Attribute masking prompts models to recover masked vertex attributes using their context information, i.e., the remaining attributes. The underlying assumption is that missing partial vertex attributes does not affect the model predictions much.

which is what our program does. For Appendix C, our original paragraph is:

Surprisingly, attribute masking corresponding to image completion hurts the performance, which might result from our implementation: node attributes of superpixel graphs contain information of pixel value and location, and we might only mask the pixel value part rather than all analog to image completion.

suggesting that if we only masked partial information in superpixel graphs (i.e., the pixel values), it would mimic image inpainting in CV, which might give better performance.

I sincerely apologize for the inconsistency leading to your confusion. We will make modifications and update arXiv.
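For concreteness, here is a minimal sketch (not the exact repo code) contrasting the two masking variants discussed above: masking the full attribute vector of a subset of nodes, as the released `aug_mask_list()` does, versus masking only part of each selected node's features, e.g. only the pixel-value dimensions of a superpixel graph. The tensor layout and the `pixel_dims` split are assumptions for illustration.

```python
import torch

def mask_whole_nodes(x, mask_ratio=0.2):
    # Node-level masking: zero out the FULL feature vector of a random subset of nodes.
    # x: [num_nodes, num_feats] node feature matrix.
    num_nodes = x.size(0)
    idx = torch.randperm(num_nodes)[: int(num_nodes * mask_ratio)]
    x = x.clone()
    x[idx] = 0.0
    return x

def mask_pixel_values_only(x, mask_ratio=0.2, pixel_dims=(0,)):
    # Hypothetical partial variant: for the selected nodes, zero out only the
    # pixel-value dimensions (pixel_dims) and keep the location dimensions intact.
    num_nodes = x.size(0)
    idx = torch.randperm(num_nodes)[: int(num_nodes * mask_ratio)]
    x = x.clone()
    for d in pixel_dims:
        x[idx, d] = 0.0
    return x
```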


googlebaba commented on August 27, 2024

Thanks. My question has been solved.


googlebaba commented on August 27, 2024

Another bug! As far as I know, the MNIST graphs are directed, so why do you drop edges in an undirected way?

```python
# Imports needed for this snippet:
import copy
from random import randint

def aug_drop_add_link(graph, drop_percent=0.2):

    pro = drop_percent / 2
    aug_graph = copy.deepcopy(graph)
    edge_num = aug_graph.number_of_edges()

    drop_num = int(edge_num * pro / 2)
    add_num = int(edge_num * pro / 2)
    del_edges_id_list = []
    all_edges_id_list = [i for i in range(edge_num)]

    for i in range(drop_num):

        random_idx = randint(0, edge_num - 1)  # pick a random edge index in this range
        u_v = aug_graph.find_edges(all_edges_id_list[random_idx])  # returns the source and destination nodes u, v
        del_edge_id1 = aug_graph.edge_ids(u_v[0], u_v[1])
        del_edge_id2 = aug_graph.edge_ids(u_v[1], u_v[0])
        if del_edge_id1.size(0):
            del_edges_id_list.append(del_edge_id1)
            all_edges_id_list.remove(del_edge_id1.item())
        if del_edge_id2.size(0):
            del_edges_id_list.append(del_edge_id2)
            all_edges_id_list.remove(del_edge_id2.item())
        edge_num -= 2
    aug_graph.remove_edges(del_edges_id_list)
```


googlebaba commented on August 27, 2024

Kind reminder! You'd better double-check the code and paper to avoid wasting others' time.


yyou1996 commented on August 27, 2024

I'm not sure where the "directed graph" conclusion comes from. According to the paper https://arxiv.org/abs/2003.00982, MNIST and CIFAR10 are superpixel graph datasets whose topology represents 8-nearest neighbors, so we suppose they are undirected graphs.
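For what it's worth, a quick way to check this assumption empirically on a loaded DGL graph is sketched below; `graph` is a placeholder for one superpixel sample, and the check itself is just an illustration, not code from the repo.

```python
def is_symmetric(graph):
    # True if every directed edge (u, v) has a matching reverse edge (v, u),
    # i.e. the DGL graph can safely be treated as undirected.
    u, v = graph.edges()
    edge_set = set(zip(u.tolist(), v.tolist()))
    return all((dst, src) in edge_set for src, dst in edge_set)
```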


googlebaba commented on August 27, 2024

A KNN graph built via the method described in benchmark_gnn is directed. I have checked the data, and I suggest you do the same. By the way, your code cannot run directly and reports the following error:
[error screenshot omitted]


yongduosui commented on August 27, 2024

For the superpixel datasets, we only released the code for the best augmentation methods: node dropping and subgraph. As for edge perturbation on these datasets, we also applied this undirected-style code to other undirected datasets during our experiments. If you are interested in edge perturbation, you can modify the code to work in a directed way, and we will also add it soon. Thanks.
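As a rough illustration of what such a directed-style modification might look like (the function name and the plain uniform drop are assumptions, not the released implementation): each directed edge id is dropped independently, so removing (u, v) does not also remove (v, u).

```python
import copy
from random import sample

def aug_drop_link_directed(graph, drop_percent=0.2):
    # Hypothetical directed variant of edge dropping: every DGL edge id is treated
    # as an independent directed edge, so (u, v) may be removed while (v, u) is kept.
    aug_graph = copy.deepcopy(graph)
    edge_num = aug_graph.number_of_edges()
    drop_num = int(edge_num * drop_percent)
    drop_ids = sample(range(edge_num), drop_num)  # distinct directed edge ids to drop
    aug_graph.remove_edges(drop_ids)
    return aug_graph
```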

