
dsgiitr / graph_nets


PyTorch Implementation and Explanation of Graph Representation Learning papers: DeepWalk, GCN, GraphSAGE, ChebNet & GAT.

Languages: Jupyter Notebook 93.95%, Python 6.05%
Topics: graph-representation-learning, graph-convolutional-networks, graph-attention-networks, graph-embedding, deepwalk, node-embedding, graph-sage, chebyshev-polynomials, pytorch

graph_nets's Introduction

Graph Representation Learning

This repo is a supplement to our blog series Explained: Graph Representation Learning. The following major papers and their corresponding blogs have been covered as part of the series, and we plan to add posts on a few other significant works in the field.

Setup

Clone the git repository:

git clone https://github.com/dsgiitr/graph_nets.git

Python 3 and PyTorch 1.3.0 are the primary requirements. The requirements.txt file lists the other dependencies. To install all the requirements, run:

pip install -r requirements.txt

1. Understanding DeepWalk

DeepWalk is an unsupervised, online learning approach, inspired by word2vec in NLP, but here the goal is to generate node embeddings.
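As a rough sketch of the idea (hypothetical helper, not the notebook's code; assumes networkx and gensim >= 4 are available): sample truncated random walks from each node and train a skip-gram model on them, treating node IDs as words.

    import random
    import networkx as nx
    from gensim.models import Word2Vec

    def random_walk(G, start, walk_length=10):
        # Truncated random walk starting at `start`.
        walk = [start]
        for _ in range(walk_length - 1):
            neighbors = list(G.neighbors(walk[-1]))
            if not neighbors:
                break
            walk.append(random.choice(neighbors))
        return [str(node) for node in walk]

    G = nx.karate_club_graph()
    walks = [random_walk(G, node) for node in G.nodes() for _ in range(10)]

    # Skip-gram over the walks: nodes that co-occur on walks get similar embeddings.
    model = Word2Vec(walks, vector_size=64, window=5, min_count=0, sg=1)
    node_0_embedding = model.wv["0"]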

2. A Review : Graph Convolutional Networks (GCN)

GCNs draw on the idea of Convolutional Neural Networks, redefining them for the non-Euclidean data domain. They are convolutional because filter parameters are typically shared over all locations in the graph, unlike in typical GNNs.
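For reference, Kipf & Welling's layer-wise propagation rule H' = sigma(D_hat^(-1/2) A_hat D_hat^(-1/2) H W), with A_hat = A + I, can be sketched in PyTorch roughly as follows (a simplified dense-matrix illustration, not the notebook's exact class):

    import torch
    import torch.nn as nn

    class GCNLayer(nn.Module):
        # One GCN layer: H' = ReLU(D_hat^-1/2 (A + I) D_hat^-1/2 H W)
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.linear = nn.Linear(in_dim, out_dim)

        def forward(self, A, H):
            A_hat = A + torch.eye(A.size(0))        # add self-loops
            D_hat = torch.diag(A_hat.sum(1))        # degree matrix of A_hat
            D_inv_sqrt = D_hat.inverse().sqrt()     # D_hat^{-1/2} (diagonal, so elementwise sqrt)
            return torch.relu(D_inv_sqrt @ A_hat @ D_inv_sqrt @ self.linear(H))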

3. Graph SAGE(SAmple and aggreGatE)

Previous approaches are transductive and don't naturally generalize to unseen nodes. GraphSAGE is an inductive framework leveraging node feature information to efficiently generate node embeddings.
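A minimal sketch of the mean-aggregator variant (simplified: dense tensors, full neighbourhoods instead of sampled ones, and every node assumed to have at least one neighbour; not the notebook's code):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SAGEMeanLayer(nn.Module):
        # GraphSAGE layer: concatenate a node's features with the mean of its neighbours' features.
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.linear = nn.Linear(2 * in_dim, out_dim)

        def forward(self, H, adj_lists):
            # adj_lists: dict mapping node index -> list of neighbour indices
            agg = torch.stack([H[adj_lists[v]].mean(0) for v in range(H.size(0))])
            out = torch.relu(self.linear(torch.cat([H, agg], dim=1)))
            return F.normalize(out, dim=1)          # L2-normalise embeddings as in the paper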

4. ChebNet: CNN on Graphs with Fast Localized Spectral Filtering

ChebNet is a formulation of CNNs in the context of spectral graph theory, using Chebyshev polynomial approximations of spectral graph filters to obtain fast, localized filtering.
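The fast filtering relies on the Chebyshev recurrence T_0(x) = 1, T_1(x) = x, T_k(x) = 2x*T_{k-1}(x) - T_{k-2}(x), applied to the rescaled Laplacian L_tilde = 2L/lambda_max - I. A hedged sketch of a K-order filter (dense matrices and scalar coefficients for simplicity):

    import torch

    def cheb_filter(L_tilde, X, thetas):
        # L_tilde: rescaled Laplacian 2L/lambda_max - I, shape (N, N)
        # X:       node features, shape (N, F)
        # thetas:  list of K filter coefficients
        Tx_prev, Tx_curr = X, L_tilde @ X            # T_0(L)X = X, T_1(L)X = LX
        out = thetas[0] * Tx_prev
        if len(thetas) > 1:
            out = out + thetas[1] * Tx_curr
        for theta in thetas[2:]:
            Tx_prev, Tx_curr = Tx_curr, 2 * L_tilde @ Tx_curr - Tx_prev   # Chebyshev recurrence
            out = out + theta * Tx_curr
        return out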


5. Understanding Graph Attention Networks

GAT layers are able to attend over each node's neighborhood features, implicitly assigning different weights to different nodes in a neighborhood, without requiring any kind of costly matrix operation or prior knowledge of the full graph structure.
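A single attention head can be sketched as below (dense adjacency, simplified; assumes self-loops are present in A so every softmax row is well defined; not the notebook's exact class):

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class GATHead(nn.Module):
        # One graph attention head over a dense adjacency matrix.
        def __init__(self, in_dim, out_dim):
            super().__init__()
            self.W = nn.Linear(in_dim, out_dim, bias=False)
            self.a = nn.Linear(2 * out_dim, 1, bias=False)

        def forward(self, A, H):
            Wh = self.W(H)                                              # (N, out_dim)
            N = Wh.size(0)
            # e_ij = LeakyReLU(a^T [Wh_i || Wh_j]) for every ordered pair (i, j)
            pairs = torch.cat([Wh.repeat_interleave(N, 0), Wh.repeat(N, 1)], dim=1)
            e = F.leaky_relu(self.a(pairs)).view(N, N)
            e = e.masked_fill(A == 0, float('-inf'))                    # mask non-neighbours
            alpha = torch.softmax(e, dim=1)                             # attention coefficients
            return F.elu(alpha @ Wh)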


Citation

Please use the following entry for citing the blog.

@misc{graph_nets,
  author = {A. Dagar and A. Pant and S. Gupta and S. Chandel},
  title = {graph_nets},
  year = {2020},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/dsgiitr/graph_nets}},
}

graph_nets's People

Contributors

ajitpant, anirudhdagar, bytesamurai, gupta1912, shubhamgit1


graph_nets's Issues

Problems about GraphSAGE

In the GraphSAGE part, only one-hop neighbors are used in enc1 and enc2.
Maybe using two-hop neighbors in enc1 and one-hop neighbors in enc2 would be more appropriate.
Just a small suggestion.

ChebNet dataset

The code references a datasets folder (screenshot omitted from the original issue).
How can I get this datasets folder?

Question: GCN PyTorch implementation

Thanks a lot for the code. Found it via your blog post about GCNs.

I see Kipf used the symmetric Laplacian normalisation, as you also describe here: https://dsgiitr.com/blogs/gcn/

But I don't understand why the diagonal degree matrix is produced using A instead of A_hat:

        self.A_hat = A+torch.eye(A.size(0))
        self.D     = torch.diag(torch.sum(A,1))  #<< should this not be self.A_hat?
        self.D     = self.D.inverse().sqrt()
