This repo is a supplement to our blog series Explained: Graph Representation Learning. The following major papers and their corresponding blogs are covered in the series, and we plan to add blogs on a few other significant works in the field.
Clone the git repository:
git clone https://github.com/dsgiitr/graph_nets.git
Python 3 with PyTorch 1.3.0 is the primary requirement. The `requirements.txt`
file lists the other dependencies. To install all the requirements, run the following:
pip install -r requirements.txt
DeepWalk is an unsupervised online learning approach inspired by word2vec in NLP, but here the goal is to generate node embeddings rather than word embeddings.
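For intuition, here is a minimal DeepWalk-style sketch (not the repo's implementation): it samples truncated random walks with `networkx` and feeds them as "sentences" to gensim's skip-gram `Word2Vec` (gensim ≥ 4 parameter names assumed).

```python
# Minimal DeepWalk-style sketch: random walks treated as sentences for skip-gram.
import random
import networkx as nx
from gensim.models import Word2Vec

def random_walk(G, start, walk_length=10):
    walk = [start]
    for _ in range(walk_length - 1):
        neighbors = list(G.neighbors(walk[-1]))
        if not neighbors:
            break
        walk.append(random.choice(neighbors))
    return [str(n) for n in walk]  # word2vec expects string "tokens"

G = nx.karate_club_graph()
# 10 walks per node over the toy graph.
walks = [random_walk(G, n) for _ in range(10) for n in G.nodes()]

# Skip-gram (sg=1) over the walks; the embedding of a node is model.wv[str(node)].
model = Word2Vec(walks, vector_size=64, window=5, sg=1, min_count=0)
print(model.wv[str(0)].shape)  # (64,)
```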
GCNs draw on the idea of Convolutional Neural Networks, re-defining them for the non-Euclidean graph domain. They are convolutional because filter parameters are typically shared over all locations in the graph, unlike earlier graph neural network formulations.
- GCN Blog
- Jupyter Notebook
- Code
- Paper -> Semi-Supervised Classification with Graph Convolutional Networks
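To illustrate the propagation rule from the paper, below is a minimal single GCN layer in PyTorch, using a dense adjacency matrix with added self-loops and symmetric normalization. It is a simplified sketch, not the code from the notebook above.

```python
# One GCN layer: H' = ReLU(D^-1/2 (A + I) D^-1/2 H W), dense version for clarity.
import torch
import torch.nn as nn

class GCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, X, A):
        # Add self-loops, then symmetrically normalize the adjacency matrix.
        A_hat = A + torch.eye(A.size(0))
        deg = A_hat.sum(dim=1)
        D_inv_sqrt = torch.diag(deg.pow(-0.5))
        A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt
        return torch.relu(A_norm @ self.linear(X))

# Toy usage: 4 nodes, 3 input features, 2 output features.
A = torch.tensor([[0., 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
X = torch.randn(4, 3)
print(GCNLayer(3, 2)(X, A).shape)  # torch.Size([4, 2])
```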
Previous approaches are transductive and do not naturally generalize to unseen nodes. GraphSAGE is an inductive framework that leverages node feature information to efficiently generate embeddings for previously unseen nodes.
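To make the inductive idea concrete, here is a minimal sketch of a GraphSAGE-style mean aggregator in PyTorch; the layer name and toy inputs are illustrative, not the repo's implementation. Because it only needs a node's features and (sampled) neighbor features, it can embed nodes unseen at training time.

```python
# GraphSAGE mean aggregator: concat(own features, mean of neighbor features) -> linear.
import torch
import torch.nn as nn

class SageMeanLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.linear = nn.Linear(2 * in_dim, out_dim)

    def forward(self, X, neighbor_lists):
        # Mean-aggregate each node's (sampled) neighborhood.
        agg = torch.stack([
            X[nbrs].mean(dim=0) if len(nbrs) > 0 else torch.zeros(X.size(1))
            for nbrs in neighbor_lists
        ])
        h = torch.relu(self.linear(torch.cat([X, agg], dim=1)))
        return h / h.norm(dim=1, keepdim=True).clamp(min=1e-8)  # L2 normalize

# Toy usage: 3 nodes with neighbor index lists.
X = torch.randn(3, 4)
print(SageMeanLayer(4, 2)(X, [[1, 2], [0], [0, 1]]).shape)  # torch.Size([3, 2])
```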
ChebNet is a formulation of CNNs in the context of spectral graph theory.
- ChebNet Blog
- Jupyter Notebook
- Code
- Paper -> Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering
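For reference, here is a minimal sketch of K-th order Chebyshev filtering in PyTorch, assuming a precomputed scaled Laplacian (L_scaled = 2L/lambda_max - I) and dense tensors for readability; it simplifies the version discussed in the notebook above.

```python
# ChebNet-style filtering via the Chebyshev recurrence T_k = 2 L~ T_{k-1} - T_{k-2}.
import torch
import torch.nn as nn

class ChebConv(nn.Module):
    def __init__(self, in_dim, out_dim, K):
        super().__init__()
        # One weight matrix per Chebyshev order T_0 ... T_{K-1}.
        self.weights = nn.Parameter(torch.randn(K, in_dim, out_dim) * 0.1)

    def forward(self, X, L_scaled):
        # T_0 X = X, T_1 X = L~ X, then the recurrence for higher orders.
        Tx = [X, L_scaled @ X]
        for _ in range(2, self.weights.size(0)):
            Tx.append(2 * L_scaled @ Tx[-1] - Tx[-2])
        return sum(t @ w for t, w in zip(Tx, self.weights))

# Toy usage: a symmetric stand-in for the scaled Laplacian.
n, X = 5, torch.randn(5, 3)
L_scaled = torch.randn(n, n)
L_scaled = (L_scaled + L_scaled.t()) / 2
print(ChebConv(3, 2, K=3)(X, L_scaled).shape)  # torch.Size([5, 2])
```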
GAT allows nodes to attend over their neighborhoods' features, implicitly assigning different weights to different nodes in a neighborhood, without requiring any costly matrix operation or prior knowledge of the graph structure.
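Below is a minimal single-head GAT-layer sketch in PyTorch that computes attention coefficients only over existing edges via a dense adjacency mask; it is illustrative and not the repo's implementation.

```python
# Single-head GAT layer: e_ij = LeakyReLU(a^T [W h_i || W h_j]), softmax over neighbors.
import torch
import torch.nn as nn
import torch.nn.functional as F

class GATLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.W = nn.Linear(in_dim, out_dim, bias=False)
        self.a = nn.Parameter(torch.randn(2 * out_dim) * 0.1)

    def forward(self, X, A):
        H = self.W(X)                                   # (N, F')
        N = H.size(0)
        # Pairwise attention logits for every (i, j).
        pairs = torch.cat([H.unsqueeze(1).expand(N, N, -1),
                           H.unsqueeze(0).expand(N, N, -1)], dim=-1)
        e = F.leaky_relu(pairs @ self.a, 0.2)
        e = e.masked_fill(A == 0, float('-inf'))        # attend only over neighbors
        alpha = torch.softmax(e, dim=1)                 # attention coefficients
        return torch.elu(alpha @ H)

# Toy usage (self-loops included so every node has at least one neighbor).
A = torch.eye(4) + torch.tensor([[0., 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]])
print(GATLayer(3, 2)(torch.randn(4, 3), A).shape)  # torch.Size([4, 2])
```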
Please use the following BibTeX entry to cite the blog.
@misc{graph_nets,
author = {A. Dagar and A. Pant and S. Gupta and S. Chandel},
title = {graph_nets},
year = {2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/dsgiitr/graph_nets}},
}