
TensorFlow or PyTorch, both!

GraphGallery

GraphGallery is a gallery for benchmarking Graph Neural Networks (GNNs) and Graph Adversarial Learning, with TensorFlow 2.x and PyTorch backends. In addition, PyTorch Geometric (PyG) and Deep Graph Library (DGL) backends are now available in GraphGallery.

💨 NEWS

  • We have removed the TensorFlow dependency and now use PyTorch as the default backend for GraphGallery.

🚀 Installation

Please make sure you have installed PyTorch. TensorFlow, PyTorch Geometric (PyG), and Deep Graph Library (DGL) are alternative choices.

```bash
# Maybe outdated
pip install -U graphgallery
```

or

```bash
# Recommended
git clone https://github.com/EdisonLeeeee/GraphGallery.git && cd GraphGallery
pip install -e . --verbose
```

where `-e` means "editable" mode, so you don't have to reinstall every time you make changes.

🤖 Implementations

In detail, the following methods are currently implemented:

Node Classification

| Method | Author | Paper | PyTorch | TensorFlow | PyG | DGL |
| ------ | ------ | ----- | ------- | ---------- | --- | --- |
| ChebyNet | Michaël Defferrard et al. | Convolutional Neural Networks on Graphs with Fast Localized Spectral Filtering (NeurIPS'16) | :heavy_check_mark: | :heavy_check_mark: | | |
| GCN | Thomas N. Kipf et al. | Semi-Supervised Classification with Graph Convolutional Networks (ICLR'17) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| GraphSAGE | William L. Hamilton et al. | Inductive Representation Learning on Large Graphs (NeurIPS'17) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | |
| FastGCN | Jie Chen et al. | FastGCN: Fast Learning with Graph Convolutional Networks via Importance Sampling (ICLR'18) | :heavy_check_mark: | :heavy_check_mark: | | |
| LGCN | Hongyang Gao et al. | Large-Scale Learnable Graph Convolutional Networks (KDD'18) | | :heavy_check_mark: | | |
| GAT | Petar Veličković et al. | Graph Attention Networks (ICLR'18) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| SGC | Felix Wu et al. | Simplifying Graph Convolutional Networks (ICLR'19) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| GWNN | Bingbing Xu et al. | Graph Wavelet Neural Network (ICLR'19) | :heavy_check_mark: | :heavy_check_mark: | | |
| GMNN | Meng Qu et al. | GMNN: Graph Markov Neural Networks (ICML'19) | | :heavy_check_mark: | | |
| ClusterGCN | Wei-Lin Chiang et al. | Cluster-GCN: An Efficient Algorithm for Training Deep and Large Graph Convolutional Networks (KDD'19) | :heavy_check_mark: | :heavy_check_mark: | | |
| DAGNN | Meng Liu et al. | Towards Deeper Graph Neural Networks (KDD'20) | :heavy_check_mark: | :heavy_check_mark: | | |
| GDC | Johannes Klicpera et al. | Diffusion Improves Graph Learning (NeurIPS'19) | :heavy_check_mark: | :heavy_check_mark: | | |
| TAGCN | Jian Du et al. | Topology Adaptive Graph Convolutional Networks (arXiv'17) | :heavy_check_mark: | :heavy_check_mark: | | |
| APPNP, PPNP | Johannes Klicpera et al. | Predict then Propagate: Graph Neural Networks meet Personalized PageRank (ICLR'19) | :heavy_check_mark: | :heavy_check_mark: | | |
| PDN | Benedek Rozemberczki et al. | Pathfinder Discovery Networks for Neural Message Passing (ICLR'21) | | | :heavy_check_mark: | |
| SSGC | Zhu et al. | Simple Spectral Graph Convolution (ICLR'21) | :heavy_check_mark: | :heavy_check_mark: | | |
| AGNN | Kiran K. Thekumparampil et al. | Attention-based Graph Neural Network for Semi-supervised Learning (ICLR'18 OpenReview) | :heavy_check_mark: | :heavy_check_mark: | | |
| ARMA | Bianchi et al. | Graph Neural Networks with Convolutional ARMA Filters (arXiv'19) | | :heavy_check_mark: | | |
| Graph-MLP | Yang Hu et al. | Graph-MLP: Node Classification without Message Passing in Graph (arXiv'21) | :heavy_check_mark: | | | |
| LGC, EGC, hLGC | Luca Pasa et al. | Simple Graph Convolutional Networks (arXiv'21) | | | | :heavy_check_mark: |
| GRAND | Wenzheng Feng et al. | Graph Random Neural Network for Semi-Supervised Learning on Graphs (NeurIPS'20) | | | | :heavy_check_mark: |
| AlaGCN, AlaGAT | Yiqing Xie et al. | When Do GNNs Work: Understanding and Improving Neighborhood Aggregation (IJCAI'20) | | | | :heavy_check_mark: |
| JKNet | Keyulu Xu et al. | Representation Learning on Graphs with Jumping Knowledge Networks (ICML'18) | | | | :heavy_check_mark: |
| MixHop | Sami Abu-El-Haija et al. | MixHop: Higher-Order Graph Convolutional Architectures via Sparsified Neighborhood Mixing (ICML'19) | | | | :heavy_check_mark: |
| DropEdge | Yu Rong et al. | DropEdge: Towards Deep Graph Convolutional Networks on Node Classification (ICLR'20) | | | :heavy_check_mark: | |
| Node2Grids | Dalong Yang et al. | Node2Grids: A Cost-Efficient Uncoupled Training Framework for Large-Scale Graph Learning (CIKM'21) | :heavy_check_mark: | | | |
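Several of the simpler methods above share one core idea: pre-propagate node features with the normalized adjacency. SGC, for example, reduces a K-layer GCN to a single linear model over X' = S^K X, where S is the symmetrically normalized adjacency with self-loops. A minimal NumPy sketch of that propagation step (illustrative only, not GraphGallery's implementation):

```python
import numpy as np

def sgc_features(adj, features, k=2):
    """Propagate features k times with the symmetrically normalized
    adjacency (with self-loops), as in SGC: X' = S^k X."""
    a_hat = adj + np.eye(adj.shape[0])          # add self-loops
    d_inv_sqrt = 1.0 / np.sqrt(a_hat.sum(1))    # D^{-1/2}
    s = a_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]
    x = features
    for _ in range(k):
        x = s @ x                               # one smoothing step
    return x

# Tiny example: path graph 0 - 1 - 2 with scalar features
adj = np.array([[0., 1., 0.],
                [1., 0., 1.],
                [0., 1., 0.]])
x = np.array([[1.0], [0.0], [0.0]])
out = sgc_features(adj, x, k=2)
print(out.round(4))
```

After two propagation steps, node 0's mass has diffused to its 2-hop neighborhood; a plain logistic regression on such smoothed features is the whole SGC model.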

Defense models (for Graph Adversarial Learning)

Robust Optimization

| Method | Author | Paper | PyTorch | TensorFlow | PyG | DGL |
| ------ | ------ | ----- | ------- | ---------- | --- | --- |
| RobustGCN | Dingyuan Zhu et al. | Robust Graph Convolutional Networks Against Adversarial Attacks (KDD'19) | :heavy_check_mark: | :heavy_check_mark: | | |
| SBVAT, OBVAT | Zhijie Deng et al. | Batch Virtual Adversarial Training for Graph Convolutional Networks (ICML'19) | :heavy_check_mark: | :heavy_check_mark: | | |
| SimPGCN | Wei Jin et al. | Node Similarity Preserving Graph Convolutional Networks (WSDM'21) | :heavy_check_mark: | | | |
| GCN-VAT, GraphVAT | Fuli Feng et al. | Graph Adversarial Training: Dynamically Regularizing Based on Graph Structure (TKDE'19) | :heavy_check_mark: | | | |
| LATGCN | Hongwei Jin et al. | Latent Adversarial Training of Graph Convolution Networks (ICML'19 workshop) | :heavy_check_mark: | | | |
| DGAT | Weibo Hu et al. | Robust Graph Convolutional Networks with Directional Graph Adversarial Training (Applied Intelligence'19) | :heavy_check_mark: | | | |
| MedianGCN, TrimmedGCN | Liang Chen et al. | Understanding Structural Vulnerability in Graph Convolutional Networks | :heavy_check_mark: | | :heavy_check_mark: | |
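For intuition, MedianGCN and TrimmedGCN swap the usual mean neighbor aggregation for robust statistics, so a handful of adversarially inserted neighbors cannot shift a node's aggregate arbitrarily. A concept sketch in plain NumPy (not GraphGallery's code; the adjacency-list format here is an assumption for illustration):

```python
import numpy as np

def median_aggregate(features, neighbors):
    """Element-wise median over each node's neighborhood (including
    the node itself): a robust alternative to mean aggregation."""
    out = np.empty_like(features)
    for v, nbrs in neighbors.items():
        group = features[[v] + list(nbrs)]
        out[v] = np.median(group, axis=0)  # outlier neighbors have bounded effect
    return out

# Node 2 plays an "attacker" with an extreme feature value
x = np.array([[0.0], [1.0], [100.0]])
nbrs = {0: [1, 2], 1: [0], 2: [0]}
agg = median_aggregate(x, nbrs)
print(agg)
```

With mean aggregation, node 0 would be dragged to about 33.7 by the attacker; the median keeps it at 1.0, which is the robustness property these defenses exploit.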

Graph Purification

The graph purification methods are universal and work with any model; simply specify `graph_transform="purification_method"`. Hence we only give examples of `GCN` with purification methods here; other models work the same way.

| Method | Author | Paper | PyTorch | TensorFlow | PyG | DGL |
| ------ | ------ | ----- | ------- | ---------- | --- | --- |
| GCN-Jaccard | Huijun Wu et al. | Adversarial Examples on Graph Data: Deep Insights into Attack and Defense (IJCAI'19) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| GCN-SVD | Negin Entezari et al. | All You Need Is Low (Rank): Defending Against Adversarial Attacks on Graphs (WSDM'20) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
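To make the purification idea concrete: GCN-Jaccard drops edges whose endpoints have low Jaccard similarity between their (binary) attribute vectors, on the assumption that adversarial edges tend to connect dissimilar nodes. A self-contained sketch of that idea (function and threshold are illustrative, not GraphGallery's API):

```python
import numpy as np

def jaccard_purify(adj, features, threshold=0.1):
    """Remove edges between nodes whose binary attribute vectors
    have Jaccard similarity below `threshold`."""
    adj = adj.copy()
    rows, cols = np.nonzero(np.triu(adj))       # each undirected edge once
    for i, j in zip(rows, cols):
        inter = np.minimum(features[i], features[j]).sum()
        union = np.maximum(features[i], features[j]).sum()
        sim = inter / union if union > 0 else 0.0
        if sim < threshold:
            adj[i, j] = adj[j, i] = 0           # prune dissimilar edge
    return adj

adj = np.array([[0, 1, 1],
                [1, 0, 0],
                [1, 0, 0]])
feat = np.array([[1, 1, 0],
                 [1, 0, 0],    # shares an attribute with node 0 (Jaccard 0.5)
                 [0, 0, 1]])   # disjoint from node 0 (Jaccard 0)
purified = jaccard_purify(adj, feat)
print(purified)
```

Edge (0, 2) is pruned while (0, 1) survives; any downstream model then trains on the cleaned adjacency, which is why purification composes with every backend in the table above.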

Link Prediction

| Method | Author | Paper | PyTorch | TensorFlow | PyG | DGL |
| ------ | ------ | ----- | ------- | ---------- | --- | --- |
| GAE, VGAE | Thomas N. Kipf et al. | Variational Graph Auto-Encoders (NeurIPS'16) | :heavy_check_mark: | | :heavy_check_mark: | |

Node Embedding

The following methods are framework-agnostic.

| Method | Author | Paper | PyTorch | TensorFlow | PyG | DGL |
| ------ | ------ | ----- | ------- | ---------- | --- | --- |
| DeepWalk | Bryan Perozzi et al. | DeepWalk: Online Learning of Social Representations (KDD'14) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Node2vec | Aditya Grover and Jure Leskovec | node2vec: Scalable Feature Learning for Networks (KDD'16) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Node2vec+ | Renming Liu et al. | Accurately Modeling Biased Random Walks on Weighted Graphs Using Node2vec+ | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| BANE | Hong Yang et al. | Binarized Attributed Network Embedding (ICDM'18) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
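These walk-based embedding methods are two-stage, which is why they are framework-agnostic: first sample random walks over the graph, then feed the walks as "sentences" to a word2vec-style model. A minimal walk sampler in the spirit of DeepWalk (a sketch, not GraphGallery's implementation):

```python
import random

def random_walks(neighbors, walk_length=5, walks_per_node=2, seed=0):
    """Sample fixed-length uniform random walks from every node;
    the resulting walks are the corpus for a word2vec-style model."""
    rng = random.Random(seed)
    walks = []
    for start in neighbors:
        for _ in range(walks_per_node):
            walk = [start]
            while len(walk) < walk_length:
                nbrs = neighbors[walk[-1]]
                if not nbrs:
                    break                 # dead end: stop this walk early
                walk.append(rng.choice(nbrs))
            walks.append(walk)
    return walks

# Triangle graph as an adjacency list
graph = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
walks = random_walks(graph)
print(walks[0])
```

Node2vec differs only in the sampling step, biasing the next-node choice with return and in-out parameters; the downstream embedding model is unchanged.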

⚡ Quick Start on GNNs

Datasets

For more details, please refer to GraphData.

Example of GCN (Node Classification Task)

It takes just a few lines of code.

```python
from graphgallery.gallery.nodeclas import GCN

trainer = GCN()
trainer.setup_graph(graph)
trainer.build()
history = trainer.fit(train_nodes, val_nodes)
results = trainer.evaluate(test_nodes)
print(f'Test loss {results.loss:.5}, Test accuracy {results.accuracy:.2%}')
```
Other models in the gallery work the same way.

If you have any trouble, you can simply run `trainer.help()` for more messages.

Other Backends

```python
>>> import graphgallery
# Default: PyTorch backend
>>> graphgallery.backend()
PyTorch 1.9.0+cu111 Backend
# Switch to TensorFlow backend
>>> graphgallery.set_backend("tf")
# Switch to PyTorch backend
>>> graphgallery.set_backend("th")
# Switch to PyTorch Geometric backend
>>> graphgallery.set_backend("pyg")
# Switch to DGL PyTorch backend
>>> graphgallery.set_backend("dgl")
```

And your code doesn't even need to change.

❓ How to add your datasets

This is motivated by gnn-benchmark.

```python
from graphgallery.data import Graph

# Load the adjacency matrix A, attribute matrix X and labels vector y
# A - scipy.sparse.csr_matrix of shape [num_nodes, num_nodes]
# X - scipy.sparse.csr_matrix or np.ndarray of shape [num_nodes, num_attrs]
# y - np.ndarray of shape [num_nodes]
mydataset = Graph(adj_matrix=A, node_attr=X, node_label=y)

# save dataset
mydataset.to_npz('path/to/mydataset.npz')

# load dataset
mydataset = Graph.from_npz('path/to/mydataset.npz')
```
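Under the hood, the `.npz` format is just a bundle of named NumPy arrays, and a sparse adjacency can be stored via its CSR components (`data`, `indices`, `indptr`). A rough sketch of that idea with plain NumPy (the field names here are illustrative, not GraphGallery's exact schema):

```python
import os
import tempfile
import numpy as np

# Toy CSR components of a 3-node adjacency matrix with one edge 0 <-> 1
adj_data = np.array([1.0, 1.0])
adj_indices = np.array([1, 0])
adj_indptr = np.array([0, 1, 2, 2])
node_attr = np.arange(12, dtype=float).reshape(3, 4)
node_label = np.array([0, 1, 0])

path = os.path.join(tempfile.gettempdir(), "mydataset.npz")

# Save everything into one compressed .npz archive
np.savez_compressed(path,
                    adj_data=adj_data, adj_indices=adj_indices,
                    adj_indptr=adj_indptr, node_attr=node_attr,
                    node_label=node_label)

# Load it back; each array comes out under its saved key
loaded = np.load(path)
print(loaded.files)
```

A single compressed file per dataset keeps sparse structure, dense attributes, and labels together, which is the convenience `Graph.to_npz` / `Graph.from_npz` provide.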

⭐ Road Map

  • [x] Add PyTorch trainers support
  • [x] Add other frameworks (PyG and DGL) support
  • [x] Set TensorFlow as an optional dependency when using GraphGallery
  • [ ] Add more GNN trainers (TF and Torch backend)
  • [ ] Support for more tasks, e.g., graph classification and link prediction
  • [x] Support for more types of graphs, e.g., Heterogeneous graph
  • [ ] Add Docstrings and Documentation (Building)
  • [ ] Comprehensive tutorials

❓ FAQ

Please feel free to contact me if you have any trouble.

😘 Acknowledgement

This project is motivated by PyTorch Geometric, TensorFlow Geometric, StellarGraph and DGL, etc., as well as the original implementations by the authors; thanks for their excellent work!

Cite

Please cite our paper (and the respective papers of the methods used) if you use this code in your own work:

```bibtex
@inproceedings{li2021graphgallery,
  author    = {Jintang Li and Kun Xu and Liang Chen and Zibin Zheng and Xiao Liu},
  booktitle = {2021 IEEE/ACM 43rd International Conference on Software Engineering: Companion Proceedings (ICSE-Companion)},
  title     = {GraphGallery: A Platform for Fast Benchmarking and Easy Development of Graph Neural Networks Based Intelligent Software},
  year      = {2021},
  pages     = {13-16},
  publisher = {IEEE Computer Society},
  address   = {Los Alamitos, CA, USA},
}
```
