[Roadmap] GNN Explainability Support 🚀 #5520

@RexYing

Description

🚀 The feature, motivation and pitch

Explainability is an important component that lets users probe model behavior, understand feature and structural importance, obtain new knowledge about the underlying task, and extract insights from a GNN model.

Over the years, many papers on GNN explainability have been published (some of which are already integrated into PyG), and many surveys, benchmarks, and evaluation frameworks have been proposed, such as the taxonomic survey and the GraphFramEx multi-aspect evaluation framework. This recent progress raises new challenges in terms of methods, evaluation, and visualization, outlined below.

  • A high-level framework to support explainability methods in PyG. For post-hoc explanations, we can generally classify existing techniques into gradient-based techniques (such as Saliency and Grad-CAM) and perturbation-based techniques (such as GNNExplainer and PGExplainer). We can provide a unified interface that, given a model and the instance or group of instances we want to explain, invokes any of these explanation methods (see the interface sketch after this list).
  • Move existing implementations to the new unified interface
  • Provide support for synthetic datasets commonly used in explainability papers. GNN Explainability Dataset Generation #5817
  • Provide basic evaluation metrics, such as fidelity+/- measures (taxonomic survey, GraphFramEx), to evaluate the quality of explanations (see the fidelity sketch after this list). Explainability Evaluation Metrics #5628
  • Support different explanation evaluation settings, based on hard vs. soft masks, explanation of the phenomenon vs. the model, etc. (GraphFramEx). GNN Explanation Settings #5629
  • Provide support for different explainability methods (SubgraphX, PGExplainer, ...)
  • Explainability support for heterogeneous graphs. There are simple ways to adapt current explainability methods to heterogeneous graphs. For example, we can use gradient-based methods by taking gradients with respect to the adjacencies (edge_index) of each message type. We can also adapt GNNExplainer by creating one mask per edge_index of each message type (see the per-edge-type mask sketch after this list). Explainability for Heterogeneous Graphs #5630
  • Explainability support for other types of graph modalities supported by PyG, such as bipartite graphs.
  • Improve visualization utility functions for explanations, e.g., feature importance maps or visualizations of the explanation subgraph. Support parameters that control the visualization output, such as the size of the explanation and weighted edge importance (see the drawing sketch after this list). Visualization of GNN Explanations #5631
  • Notebook Demo, Example script
  • Documentation, Tutorials
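
A minimal sketch of what the unified interface could look like. All names here (`Explainer`, `Saliency`, `explain_node`) are illustrative assumptions rather than a finalized PyG API; the point is that every algorithm sits behind the same call signature and returns importance masks in a common format:

```python
import torch

class Saliency:
    """Gradient-based attribution: importance = |d(prediction) / d(input)|."""
    def explain(self, model, node_idx, x, edge_index):
        x = x.clone().requires_grad_(True)
        out = model(x, edge_index)
        # Explain the predicted class of the target node.
        out[node_idx, out[node_idx].argmax()].backward()
        return x.grad.abs()  # per-node feature importance mask

class Explainer:
    """Hypothetical single entry point dispatching to any explanation algorithm."""
    def __init__(self, model, algorithm):
        self.model, self.algorithm = model, algorithm

    def explain_node(self, node_idx, x, edge_index):
        return self.algorithm.explain(self.model, node_idx, x, edge_index)

# Usage: explainer = Explainer(model, Saliency())
#        mask = explainer.explain_node(10, data.x, data.edge_index)
```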
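For the evaluation metrics item, a hedged sketch of fidelity+/- in their probability form, following the GraphFramEx definitions: fidelity+ measures how much the prediction drops when the explanation is removed (necessity), while fidelity- measures how much is lost when keeping only the explanation (sufficiency). The function name and the 0.5 thresholding of the soft mask are assumptions:

```python
import torch

def fidelity(model, x, edge_index, edge_mask, node_idx, target, threshold=0.5):
    """Hypothetical fidelity+/- sketch (probability version).
    edge_mask holds a soft importance score per edge in edge_index."""
    prob = lambda ei: model(x, ei).softmax(dim=-1)[node_idx, target]

    full = prob(edge_index)           # prediction on the full graph
    keep = edge_mask > threshold      # hard mask from soft scores
    expl = prob(edge_index[:, keep])  # explanation subgraph only
    rest = prob(edge_index[:, ~keep]) # explanation removed

    fid_plus = (full - rest).item()   # high => explanation is necessary
    fid_minus = (full - expl).item()  # low  => explanation is sufficient
    return fid_plus, fid_minus
```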
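For the heterogeneous-graph adaptation, one possible realization of the "one mask per message type" idea; the helper name and initialization scheme are assumptions, mirroring how GNNExplainer parameterizes a single mask on a homogeneous edge_index:

```python
import torch

def init_hetero_edge_masks(edge_index_dict):
    """One learnable edge mask per message type, e.g.
    ('paper', 'cites', 'paper') -> Parameter of shape [num_edges]."""
    return torch.nn.ParameterDict({
        '__'.join(edge_type): torch.nn.Parameter(0.1 * torch.randn(edge_index.size(1)))
        for edge_type, edge_index in edge_index_dict.items()
    })

# During optimization, sigmoid(mask) would be applied as edge weights for the
# corresponding relation, with the masks trained under the same
# mutual-information objective GNNExplainer uses on homogeneous graphs.
```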
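And a small sketch of what the visualization utility could look like, using networkx and matplotlib. The function name and its top_k parameter are illustrative: it keeps only the most important edges (size of the explanation) and scales line widths by edge importance:

```python
import matplotlib.pyplot as plt
import networkx as nx
import torch

def draw_explanation(edge_index, edge_mask, top_k=10):
    """Draw the top-k most important edges, widths scaled by importance."""
    idx = edge_mask.topk(min(top_k, edge_mask.numel())).indices
    G = nx.Graph()
    for e in idx.tolist():
        u, v = edge_index[0, e].item(), edge_index[1, e].item()
        G.add_edge(u, v, importance=edge_mask[e].item())
    widths = [3.0 * d['importance'] / edge_mask.max().item()
              for _, _, d in G.edges(data=True)]
    nx.draw(G, width=widths, with_labels=True)
    plt.show()
```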

Alternatives

No response

Additional context

There are other explainability functionalities that are still at a relatively early stage, such as concept- or class-wise explanations and counterfactual explanations. There are ongoing research projects that could potentially be integrated in the future.
