Graph kernel methods have traditionally been the most common technique for graph classification, but their performance is limited by their reliance on hand-crafted combinatorial features. In recent years, graph neural networks (GNNs) built on high-performance message-passing neural networks (MPNNs) have become an increasingly popular choice for graph classification. MPNN frameworks, however, have their own drawback: limited expressivity.
In the new paper KerGNNs: Interpretable Graph Neural Networks with Graph Kernels, a research team from Yale University and IBM proposes Kernel Graph Neural Networks (KerGNNs), which integrate graph kernels and the GNN message-passing process in a single framework. KerGNNs achieve performance comparable to state-of-the-art methods while significantly improving model interpretability compared to conventional GNNs.

The team summarizes their work’s main contributions as:
- We use neighbourhood subgraph topology combined with kernel methods for GNN neighbourhood aggregation, and prove that the expressivity of this approach is not limited by the 1-WL algorithm.
- We provide a new perspective to generalize CNNs into the graph domain, by showing that both 2-D convolution and graph neighbourhood aggregation can be interpreted using the language of kernel methods.
- Besides visualizing the output graphs of the model, KerGNNs can further reveal the local structure of the input graphs by visualizing the topology of trained graph filters, which significantly improves model interpretability and transparency compared with standard MPNNs.
The main idea behind KerGNNs is to combine graph kernels with the GNN message-passing process to exploit the strengths of both methods. Graph kernels have been widely used to assess the similarity between graphs, enabling classification and regression with graph-structured data. Kernel functions can implicitly operate in a high-dimensional feature space while only computing kernel values on the low-dimensional input data, which is far more efficient than computing in the high-dimensional space directly. Graph kernels’ ability to capture high-dimensional information in large graphs is nonetheless restricted by their hand-crafted features and fixed feature construction scheme.
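To make the kernel intuition concrete, here is a minimal NumPy sketch of a classic p-step random walk graph kernel, which counts (discounted) walks common to two graphs via their direct product graph. This is a textbook kernel chosen purely for illustration; the paper's own kernel choices may differ, and all names here are ours.

```python
import numpy as np

def random_walk_kernel(A1, A2, p=3, lam=0.1):
    # Direct (tensor) product graph: its nodes are pairs (u, v), and an
    # edge exists iff u~u' in G1 and v~v' in G2, so walks in the product
    # graph correspond one-to-one to common walks in G1 and G2.
    Ax = np.kron(A1, A2)
    k, walk = 0.0, np.eye(Ax.shape[0])
    for step in range(1, p + 1):
        walk = walk @ Ax                  # entry sums count walks of this length
        k += (lam ** step) * walk.sum()   # discount longer walks
    return k

# Example: similarity between a triangle and a 3-node path
A_tri = np.array([[0, 1, 1], [1, 0, 1], [1, 1, 0]])
A_path = np.array([[0, 1, 0], [1, 0, 1], [0, 1, 0]])
print(random_walk_kernel(A_tri, A_path))
```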
MPNNs, meanwhile, are neural models that capture the dependencies within graphs via message-passing between their nodes, operating in two stages: neighbourhood aggregation and graph-level readout. Although MPNNs have advanced the state-of-the-art on graph-related tasks, they have theoretical limits on expressivity: studies have shown they cannot exceed the power of the Weisfeiler-Lehman (WL) graph isomorphism test.
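For reference, the two stages can be sketched in a few lines of NumPy. This is a generic sum-aggregation layer with mean-pooling readout (W is a learned weight matrix), not the paper's architecture:

```python
import numpy as np

def mpnn_layer(A, H, W):
    # Stage 1 - neighbourhood aggregation: each node sums its neighbours'
    # feature vectors (plus its own via a self-loop), then a shared linear
    # transform and ReLU produce the updated node embeddings.
    A_hat = A + np.eye(A.shape[0])
    return np.maximum(A_hat @ H @ W, 0.0)

def readout(H):
    # Stage 2 - graph-level readout: mean-pool all node embeddings into a
    # single graph-level representation used for classification.
    return H.mean(axis=0)
```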

The researchers propose a subgraph-based node aggregation algorithm to combine the strengths of these two approaches. For neighbourhood aggregation, they apply graph kernels to the subgraph induced by each node's neighbours, so that expressivity is not limited by the 1-WL isomorphism test. To boost adaptability, they make the graph kernels' feature construction scheme trainable within the standard GNN training framework.
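A rough sketch of this subgraph-based aggregation is shown below. The helper names are ours: `kernel` stands for any differentiable graph kernel comparing two attributed graphs, and `filters` holds the trainable hidden graphs (each an adjacency matrix plus node attributes).

```python
import numpy as np

def ego_subgraph(A, X, v):
    # Subgraph induced by node v and its 1-hop neighbours, together with
    # the corresponding rows of the node-attribute matrix X.
    idx = np.concatenate(([v], np.flatnonzero(A[v])))
    return A[np.ix_(idx, idx)], X[idx]

def kergnn_layer(A, X, filters, kernel):
    # Each trainable graph filter acts like a CNN channel: node v's new
    # j-th feature is the graph-kernel similarity between v's induced
    # neighbourhood subgraph and the j-th hidden graph.
    H = np.zeros((A.shape[0], len(filters)))
    for v in range(A.shape[0]):
        A_sub, X_sub = ego_subgraph(A, X, v)
        for j, (A_f, X_f) in enumerate(filters):
            H[v, j] = kernel(A_sub, X_sub, A_f, X_f)
    return H
```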
The proposed KerGNN model generalizes the approach introduced in Random Walk Graph Neural Networks (Nikolentzos and Vazirgiannis, 2020), applying hidden graphs to extract local structural information rather than processing the entire graph, and yielding a multi-layer structure with better graph classification performance. To improve interpretability, a KerGNN not only visualizes output graphs as GNNs do, it also produces trained hidden graphs as a byproduct of training. These capture structural characteristics common to the whole dataset rather than to one specific graph.
The team notes that KerGNNs generalize convolutional neural networks (CNNs) to the graph domain by replacing the kernel function for vectors with a graph kernel function, providing new insights into the design of GNN architectures. KerGNNs can also generalize most standard MPNN structures by deploying more complex graph filters with multiple nodes and a learnable adjacency matrix, and by using more expressive and efficient graph kernels.
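This correspondence is easy to see in code: in a standard valid-mode 2-D convolution, each output value is just a linear kernel, i.e. a dot product, between an image patch and the filter. The sketch below is purely illustrative; KerGNNs swap this dot product for a graph kernel between a node's neighbourhood subgraph and a trainable graph filter.

```python
import numpy as np

def conv2d_as_kernel(image, filt):
    # Valid-mode 2-D convolution (technically cross-correlation, as in most
    # deep learning libraries): every output pixel is k(patch, filter) =
    # <patch, filter>, a linear kernel evaluated on a local patch.
    kh, kw = filt.shape
    H, W = image.shape
    out = np.zeros((H - kh + 1, W - kw + 1))
    for i in range(H - kh + 1):
        for j in range(W - kw + 1):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * filt)
    return out
```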

The team evaluated KerGNNs on graph and node classification tasks, where they outperformed conventional GNNs bounded by 1-WL expressivity and matched the performance of higher-order GNNs while reducing running time. Moreover, unlike standard GNN variants, KerGNNs' graph filters provide extra information that can help explain model predictions.

Overall, KerGNNs demonstrate performance competitive with state-of-the-art methods while providing improved explainability and transparency via their visualization of graph filters and output graphs, indicating their potential to increase the representation ability of GNNs.
The paper KerGNNs: Interpretable Graph Neural Networks with Graph Kernels is on arXiv.
Author: Hecate He | Editor: Michael Sarazen
