AI Machine Learning & Data Science Research

Google & J.P. Morgan Propose Advanced Bandit Sampling for Multiplex Networks

A team from Google Research and J.P. Morgan AI Research proposes an algorithm for scalable learning on multiplex networks with a large number of layers, improving efficiency over recent popular approaches.

Graph neural networks (GNNs) have gained popularity in the AI research community due to their impressive performance in high-impact applications such as drug discovery and social network analysis. Most existing studies on GNNs, however, have focused on “monoplex” settings (networks with only a single type of connection between entities) rather than on multiplex settings (multiple types of connections between entities), which reflect many real-world scenarios.

In the new paper Bandit Sampling for Multiplex Networks, a team from Google Research and J.P. Morgan AI Research explores the problem of computationally efficient link prediction in the multiplex setting, introducing an algorithm for scalable learning on multiplex networks with a large number of layers. In evaluations, the proposed method is shown to improve efficiency over prior work such as Multiplex Network Embedding (MNE, Zhang et al., 2018) and the DEEPLEX layer-sampling approach (Potluru et al., 2020).

The team summarizes their study’s main contributions as:

  1. A formulation of the multiplex layer sampling problem as an online learning problem with partial information.
  2. An algorithm for adaptive identification and sampling of important layers of a multiplex network for training.
  3. Evaluations and comparisons on real-world data sets that demonstrate the practicality of our approach.

A multiplex network can be viewed as a graph with many layers that share nodes, so that each layer has other layers as its neighbours. The team considers the problem of sampling the relevant layers at each time step of the training process, which is then extended to sampling the neighbours of all layers. The researchers assume that at each time step the relevancy of a neighbouring layer can be quantified by a loss: the smaller the loss, the more relevant that layer is to the embedding of the layer under consideration.
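To make the loss-based notion of relevance concrete, the minimal Python sketch below converts hypothetical per-layer losses into a relevance distribution. The softmax-over-negative-losses weighting is only an illustration of the “smaller loss, more relevant” idea, not a formula from the paper.

```python
import numpy as np

def layer_relevance(losses):
    """Toy illustration: turn per-layer losses into a relevance distribution.

    losses[j] is a hypothetical scalar loss measuring how well neighbouring
    layer j explains the layer under consideration at the current step;
    smaller loss means higher relevance. The weighting used here is only an
    illustration, not the paper's formula.
    """
    losses = np.asarray(losses, dtype=float)
    scores = np.exp(-(losses - losses.min()))  # smaller loss -> larger score
    return scores / scores.sum()               # normalise into a distribution

# Example: the second layer has the smallest loss, so it gets the most mass.
print(layer_relevance([0.9, 0.2, 1.5]))
```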

In this general setting, the problem can be reformulated as: What should the sampling distribution over the neighbouring layers be so that only the relevant layers are considered? This is a classic exploration-exploitation trade-off of the kind that has been well studied in multi-armed bandits and in online learning with partial information.
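As a rough illustration of how a bandit algorithm with partial information handles this trade-off, the sketch below implements a generic EXP3-style sampler over the neighbouring layers of one target layer. It is written for illustration under those assumptions, not as the paper's exact algorithm.

```python
import numpy as np

class Exp3LayerSampler:
    """EXP3-style sampler over the neighbouring layers of one target layer.

    A sketch, not the paper's exact algorithm: each neighbouring layer is an
    arm; at every step one layer is sampled, only its loss is observed
    (partial information), and the weights are updated with an
    importance-weighted loss estimate so the update stays unbiased.
    """

    def __init__(self, n_layers, eta=0.1, gamma=0.1):
        self.weights = np.ones(n_layers)
        self.eta = eta      # step size of the exponential weight update
        self.gamma = gamma  # exploration rate: mixes in a uniform component

    def probs(self):
        p = self.weights / self.weights.sum()
        return (1.0 - self.gamma) * p + self.gamma / len(self.weights)

    def sample(self):
        p = self.probs()
        return int(np.random.choice(len(p), p=p)), p

    def update(self, layer, loss, p):
        loss_hat = loss / p[layer]                        # unbiased loss estimate
        self.weights[layer] *= np.exp(-self.eta * loss_hat)
```

Here each arm is a neighbouring layer, and the exploration term gamma ensures every layer keeps a small chance of being sampled, so a layer that becomes relevant later in training can still be rediscovered.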

The team reviews the use of bandit algorithms and their variants for node sampling, then proposes a novel layer-sampling method that adaptively learns a sampling distribution over the neighbours of each layer, so that only the layers with relevant information are sampled and used in training and prediction. Because the approach focuses only on the relevant information at each time step, it can significantly reduce training and inference time.
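The hypothetical training loop below sketches how such a sampler might be wired into training: one exponential-weights bandit per target layer gradually concentrates its sampling on the neighbouring layers with small losses, so only those layers are touched at each step. The `step_loss` callback standing in for the actual GNN update is an assumption for illustration; the paper's model details are not reproduced here.

```python
import numpy as np

def train_with_adaptive_layer_sampling(n_layers, n_steps, step_loss,
                                        eta=0.1, gamma=0.1):
    """Sketch of the overall loop: one exponential-weights bandit per target
    layer decides which neighbouring layer to use at each training step.

    step_loss(target, neighbour) is a hypothetical callback that would run a
    GNN update for `target` using only the sampled `neighbour` layer and
    return its scalar loss; it stands in for model code not shown here.
    """
    weights = np.ones((n_layers, n_layers))            # one arm set per target layer
    for _ in range(n_steps):
        for target in range(n_layers):
            p = weights[target] / weights[target].sum()
            p = (1.0 - gamma) * p + gamma / n_layers   # keep exploring all layers
            neighbour = int(np.random.choice(n_layers, p=p))
            loss = step_loss(target, neighbour)        # only this layer's loss is seen
            weights[target, neighbour] *= np.exp(-eta * loss / p[neighbour])
    return weights

# Toy usage: pretend layer 0 is the only informative neighbour for every layer;
# the learned sampling distributions concentrate on it.
w = train_with_adaptive_layer_sampling(n_layers=4, n_steps=500,
                                       step_loss=lambda t, n: 0.1 if n == 0 else 1.0)
print(w / w.sum(axis=1, keepdims=True))
```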

The researchers evaluated their approach in the link prediction setting, comparing its training and test accuracy against uniform sampling of the neighbouring layers. The proposed approach outperformed uniform sampling across the entire training regime on a variety of synthetic and real-world datasets, indicating its effectiveness at improving training and inference efficiency in multiplex networks with a large number of layers.

The paper Bandit Sampling for Multiplex Networks is on arXiv.


Author: Hecate He | Editor: Michael Sarazen


We know you don’t want to miss any news or research breakthroughs. Subscribe to our popular newsletter Synced Global AI Weekly to get weekly AI updates.
