AI Machine Learning & Data Science Research

70-Page Paper From Yoshua Bengio Team: GFlowNet Foundations

In the new paper GFlowNet Foundations, a research team from Mila, University of Montreal, McGill University, Stanford University, CIFAR and Microsoft Azure AI builds upon GFlowNets, providing an in-depth formal foundation and expansion of the set of theoretical results for a broad range of scenarios, especially active learning.

There’s no slowing down the godfathers of deep learning, who continue to innovate. Several years ago Geoffrey Hinton introduced Capsule Networks (CapsNets), which use dynamic routing for image modelling; and this past summer a Yoshua Bengio team proposed Generative Flow Networks (GFlowNets), a flow-network-based generative method that can turn a given positive reward function into a generative policy that samples objects with probability proportional to their reward. GFlowNets achieve competitive results on molecule synthesis tasks and perform well on a simple domain where the reward function has many modes.
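
To make “samples with probability proportional to the reward” concrete, here is a minimal sketch over a hypothetical four-molecule space (the names and reward values are made up for illustration, not taken from the paper):

```python
import numpy as np

# Hypothetical toy rewards over four candidate molecules (illustrative only,
# not from the paper).
rewards = {"mol_A": 4.0, "mol_B": 1.0, "mol_C": 3.0, "mol_D": 2.0}

# A trained GFlowNet targets P(x) = R(x) / Z, where Z is the total reward
# mass (the partition function).
Z = sum(rewards.values())
target = {x: r / Z for x, r in rewards.items()}

# Unlike reward maximization, which would always return mol_A, this policy
# covers every mode in proportion to its reward.
xs = list(rewards)
print(target)  # {'mol_A': 0.4, 'mol_B': 0.1, 'mol_C': 0.3, 'mol_D': 0.2}
print(np.random.choice(xs, size=5, p=[target[x] for x in xs]))
```

This diversity-seeking behaviour, rather than collapse onto the single best candidate, is what makes the method attractive for active-learning settings such as drug discovery.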

The researchers say their study extends the theory informing the original GFlowNet architecture in several new directions:

  1. Formulations enabling the calculation of marginal probabilities (or free energies) for subsets of variables (made concrete in the sketch after this list).
  2. The introduction of an unsupervised form of GFlowNet (no reward function is needed during training, only observations of outcomes) that enables sampling from a Pareto frontier.
  3. Proposals for lifting the original formulation’s restriction to discrete and deterministic environments.
  4. A treatment of how the energy function could be jointly learned with the GFlowNet (the basic formulation assumes it is given), opening the door to novel energy-based modelling methodologies and a modular structure for both the energy function and the GFlowNet.
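
To make the first point concrete, the quantities involved can be written compactly. This is a schematic under the common convention that rewards are exponentiated negative energies (notation assumed here, not quoted from the paper):

```latex
% Reward as an exponentiated negative energy: R(x) = e^{-E(x)}.
% Under flow matching, the total flow out of the initial state equals the
% partition function, so a trained GFlowNet yields estimates of both
% marginals and free energies:
Z = \sum_{x \in \mathcal{X}} R(x), \qquad
P(x) = \frac{R(x)}{Z}, \qquad
\mathcal{F} = -\log Z \approx -\log F(s_0).
```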

GFlowNets are inspired by the way information propagates in temporal-difference (TD) reinforcement learning. Both methods rely on the principle of credit-assignment consistency and achieve their asymptotic guarantees only when trained to convergence. Because the number of paths through the state space grows exponentially, the gradient cannot be computed exactly; both methods therefore depend on the local consistency of the different components with a training target: if all the learned components are locally consistent with each other, then the system as a whole can produce correct global estimates.
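
The local consistency in question is the flow-matching condition of the original GFlowNet formulation: at every interior state s′, the flow entering the state must equal the flow leaving it,

```latex
% Flow-matching condition at every interior state s':
\sum_{s \,:\, s \rightarrow s'} F(s \rightarrow s')
\;=\;
\sum_{s'' \,:\, s' \rightarrow s''} F(s' \rightarrow s'').
```

Training penalizes violations of this equality at visited states, for instance via a squared log-ratio of the two sides; once it holds everywhere, sampling actions in proportion to their edge flows produces terminal states with probability proportional to the reward.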

Regarding GFlowNets’ applications, paper co-author and Yoshua’s son Emmanuel Bengio tweeted, “Ever wanted to generate diverse samples of discrete data based on a reward function? Our new method, GFlowNet, based on flow networks & a TD-like objective, gets great results on a molecule generation domain… Even though this framework emerged from our desire to sample novel drugs, it turns out that we can do much more with it: general probabilistic operations on sets & graphs, such as otherwise intractable marginalizations, estimating partition functions and free energies, conditional probabilities of supersets given subsets, estimating entropy and mutual information, and more.”

Overall, this work connects the notion of flow in GFlowNets with a probability measure over trajectories and introduces a novel training objective, the detailed balance loss, which enables a parametrization that separates the backward policy controlling order preferences from the constraints imposed by the target reward function. It also presents alternatives to the flow-matching objective that may bypass the slow “bootstrapping” propagation of credit information. Another key contribution is a mathematical framework for marginalization and free-energy estimation using GFlowNets.
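
In symbols, the detailed balance condition constrains each edge rather than each state. A sketch in standard GFlowNet notation follows; the squared log-ratio form of the loss is the usual way such constraints are trained and is assumed here:

```latex
% Detailed balance: state flows and forward/backward policies must agree
% on every edge s -> s', and violations are penalized in log-space:
F(s)\, P_F(s' \mid s) = F(s')\, P_B(s \mid s'),
\qquad
\mathcal{L}_{\mathrm{DB}}(s \rightarrow s') =
\left( \log \frac{F(s)\, P_F(s' \mid s)}{F(s')\, P_B(s \mid s')} \right)^{2}.
```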

The team believes their 70-page paper opens new possibilities for marginalization over supergraphs of graphs, supersets of sets, and supersets of (variable, value) pairs, providing formulae for estimating entropies, conditional entropies and mutual information. The elder Bengio meanwhile says the GFlowNet framework “may become a key ingredient for system-2 deep learning enabling causal discovery and reasoning.”
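
These estimators build on the standard information-theoretic identities, with the required marginals supplied by learned flows instead of intractable exact sums. This is a schematic, not the paper’s exact formulae:

```latex
% Entropy and mutual information from flow-estimated marginals
% P(x) = R(x) / Z:
H(X) = -\sum_{x} P(x) \log P(x), \qquad
I(X; Y) = H(X) + H(Y) - H(X, Y).
```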

The paper GFlowNet Foundations is on arXiv.


Author: Hecate He | Editor: Michael Sarazen

