
Bronstein, Bruna, Cohen and Velickovic Leverage the Erlangen Programme to Establish the Geometric Foundations of Deep Learning

Twitter Chief Scientist Michael Bronstein, Joan Bruna from New York University, Taco Cohen from Qualcomm AI and Petar Veličković from DeepMind have published a paper that aims to geometrically unify typical deep learning architectures such as CNNs, GNNs, LSTMs and Transformers from the perspective of symmetry and invariance, building an "Erlangen Programme" for deep neural networks.

A recently published 156-page paper from a team led by Imperial College Professor and Twitter Chief Scientist Michael Bronstein aims to geometrically unify CNN, GNN, LSTM and Transformer architectures from a perspective of symmetry and invariance to build an “Erlangen Programme” for deep neural networks.

The ambitious work references a definition of symmetry by the great mathematician Hermann Weyl: "one idea by which man through the ages has tried to comprehend and create order, beauty, and perfection." In modern mathematics, symmetry is expressed almost univocally in the language of group theory, a tradition associated with Sophus Lie, who developed the theory of continuous symmetries, and with German mathematician Felix Klein, who proposed group theory as the organizing principle of geometry in his Erlangen Programme, published in 1872 and named after the Bavarian university where he taught.


Modern geometry can be roughly divided into Euclidean and non-Euclidean geometries. At the end of the nineteenth century, mathematicians and philosophers began debating the validity of and relations between these geometries, as well as the nature of the "one true geometry." Klein's Erlangen Programme was a way to address this problem, approaching geometry as the study of invariants, i.e. properties that remain unchanged under some class of transformations, with those transformations regarded as the symmetries of the geometry.
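For concreteness, the two notions that recur throughout the paper can be written down in standard group-theoretic form (the notation below is conventional rather than quoted from the paper: a group G acts on signals x defined over a domain Ω through representations ρ and ρ′):

```latex
\begin{aligned}
  \text{invariance:}   \quad & f(\rho(g)\,x) = f(x)           && \forall g \in G \\
  \text{equivariance:} \quad & f(\rho(g)\,x) = \rho'(g)\,f(x) && \forall g \in G
\end{aligned}
```

Image classification is the textbook example of the first property (the label should not change when the image is shifted), while image segmentation exemplifies the second (the output mask should shift along with the input).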

The Erlangen Programme has had a profound impact on geometry, and category theory, now pervasive in pure mathematics, can be regarded as its continuation. The programme demonstrated that the classical geometries are simply special cases of projective geometry: Klein associated every geometry with an underlying group of symmetries, so the hierarchy of geometries could be represented mathematically as a hierarchy of these groups and of their invariants.

The researchers behind the new paper liken the current state of deep learning (DL) research to that of 19th-century research on geometry. On the one hand, DL has brought a revolutionary approach to many tasks. On the other hand, few unifying principles have been developed for the variety of DL architectures used with different kinds of datasets. This makes it difficult to understand the relationships between different DL approaches, and has resulted in an often-confusing reinvention and re-branding of the same concepts.

The researchers apply the Erlangen Programme mindset to the DL domain, aiming to build a systematic framework that derives different inductive biases and network architectures from the first principles of symmetry and invariance. Their "Geometric Deep Learning" approach provides a common mathematical framework for deriving the most successful neural network architectures, such as CNNs, RNNs, GNNs and Transformers, while introducing a constructive procedure for building future architectures in a principled way.


The team focuses on the "5Gs" of geometric domains: grids, groups, graphs, geodesics and gauges. Here, groups refers to global symmetry transformations on homogeneous spaces, geodesics to metric structures on manifolds, and gauges to local reference frames defined on tangent bundles.


The researchers say the geometric principles of symmetry, geometric stability, and scale separation can be combined to provide a universal blueprint for geometric deep learning; and that the geometry of an input domain provides three key building blocks: 1) a local equivariant map, 2) a global invariant map, and 3) a coarsening operator. These building blocks provide a rich function approximation space with invariance and stability properties when combined into the proposed Geometric Deep Learning Blueprint.
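To make the blueprint concrete, here is a minimal NumPy sketch (not the authors' code; the function names and the toy setting of a 1D cyclic grid with the translation group are illustrative assumptions) showing how the three building blocks compose: a shift-equivariant local map (circular convolution), a coarsening operator (average pooling) and a shift-invariant global map (a mean over the remaining positions).

```python
import numpy as np

def local_equivariant(x, w):
    """Circular convolution on a cyclic 1D grid: commutes with cyclic shifts of x."""
    n = len(x)
    return np.array([sum(w[k] * x[(i + k) % n] for k in range(len(w)))
                     for i in range(n)])

def coarsen(x, factor=2):
    """Average pooling: lowers the resolution while preserving local structure."""
    return x.reshape(-1, factor).mean(axis=1)

def global_invariant(x):
    """Mean over all positions: unchanged by any cyclic shift."""
    return x.mean()

def blueprint(x, w):
    # local equivariant map -> coarsening -> global invariant map
    return global_invariant(coarsen(local_equivariant(x, w)))

rng = np.random.default_rng(0)
x, w = rng.normal(size=8), rng.normal(size=3)
shifted = np.roll(x, 3)  # act on the input with a translation
print(blueprint(x, w), blueprint(shifted, w))  # identical up to floating point
```

A standard CNN plays out the same pattern at scale: convolutional layers act as the local equivariant maps, pooling layers as the coarsening operators, and a global pooling layer before the classifier supplies the invariant output.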

The team applies this Geometric Deep Learning Blueprint to popular deep learning architectures such as convolutional neural networks (CNNs), group-equivariant CNNs, graph neural networks, transformers, equivariant message-passing networks, intrinsic mesh CNNs, recurrent neural networks (RNNs), and long short-term memory networks (LSTMs). The idea is that, once these popular deep neural networks have been examined this way, future Geometric Deep Learning developments can be easily categorized through the lens of their invariances and symmetries.

Finally, the team provides an overview of influential works in Geometric Deep Learning and exciting and promising new applications, including chemistry and drug design, drug repositioning, protein biology, recommender systems and social networks, traffic forecasting, object recognition, game playing and so on.

The team, whose members come from Imperial College London and Twitter, New York University, Qualcomm AI Research and DeepMind, hopes the study can "transcend specific realisations," stresses that it is a work in progress, and invites interested parties in the DL community to contribute comments or point out any errors or omissions.

The paper Geometric Deep Learning: Grids, Groups, Graphs, Geodesics, and Gauges is on arXiv.


Author: Hecate He | Editor: Michael Sarazen


