DeepMind announced today that it has opened its Graph Nets (GN) library to the public, enabling the use of graph networks in TensorFlow and Sonnet. Graph Nets is a machine learning framework introduced in a paper published by DeepMind, Google Brain, MIT and the University of Edinburgh on June 15. After four months of waiting, the code is finally here.
DeepMind has shared the library on GitHub, and anyone can install and use it with TensorFlow. Demos, including a shortest-path demo and a sort demo, are also already available. In its first seven hours the Graph Nets library received over 800 stars on GitHub and more than 600 likes and 200 retweets on Twitter.
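For readers who want to try it, installation should be a single pip command into an environment that already has TensorFlow, assuming the package is published on PyPI under the name `graph_nets` (matching the repository name); this is a setup sketch, not official instructions:

```shell
# Install the Graph Nets library (assumes TensorFlow is already installed).
# The PyPI package name "graph_nets" is an assumption based on the repo name.
pip install graph_nets
```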
Graph Nets generalizes and extends various types of neural networks that compute over graphs. It enables deep learning architectures to learn the entities, relations, and rules in graph data. GN detects this structured knowledge, and its output graphs retain the same structure but with updated attributes. Each graph's features can be abstracted into three forms: Nodes, Relations, and Global Attributes. For example, in a social network graph where User A follows User B and retweets User B's tweet, User A, User B, and the Tweet are all Nodes, while Following, Publishing, and Retweeting are Relations. Mining such graphs can significantly improve accuracy in tasks like human behavior prediction and fraud detection.
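The idea that a graph network preserves a graph's structure while updating its attributes can be sketched in a few lines of plain Python. The following is an illustrative toy, not the Graph Nets API: the `Graph` class, the single-number attributes, and the simple sum-based update rules are all assumptions made for clarity, standing in for the learned edge, node, and global update functions a real GN block would use.

```python
from dataclasses import dataclass

@dataclass
class Graph:
    nodes: dict          # node id -> attribute (a single float here)
    edges: list          # (sender, receiver, attribute) triples, i.e. Relations
    global_attr: float   # one Global Attribute for the whole graph

def gn_block(g: Graph) -> Graph:
    """One simplified graph-network pass: update edge attributes, then
    node attributes, then the global attribute. The graph's structure
    (which nodes and edges exist) is unchanged; only attributes update."""
    # 1. Edge update: combine each edge's attribute with its endpoints'.
    new_edges = [(s, r, attr + g.nodes[s] + g.nodes[r])
                 for s, r, attr in g.edges]
    # 2. Node update: add the sum of incoming updated edge attributes.
    new_nodes = dict(g.nodes)
    for s, r, attr in new_edges:
        new_nodes[r] += attr
    # 3. Global update: aggregate all updated edges into the global attribute.
    new_global = g.global_attr + sum(attr for _, _, attr in new_edges)
    return Graph(new_nodes, new_edges, new_global)

# The social-network example from the text: two users and a tweet as Nodes,
# following/publishing/retweeting as Relations (dummy float attributes).
g = Graph(
    nodes={"A": 1.0, "B": 2.0, "Tweet": 0.5},
    edges=[("A", "B", 0.1),       # A follows B
           ("B", "Tweet", 0.2),   # B publishes the Tweet
           ("A", "Tweet", 0.3)],  # A retweets the Tweet
    global_attr=0.0,
)
out = gn_block(g)
# Same nodes and edges, updated attributes.
assert set(out.nodes) == set(g.nodes) and len(out.edges) == len(g.edges)
```

In the real library, the three hand-written update rules above would be replaced by trainable neural networks (e.g. small MLPs), one each for edges, nodes, and globals.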
DeepMind believes that learning about graphs and relations is a significant step toward the goal of realizing Artificial General Intelligence (AGI). The team recently revised its June paper on graph networks, Relational inductive biases, deep learning, and graph networks, research which Graphcore software developer Christopher Gray tweeted "will kickstart what seems to be a far more fruitful basis for AI than DL alone."
Turing Award Winner and esteemed UCLA Professor Judea Pearl shares the view that a “reasoning revolution” can lead to a breakthrough in AGI.
The Graph Nets library is now available on GitHub.
Author: Alex Chen | Editor: Michael Sarazen