Facebook AI Research (FAIR) introduced its own Go bot last year, aiming to reproduce AlphaGo Zero results using its Extensible, Lightweight Framework (ELF) for reinforcement learning research. FAIR recently added new features to ELF OpenGo and has open-sourced the project.
The San Francisco-based AI non-profit OpenAI, however, has raised eyebrows in the research community with its unusual decision not to release the language model’s code and training dataset. In a statement sent to Synced, OpenAI explained that the choice was made to prevent malicious use: “it’s clear that the ability to generate synthetic text that is conditioned on specific subjects has the potential for significant abuse.”
Last December, some 9,000 attendees packed a single venue in Montreal for a week-long academic conference. NeurIPS was completely sold out, the latest indication of just how hot AI is nowadays. As AI and machine learning continue to ignite discussion across a wide variety of disciplines, novel approaches to the tech are also garnering interest.
Alibaba Cloud recently announced that it has open sourced Mars — its tensor-based framework for large-scale data computation — on GitHub. Mars can be regarded as “a parallel and distributed NumPy”: it tiles a large tensor into small chunks and describes the inner computation with a directed graph, enabling parallel computation in a wide range of distributed environments, from a single machine to a cluster comprising thousands of machines.
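The chunking idea can be illustrated with plain NumPy. This is only a conceptual sketch of tiling — Mars builds a directed graph over such chunks and schedules them across workers, and its actual API differs from the hypothetical helper below.

```python
import numpy as np

def chunked_sum(arr, chunk=256):
    """Sum a 2-D array tile by tile (illustration of tiling, not Mars's API).

    Each tile is an independently computable unit; in Mars these would be
    nodes in a directed computation graph executed in parallel.
    """
    total = 0.0
    for i in range(0, arr.shape[0], chunk):
        for j in range(0, arr.shape[1], chunk):
            tile = arr[i:i + chunk, j:j + chunk]  # one small chunk of the big tensor
            total += tile.sum()                   # per-tile partial result
    return total

a = np.ones((1000, 1000))
print(chunked_sum(a))  # same result as a.sum() -> 1000000.0
```

Because each tile's partial sum depends only on that tile, the loop body could run on different machines and the partial results combined at the end — the essence of what a distributed tensor framework automates.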
Tencent AI Lab has announced an open-source NLP dataset comprising vector representations for eight million Chinese words and phrases. The dataset aims to provide large-scale and high-quality support for deep learning-based Chinese language NLP research in both academic and industrial applications.
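Embedding releases like this are typically distributed as word2vec-style text files (one "word v1 v2 …" line per entry). The loader and the tiny in-memory sample below are a sketch under that assumption, not Tencent's official tooling, and the real file's exact format (e.g. a header line) may differ.

```python
import io
import numpy as np

def load_vectors(fileobj):
    """Parse word2vec-style text lines into a {word: vector} dict (a sketch)."""
    vecs = {}
    for line in fileobj:
        parts = line.rstrip().split(" ")
        vecs[parts[0]] = np.array(parts[1:], dtype=np.float32)
    return vecs

def cosine(u, v):
    """Cosine similarity, the usual way to compare word vectors."""
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Tiny synthetic stand-in for the real 8-million-entry file:
sample = io.StringIO("猫 1.0 0.0\n狗 0.9 0.1\n车 0.0 1.0\n")
vecs = load_vectors(sample)

# "cat" should be closer to "dog" than to "car" in this toy space.
print(cosine(vecs["猫"], vecs["狗"]) > cosine(vecs["猫"], vecs["车"]))  # True
```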
DeepMind announced today that it has opened its Graph Nets (GN) library to the public, enabling the use of graph networks in TensorFlow and Sonnet. Graph Nets is a machine learning framework that was published by DeepMind, Google Brain, MIT and the University of Edinburgh on June 15.
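At their core, graph networks update edge, node, and global features by passing messages along a graph's edges. The toy one-step update below, in plain NumPy, only illustrates that structure — it is not the Graph Nets library's API, which is built on TensorFlow and Sonnet.

```python
import numpy as np

# A graph with 3 nodes and 2 directed edges, in the nodes/senders/receivers
# layout commonly used for graph networks.
nodes = np.array([[1.0], [2.0], [3.0]])   # per-node features
senders = np.array([0, 1])                # edge i runs senders[i] -> receivers[i]
receivers = np.array([1, 2])
edges = np.array([[0.5], [0.5]])          # per-edge features

# Edge update: each edge combines its own features with its endpoint nodes.
new_edges = edges + nodes[senders] + nodes[receivers]

# Node update: each node aggregates the messages on its incoming edges.
incoming = np.zeros_like(nodes)
np.add.at(incoming, receivers, new_edges)  # scatter-add messages to receivers
new_nodes = nodes + incoming

print(new_nodes.ravel().tolist())  # [1.0, 5.5, 8.5]
```

Real graph network blocks replace the simple additions above with learned functions (small neural networks) for the edge and node updates, but the message-passing pattern is the same.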
The DeepMimic paper’s first author, Berkeley PhD student Xue Bin Peng, has now open-sourced the project’s code, data, and frameworks. Moreover, Peng’s new research demonstrates that DeepMimic’s simulated characters can also learn to perform highly dynamic movements using regular video clips of human examples as input data.
Tencent AI Lab has announced that it will open source its multi-label image dataset ML-Images and deep residual network ResNet-101 by the end of September. ML-Images contains 18 million images spanning more than 11,000 common object categories, while the lab says its ResNet-101 model has achieved industry-leading precision.
At the annual Google Cloud Next conference, which kicked off July 24 in San Francisco, the company unveiled a series of AI-based product releases and enhancements for its analytics and machine learning tools, additional applications on G Suite, and new IoT products.