Tag: Neural Networks

AI Conference Research

ICLR 2019 | Tsinghua, Google and ByteDance Propose Neural Networks for Inductive Learning & Logic Reasoning

Although machine learning has achieved huge advances in speech recognition, gaming and many other applications, some critics still regard it as little more than glorified “curve fitting” that lacks high-level cognitive abilities and reasoning skills.

AI Research

Global Minima Solution for Neural Networks?

New research from Carnegie Mellon University, Peking University and the Massachusetts Institute of Technology shows that global minima of deep neural networks can be achieved via gradient descent under certain conditions. The paper Gradient Descent Finds Global Minima of Deep Neural Networks was published November 12 on arXiv.
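
The flavor of such results (roughly, that a sufficiently over-parameterized network trained with plain gradient descent can drive the training loss to zero) can be seen in a toy experiment. The sketch below is not the paper's construction or proof; it simply trains a very wide one-hidden-layer ReLU network with full-batch gradient descent on a small random dataset, and the dataset size, width and learning rate are arbitrary illustrative choices.

```python
# Toy illustration (not the paper's setting): full-batch gradient descent on a
# heavily over-parameterized one-hidden-layer ReLU network usually drives the
# training loss on a small dataset to near zero.
import numpy as np

rng = np.random.default_rng(0)

n, d, width = 20, 5, 2000            # width >> n plays the role of over-parameterization
X = rng.normal(size=(n, d))          # synthetic inputs
y = rng.normal(size=n)               # arbitrary real-valued targets

# Only the hidden-layer weights W are trained here; the output weights a are
# fixed random signs (a common simplification in this kind of illustration).
W = rng.normal(size=(d, width)) / np.sqrt(d)
a = rng.choice([-1.0, 1.0], size=width) / np.sqrt(width)

lr = 1.0
for step in range(3000):
    H = X @ W                        # pre-activations, shape (n, width)
    pred = np.maximum(H, 0.0) @ a    # network outputs, shape (n,)
    err = pred - y
    loss = 0.5 * np.mean(err ** 2)
    # Gradient w.r.t. W; (H > 0) is the ReLU derivative mask.
    grad_W = X.T @ (err[:, None] * (H > 0) * a[None, :]) / n
    W -= lr * grad_W

print(f"training loss after gradient descent: {loss:.2e}")  # typically very close to zero
```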

AI Research

Nanjing University Team Introduces Multi-layered Gradient Boosting Decision Trees (mGBDTs)

Since it was introduced last year by Prof. Zhihua Zhou and his student Ji Feng, gcForest, a decision tree ensemble approach that is much easier to train than deep neural networks, has received considerable attention from researchers. Building on that work, Zhou, Feng and Nanjing University colleague Yang Yu have now proposed Multi-layered Gradient Boosting Decision Trees (mGBDTs).