Tag: Deep Neural Networks

AI Machine Learning & Data Science Research

Fujitsu AI, Tokyo U & RIKEN AIP Study Decomposes DNNs Into Modules That Can Be Recomposed Into New Models for Other Tasks

Research from the Fujitsu AI Laboratory, the University of Tokyo and the RIKEN Center for Advanced Intelligence Project proposes a modularization method that decomposes a DNN into small modules from a functionality perspective and recomposes them into new models suited to other tasks.

AI Machine Learning & Data Science Research

Microsoft & OneFlow Leverage the Efficient Coding Principle to Design Unsupervised DNN Structure-Learning That Outperforms Human-Designed Structures

A research team from OneFlow and Microsoft takes a step toward automatic deep neural network structure design, exploring unsupervised structure learning that leverages the efficient coding principle from information theory and computational neuroscience to learn network structures without label information.

AI Research

BatchNorm + Dropout = DNN Success!

A group of researchers from Tencent Technology, the Chinese University of Hong Kong, and Nankai University recently combined two commonly used techniques — Batch Normalization (BatchNorm) and Dropout — into an Independent Component (IC) layer inserted before each weight layer to make inputs more independent.
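The idea is simple to sketch: an IC layer first batch-normalizes activations, then applies dropout, and the result feeds the next weight layer. Below is a minimal NumPy illustration of that forward pass; it is a hypothetical sketch for intuition, not the authors' reference implementation (function name, dropout rate, and inverted-dropout scaling are our assumptions).

```python
import numpy as np

def ic_layer(x, p=0.5, eps=1e-5, rng=None, train=True):
    """Sketch of an Independent Component (IC) layer:
    BatchNorm followed by Dropout, applied before a weight layer.
    (Illustrative only -- not the paper's reference code.)"""
    # Batch normalization: zero mean, unit variance per feature over the batch
    x = (x - x.mean(axis=0)) / np.sqrt(x.var(axis=0) + eps)
    if train:
        rng = rng or np.random.default_rng(0)
        # Inverted dropout: zero out activations with probability p,
        # rescale survivors so expected activation is unchanged
        mask = rng.random(x.shape) >= p
        x = x * mask / (1.0 - p)
    return x

# Toy batch: 64 examples, 8 features
x = np.random.default_rng(1).normal(loc=3.0, scale=2.0, size=(64, 8))
out = ic_layer(x, p=0.3)
```

At inference (`train=False`) the layer reduces to plain batch normalization, so the output features are zero-mean and unit-variance over the batch.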

AI Research

Global Minima Solution for Neural Networks?

New research from Carnegie Mellon University, Peking University and the Massachusetts Institute of Technology shows that global minima of deep neural networks can be achieved via gradient descent under certain conditions. The paper Gradient Descent Finds Global Minima of Deep Neural Networks was published November 12 on arXiv.