At last month’s RE•WORK Deep Learning in Finance Summit in London, leading AI industry practitioners and academics from prestigious universities discussed their research, provided insights on business trends and real-life AI applications, and addressed current challenges facing the AI industry as a whole.
As Facebook struggles with fallout from the Cambridge Analytica scandal, its research arm today delivered a welcome bit of good news in deep learning. Research Engineer Dr. Yuxin Wu and Research Scientist Dr. Kaiming He proposed a new Group Normalization (GN) technique that they say enables effective deep neural network training with small batch sizes, where batch normalization's accuracy degrades.
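The core idea of Group Normalization is to divide the channels into groups and normalize within each group, so the statistics do not depend on the batch size. Below is a minimal NumPy sketch of that normalization step (illustrative shapes and group count; the learnable scale and shift parameters of the full method are omitted) — a sketch of the concept, not the authors' implementation:

```python
import numpy as np

def group_norm(x, num_groups, eps=1e-5):
    """Normalize x of shape (N, C, H, W) over groups of C // num_groups channels."""
    N, C, H, W = x.shape
    assert C % num_groups == 0, "channels must divide evenly into groups"
    # Split channels into groups: (N, G, C//G, H, W)
    g = x.reshape(N, num_groups, C // num_groups, H, W)
    # Mean and variance are computed per sample and per group,
    # over the group's channels and all spatial positions --
    # no dependence on the batch dimension N.
    mean = g.mean(axis=(2, 3, 4), keepdims=True)
    var = g.var(axis=(2, 3, 4), keepdims=True)
    g = (g - mean) / np.sqrt(var + eps)
    return g.reshape(N, C, H, W)
```

Because each sample is normalized independently, the result is identical whether the batch holds 2 images or 256, which is what makes the technique attractive for memory-constrained tasks such as detection and segmentation.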
Chinese netizens are all ears for the company’s “hearty” AI-powered music recommendations. In an interview with Synced, NetEase Data Scientist Jia Xu and Product Manager Bowen Shen explained the NetEase system, which learns to predict which songs will resonate with a user’s particular taste in music…
In applying adversarial training, this paper adopts distributed word representations, or word embeddings, as the input rather than the traditional one-hot representation. The reasoning is that the higher the dimensionality of the input, the more susceptible it is to noise perturbations, and one-hot vectors span the entire vocabulary while embeddings are dense and comparatively low-dimensional.
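The contrast between the two representations, and how an adversarial perturbation applies to the continuous embedding, can be sketched as follows. The vocabulary size, embedding dimension, token index, and the gradient stand-in are all hypothetical illustration values, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, embed_dim = 10000, 300   # assumed sizes for illustration

# One-hot: a sparse vector as wide as the vocabulary, one 1 per token
one_hot = np.zeros(vocab_size)
one_hot[42] = 1.0                    # hypothetical token index

# Word embedding: a dense, low-dimensional continuous vector
# (lookup table initialized randomly here for illustration)
embedding_table = rng.normal(size=(vocab_size, embed_dim))
embed = embedding_table[42]

# Adversarial training perturbs the continuous embedding along the
# loss gradient; the gradient is a random stand-in here since no
# model is defined in this sketch.
grad = rng.normal(size=embed_dim)
epsilon = 0.01
perturbed = embed + epsilon * grad / (np.linalg.norm(grad) + 1e-12)
```

A small continuous shift like `perturbed` is meaningful in embedding space, whereas perturbing a discrete one-hot vector would no longer correspond to any valid token.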
We explore top-notch Swiss AI facilities: starting with deep learning and neural network research at IDSIA in Lugano, moving on to interdisciplinary research at École Polytechnique Fédérale de Lausanne and the University of Basel, and ending with robotics innovations at ETH Zurich and the University of Zurich.