Synced Tradition and Machine Learning Series | Part 2: Optimization Basics
This is the second in a special Synced series of introductory articles on traditionally theoretical fields of study and their impact on modern-day machine learning.
OpenAI has trained a neural network called DALL·E that creates images from text captions for a wide range of concepts expressible in natural language.
Rapid and accurate identification of mosquitoes that transmit human pathogens such as malaria is an essential part of mosquito-borne disease surveillance.
A paper submitted to ICLR 2021 proposes efficient VAEs that outperform PixelCNN-based autoregressive models in log-likelihood on natural image benchmarks.
The ReDNA Labs research team has devised a new lightweight approach called IGLOO that can handle sequences up to 25,000 steps long.
Facebook has introduced a model that turns common two-dimensional pictures into 3D photos.
DeepMind introduced a new approach designed to improve the generalizability and efficiency of algorithms represented by neural networks.
There are one trillion synapses in a cubic centimeter of the brain. If there were such a thing as general AI, it would probably require one trillion synapses.
A look at three ICLR papers that fall under the topic of robustness.
Recent advances in deep learning, applied to translation as neural machine translation, have achieved state-of-the-art results in machine translation.
A “data echoing” technique enables time-consuming upstream training stages to also benefit from accelerators.
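The idea, as described by the Google Brain authors, is to insert a repeat stage into the input pipeline so each upstream result is reused several times by the accelerator. A minimal sketch of example-level echoing with tf.data, assuming a dataset of single-tensor elements (the echo_factor name is ours):

```python
import tensorflow as tf

def echo(dataset: tf.data.Dataset, echo_factor: int) -> tf.data.Dataset:
    # Repeat each preprocessed example `echo_factor` times so the
    # accelerator consumes more steps per upstream read/preprocess.
    return dataset.flat_map(
        lambda example: tf.data.Dataset.from_tensors(example).repeat(echo_factor))
```

Echoing before batching reuses individual examples; echoing after batching would reuse whole batches, trading greater upstream savings against reduced sample diversity.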
This paper proposes a novel graph-constrained generative adversarial network whose generator and discriminator are built upon a relational architecture.
How much is this going to cost? And what are the main factors affecting that price tag?
The latest ResNet improvement comes courtesy of researchers from Amazon and UC Davis, who unveiled their Split-Attention Networks, ResNeSt.
Researchers unify DNN normalization layers and activation functions into a single computation graph.
Researchers proposed a new training scheme that targets texture bias by controlling and exposing textural information gradually over the course of training.
DeepMind yesterday announced the release of Haiku and RLax, new JAX libraries designed for neural networks and reinforcement learning respectively.
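For readers who want a feel for the library, Haiku's central idiom is hk.transform, which converts a function that builds modules into a pure init/apply pair that JAX can work with. A minimal sketch (layer sizes are illustrative):

```python
import haiku as hk
import jax
import jax.numpy as jnp

def forward(x):
    # hk.nets.MLP stacks Linear layers with ReLU between them.
    return hk.nets.MLP([128, 10])(x)

# without_apply_rng drops the rng argument for a deterministic apply.
net = hk.without_apply_rng(hk.transform(forward))
x = jnp.ones((1, 784))
params = net.init(jax.random.PRNGKey(42), x)  # create parameters
logits = net.apply(params, x)                 # pure forward pass
```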
Researchers have proposed a novel generator network specialized for the illustrations in children’s books.
Synced Global AI Weekly February 16th
Facebook’s new HiPlot is a lightweight interactive visualization tool that uses parallel plots to discover correlations and patterns in high-dimensional data.
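Typical notebook usage is nearly a one-liner: build an Experiment from a list of records and render it as an interactive parallel-coordinates plot (the hyperparameter runs below are made up for illustration):

```python
import hiplot as hip

# Hypothetical records: one dict per training run.
runs = [
    {"lr": 1e-3, "dropout": 0.1, "optimizer": "adam", "accuracy": 0.91},
    {"lr": 1e-2, "dropout": 0.3, "optimizer": "sgd",  "accuracy": 0.84},
    {"lr": 3e-4, "dropout": 0.2, "optimizer": "adam", "accuracy": 0.93},
]
hip.Experiment.from_iterable(runs).display()  # interactive parallel plot
```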
Now, DeepMind and University College London (UCL) have introduced a new deep network called MEMO which matches SOTA results on Facebook’s bAbI dataset for testing text understanding and reasoning, and is the first and only architecture capable of solving long sequence novel reasoning tasks.
Google recently introduced Flax, a neural network library for JAX designed for flexibility.
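A minimal sketch of what a Flax model looks like, using the library's current Linen API (the module and sizes here are illustrative, not from the announcement):

```python
import jax
import jax.numpy as jnp
import flax.linen as nn

class MLP(nn.Module):
    hidden: int = 128  # illustrative width

    @nn.compact
    def __call__(self, x):
        x = nn.relu(nn.Dense(self.hidden)(x))
        return nn.Dense(10)(x)

model = MLP()
x = jnp.ones((1, 784))
params = model.init(jax.random.PRNGKey(0), x)  # build and initialize parameters
logits = model.apply(params, x)                # pure forward pass
```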
Researchers trained a neural network, CGANet (Convolution module, bidirectional Gated Recurrent Unit module, Attention module), to automate the mating success prediction process for pandas based on their vocal sounds.
Researchers recently proposed a new machine learning method for worldbuilding based on content from LIGHT, a research environment open-sourced by Facebook comprising crowd-sourced game locations, characters, and objects.
Results of the various experiments show that GELU consistently outperforms ReLU and ELU, and can be considered a viable alternative to previous nonlinear approaches.
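For reference, GELU weights each input by the standard normal CDF, GELU(x) = x·Φ(x); the tanh approximation from Hendrycks and Gimpel's paper is easy to write down:

```python
import numpy as np

def gelu(x):
    # Tanh approximation of x * Phi(x), with Phi the standard normal CDF.
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x ** 3)))
```

Unlike ReLU's hard gate at zero, GELU passes small negative inputs with reduced weight, which is often credited for its smoother optimization behavior.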
This research demonstrated that deep learning can contribute to the traditional discipline with much better performance than existing methods.
It’s not as easy as one might imagine to train an AI model to accurately predict what a human will do next, even when they are interacting with a relatively simple object like a ball.
Microsoft’s new tunable gigaword-scale neural network DialogGPT is a virtual master of conversation that outperforms strong baseline systems in generating relevant and context-consistent responses and attains near human level performance in conversational response generation tasks.
Synced invited Samuel R. Bowman, an Assistant Professor at New York University who works on artificial neural network models for natural language understanding, to share his thoughts on the “Text-to-Text Transfer Transformer” (T5) framework.
Exciting new research from Duke University introduces ProtoPNet, a deep learning network that can explain how it distinguishes a pigeon from a partridge in real time.
Cosmetic surgery companies are turning to big data, facial recognition, neural networks, adversarial learning and deep learning technologies that can assess the human face and generate outcomes for specific procedures, guiding patients to their best surgical options.
DeepMind researchers have brought quantum Monte Carlo (QMC) to a higher level with the Fermionic Neural Network — or Fermi Net — a neural network with more flexibility and higher accuracy.
To ramp up the robustness of neural networks, researchers from OpenAI have introduced a novel method that evaluates how well a neural network classifier performs against adversarial attacks that were not seen during training.
Researchers from Two Six Labs and Stanford Schnitzer Lab have developed a deep learning system designed to explore the workings of the mouse mind and predict behavior by processing brain-based electrical activity with a neural network.
Now, researchers from the Victoria University of Wellington School of Engineering and Computer Science have introduced the HSIC (Hilbert-Schmidt independence criterion) bottleneck as an alternative to backpropagation for finding good representations.
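Roughly, the approach trains each layer to maximize HSIC with the labels while minimizing HSIC with the inputs, so no gradients need to flow backwards between layers. The empirical HSIC estimator itself is compact; a sketch with Gaussian kernels (the kernel width is an illustrative choice):

```python
import numpy as np

def rbf_gram(x, sigma=1.0):
    # Gaussian (RBF) kernel matrix from pairwise squared distances.
    sq = np.sum(x ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * x @ x.T
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(x, y, sigma=1.0):
    # Empirical HSIC: tr(K H L H) / (n - 1)^2, with H the centering matrix.
    n = x.shape[0]
    h = np.eye(n) - np.ones((n, n)) / n
    return np.trace(rbf_gram(x, sigma) @ h @ rbf_gram(y, sigma) @ h) / (n - 1) ** 2
```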
Recently, Facebook AI Research (FAIR) researchers introduced a structured memory layer that can be easily integrated into a neural network to greatly expand network capacity and the number of parameters without significantly increasing computational cost.
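The trick that keeps lookup cheap is product keys: the full key set is the Cartesian product of two small sub-key sets, so a top-k search over m² memory slots reduces to two top-k searches over m sub-keys. A simplified single-query numpy sketch (names and shapes are ours, not FAIR's):

```python
import numpy as np

def product_key_lookup(query, subkeys1, subkeys2, values, k=4):
    # query: (d,); subkeys1, subkeys2: (m, d // 2); values: (m * m, dv).
    q1, q2 = np.split(query, 2)
    s1, s2 = subkeys1 @ q1, subkeys2 @ q2          # scores for each half
    top1, top2 = np.argsort(-s1)[:k], np.argsort(-s2)[:k]
    # The exact top-k over all m*m product keys is guaranteed
    # to lie among these k*k candidate pairs.
    cand = s1[top1][:, None] + s2[top2][None, :]
    flat = np.argsort(-cand, axis=None)[:k]
    i, j = np.unravel_index(flat, (k, k))
    idx = top1[i] * subkeys2.shape[0] + top2[j]    # flat slot indices
    w = np.exp(cand[i, j] - cand[i, j].max())
    w /= w.sum()                                   # softmax over selected slots
    return w @ values[idx]                         # weighted sum of memory values
```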
Architecture and weights are two essential considerations for artificial neural networks. Architecture is akin to the brain's innate structure, and covers the neural network's initial settings such as hyperparameters, layers, and node connections (or wiring).
DeepMind is a trailblazer in the trending computer-versus-human gaming research space. Following milestone victories against human pros on the board game Go and the video game StarCraft II, the Google-owned research company has now pitted its new AI system against humans in the first-person shooter multiplayer video game Quake III Arena.
A Google Brain research team led by Turing Award recipient Geoffrey Hinton recently published a paper that presents an effective method for measuring the similarity of representations learned by deep neural networks.
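The measure in question is centered kernel alignment (CKA); its linear variant reduces to a ratio of Frobenius norms over centered activations and fits in a few lines:

```python
import numpy as np

def linear_cka(x, y):
    # x: (n, p1), y: (n, p2); rows are activations for the same n examples.
    x = x - x.mean(axis=0)  # center each feature dimension
    y = y - y.mean(axis=0)
    num = np.linalg.norm(y.T @ x, "fro") ** 2
    den = np.linalg.norm(x.T @ x, "fro") * np.linalg.norm(y.T @ y, "fro")
    return num / den
```

CKA is invariant to orthogonal transformations and isotropic scaling, which makes it suitable for comparing layers of different widths or networks trained from different random initializations.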
Artificial intelligence is closing the gap on humans. Machines are rapidly honing their skills in object recognition and natural language interaction, and advanced AI agents have already beaten human champions in board and video games and even debates.