Preferred Networks’ ChainerRL Joins PyTorch Ecosystem as ‘PFRL’
Japanese AI startup Preferred Networks (PFN) is moving ChainerRL to the PyTorch ecosystem.
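PFRL is the PyTorch successor to ChainerRL and keeps its agent-centric design. Below is a minimal DQN-style sketch in the spirit of PFRL's quickstart on a gym CartPole task; module paths such as pfrl.agents.DoubleDQN, pfrl.q_functions.DiscreteActionValueHead, and pfrl.replay_buffers.ReplayBuffer follow the library's early documentation and may differ across releases.

```python
# A minimal DQN-style sketch in the spirit of PFRL's quickstart.
import gym
import numpy as np
import torch
import pfrl

env = gym.make("CartPole-v0")
obs_size = env.observation_space.low.size
n_actions = env.action_space.n

# Q-network: observation -> per-action values, wrapped for PFRL's agent API.
q_func = torch.nn.Sequential(
    torch.nn.Linear(obs_size, 64),
    torch.nn.ReLU(),
    torch.nn.Linear(64, n_actions),
    pfrl.q_functions.DiscreteActionValueHead(),
)

agent = pfrl.agents.DoubleDQN(
    q_func,
    torch.optim.Adam(q_func.parameters(), eps=1e-2),
    pfrl.replay_buffers.ReplayBuffer(capacity=10 ** 5),
    gamma=0.99,
    explorer=pfrl.explorers.ConstantEpsilonGreedy(
        epsilon=0.1, random_action_func=env.action_space.sample),
    replay_start_size=500,
    target_update_interval=100,
    phi=lambda x: x.astype(np.float32, copy=False),  # cast observations for PyTorch
)

obs = env.reset()
for _ in range(1000):
    action = agent.act(obs)
    obs, reward, done, _ = env.step(action)
    agent.observe(obs, reward, done, reset=False)
    if done:
        obs = env.reset()
```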
An ICLR 2021 submission proposes LambdaNetworks, an attention alternative that reduces the cost of modeling long-range interactions for computer vision and other applications.
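At its core, a lambda layer summarizes the context into a small linear map that is applied to every query, avoiding the quadratic attention map. A rough content-only sketch (the paper's position lambdas and multi-query heads are omitted; shapes are illustrative):

```python
import torch

def content_lambda_layer(Q, K, V):
    # Q: (N, k) queries, K: (N, k) keys, V: (N, v) values for N positions.
    # Keys are normalized across positions, then summarized into a single
    # (k, v) linear map -- the "content lambda" -- applied to every query.
    sigma_K = torch.softmax(K, dim=0)      # (N, k)
    lam_c = sigma_K.transpose(0, 1) @ V    # (k, v), computed once for the whole context
    return Q @ lam_c                       # (N, v): cost linear in N, no N x N attention map
```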
Facebook AI open-sourced a multilingual machine translation (MMT) model that translates between any pair of 100 languages without relying on English data.
UK researchers examine just how much AI research might benefit from the field of animal cognition.
Google AI recently launched “rǝ,” an open-source, browser-based toolset created to enable virtual exploration of city transitions from 1800 to 2000 in three-dimensional views.
An international research team is suggesting AI might become even more efficient and reliable if it learns to think more like worms.
A new Facebook AI and CMU renewable energy storage project could enable labs to perform days’ worth of electrocatalyst screening and calculations in just seconds.
“The Computational Limits of Deep Learning” first author Neil Thompson of MIT says DL’s economic and environmental footprints are growing worryingly fast.
A NeurIPS 2020 workshop paper by Pinkney and Adler enables realistic image generation in domains such as animation and ukiyo-e, with creative control over the output.
An ICLR 2021 submission proposes efficient VAEs that outperform PixelCNN-based autoregressive models in log-likelihood on natural image benchmarks.
ByteDance introduces a high-resolution piano transcription system trained by regressing the precise onset and offset times of piano notes and pedals.
Papers with Code and arXiv jointly announced their partnership yesterday, unveiling a convenient new Code tab on the abstract pages of arXiv machine learning articles.
The ICLR 2021 paper “An Image Is Worth 16×16 Words: Transformers for Image Recognition at Scale” suggests Transformers can outperform top CNNs on CV at scale.
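The “16×16 words” in the title are fixed-size image patches treated as input tokens. A minimal sketch of that patch extraction step (the paper's linear projection, class token, and position embeddings are left out):

```python
import torch

def patchify(images, patch=16):
    # images: (B, C, H, W) -> (B, N, C * patch * patch) flattened patch tokens,
    # where N = (H // patch) * (W // patch).
    B, C, H, W = images.shape
    x = images.unfold(2, patch, patch).unfold(3, patch, patch)  # (B, C, H/p, W/p, p, p)
    x = x.permute(0, 2, 3, 1, 4, 5).reshape(B, -1, C * patch * patch)
    return x

tokens = patchify(torch.randn(1, 3, 224, 224))  # -> (1, 196, 768), like ViT-Base inputs
```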
NeurIPS 2020 released its list of accepted papers this week with Google, Stanford, and MIT as the top affiliations.
Researchers introduce an isolated nanoscale electronic circuit element that can perform nonmonotonic operations and transistorless all-analogue computations.
NVIDIA, Mass General Brigham, and 20 global hospitals launch the federated learning initiative EXAM to build an AI model for predicting COVID-19 patients’ oxygen needs.
Google AI researchers developed a sign language detection model for video conferencing applications that identifies, in real time, a person who is signing as an active speaker.
Researchers present LPar, a distributed multi-agent platform for the large-scale industrial deployment of polyglot, diverse, and interoperable agents.
Google AI has announced a new audiovisual speech enhancement feature in YouTube Stories (iOS) that enables creators to make better selfie videos by automatically enhancing their voices and reducing noise.
A team from Google, the University of Cambridge, DeepMind, and the Alan Turing Institute has proposed a new type of Transformer dubbed Performer, built on a Fast Attention Via Positive Orthogonal Random Features (FAVOR+) mechanism.
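FAVOR+ replaces the explicit softmax attention map with positive random features, so attention can be computed in time and memory linear in the sequence length. A simplified, non-causal sketch with i.i.d. Gaussian projections (the full method additionally orthogonalizes them and redraws features during training):

```python
import torch

def favor_plus_attention(Q, K, V, m=256):
    # Q, K: (N, d); V: (N, d_v). Softmax attention uses exp(q.k / sqrt(d)),
    # so fold that scaling into the inputs first.
    d = Q.shape[-1]
    Q, K = Q / d ** 0.25, K / d ** 0.25
    W = torch.randn(m, d)  # random projections; rows ~ N(0, I)

    def phi(X):
        # Positive features: exp(Wx - ||x||^2 / 2) / sqrt(m), chosen so that
        # E[phi(q) . phi(k)] = exp(q . k), i.e. the softmax kernel.
        return torch.exp(X @ W.T - (X ** 2).sum(-1, keepdim=True) / 2) / m ** 0.5

    Qp, Kp = phi(Q), phi(K)              # (N, m)
    KV = Kp.T @ V                        # (m, d_v): context summarized once
    normalizer = Qp @ Kp.sum(dim=0)      # (N,): row sums of the implicit attention map
    return (Qp @ KV) / normalizer.unsqueeze(-1)
```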
Novel model uses a quality estimator and evolutionary optimization to search the latent space of GANs trained on limited datasets.
NVIDIA presents Imaginaire, a universal PyTorch library designed for various GAN-based tasks and methods.
Researchers introduced retrieval-augmented generation (RAG), a hybrid, end-to-end differentiable model that combines an information retrieval component with a seq2seq generator.
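Concretely, RAG retrieves top-k documents with a dense retriever, lets the generator condition on each, and marginalizes the documents out. A minimal sketch of that marginalization for the sequence-level variant, assuming retrieval scores and per-document generator log-likelihoods have already been computed:

```python
import torch

def rag_sequence_log_likelihood(doc_scores, gen_log_probs):
    # doc_scores: (k,) retriever logits for the top-k documents z given input x.
    # gen_log_probs: (k,) log p(y | x, z) from the seq2seq generator per document.
    # RAG-Sequence marginalizes: p(y | x) = sum_z p(z | x) * p(y | x, z).
    log_p_z = torch.log_softmax(doc_scores, dim=-1)
    return torch.logsumexp(log_p_z + gen_log_probs, dim=-1)
```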
Google Brain recently introduced TensorFlow Recommenders, a new open-source TensorFlow package designed to simplify the process of building, evaluating, and serving sophisticated recommender models.
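Below is a minimal two-tower retrieval sketch in the spirit of the package's announced API; the class names used here (tfrs.Model, tfrs.tasks.Retrieval, tfrs.metrics.FactorizedTopK) follow its documentation and should be checked against the released version.

```python
import tensorflow as tf
import tensorflow_recommenders as tfrs

class TwoTowerModel(tfrs.Model):
    """Retrieval model scoring users against items via embedding dot products."""

    def __init__(self, user_model, item_model, candidate_items):
        super().__init__()
        self.user_model = user_model  # maps user features -> embedding
        self.item_model = item_model  # maps item features -> embedding
        self.task = tfrs.tasks.Retrieval(
            metrics=tfrs.metrics.FactorizedTopK(
                candidates=candidate_items.batch(128).map(item_model)))

    def compute_loss(self, features, training=False):
        user_emb = self.user_model(features["user_id"])
        item_emb = self.item_model(features["item_id"])
        # The retrieval task supplies an in-batch softmax loss and top-K metrics.
        return self.task(user_emb, item_emb)
```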
Facebook AI researchers have open-sourced wav2vec 2.0, a new algorithm for self-supervised learning of speech representations.
Top Data Scientists Honored for Advanced Research and Applied Data Science in the Field of Knowledge Discovery and Data Mining
Chinese researchers propose a novel regression framework in pursuit of “fast, accurate and stable 3D dense face alignment simultaneously.”
The trimmed-down pQRNN extension to Google AI’s projection attention neural network PRADO approaches BERT performance on text classification tasks while remaining small enough for on-device use.
VR and AR will converge to combine the real and virtual, as Facebook Reality Labs researchers, developers, and engineers aim to change how we see the world.
Synced has identified a few significant technical advancements in the 3D photo field that we believe may be of interest to our readers.
Microsoft announced today that it has teamed up with OpenAI to exclusively license the AI research institute’s GPT-3 language model.
UIUC, Adobe Research, and the University of Oregon propose HDMatt, a deep learning-based image matting approach that uses a Cross-Patch Context module to handle high-resolution image inputs.
Researchers propose Augmented Temporal Contrast (ATC), a new unsupervised learning (UL) task that learns reward-agnostic visual representations without degrading the control policy.
A group of researchers from Google Research and the University of Oxford has introduced a novel technique that can “retime” people’s movements in videos.
NumPy is the foundation upon which the scientific Python ecosystem is constructed.
Starting from an augmented view of an image, the researchers trained an online network to predict the target network’s representation of the same image under a different augmented view.
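This describes a bootstrap-style setup, as in DeepMind’s BYOL: an online network with a predictor chases a slowly moving target network, and the target is updated as an exponential moving average of the online weights. A minimal sketch of the loss and target update, with tau as an illustrative momentum value:

```python
import torch
import torch.nn.functional as F

def byol_loss(online_pred, target_proj):
    # Negative cosine similarity between the online prediction of one view
    # and the target projection of the other view (targets get no gradient).
    p = F.normalize(online_pred, dim=-1)
    z = F.normalize(target_proj.detach(), dim=-1)
    return 2 - 2 * (p * z).sum(dim=-1).mean()

@torch.no_grad()
def ema_update(target_net, online_net, tau=0.996):
    # Target weights trail the online weights: theta_t <- tau*theta_t + (1-tau)*theta_o.
    for t, o in zip(target_net.parameters(), online_net.parameters()):
        t.mul_(tau).add_((1 - tau) * o)
```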
Monster Mash, a novel AI-powered 3D modelling and animation tool, aims to make arduous 3D animation processes a whole lot easier.
Facebook AI researchers and engineers just made live video content more accessible by enabling automatic closed captions for Facebook Live and Workplace Live.
OpenAI sets out to advance methods for training large-scale language models on objectives that more closely capture human preferences.
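In this line of work, a reward model is fitted to human comparisons between model outputs and then used to fine-tune the language model with RL. A minimal sketch of the pairwise reward-model objective (function and argument names are illustrative):

```python
import torch.nn.functional as F

def reward_model_loss(reward_preferred, reward_rejected):
    # Given scalar rewards r(x, y_w) for the human-preferred output and
    # r(x, y_l) for the rejected one, maximize log sigmoid(r_w - r_l).
    return -F.logsigmoid(reward_preferred - reward_rejected).mean()
```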
Microsoft has released four additional DeepSpeed technologies to enable even faster training times, whether on supercomputers or a single GPU.