Google AI Blog Introduces Lyra: A New Very Low-Bitrate Codec for Speech Compression
Google AI created Lyra, a high-quality, very low-bitrate speech codec that makes voice communication available even on the slowest networks.
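To give a rough sense of how Lyra works: the encoder distills speech into compact log mel spectrogram features (extracted roughly every 40 ms), which are compressed and transmitted, and a generative model resynthesizes the waveform at the receiver. The snippet below is a minimal sketch of the feature-extraction side only, using librosa on synthetic 16 kHz audio; the mel-band count and the crude uniform quantization are illustrative assumptions, not Lyra's actual parameters.

```python
import numpy as np
import librosa

SR = 16000                     # 16 kHz speech, as used by Lyra
FRAME_MS = 40                  # Lyra transmits features roughly every 40 ms
HOP = SR * FRAME_MS // 1000    # 640 samples per transmitted frame
N_MELS = 80                    # assumed mel-band count (illustrative)

# Synthetic 1-second "speech" signal so the sketch is self-contained.
t = np.linspace(0, 1.0, SR, endpoint=False)
audio = 0.5 * np.sin(2 * np.pi * 220 * t) * np.hanning(SR)

# Log mel spectrogram: the compact representation the encoder would
# compress and send over the network.
mel = librosa.feature.melspectrogram(
    y=audio, sr=SR, n_fft=1024, hop_length=HOP, n_mels=N_MELS)
log_mel = librosa.power_to_db(mel, ref=np.max)

# Crude uniform quantization stands in for Lyra's learned compression.
quantized = np.round((log_mel + 80.0) / 80.0 * 255).astype(np.uint8)
print(quantized.shape)   # (n_mels, frames): what the decoder's
                         # generative model would condition on
```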
A research team led by Geoffrey Hinton has created an imaginary vision system called GLOM that enables neural networks with fixed architecture to parse an image into a part-whole hierarchy with different structures for each image.
A research team from Facebook AI has proposed a Unified Transformer (UniT) encoder-decoder model that jointly trains on multiple tasks across different modalities and achieves strong performance on seven tasks with a unified set of model parameters.
A team from Microsoft and Université de Montréal proposes a new mathematical framework that uses measure theory and integral operators to achieve the goal of quantifying the regularity of the attention operation.
A research team from UC Berkeley, University of Maryland and UC Irvine identifies pitfalls that cause instability in the GPT-3 language model and proposes a contextual calibration procedure that improves accuracy by up to 30 percent.
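The calibration procedure itself is simple enough to sketch: query the model with a content-free input (e.g. "N/A") under the same prompt, treat the label probabilities it assigns there as an estimate of its bias, and rescale real predictions by the inverse of that bias. Below is a minimal numpy sketch under that reading; the probability values are made up for illustration.

```python
import numpy as np

def contextual_calibration(p_content_free, p_test):
    """Rescale test-time label probabilities by the inverse of the
    bias the model shows on a content-free prompt, then renormalize."""
    W = 1.0 / np.asarray(p_content_free)     # diag(p_cf)^-1
    scores = W * np.asarray(p_test)
    return scores / scores.sum()

# Illustrative numbers: the model leans "positive" even on a content-free input.
p_cf   = [0.7, 0.3]    # P(positive), P(negative) for "N/A"
p_test = [0.6, 0.4]    # raw prediction on a real test example

print(contextual_calibration(p_cf, p_test))
# -> roughly [0.39, 0.61]: after calibration the prediction flips to "negative"
```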
On February 3, Japanese automotive manufacturer Toyota began constructing a high-tech city at the base of Mount Fuji.
South Korea’s Customs Service has partnered with the government-funded Institute for Basic Science (IBS) to conduct joint research on AI technology using customs trade data.
Dell Technologies has opened a new innovation facility in Singapore to focus on R&D for edge computing, data analytics and augmented reality.
On February 18, Baidu released its Q4 and full-year 2020 financial report, reporting revenue of RMB 107.1 billion and net profit of RMB 22 billion.
On February 19, Beijing-based AI startup Unisound halted its IPO application to the Shanghai Stock Exchange.
On February 22, Chinese state-owned automotive company SAIC Motor announced that it has forged a comprehensive strategic cooperation with Horizon Robotics, an AI smart chip unicorn.
Torc Robotics has selected Amazon Web Services (AWS) as its cloud service provider to meet large-scale data transmission, storage and computing speed requirements.
A Microsoft team has introduced Error Analysis, a responsible AI toolkit designed to identify and diagnose errors in machine learning models.
Huawei and Mercedes-Benz are cooperating to provide S-Class owners with Huawei's HMS for Car smart vehicle solution.
University of Toronto researchers propose a BERT-inspired training approach as a self-supervised pretraining step to enable deep neural networks to leverage massive, newly available public EEG (electroencephalography) datasets for downstream brain-computer-interface (BCI) applications.
Apple has laid out the design characteristics of a new generic system that enables federated evaluation and tuning (FE&T) systems on end-user devices.
A research team from Google and Johns Hopkins University identifies variance-limited and resolution-limited scaling behaviours for dataset and model size in four scaling regimes.
British consulting firm L.E.K. has released a recent research report stating that as unmanned technology and network scale greatly reduce costs, by 2040, drones may account for as much as 30 percent of same-day parcel deliveries.
On February 16, Goldman Sachs announced that it is launching an automated wealth management platform to invest client funds in a portfolio of stocks and bonds.
According to a February 12 statement, Chinese AI chip unicorn Horizon Robotics has announced the third round of its Series C financing.
As part of its long-term strategy to “expand AI capabilities”, Swiss flavour and fragrance giant Givaudan announced that it will acquire the French company Myrissi.
UC Berkeley, Facebook AI Research and New York University researchers’ Multiple Sequence Alignments (MSA) Transformer surpasses current state-of-the-art unsupervised structure learning methods by a wide margin.
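The model's central idea is to interleave two kinds of attention over a multiple sequence alignment: row attention within each aligned sequence and column attention across sequences at each alignment position. The numpy sketch below illustrates that axial pattern on a toy MSA tensor; it omits the learned projections, multiple heads and tied row attention the paper uses, so it shows the attention axes rather than the model itself.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    """Plain scaled dot-product attention over the last two axes."""
    d = Q.shape[-1]
    return softmax(Q @ np.swapaxes(K, -1, -2) / np.sqrt(d)) @ V

# Toy MSA embedding: R aligned sequences x L positions x d features.
rng = np.random.default_rng(0)
R, L, d = 4, 10, 8
msa = rng.normal(size=(R, L, d))

# Row attention: each sequence attends over its own positions.
row_out = attention(msa, msa, msa)                        # (R, L, d)

# Column attention: each alignment column attends across sequences.
col_in = np.swapaxes(msa, 0, 1)                           # (L, R, d)
col_out = np.swapaxes(attention(col_in, col_in, col_in), 0, 1)

print(row_out.shape, col_out.shape)   # both (4, 10, 8)
```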
Researchers from UC Berkeley and Google Research have introduced BoTNet, a “conceptually simple yet powerful” backbone architecture that boosts performance on computer vision (CV) tasks such as image classification, object detection and instance segmentation.
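BoTNet's core change is small: in the last few ResNet bottleneck blocks, the 3x3 spatial convolution is swapped for multi-head self-attention over the feature map's spatial positions. The PyTorch sketch below captures that swap in simplified form; it omits BoTNet's relative position encodings and uses torch's stock MultiheadAttention, so it illustrates the idea rather than reproducing the paper's exact block.

```python
import torch
import torch.nn as nn

class BottleneckSelfAttentionBlock(nn.Module):
    """Simplified ResNet bottleneck in which the 3x3 conv is replaced by
    multi-head self-attention over spatial positions (BoTNet-style)."""

    def __init__(self, channels: int, bottleneck: int, heads: int = 4):
        super().__init__()
        self.reduce = nn.Conv2d(channels, bottleneck, kernel_size=1)
        self.attn = nn.MultiheadAttention(bottleneck, heads, batch_first=True)
        self.expand = nn.Conv2d(bottleneck, channels, kernel_size=1)
        self.norm = nn.BatchNorm2d(channels)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        b, _, h, w = x.shape
        y = self.reduce(x)                        # 1x1 conv down
        seq = y.flatten(2).transpose(1, 2)        # (B, H*W, C'): one token per pixel
        attn_out, _ = self.attn(seq, seq, seq)    # global self-attention replaces the 3x3 conv
        y = attn_out.transpose(1, 2).reshape(b, -1, h, w)
        y = self.expand(y)                        # 1x1 conv up
        return self.act(self.norm(y) + x)         # residual connection

# Toy usage on a small feature map
block = BottleneckSelfAttentionBlock(channels=256, bottleneck=64, heads=4)
feat = torch.randn(2, 256, 14, 14)
print(block(feat).shape)   # torch.Size([2, 256, 14, 14])
```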
DeepMind has designed a family of Normalizer-Free ResNets (NFNets) that can be trained with larger batch sizes and stronger data augmentations, and that have set new SOTA validation accuracies on ImageNet.
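A key ingredient that lets NFNets train with large batches and heavy augmentation without batch normalization is adaptive gradient clipping (AGC), which shrinks a gradient when its norm exceeds a fixed fraction of the corresponding parameter's norm. A minimal numpy sketch of that rule is below; the paper applies it unit-wise (per output row), which the sketch simplifies to a per-tensor version.

```python
import numpy as np

def adaptive_gradient_clip(param, grad, clip=0.01, eps=1e-3):
    """AGC-style clipping: scale the gradient down when its norm exceeds
    a fixed fraction of the parameter's norm (simplified, per-tensor)."""
    p_norm = max(np.linalg.norm(param), eps)   # eps guards near-zero parameters
    g_norm = np.linalg.norm(grad)
    max_norm = clip * p_norm
    if g_norm > max_norm:
        grad = grad * (max_norm / g_norm)
    return grad

# Illustrative numbers: a gradient ten times larger than the allowed fraction.
w = np.ones((4, 4))            # ||w|| = 4
g = np.full((4, 4), 0.1)       # ||g|| = 0.4, allowed norm = 0.01 * 4 = 0.04
print(np.linalg.norm(adaptive_gradient_clip(w, g)))   # ~0.04 after clipping
```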
Stanford researchers’ DERL (Deep Evolutionary Reinforcement Learning) is a novel computational framework that enables AI agents to evolve morphologies and learn challenging locomotion and manipulation tasks in complex environments using only low level egocentric sensory information.
A research team from DeepMind and University College London have released Alchemy, a novel open-source benchmark for meta-RL research.
Researchers from the University of Wisconsin-Madison, UC Berkeley, Google Brain and American Family Insurance propose Nyströmformer, an adaptation of the Nyström method that approximates standard self-attention with O(n) complexity.
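The Nyström trick at the heart of the paper replaces the full n x n softmax attention matrix with a product of three much smaller matrices built from m landmark queries and keys, which is what brings the cost down to O(n). Below is a rough numpy sketch of that approximation using mean-pooled landmarks and an exact pseudo-inverse; the paper uses an iterative pseudo-inverse approximation and further refinements the sketch omits.

```python
import numpy as np

def softmax(x, axis=-1):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def nystrom_attention(Q, K, V, num_landmarks=8):
    """Approximate softmax(Q K^T / sqrt(d)) V with Nystrom landmarks.
    Assumes the sequence length is divisible by num_landmarks."""
    n, d = Q.shape
    scale = 1.0 / np.sqrt(d)
    # Landmarks: mean-pool queries/keys over num_landmarks segments.
    Q_l = Q.reshape(num_landmarks, n // num_landmarks, d).mean(axis=1)
    K_l = K.reshape(num_landmarks, n // num_landmarks, d).mean(axis=1)
    # Three small softmax kernels instead of one n x n matrix.
    F = softmax(Q @ K_l.T * scale)             # (n, m)
    A = softmax(Q_l @ K_l.T * scale)           # (m, m)
    B = softmax(Q_l @ K.T * scale)             # (m, n)
    return F @ np.linalg.pinv(A) @ (B @ V)     # linear in sequence length

rng = np.random.default_rng(0)
n, d = 64, 16
Q, K, V = rng.normal(size=(3, n, d))
approx = nystrom_attention(Q, K, V)
exact = softmax(Q @ K.T / np.sqrt(d)) @ V
print(approx.shape, np.abs(approx - exact).mean())  # mean deviation from exact attention
```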
The researchers develop machine learning methods that enable virtual agents (such as avatars in a computer game) to communicate non-verbally.
In a bid to solve the temporal generalization problem of modern language models, a team of DeepMind researchers argues it is time to develop adaptive language models that can remain up-to-date in our ever-changing world.
A new study by the University of Tübingen introduces the world’s largest unified eye dataset with over 20 million human eye images captured using head-mounted eye trackers.
To track progress in natural language generation (NLG) models, 55 researchers from more than 40 prestigious institutions have proposed GEM (Generation, Evaluation, and Metrics), a “living benchmark” NLG evaluation environment.
The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21) kicked off today as a virtual conference. The organizing committee announced the Best Paper Awards and Runners Up during this morning’s opening ceremony.
A bold Carnegie Mellon University (CMU) team recently explored the prospect of using AI to review AI papers.
Researchers from the University of Sheffield, Beihang University, and Open University’s Knowledge Media Institute have proposed a transfer learning approach that can automatically process historical texts at a semantic level to generate modern language summaries.
A new study by the Georgia Institute of Technology and Facebook AI introduces TT-Rec, a way to drastically compress the size of memory-intensive Deep Learning Recommendation Models (DLRM) and make them easier to deploy at scale.
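TT-Rec compresses DLRM's huge embedding tables by storing them in tensor-train (TT) format: the dense table is never materialized, and each embedding row is reconstructed on demand as a product of small core tensors. The numpy sketch below shows that lookup path for a toy table; the factorization sizes and rank are made-up illustration values, and the real system adds caching and optimized kernels the sketch leaves out.

```python
import numpy as np

def tt_embedding_row(cores, idx, row_factors):
    """Reconstruct one embedding row from tensor-train cores.
    cores[k] has shape (r_{k-1}, row_factors[k], col_factors[k], r_k)."""
    # Split the flat row index into one sub-index per core (mixed radix).
    sub = []
    for n_k in reversed(row_factors):
        sub.append(idx % n_k)
        idx //= n_k
    sub = sub[::-1]
    # Multiply the selected core slices together along the TT ranks.
    v = np.ones((1, 1))                            # (cols_so_far, rank)
    for core, i_k in zip(cores, sub):
        a = core[:, i_k, :, :]                     # (r_prev, d_k, r_next)
        v = np.einsum('cr,rds->cds', v, a)         # contract over the TT rank
        v = v.reshape(-1, a.shape[-1])             # fold d_k into the output dim
    return v.reshape(-1)                           # final rank is 1

# Toy sizes: a 1000 x 64 embedding table factored as (10*10*10) x (4*4*4).
rng = np.random.default_rng(0)
row_factors, col_factors, rank = (10, 10, 10), (4, 4, 4), 8
ranks = (1, rank, rank, 1)
cores = [rng.normal(size=(ranks[k], row_factors[k], col_factors[k], ranks[k + 1]))
         for k in range(3)]

vec = tt_embedding_row(cores, idx=123, row_factors=row_factors)
print(vec.shape, sum(c.size for c in cores))  # (64,) from 3,200 TT parameters
                                              # vs. 64,000 for the dense table
```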
A recent study by the Google Brain Team proposes a new way of programming automated machine learning (AutoML) based on symbolic programming.
UmlsBERT is a deep Transformer network architecture that incorporates clinical domain knowledge from the UMLS Metathesaurus to build ‘semantically enriched’ contextual representations that benefit from both contextual learning and domain knowledge.
A new study by NVIDIA, University of Toronto, McGill University and the Vector Institute introduces an efficient neural representation that enables real-time rendering of high-fidelity neural SDFs for the first time while delivering SOTA geometry reconstruction quality.
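Real-time rendering of an SDF, neural or otherwise, typically relies on sphere tracing: step along each ray by the signed distance the function returns until the surface is reached. The sketch below runs that loop against an analytic sphere SDF as a stand-in; in the paper's setting the `sdf` function would be a small MLP fed features from a sparse voxel octree, which is what makes real-time queries feasible.

```python
import numpy as np

def sdf(p):
    """Stand-in signed distance function: a unit sphere at the origin.
    In the paper's setting this would be a small neural network queried
    with features interpolated from a sparse voxel octree."""
    return np.linalg.norm(p, axis=-1) - 1.0

def sphere_trace(origin, direction, max_steps=64, eps=1e-4, max_dist=10.0):
    """March along the ray by the returned distance until the surface."""
    t = 0.0
    for _ in range(max_steps):
        p = origin + t * direction
        d = sdf(p)
        if d < eps:              # close enough: surface hit
            return t
        t += d                   # safe step: the SDF bounds the distance to the surface
        if t > max_dist:
            break
    return None                  # ray missed the surface

origin = np.array([0.0, 0.0, -3.0])
direction = np.array([0.0, 0.0, 1.0])    # unit-length ray direction
print(sphere_trace(origin, direction))   # ~2.0: the ray hits the unit sphere at z = -1
```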
Researchers from the University of California, Merced, and Microsoft have introduced ZeRO-Offload, a novel heterogeneous DL training technology that enables training of multi-billion parameter models on a single GPU without any model refactoring.
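ZeRO-Offload ships as part of Microsoft's DeepSpeed library, where it is enabled through the ZeRO section of the training config by offloading optimizer state (and optionally gradients) to CPU memory. The snippet below writes an example of such a config; the `zero_optimization` keys follow DeepSpeed's documented options, while the batch size and learning rate are placeholder values.

```python
import json

# Illustrative DeepSpeed config enabling ZeRO stage 2 with CPU offload of the
# optimizer state, the mechanism ZeRO-Offload uses to fit multi-billion-parameter
# models on a single GPU. Batch size and learning rate are placeholders.
ds_config = {
    "train_batch_size": 8,
    "fp16": {"enabled": True},
    "optimizer": {"type": "Adam", "params": {"lr": 1e-4}},
    "zero_optimization": {
        "stage": 2,
        "offload_optimizer": {"device": "cpu", "pin_memory": True},
    },
}

with open("ds_config.json", "w") as f:
    json.dump(ds_config, f, indent=2)
# The file is then passed to DeepSpeed's initialization/launcher alongside an
# unmodified PyTorch model: no model refactoring required.
```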
A new BibTeX-normalizing tool dubbed Rebiber is gaining popularity in the AI research community. The creation of a PhD student, Rebiber addresses incomplete or confusing paper citation information.