Baidu’s Revenue Tops RMB 100 Billion in 2020, Driven by AI Transition
On February 18, Baidu released its 2020 full-year and Q4 financial report, reporting revenue of RMB 107.1 billion and net profit of RMB 22 billion.
AI Technology & Industry Review
On February 19, Beijing-based AI startup Unisound decided to halt its IPO application to the Shanghai Stock Exchange.
On February 22, Chinese state-owned automotive company SAIC Motor announced that it has forged a comprehensive strategic partnership with Horizon Robotics, an AI smart chip unicorn.
Torc Robotics has selected Amazon Web Services (AWS) as its cloud service provider to meet its large-scale data transmission, storage and computing requirements.
China’s Ministry of Industry and Information Technology has added five more cities to the country’s National Artificial Intelligence Innovation Application Pilot Zones.
A Microsoft team has introduced Error Analysis, a responsible AI toolkit designed to identify and diagnose errors in machine learning models.
Huawei and Mercedes-Benz are cooperating to provide S-Class owners with the HMS for Car smart vehicle solution.
University of Toronto researchers propose a BERT-inspired self-supervised pretraining approach that enables deep neural networks to leverage newly available, massive public EEG (electroencephalography) datasets for downstream brain-computer interface (BCI) applications.
Apple has laid out the design characteristics of a new generic framework that enables federated evaluation and tuning (FE&T) on end-user devices.
A research team from Google and Johns Hopkins University identifies variance-limited and resolution-limited scaling behaviours for dataset and model size in four scaling regimes.
British consulting firm L.E.K. has released a recent research report stating that as unmanned technology and network scale greatly reduce costs, by 2040, drones may account for as much as 30 percent of same-day parcel deliveries.
On February 16, Goldman Sachs announced that it is launching an automated wealth management platform to invest client funds in a portfolio of stocks and bonds.
According to a February 12 statement, Chinese AI chip unicorn Horizon Robotics has announced the third tranche of its Series C financing.
As part of its long-term strategy to “expand AI capabilities”, Swiss flavours and fragrances giant Givaudan announced that it will acquire the French company Myrissi.
UC Berkeley, Facebook AI Research and New York University researchers’ Multiple Sequence Alignments (MSA) Transformer surpasses current state-of-the-art unsupervised structure learning methods by a wide margin.
Researchers from UC Berkeley and Google Research have introduced BoTNet, a “conceptually simple yet powerful” backbone architecture that boosts performance on computer vision (CV) tasks such as image classification, object detection and instance segmentation.
DeepMind has designed a family of Normalizer-Free ResNets (NFNets) that can be trained with larger batch sizes and stronger data augmentations and have set new SOTA validation accuracies on ImageNet.
Stanford researchers’ DERL (Deep Evolutionary Reinforcement Learning) is a novel computational framework that enables AI agents to evolve morphologies and learn challenging locomotion and manipulation tasks in complex environments using only low-level egocentric sensory information.
A research team from DeepMind and University College London has released Alchemy, a novel open-source benchmark for meta-RL research.
Researchers from the University of Wisconsin-Madison, UC Berkeley, Google Brain and American Family Insurance propose Nyströmformer, an adaptation of the Nyström method that approximates standard self-attention with O(n) complexity.
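The core idea can be illustrated with a minimal NumPy sketch: instead of forming the full n×n attention matrix, a small set of m landmark queries and keys (here, simple segment means) yields three skinny matrices whose product approximates softmax attention in linear time. This is a simplified illustration, not the paper's exact method — Nyströmformer uses an iterative Moore-Penrose approximation where this sketch calls `np.linalg.pinv`.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def nystrom_attention(Q, K, V, m=8):
    """Approximate softmax attention using m landmark points."""
    n, d = Q.shape
    # Landmarks: means of m contiguous segments of queries/keys (assumes m | n)
    Qm = Q.reshape(m, n // m, d).mean(axis=1)
    Km = K.reshape(m, n // m, d).mean(axis=1)
    s = 1.0 / np.sqrt(d)
    F = softmax(Q @ Km.T * s)    # (n, m): queries vs. landmark keys
    A = softmax(Qm @ Km.T * s)   # (m, m): landmark queries vs. landmark keys
    B = softmax(Qm @ K.T * s)    # (m, n): landmark queries vs. all keys
    # No n x n matrix is ever materialized, so cost is linear in n
    return F @ np.linalg.pinv(A) @ (B @ V)

rng = np.random.default_rng(0)
n, d = 64, 16
Q, K, V = rng.normal(size=(3, n, d))
out = nystrom_attention(Q, K, V)   # shape (64, 16)
```

With m fixed, each of the three softmax matrices has at most m·n entries, which is what brings the quadratic cost of standard self-attention down to O(n).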
The researchers develop machine learning methods that enable virtual agents (such as avatars in a computer game) to communicate non-verbally.
In a bid to solve the temporal generalization problem of modern language models, a team of DeepMind researchers argues it is time to develop adaptive language models that will remain up-to-date in our ever-changing world.
A new study by the University of Tübingen introduces the world’s largest unified eye dataset, with over 20 million human eye images captured using head-mounted eye trackers.
To track progress in natural language generation (NLG) models, 55 researchers from more than 40 prestigious institutions have proposed GEM (Generation, Evaluation, and Metrics), a “living benchmark” NLG evaluation environment.
The Thirty-Fifth AAAI Conference on Artificial Intelligence (AAAI-21) kicked off today as a virtual conference. The organizing committee announced the Best Paper Awards and Runners Up during this morning’s opening ceremony.
A bold Carnegie Mellon University (CMU) team recently explored the prospect of using AI to review AI papers.
Researchers from the University of Sheffield, Beihang University, and the Open University’s Knowledge Media Institute have proposed a transfer learning approach that can automatically process historical texts at a semantic level to generate modern language summaries.
A new study by the Georgia Institute of Technology and Facebook AI introduces TT-Rec, a way to drastically compress the size of memory-intensive Deep Learning Recommendation Models (DLRM) and make them easier to deploy at scale.
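The compression idea behind tensor-train (TT) factorization can be sketched in a few lines of NumPy: reshape a large embedding table into a higher-order tensor and store it as a chain of small "cores", materializing individual rows on demand. All sizes and ranks below are illustrative assumptions, and this is a generic TT-embedding sketch rather than TT-Rec's actual implementation.

```python
import numpy as np

# Assumed factorizations: vocab 8*8*8 = 512 rows, embedding dim 4*4*4 = 64
n, d, r = (8, 8, 8), (4, 4, 4), 8   # r is the (assumed) TT rank
ranks = (1, r, r, 1)

rng = np.random.default_rng(0)
# Core k has shape (r_{k-1}, n_k, d_k, r_k)
cores = [rng.normal(size=(ranks[k], n[k], d[k], ranks[k + 1])) * 0.1
         for k in range(3)]

def tt_embedding(row):
    """Materialize one embedding row from the TT cores."""
    # Decompose the flat row index into per-core indices (i1, i2, i3)
    idx = []
    for nk in reversed(n):
        row, i = divmod(row, nk)
        idx.append(i)
    idx.reverse()
    # Contract the core slices left to right
    v = cores[0][:, idx[0]]            # (1, d1, r1)
    for k in range(1, 3):
        g = cores[k][:, idx[k]]        # (r_{k-1}, d_k, r_k)
        v = np.einsum('abr,rcs->abcs', v, g)
        v = v.reshape(1, -1, g.shape[-1])
    return v.reshape(-1)               # (d1*d2*d3,) = (64,)

vec = tt_embedding(123)
full_params = 512 * 64                       # dense table: 32768 parameters
tt_params = sum(c.size for c in cores)       # cores: far fewer parameters
```

The memory win comes from the cores growing linearly with the number of index factors rather than multiplicatively, at the cost of a small contraction at lookup time.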
A recent study by the Google Brain Team proposes a new way of programming automated machine learning (AutoML) based on symbolic programming.
UmlsBERT is a deep Transformer network architecture that incorporates clinical domain knowledge from the UMLS Metathesaurus in order to build ‘semantically enriched’ contextual representations that benefit from both contextual learning and domain knowledge.
A new study by NVIDIA, University of Toronto, McGill University and the Vector Institute introduces an efficient neural representation that enables real-time rendering of high-fidelity neural SDFs for the first time while delivering SOTA geometry reconstruction quality.
Researchers from University of California, Merced and Microsoft have introduced ZeRO-Offload, a novel heterogeneous DL training technology that enables training of multi-billion parameter models on a single GPU without any model refactoring.
A new BibTeX-normalizing tool dubbed Rebiber is gaining popularity in the AI research community. The creation of a PhD student, Rebiber addresses incomplete or confusing paper citation information.
Researchers from Naver AI Lab say they’ve found a computationally efficient re-labelling strategy that fixes a significant flaw in the popular image classification benchmark ImageNet.
On January 22, Beijing-based AI firm 4Paradigm announced USD 700 million in Series D funding from Boyu Capital, Primavera Capital and Hopu Investments.
Recent footage shows an agricultural robot co-developed by Japan’s National Agriculture and Food Research Organization (NARO), Ritsumeikan University, and automobile parts manufacturer Denso harvesting apples in Japanese orchards.
Is it possible for machines to understand and integrate human emotions in visual arts? Meet ArtEmis, a new large-scale dataset of emotional reactions and explanations for visual artworks.
Analyzing papers and patent data collected from 44 top AI journals and conferences between 2011 and 2020, Tsinghua University’s new AI development report reveals fresh statistical insights.
IBM has closed down its Beijing-based China Research Laboratory (CRL) after 25 years of operation.
This is the second in a special Synced series of introductory articles on traditionally theoretical fields of studies and their impact on modern-day machine learning.