Facebook AI Is Teaching Robots to Perceive, Understand, and Interact Through Touch
On November 1, Facebook AI Research shared its progress on developing AI systems that can understand and interact through touch.
A team led by Yann LeCun proposes dictionary learning to provide detailed visualizations of transformer representations and insights into semantic structures such as word-level disambiguation, sentence-level pattern formation, and long-range dependencies captured by transformers.
A research team from Facebook AI proposes a Contrastive Semi-supervised Learning (CSL) approach that synthesizes pseudo-labelling and contrastive losses to improve the stability of learned speech representations for automatic speech recognition (ASR).
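The core idea lends itself to a compact sketch. Below is a hypothetical PyTorch illustration of such a combined objective: a cross-entropy term against teacher-generated pseudo-labels plus an InfoNCE-style contrastive term. The function name, equal weighting, and tensor shapes are our assumptions, not FAIR's implementation.

```python
import torch
import torch.nn.functional as F

def csl_loss(logits, pseudo_labels, z_anchor, z_positive, temperature=0.1):
    # Pseudo-labelling term: fit the student to teacher-generated labels.
    ce = F.cross_entropy(logits, pseudo_labels)

    # Contrastive (InfoNCE-style) term: each anchor's positive pair sits on
    # the diagonal of the batch similarity matrix; all other entries act as
    # negatives, which stabilises the learned representations.
    z_a = F.normalize(z_anchor, dim=-1)
    z_p = F.normalize(z_positive, dim=-1)
    sim = z_a @ z_p.t() / temperature              # (B, B) similarities
    targets = torch.arange(z_a.size(0), device=sim.device)
    nce = F.cross_entropy(sim, targets)

    return ce + nce                                # illustrative equal weighting
```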
The Multiple Sequence Alignments (MSA) Transformer, from researchers at UC Berkeley, Facebook AI Research and New York University, surpasses current state-of-the-art unsupervised structure learning methods by a wide margin.
Recent AI research on speech separation has explored ways to associate lip motions in videos with audio, but this approach suffers when speakers’ lips are occluded, which they often are in busy multi-speaker environments.
Facebook AI says DNNs can perform well without class-specific neurons, and that overreliance on intuition-based methods for understanding DNNs can be misleading.
TaBERT-powered neural semantic parsers showed performance improvements on the challenging benchmark WikiTableQuestions and demonstrated competitive performance on the text-to-SQL dataset Spider.
Researchers propose a neuro-symbolic hybrid approach to address the challenge of creativity in generative art.
Facebook this week released Detection Transformers (DETR), a new approach for object detection and panoptic segmentation tasks that uses a completely different architecture than previous object detection systems.
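Facebook published pretrained DETR checkpoints via torch.hub; the snippet below, adapted from the entry points documented in the facebookresearch/detr repository, loads one and runs a forward pass (the dummy input batch is illustrative).

```python
import torch

# Load the pretrained ResNet-50 DETR model published via torch.hub.
model = torch.hub.load('facebookresearch/detr', 'detr_resnet50', pretrained=True)
model.eval()

# A dummy normalized image batch, for illustration only.
images = torch.randn(1, 3, 800, 1066)
with torch.no_grad():
    outputs = model(images)

# DETR returns a fixed set of object queries rather than region proposals:
# outputs['pred_logits'] holds per-query class scores and
# outputs['pred_boxes'] holds normalized (cx, cy, w, h) boxes.
```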
To deliver human-level voices to its platform’s billions of users while maintaining strict compute efficiency, Facebook AI researchers have deployed a new neural TTS system that works on CPU servers.
Researchers have introduced Active Neural SLAM, a modular and hierarchical approach to learning policies for exploring 3D environments.
In a new study, researchers use a physics simulator to learn to predict physical forces in videos of humans interacting with objects.
Researchers proposed a “radioactive data” technique for subtly marking images in a dataset to help researchers later determine whether they were used to train a particular model.
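As a rough illustration only: the sketch below adds a small perturbation along a fixed random "carrier" direction in pixel space. The actual technique aligns image features with the carrier through an optimization procedure; this pixel-space simplification and all names here are ours.

```python
import torch

def mark_images(images, carrier, epsilon=0.01):
    # Add a small perturbation along a fixed, unit-norm carrier direction.
    carrier = carrier / carrier.norm()
    return (images + epsilon * carrier).clamp(0.0, 1.0)

images = torch.rand(8, 3, 224, 224)   # toy batch of images in [0, 1]
carrier = torch.randn(3, 224, 224)    # secret random marking direction
marked = mark_images(images, carrier)
# Detection later tests whether a trained model's features are statistically
# aligned with the carrier, which is evidence the marked data was used in training.
```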
Facebook’s new HiPlot is a lightweight interactive visualization tool that uses parallel plots to discover correlations and patterns in high-dimensional data.
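HiPlot is open source and pip-installable; the minimal usage below follows the example in the project's documentation, where each dict is one trial and its keys become the axes of the parallel plot.

```python
import hiplot as hip

# Each dict is one experiment/trial; keys become axes of the parallel plot.
data = [
    {'lr': 0.001, 'dropout': 0.10, 'optimizer': 'SGD',  'loss': 10.0},
    {'lr': 0.010, 'dropout': 0.15, 'optimizer': 'Adam', 'loss': 3.5},
    {'lr': 0.100, 'dropout': 0.30, 'optimizer': 'Adam', 'loss': 4.5},
]
hip.Experiment.from_iterable(data).display()  # renders inline in a notebook
```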
Facebook AI researchers have further developed the BART model with the introduction of mBART.
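mBART checkpoints are also distributed through the Hugging Face Transformers library; assuming that route, a translation call looks roughly like this (the EN-to-RO fine-tuned checkpoint shown is one of the released models).

```python
from transformers import MBartForConditionalGeneration, MBartTokenizer

# 'facebook/mbart-large-en-ro' is one of the released fine-tuned checkpoints.
tokenizer = MBartTokenizer.from_pretrained("facebook/mbart-large-en-ro")
model = MBartForConditionalGeneration.from_pretrained("facebook/mbart-large-en-ro")

inputs = tokenizer("UN Chief Says There Is No Military Solution in Syria",
                   return_tensors="pt")
generated = model.generate(
    **inputs,
    decoder_start_token_id=tokenizer.lang_code_to_id["ro_RO"],  # target language
)
print(tokenizer.batch_decode(generated, skip_special_tokens=True))
```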
The Facebook AI Research team has introduced a new “point-based rendering” neural network module with an iterative subdivision algorithm that can be integrated into SOTA image segmentation models.
The model reduces the number of parameters from some 3 billion to 270 million while improving task performance by an average of 2.05 points.
As global AI development and deployment continue, demand for AI talent is growing faster than ever. A number of industry leaders and reputable institutions offer AI residency programs designed to help nurture promising AI talent.
The recent rapid development of pretrained language models has produced significant performance improvements on downstream NLP tasks.
Recently, Facebook AI Research (FAIR) researchers introduced a structured memory layer that can be easily integrated into a neural network to greatly expand network capacity and parameter count without significantly increasing computational cost.
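Conceptually, such a layer is a large key-value store queried with sparse top-k attention: capacity lives in the value table, while compute per query touches only a few slots. The toy module below conveys the idea; the FAIR design uses product keys so lookup scales sublinearly with memory size, whereas this naive version scores every slot and is purely illustrative.

```python
import torch
import torch.nn as nn

class ToyMemoryLayer(nn.Module):
    """Naive key-value memory queried with sparse top-k attention. The FAIR
    layer uses product keys so lookup cost grows sublinearly with memory
    size; here every slot is scored, purely for illustration."""
    def __init__(self, dim, num_slots=4096, topk=4):
        super().__init__()
        self.keys = nn.Parameter(torch.randn(num_slots, dim))
        self.values = nn.Embedding(num_slots, dim)   # bulk of the parameters
        self.topk = topk

    def forward(self, query):                        # query: (batch, dim)
        scores = query @ self.keys.t()               # score every memory slot
        weights, idx = scores.topk(self.topk, dim=-1)
        weights = weights.softmax(dim=-1)            # attention over k slots
        return (weights.unsqueeze(-1) * self.values(idx)).sum(dim=1)
```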
Facebook AI’s new “Inverse Cooking” AI system reverse-engineers recipes from food images, predicting both the ingredients in a dish and the preparation and cooking instructions.
Researchers from New York University and Facebook AI Research recently added 50,000 test samples to the MNIST dataset. Facebook Chief AI Scientist Yann LeCun, who co-developed MNIST, tweeted his approval: “MNIST reborn, restored and expanded.”
ImageNet pre-training is common in a variety of computer vision (CV) tasks, as something of a consensus has emerged that pre-training can help a model learn transferable information that is useful for target tasks.
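The consensus workflow referred to here is standard transfer learning: load ImageNet-pretrained weights, replace the classification head for the target task, and fine-tune. A generic torchvision sketch follows; the target class count and freezing choice are arbitrary.

```python
import torch.nn as nn
import torchvision.models as models

# Start from ImageNet-pretrained weights, swap the head for the target task
# (10 classes is arbitrary), then fine-tune on the target dataset.
backbone = models.resnet50(pretrained=True)
backbone.fc = nn.Linear(backbone.fc.in_features, 10)

# Optionally freeze early layers so only later layers and the new head adapt.
for param in backbone.layer1.parameters():
    param.requires_grad = False
```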