ICLR 2021 Announces List of Accepted Papers
Of the 2997 submissions, 860 papers have made it to ICLR 2021, for an acceptance rate of 28.7 percent — slightly higher than last year’s 26.5 percent.
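The reported acceptance rate follows directly from the submission and acceptance counts in the text; a quick sanity check:

```python
# Verify the ICLR 2021 acceptance rate from the figures quoted above.
accepted, submitted = 860, 2997
rate = round(100 * accepted / submitted, 1)
print(f"{rate} percent")  # 28.7 percent
```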
OpenAI has trained a neural network called DALL·E that creates images from text captions for a wide range of concepts expressible in natural language.
This is the fourth Synced year-end compilation of “Artificial Intelligence Failures.” Our aim is not to shame nor downplay AI research, but to look at where and how it has gone awry with the hope that we can create better AI systems in the future.
As part of our year-end series, Synced has compiled a global list of proposals, rules and regulatory frameworks for AI introduced in 2020.
As part of our year-end series, Synced highlights 10 AI-powered efforts that contributed to the fight against COVID-19 in 2020.
Synced has compiled a list of nonfiction books that notable AI researchers and engineers have recommended on Twitter over the last 12 months.
As part of our year-end series, Synced highlights 10 AI-powered art projects that inspired and entertained us in 2020.
Synced has selected 10 AI-related podcasts for readers to check out over the holiday season.
Researchers combine the effectiveness of the inductive bias in CNNs with the expressivity of transformers to model and synthesize high-resolution images.
In the new paper Canonical Capsules: Unsupervised Capsules in Canonical Pose, Turing Award Honoree Dr. Geoffrey Hinton and a team of researchers propose an architecture for unsupervised learning with 3D point clouds based on capsules.
As part of our year-end series, Synced highlights 10 artificial intelligence papers that garnered extraordinary attention and accolades in 2020.
This year, 22 Transformer-related research papers were accepted by NeurIPS, the world’s most prestigious machine learning conference. Synced has selected ten of these works to showcase the latest Transformer trends.
Yoshua Bengio and Anirudh Goyal from Mila - Quebec AI Institute delve into human and non-human animal intelligence and how it can inform deep learning.
“Depix” is a new AI-powered tool that can easily undo pixelization to enable recovery of the information therein.
The new AI-powered Multi-Ingredient Pizza Generator (MPG) can deliver all these mouth-watering pies and many more.
ImHex’s creator says his trending hex editor is aimed at “Reverse Engineers, Programmers and people that value their eyesight when working at 3 AM.”
Google DeepMind has added Jraph to the JAX ecosystem; the machine learning framework will also appear in a NeurIPS 2020 Spotlight.
OpenAI’s groundbreaking GPT-3 language model paper, a no-regret learning dynamics study from Politecnico di Milano & Carnegie Mellon University, and a UC Berkeley work on data summarization have been named the NeurIPS 2020 Best Paper Award winners.
AAAI 2021 received a record-high 9,034 submissions; 7,911 papers went to review and a total of 1,692 were accepted, for an acceptance rate of 21 percent.
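Note that the 21 percent acceptance rate is computed against the reviewed papers rather than the total submissions; both ratios, using only the numbers quoted above:

```python
# AAAI 2021 acceptance rate: reviewed-paper basis vs. total-submission basis.
accepted, reviewed, submitted = 1692, 7911, 9034
print(round(100 * accepted / reviewed))   # 21 (the rate the conference reports)
print(round(100 * accepted / submitted))  # 19 (rate against all submissions)
```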
The approach dramatically reduces bandwidth requirements by sending only a keypoint representation [of faces] and reconstructing the source video on the receiver side with the help of generative adversarial networks (GANs) to synthesize the talking heads.
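The bandwidth savings come from transmitting a handful of facial keypoints per frame instead of pixels. A back-of-the-envelope comparison (the keypoint count and frame size below are illustrative assumptions, not figures from the article):

```python
# Rough bandwidth comparison: raw RGB frame vs. a sparse keypoint representation.
frame_bytes = 512 * 512 * 3              # one uncompressed 512x512 RGB frame
num_keypoints = 10                       # hypothetical keypoints per face
keypoint_bytes = num_keypoints * 2 * 4   # (x, y) pairs as 32-bit floats
ratio = frame_bytes / keypoint_bytes
print(f"{frame_bytes} vs {keypoint_bytes} bytes: ~{ratio:.0f}x smaller payload")
```

The receiver-side GAN then reconstructs a full frame from the keypoints plus a single reference image, which is where the quality, rather than the bandwidth, is determined.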
A Princeton student designed a GAN framework for Chinese landscape painting generation that is so effective most humans can’t distinguish its works from the real thing.
DeepMind, Google’s UK-based AI research lab, says its AlphaFold system has solved the protein folding problem, a grand challenge that has vexed the biology research community for half a century.
Researchers from the City University of Hong Kong and SenseTime propose a lightweight matting objective decomposition network (MODNet) that can smoothly process real-time human matting from a single input image with diverse and dynamic backgrounds.
Researchers from the University of Alberta recently proposed U^2-Net, a novel deep network architecture that achieves very competitive performance in salient object detection.
A new AI-powered image synthesis framework makes “learning” to moonwalk or drop Blackpink dance moves a snap.
Facebook AI is building an automatic differentiation system for the Kotlin programming language and developing a system for tensor typing.
The Conference on Empirical Methods in Natural Language Processing (EMNLP 2020) kicked off on Monday as a virtual conference.
A new DeepMind scalable environment simulator takes a digital approach to the question, enabling the examination of environmental factors on AI agents.
Google Research and DeepMind debut Long-Range Arena (LRA) benchmark for Transformer research on tasks with long sequence lengths.
Google Brain ICLR 2021 submission analyzes learned optimizers’ performance advantage over well-tuned baseline optimizers.
Amazon Alexa AI paper asks whether NLU problems could be mapped to question-answering (QA) problems using transfer learning.
A new AI Expert Roadmap developed by German software company AMAI is garnering keen interest from aspiring AI professionals around the world.
In a new paper, researchers from Google, OpenAI, and DeepMind introduce “behaviour priors,” a framework designed to capture common movement and interaction patterns that are shared across a set of related tasks or contexts.
Facebook AI says DNNs can perform well without class specific neurons and overreliance on intuition-based methods for understanding DNNs can be misleading.
Probability trees may have been around for decades, but they have received little attention from the AI and ML community.
Amazon extracts an optimal subset of architectural parameters for the BERT architecture by applying recent breakthroughs in algorithms for neural architecture search.
Now, just in time for costume season, another indie developer has taken facial image transfer tech to the opposite end of the cuteness spectrum, building a zombie generator.
“Trust in AI systems is becoming, if not already, the biggest barrier for enterprises — as they start to move from exploring AI or potentially piloting or doing some proof of concept works into deploying AI into a production system.”
ICLR 2021 submission proposes LambdaNetworks, a transformer-specific method that reduces costs of modeling long-range interactions for CV and other applications.
Google AI recently launched the open-source browser-based toolset “rǝ,” which was created to enable the exploration of city transitions from 1800 to 2000 virtually in a three-dimensional view.