In the new paper Adversarial Examples Are Not Bugs, They Are Features, a group of MIT researchers proposes that the effectiveness of adversarial examples can be attributed to non-robust features: “Adversarial vulnerability is a direct result of our models’ sensitivity to well-generalizing features in the data.”
The Seventh International Conference on Learning Representations (ICLR) kicked off today. One of the world’s major machine learning conferences, ICLR this year received 1591 main conference paper submissions — up 60 percent over last year — and accepted 24 for oral presentations and 476 as poster presentations.
There’s a lot more to a friendly game of Jenga than meets the eye. Strategies are informed by a complex set of tactile and visual stimuli — by touching a block and observing the tower, we not only see but also feel our actions and their consequences. The MIT Jenga robot thus marks an important step in AI’s transition to the physical world.
New research from Carnegie Mellon University, Peking University and the Massachusetts Institute of Technology shows that global minima of deep neural networks can be achieved via gradient descent under certain conditions. The paper Gradient Descent Finds Global Minima of Deep Neural Networks was published November 12 on arXiv.
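The paper’s setting is over-parameterized deep networks, but the underlying procedure is ordinary gradient descent. A minimal sketch of the iteration, applied here to a simple convex quadratic (not the paper’s deep-network case) where the global minimum is known to be w = 3, with an illustrative learning rate and step count:

```python
# Minimal sketch of gradient descent (not the paper's setup):
# repeatedly step against the gradient, w <- w - lr * grad(w).
# Learning rate and step count below are illustrative choices.

def gradient_descent(grad, w0, lr=0.1, steps=200):
    """Run fixed-step gradient descent from w0 and return the final iterate."""
    w = w0
    for _ in range(steps):
        w = w - lr * grad(w)
    return w

# f(w) = (w - 3)^2 has its global minimum at w = 3; f'(w) = 2 * (w - 3).
w_star = gradient_descent(lambda w: 2 * (w - 3.0), w0=0.0)
```

On a convex objective like this one, every local minimum is global, so convergence is unsurprising; the paper’s contribution is showing that, under certain over-parameterization conditions, the same simple iteration also reaches global minima of highly non-convex deep-network losses.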
The Massachusetts Institute of Technology (MIT) today announced it will invest US$1 billion in a new college for artificial intelligence. The MIT Stephen A. Schwarzman College of Computing will “constitute both a global center for computing research and education, and an intellectual foundry for powerful new AI tools.”
The dearth of AI talent capable of manually designing neural architectures such as AlexNet and ResNet has spurred research in automatic architecture design. Google’s Cloud AutoML is an example of a system that enables developers with limited machine learning expertise to train high-quality models. The trade-off, however, is AutoML’s high computational cost.
Microsoft is working on a bias-detecting tool which can alert people if an AI algorithm might be treating them unfairly based on their race or gender. As more and more decisions are being made by or based on AI, the detection of unfair biases has become an important public issue.
If you are familiar with the biotech industry, you know Cambridge. The small city at the center of the Greater Boston area hosts over 1,000 biotechnology-related companies. Most of these companies cluster around Kendall Square, the same neighborhood as the Massachusetts Institute of Technology (MIT).
The Massachusetts Institute of Technology (MIT) and Chinese AI Unicorn SenseTime today announced the MIT-SenseTime Alliance on Artificial Intelligence, a partnership the duo says “aims to open up new avenues of discovery across MIT in areas such as computer vision, human-intelligence-inspired algorithms, medical imaging, and robotics.”
Personal computers and mobile devices are in their heyday. Researchers are flocking to standalone AI, focusing on how to automate self-learning intelligent systems. The interfaces for wearables, meanwhile, are evolving from smart screens to gesture commands, like those often seen in AR and VR commercials.