Vicarious is Teaching Robots to See the World Like Humans

Dileep George and D. Scott Phoenix are confident they're on the right path to achieve AGI with unique solutions combining machine learning and the human brain itself.

The ultimate target in the AI research race is human-level artificial general intelligence (AGI). Although few researchers are audacious enough to predict how or when they might achieve that lofty goal, Dileep George and D. Scott Phoenix are confident they’re on the right path with unique solutions combining machine learning and the human brain itself.

The founders of Union City, California-based Vicarious are teaching robots to tackle problems that humans are good at solving. Vicarious's technology imitates the human visual cortex to achieve significant results in computer vision tasks such as text and object recognition, robotic manipulation, and reasoning.

Founded in 2010, Vicarious surprised the AI community in 2013 when it announced its technology had solved text-based CAPTCHAs — the widely used Turing-style web security tests designed to distinguish humans from bots. Such CAPTCHAs are considered broken if a computer program can solve them at a rate above 1 percent. Vicarious's model achieved astonishing rates of 66.6% on reCAPTCHA, 64.4% on BotDetect, 57.4% on Yahoo, and 57.1% on PayPal.

The news, however, raised suspicions, especially as Vicarious did not release details on its tech. Yann LeCun, a principal contributor to the development of convolutional neural networks, called the announcement "a textbook example of AI hype of the worst kind."

"When we broke CAPTCHA in 2013, we were very concerned about the security after-effects, so we were careful about how we released the details," says George. Vicarious only recently published a paper in Science introducing its Recursive Cortical Networks (RCNs), the tech behind the CAPTCHA coup.

Unlike deep learning models, RCNs are generative probabilistic models that can simulate and regenerate an object's features, such as its basic elements, corners, contours, and shapes. Generative models have two distinct advantages over deep learning models: better generalization, and the ability to cope with adversarial examples (handcrafted inputs designed to fool a neural network).

Vicarious researchers borrow insights from the human brain to build RCNs. For example, the human visual system has lateral connections that help humans retain object contours in mind; applied to RCNs, these lateral connections enforce the continuity of contours. A top-down attention mechanism, which enables humans to easily recognize overlapping items separately, is also employed in RCNs.
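As a toy illustration of the generative-recognition idea (this is not Vicarious's actual RCN, which is compositional and far richer; the templates, noise model, and names below are all illustrative), the sketch classifies a noisy binary image by asking which class template best explains it under an assumed pixel-flip noise model, i.e. recognition as probabilistic inference:

```python
import math

# Toy pixel templates for two "shapes". A real RCN represents objects
# compositionally (contours, corners, lateral connections); these flat
# templates only illustrate the generative-classification idea.
TEMPLATES = {
    "L": ["X....",
          "X....",
          "X....",
          "X....",
          "XXXXX"],
    "T": ["XXXXX",
          "..X..",
          "..X..",
          "..X..",
          "..X.."],
}

FLIP = 0.1  # assumed probability that noise flips any single pixel

def log_likelihood(obs, template):
    """log P(obs | class): each pixel independently matches the template
    with probability 1 - FLIP, or is flipped with probability FLIP."""
    return sum(
        math.log(1 - FLIP) if o == t else math.log(FLIP)
        for obs_row, tmp_row in zip(obs, template)
        for o, t in zip(obs_row, tmp_row)
    )

def classify(obs):
    """Recognition as inference: choose the class whose generative model
    best explains the observed pixels."""
    return max(TEMPLATES, key=lambda c: log_likelihood(obs, TEMPLATES[c]))

# An "L" corrupted by two flipped pixels is still recognized.
noisy_l = ["X...X",
           "X....",
           ".....",
           "X....",
           "XXXXX"]
print(classify(noisy_l))  # -> L
```

Because the model scores how well each class would have generated the observation, a few corrupted pixels only lower the likelihood slightly instead of derailing the prediction, which is one intuition behind generative models' robustness to adversarial perturbations.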

The AI community has a running disagreement regarding the best method for pursuing AGI. Last month, LeCun debated New York University's Gary Marcus on whether AI requires cognitive machinery like that of humans and animals. Marcus suggested AI researchers should apply cognitive science insights to machines, while LeCun countered that machines can be developed using only unsupervised deep learning, the major AI technology currently being adopted in industry.

George, however, told Synced he believes machines can only reach human-intelligence levels by referencing human brains. “Any learning algorithm can be considered as a search algorithm eventually, but the search is too huge without any reference. You definitely need a structure, which we call ‘scaffolding’, from the brain.”

George began studying human brains while doing his PhD in Computer Engineering at Stanford University. After graduating in 2005, he teamed up with Silicon Valley neuroscientist and entrepreneur Jeff Hawkins to found Numenta, a software company focused on machine intelligence.

At Numenta, George delved deeply into the intersection of neuroscience and machine learning, pioneering hierarchical temporal memory (HTM), a computational method based on principles of the neocortex. HTM is particularly suited to streaming-data problems in which the underlying patterns are subtle, time-based, and change over time.
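HTM itself builds sparse, neocortex-inspired representations; as a far simpler stand-in that only shows the kind of streaming, drift-adapting problem Numenta targets (the class, parameters, and thresholds below are illustrative assumptions, not HTM), consider flagging anomalies against exponentially weighted running estimates:

```python
class StreamingAnomalyDetector:
    """Flags points that deviate sharply from an exponentially weighted
    running mean/variance. This is NOT HTM; it only illustrates the
    streaming problem setting (subtle, time-based, drifting patterns)."""

    def __init__(self, alpha=0.1, threshold=3.0):
        self.alpha = alpha          # how fast estimates track drift
        self.threshold = threshold  # anomaly cutoff in standard deviations
        self.mean = None
        self.var = 1.0

    def update(self, x):
        """Consume one value; return True if it looks anomalous."""
        if self.mean is None:       # first observation initializes the mean
            self.mean = x
            return False
        dev = x - self.mean
        anomalous = abs(dev) > self.threshold * self.var ** 0.5
        # Exponentially weighted updates let the estimates follow patterns
        # that change over time instead of assuming a fixed distribution.
        self.mean += self.alpha * dev
        self.var = (1 - self.alpha) * (self.var + self.alpha * dev * dev)
        return anomalous

detector = StreamingAnomalyDetector()
for value in [10.0, 10.1, 9.9, 10.2, 10.0] * 4:
    detector.update(value)
print(detector.update(50.0))  # a sudden spike is flagged -> True
```

The point of the toy is the online, one-pass structure: the model never sees the whole dataset, and its notion of "normal" is continuously revised, which is the regime HTM was designed for.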

In 2010, George left Numenta and founded Vicarious with D. Scott Phoenix, a tech entrepreneur who also regarded the human brain as the key to building human-like robots. "What's magical about the human brain is it's a truly general-purpose learning architecture that can learn any tasks in the rich sensory world you and I are living in," Phoenix told Goldman Sachs.

D. Scott Phoenix

Vicarious's novel approach caught the attention of Facebook angel investor and PayPal founder Peter Thiel, who financed the company's seed round in late 2010. By 2014, Vicarious had raised US$40 million in Series B funding from investors such as Facebook founder and CEO Mark Zuckerberg, Y Combinator CEO Sam Altman, and Tesla founder Elon Musk. Total Vicarious financing has thus far exceeded US$130 million.

In recent years Vicarious has been ramping up research on applying RCNs to robots, especially industrial robots. The company's Head of Commercialization Dr. Xinghua Lou told Synced that the technology can, for example, help flexible manufacturing systems react to changes, whether predicted or unpredicted.

"Vicarious will provide intelligent vision and control modules for warehouse and manufacturing robots," says Dr. Lou. The company's current robot prototypes are supplied by its investors ABB Group and Amazon.

Although the road from industrial robots to AGI is bound to be long and difficult, Vicarious is convinced it can get there by the year 2040. "We believe achieving advanced intelligence in AI is just as great as sending humans to the moon, and that motivates us to work," says George. "I don't think other companies will solve AGI before us."

Journalist: Tony Peng | Editor: Michael Sarazen
