Have you ever thought of supervising and teaching robots to learn English? Well, machine learning is opening up new ways to solve complex problems that human beings have never envisioned before.
At last week’s Rework Machine Learning Summit in San Francisco, speakers from tech giants like Google, Amazon, and Facebook, as well as from AI research labs and AI-related startups, were invited to present their latest work in machine learning. I was invited to witness how state-of-the-art innovation is disrupting the industry.
Among the talks from over 40 invited speakers, the research from OpenAI caught my eye. In partnership with UC Berkeley, McGill University, and Stanford University, the San Francisco-based non-profit AI research company is investigating the emergence of grounded language in robots. It reminded me that Skynet, the self-aware artificial intelligence system featured in the Terminator franchise, may be closer to reality than ever.
As for the motivation, speaker Ryan Lowe of OpenAI presented three goals: enabling collaboration between agents to solve complex problems, allowing machines to share knowledge with each of us, and specifying a machine’s goals through language.
“Recent advances in machine learning, applied to large text corpora, have enabled strong results in natural language processing by capturing the statistical patterns between words,” Lowe explained. “While such approaches are useful, they are arguably insufficient for building general-purpose agents that can interact with humans, as the words lack grounding in an external environment.”
The approach the team is using to enable communication between (robot) agents is to train multiple agents in a simple environment with deep reinforcement learning. Agents have two types of actions: environment actions, like moving and looking, and communication actions, like emitting a symbol to other agents. For the record, the communication symbols the agents use are abstract one-hot vectors.
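To make that setup concrete, here is a minimal Python sketch of such an agent’s interface. The vocabulary size, random policy, and class names are my own illustrative assumptions, not OpenAI’s code; a real agent would choose actions with a policy trained by deep reinforcement learning.

```python
import numpy as np

VOCAB_SIZE = 10  # hypothetical size of the agents' symbol vocabulary


def one_hot(symbol_id, vocab_size=VOCAB_SIZE):
    """Encode a communication symbol as an abstract one-hot vector."""
    vec = np.zeros(vocab_size)
    vec[symbol_id] = 1.0
    return vec


class Agent:
    """Toy agent with the two action types described in the talk."""

    def act(self, observation, heard_symbols):
        # A trained agent would pick actions with a learned policy;
        # random choices here only illustrate the interface.
        move = np.random.uniform(-1.0, 1.0, size=2)      # environment action
        symbol = one_hot(np.random.randint(VOCAB_SIZE))  # communication action
        return move, symbol
```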
Building on that, the team is taking a further step: teaching robots grounded English with a teacher. The main idea is to have agents learn a simplified form of English by interacting with a hard-coded bot that speaks that simplified English. With the bot placed in the multi-agent environment, the agents should learn to use English to accomplish goals in their environment and to generalize to new kinds of goals.
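Continuing the sketch above, such a hard-coded teacher bot might look something like this; the goals, phrases, and reward rule are hypothetical stand-ins, since the talk did not detail the bot’s implementation.

```python
# Hypothetical goal -> utterance table for a hard-coded teacher bot
# that speaks a simplified form of English (an invented stand-in).
GOAL_PHRASES = {
    ("go_to", "red landmark"): "go to red landmark",
    ("look_at", "blue agent"): "look at blue agent",
}


class TeacherBot:
    """Hard-coded bot: utters a fixed phrase per goal, so learning agents
    can ground those words in outcomes they observe in the environment."""

    def utter(self, goal):
        return GOAL_PHRASES[goal]

    def reward(self, goal, achieved_goal):
        # Reward the agent only when its behavior matches the stated goal.
        return 1.0 if achieved_goal == goal else 0.0
```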
“The next step is to move from hard-coded bots to crowd-sourced humans eventually,” Lowe said.
Utilization of Noisy Labels
If you have no prior knowledge of data labeling or why it matters to machine learning, you might have gotten lost in Google Brain deep learning resident Melody Guan’s 20-minute presentation.
Data labeling is essential to machine learning: raw data must be processed and organized into classes and labels that machines can ingest. If a model is designed to recognize chickens, for example, the fundamental first step is to teach the computer which photos contain a chicken and which do not. In other words, unlabeled data cannot be fed to such a model.
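In its simplest form, the labeled data for that chicken example could look like the following (the file names and label convention are made up for illustration):

```python
# Labeled training data for a binary "chicken or no chicken" classifier.
# Each example pairs a raw input (a photo) with a class label.
labeled_examples = [
    ("photo_001.jpg", 1),  # 1 = the photo contains a chicken
    ("photo_002.jpg", 0),  # 0 = it does not
    ("photo_003.jpg", 1),
]
```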
A common problem with data labeling, however, is the ineffective use of noisy labels. Guan introduced a new method that makes more effective use of noisy labels when each example is labeled by a subset of a larger pool of experts.
“It allows us to give more weight to more reliable experts and take advantage of the unique strengths of individual experts at classifying certain types of data,” Guan said in the abstract of her talk.
Google Brain learns from the identities of multiple noisy annotators by modeling them individually with a shared neural net that has a separate set of outputs for each expert, then learning averaging weights for combining the experts’ predictions. In testing, this reduced the error of computer-automated diagnosis of diabetic retinopathy by a relative 13.6%.
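Here is a minimal PyTorch sketch of that architecture. The trunk, layer sizes, and weighting scheme are illustrative assumptions on my part; Guan’s published model and training details are not reproduced here.

```python
import torch
import torch.nn as nn


class MultiAnnotatorNet(nn.Module):
    """Shared trunk with one output head per expert, plus learned
    weights for averaging the heads."""

    def __init__(self, in_dim, num_classes, num_experts, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(in_dim, hidden), nn.ReLU())
        # A separate classification head for each annotator/expert.
        self.heads = nn.ModuleList(
            [nn.Linear(hidden, num_classes) for _ in range(num_experts)]
        )
        # Learnable averaging weights over experts (softmax-normalized),
        # so more reliable experts can contribute more to the output.
        self.expert_logits = nn.Parameter(torch.zeros(num_experts))

    def forward(self, x):
        h = self.trunk(x)
        per_expert = torch.stack([head(h) for head in self.heads], dim=1)  # (B, E, C)
        w = torch.softmax(self.expert_logits, dim=0)                       # (E,)
        combined = (w.view(1, -1, 1) * per_expert).sum(dim=1)              # (B, C)
        return per_expert, combined
```

During training, each head would be fit to its own expert’s labels; at test time, the softmax-weighted combination serves as the prediction, giving more weight to more reliable experts.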
Predicting the Next Shopping Item
Once just a buzzword, the online grocery business has matured thanks to massive scaling over the last few years. However, it is genuinely hard for delivery services to lower costs while simultaneously improving delivery speed and ensuring items arrive in good condition.
Instacart, a well-known grocery delivery startup based in San Francisco, presented its latest results: using deep learning algorithms to shorten shopping time. Instacart hires tens of thousands of shoppers to buy items at local groceries like Safeway, Whole Foods, and Costco, and to deliver them to households within a few hours of the order being placed. Shopping time is a major cost.
“There are 123 million households in the U.S. Assuming Instacart has just one percent of market share, if we are able to save 1 minute in shopping time, it equals to 123 years of shopping,” said Jeremy Stanley, VP of Data Science at Instacart.
Instacart’s team built a deep learning model that predicts the items shoppers are most likely to pick in specific store locations, in some cases saving significant time in-store. Making this work depends critically on embeddings of the products, store locations, and product layouts, which the team has already built.
Stanley also explained how the model works. The overall goal is to produce a shopping list in the best sequence for shoppers. The architecture the team created outputs a confidence score for a candidate item, a chocolate bar in his example, based on the last item the shopper picked and the locations of the items. The higher the score, the more likely the shopper is to pick that item next.
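As a rough illustration of that scoring idea, here is a toy sketch in which a dot product between embeddings stands in for Instacart’s deep architecture; the products, locations, and embedding dimension are all invented for the example.

```python
import numpy as np

# Hypothetical learned embeddings for products and store locations.
EMB_DIM = 16
rng = np.random.default_rng(0)
product_emb = {p: rng.normal(size=EMB_DIM)
               for p in ["milk", "eggs", "chocolate", "bread"]}
location_emb = {loc: rng.normal(size=EMB_DIM)
                for loc in ["aisle_1", "aisle_2", "checkout"]}


def candidate_score(last_item, candidate, candidate_location):
    """Score a candidate item from the shopper's last pick and the
    candidate's in-store location; higher means more likely next."""
    context = product_emb[last_item] + location_emb[candidate_location]
    return float(context @ product_emb[candidate])


# Rank the remaining items after the shopper picks milk.
scores = {c: candidate_score("milk", c, "aisle_2")
          for c in ["eggs", "chocolate", "bread"]}
print(sorted(scores, key=scores.get, reverse=True))
```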
At this stage, Stanley remarked, the model has yet to answer the fundamental question: “What is the right sequence to allow faster shopping time?” Still, it is noteworthy to see how progressively AI is being integrated into the grocery delivery business.
Author: Tony Peng, Synced Tech Journalist