The success of artificial intelligence is built on large corpora of centrally collected data, but rising concerns over user privacy and data misuse have left many people wary of fully embracing AI on their mobile devices. Google wants to change that.
In his keynote this morning at the company’s annual developer conference, Google I/O 2019, CEO Sundar Pichai listed recent tech advances and stressed, “We continue to believe that the biggest breakthroughs happen at the intersection of AI, software, and hardware.”
Silicon Valley tech giants are increasingly aware of their unique social responsibilities, and the privacy issue is now a top priority. Two months ago, Apple rolled out stricter-than-ever policies designed to prevent privacy abuse. Facebook, which had notoriously shared user data with third-party vendors to generate ad revenue, has moved to raise the privacy bar with end-to-end encrypted communication.
Improving privacy protections may seem counterproductive for tech companies desperate for user data, the fuel that powers AI engines. Google, however, is confident it can do more with AI without private data ever leaving users’ devices, and that the heart of its solution is federated learning.
Basically, federated learning is a distributed machine learning approach that enables model training on a large corpus of decentralized data. Michigan State University Assistant Professor Mi Zhang tells Synced that it differs from the centralized training approach major AI companies have relied on for years, which requires training data to be aggregated on a single machine or in a datacenter: federated learning enables mobile phones at different geographical locations to collaboratively learn a machine learning model while keeping any data that may contain personal information on the devices.

Google initiated federated learning development in 2017 and debuted the concept in front of more than 7,000 Google I/O attendees today. “We ship the machine learning model to your device, and each phone computes a global model,” explained Pichai. Google has applied the technique to Gboard, its smart digital keyboard, to improve next-word prediction, for example by learning trending neologisms such as “zoodles” or “Yolo” at the same rate as humans begin using them.
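To make the mechanics concrete, here is a minimal federated averaging sketch in Python with NumPy. This is an illustration of the general technique, not Google’s production code: the function names and the toy linear model are hypothetical, and a real deployment layers on secure aggregation, client sampling, and update compression.

```python
import numpy as np

def local_update(weights, X, y, lr=0.1, epochs=5):
    """Run a few gradient descent steps on one device's private data."""
    w = weights.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)  # squared-error gradient
        w -= lr * grad
    return w  # only the updated weights leave the device, never X or y

def federated_round(global_weights, clients):
    """Average the locally trained weights from each participating device."""
    updates = [local_update(global_weights, X, y) for X, y in clients]
    return np.mean(updates, axis=0)

# Simulate three phones, each holding its own private data shard.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    clients.append((X, X @ true_w + rng.normal(scale=0.1, size=50)))

w = np.zeros(2)
for _ in range(20):
    w = federated_round(w, clients)
print(w)  # approaches [2.0, -1.0] without pooling any raw data
```

The key property is visible in `local_update`: raw training data never leaves the device; only model weight updates are sent back for averaging.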
Federated learning is a leap forward for on-device machine learning. Thanks to the AI chipsets embedded in today’s smartphones and to improved deep learning and model compression techniques, Google can now downsize an AI model to run directly on a smartphone. Pichai boasted that Google’s 100 GB voice recognition model had been shrunk to just 0.5 GB.
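Pichai did not detail the compression pipeline, but weight quantization is one standard ingredient in shrinking models for on-device use. The sketch below is a hypothetical symmetric 8-bit post-training quantization in NumPy; storing float32 weights as int8 alone yields only a 4x saving, so a roughly 200x reduction like the one Pichai cited would also rely on architecture changes, pruning, and other techniques.

```python
import numpy as np

def quantize(weights):
    """Map float32 weights to int8 plus a single float scale factor."""
    scale = np.abs(weights).max() / 127.0
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    """Recover approximate float32 weights for inference."""
    return q.astype(np.float32) * scale

w = np.random.randn(1000).astype(np.float32)
q, scale = quantize(w)
w_hat = dequantize(q, scale)
print(q.nbytes / w.nbytes)       # 0.25 -> weights stored 4x smaller
print(np.abs(w - w_hat).max())   # small per-weight quantization error
```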
Because on-device machine learning eliminates the time-consuming back-and-forth communication between cloud and edge, it will enable quicker response times and near-zero latency in the next-generation Google Assistant, the home-grown virtual AI assistant Google launched in 2016. VP of Engineering Scott Huffman says Google Assistant can now deliver answers up to ten times faster. Google Assistant runs on over one billion devices and supports 30 languages across 80 countries.
Combining on-device machine learning and federated learning with developments in computer vision, language understanding, and speech recognition, Google has created a series of new AI features that will ship with its upcoming mobile operating system, Android Q. One example: Live Caption can generate real-time subtitles for any video or audio playing on a smartphone.

Google also announced today that it is adding features to Google Lens, a platform released in 2017 that can quickly identify and respond to information in a picture. Users can point their camera at real-world text in a language they do not understand and Lens will overlay a translation. Users could also use Lens to, for example, annotate recipes featured on a cooking show.

Federated learning can also be used to personalize the user experience. Google Assistant will automatically select recipes tailored to user preferences when asked for dinner suggestions via “Pick for you.” The new feature provides personalized suggestions for recipes, events, and podcasts on Google’s Smart Displays and will be available this summer. Google Assistant’s nuanced understanding of human language will also improve, so that asking, for example, “How’s the traffic to Mom’s House?” will inform the user about routes to their mother’s residence rather than to one of the many restaurants called “Mom’s House.”
Google is aggressively expanding the functionality of Google Duplex, the advanced conversational AI system it released last year for automated telephone restaurant reservations. Duplex can now perform longer tasks, for example providing a human customer service agent with the information required to rent a car, such as selecting a preferred model, scheduling dates, and discussing price.
In an additional step to assuage privacy concerns, Google announced that users can now delete any of their activities, search queries, and location histories on Google Assistant, Search, and Google Maps.
Hardware and software are increasingly interconnected, and Google today unveiled a pair of smart devices that bundle its new AI features: a new smart display, and a US$399 Pixel 3a smartphone equipped with stereo speakers, Bluetooth 5.0, USB digital audio, and a headphone jack.


So far, 2019 has been relatively tough for Google. Parent company Alphabet announced a Q1 earnings report that missed revenue expectations; sales of its high-end Pixel 3 smartphones are sluggish; the company caught flak for participating in the Pentagon’s AI project; and its new AI Ethics Board self-destructed within weeks.
Today’s Google I/O announcements, however, suggest that the company’s dedicated AI efforts are finally paying off, if the audience’s enthusiastic clapping and chanting at today’s keynote are any indication.
“Google is no longer a company that just helps you find answers. Today, Google products also help you get stuff done, whether it’s finding the right words with Smart Compose in Gmail, or the fastest way home with Maps,” said Pichai.
Google I/O runs through Thursday, May 9, at the Shoreline Amphitheatre in Mountain View, California.
Journalist: Tony Peng | Editor: Michael Sarazen