Google I/O 2017: Integrating Everything with AI

Image: Google CEO Sundar Pichai underlined the company's important shift from Mobile First to AI First during the Google I/O keynote on May 17 at the Shoreline Amphitheatre in Mountain View, California.

Typically, Google uses its annual Google I/O conference to unveil exciting new products and technologies. At the 10th Google I/O, held May 17-19 at the Shoreline Amphitheatre in Mountain View, California, the company took a different tack, focusing instead on the convergence of existing products to provide a better user experience.

Integrating technologies is routine in a top-down company, but more challenging for a bottom-up company like Google. Innovation is in Google's DNA: the company encourages its talent to explore and create freely and independently. As a result, Google has sometimes struggled to communicate to users how its various products fit together.

Google I/O showed how the company is working to change that. For example, Google Lens, the newly announced technology that can quickly identify what is in a picture and take appropriate action, is expected to integrate with Google Assistant, the AI-powered assistant launched at last year's Google I/O. Google Assistant already combines text translation, speech recognition, a human-machine conversational interface, and default Google services such as Search, Gmail and Calendar. Adding Google Lens will significantly extend Assistant's capabilities while presenting the new technology through a familiar, user-friendly interface.
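
Google has not published Lens's internals, but the sketch below gives a rough, hypothetical sense of machine-driven image understanding: it sends a photo to Google's public Cloud Vision REST API and prints the labels the service detects. The API key and image filename are placeholders, and this is only an illustration of image recognition in general, not of how Lens itself works.

# Illustrative only, not Google Lens: label an image with the public
# Google Cloud Vision REST API. API key and filename are placeholders.
import base64
import requests

API_KEY = "YOUR_API_KEY"
ENDPOINT = "https://vision.googleapis.com/v1/images:annotate?key=" + API_KEY

with open("photo.jpg", "rb") as f:
    image_b64 = base64.b64encode(f.read()).decode("utf-8")

body = {
    "requests": [{
        "image": {"content": image_b64},
        "features": [{"type": "LABEL_DETECTION", "maxResults": 5}],
    }]
}

response = requests.post(ENDPOINT, json=body)
for label in response.json()["responses"][0].get("labelAnnotations", []):
    print(label["description"], round(label["score"], 2))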

Another exciting announcement at this year's Google I/O was TensorFlow Lite, a lightweight version of TensorFlow, Google's framework for deep learning applications, built for the Android mobile operating system. This brings AI to mobile devices, allowing developers to run deep learning models directly on smartphones.
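
As a minimal sketch of what that workflow looks like with the current TensorFlow Lite converter API (not code from the conference), the snippet below converts a toy model into the compact .tflite format that an Android app can bundle; the tiny two-layer network is just a stand-in for a real trained model.

# Minimal sketch: convert a (toy) TensorFlow model to TensorFlow Lite format
# so it can run on-device. The model here is a placeholder, not a real one.
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Dense(16, activation="relu", input_shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])

converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()

# The resulting file is what an Android app would load for on-device inference.
with open("model.tflite", "wb") as f:
    f.write(tflite_model)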

Meanwhile, Google Home, the Google Assistant-based smart speaker released last year, integrates with systems like those in the Nest family, which manage home appliances, temperature and security.

To encourage seamless collaboration across its various labs, Google has developed a consolidated infrastructure in which programmers from different sections use the same code base, storage, interfaces and tools. Google is said to have built a huge shared repository, with over 2 billion lines of code contributed by thousands of programmers since the company was founded. This puts programmers from different sections on the same page, laying the foundation for efficient collaboration in the future.

Open APIs, a trend popularized roughly five years ago by social media platforms like Facebook and Twitter, encourage innovation and integration by letting third-party developers access a service's back-end. The same approach also simplifies interaction and data-sharing between Google's different sections. For example, programmers on the Firebase team (Google's mobile and web application development platform) can access APIs from Google Cloud, so they don't need to build their own cloud APIs.
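
As a hedged illustration of what reusing an existing Cloud API looks like in practice, the snippet below calls the public Cloud Translation v2 REST endpoint instead of implementing translation from scratch; the API key shown is a placeholder.

# Illustration of reusing an existing Google Cloud API (Cloud Translation v2)
# rather than building one's own. The API key is a placeholder.
import requests

API_KEY = "YOUR_API_KEY"
ENDPOINT = "https://translation.googleapis.com/language/translate/v2"

params = {
    "q": "Mobile first to AI first",
    "target": "fr",   # translate the phrase into French
    "key": API_KEY,
}

response = requests.post(ENDPOINT, data=params)
print(response.json()["data"]["translations"][0]["translatedText"])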

AI integrates technologies

Technology is now at the point where next-generation apps must be AI-powered. To this end, Google has switched its mantra from Mobile First to AI First. The company set up an independent machine learning team to teach programmers from different sections how to build machine learning models from scratch.
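
To give a concrete sense of what building a model from scratch can mean, here is a minimal sketch of fitting a linear model with gradient descent in plain Python; it is an illustration of the idea, not Google's training material, and the toy data is made up.

# From-scratch example: fit y = w*x + b to toy data with gradient descent,
# using no machine learning framework at all. Data and values are illustrative.
data = [(1.0, 3.1), (2.0, 5.0), (3.0, 6.9), (4.0, 9.2)]  # roughly y = 2x + 1

w, b = 0.0, 0.0
learning_rate = 0.01

for step in range(5000):
    grad_w = grad_b = 0.0
    for x, y in data:
        error = (w * x + b) - y          # prediction error on one point
        grad_w += 2 * error * x / len(data)
        grad_b += 2 * error / len(data)
    w -= learning_rate * grad_w
    b -= learning_rate * grad_b

print("learned w = %.2f, b = %.2f" % (w, b))  # should land near 2 and 1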

Although AI research has been successful in specific scenarios, much of it remains far from real-world application, and most customers are still in the dark about what an AI-powered application should look like or how it should perform.

Google gave an example of AI's potential with Google Assistant last year, and has now strengthened the smart assistant with Google Lens. By starting with a conversational interface built on text translation and speech recognition, then adding image recognition, Google is taking steps toward integrating the five human senses into a cognitive computing experience.

By exploring and improving machines' ability to see, hear, touch, taste and smell, researchers aim to give AI the capacity to process the world in parallel with the human sensorium. When machines learn how humans perceive and understand their environments, Google hopes they will be better equipped to help humans solve problems.

There is also much interest these days in how AI can interface with AR/VR. Google I/O addressed this by unveiling several breakthroughs in immersive technologies, including the Visual Positioning Service (VPS), a new AR technology that identifies and triangulates distinct visual features in users' surroundings. VPS will be one of the core capabilities of Google Lens.
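
Google has not detailed how VPS works internally; as a loose illustration of identifying and matching distinct visual features, the sketch below uses OpenCV's ORB detector on two photos of the same scene. The image filenames are placeholders, and this is a generic computer vision example rather than VPS itself.

# Generic illustration, not Google's VPS: detect distinctive visual features
# in two views of the same scene and match them. Filenames are placeholders.
import cv2

img1 = cv2.imread("scene_view1.jpg", cv2.IMREAD_GRAYSCALE)
img2 = cv2.imread("scene_view2.jpg", cv2.IMREAD_GRAYSCALE)

orb = cv2.ORB_create(nfeatures=500)
kp1, des1 = orb.detectAndCompute(img1, None)
kp2, des2 = orb.detectAndCompute(img2, None)

# Hamming-distance brute-force matching is the standard pairing for ORB.
matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)

print("matched %d features across the two views" % len(matches))
# Features matched across views are what make triangulating position possible.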

Overall, this year’s Google I/O provided a big-picture understanding of Google’s vision for the next five to ten years: a future with increasingly powerful and increasingly integrated services, a future built on AI.


Author: Tony Peng | Editors: Michael Sarazen, Nicholas Richards
