
Intel Into AI: New Conference, Chips & Partners

Intel today opened its first-ever AI Developer Conference, AIDevCon, a two-day gathering at San Francisco’s prestigious Palace of Fine Arts. While day one was short on hot announcements, the company did detail some aggressive plans for AI.

Intel is treating artificial intelligence seriously. Last year, the 50-year-old chip giant and CPU market leader formed the Artificial Intelligence Products Group (AIPG), a division dedicated to building out Intel’s AI portfolio. AIPG is led by Vice President Naveen Rao, who founded Nervana, an AI chip startup acquired by Intel in 2016.

In this morning’s keynote address, Rao said Intel’s goal is to build the ideal computing platform for AI developers: “It’s all about defining the tools and helping engineers to solve the problem.” Rao illustrated how developers can benefit from the computing capability, software optimization tools, and community behind Intel’s AI platform.

Intel Vice President Naveen Rao

Spring Crest, next-generation Intel Nervana chip

Intel offers myriad processors to satisfy computing needs for a wide range of AI applications — Xeon processors for general computational tasks; Nervana neural network processors for training AI models; Movidius vision processing units for image/video processing on embedded IoT devices; and FPGAs for AI inferencing on the cloud and edge.

Intel’s biggest announcement today was the Nervana Neural Network Processor L-1000 (NNP-L1000), codenamed Spring Crest, which will be Intel’s first commercial neural network processor when it becomes available in 2019. Spring Crest is estimated to deliver 3-4 times the training performance of its predecessor Lake Crest, Intel’s 2017 NNP microarchitecture.

Built for high utilization and model parallelism, Spring Crest is expected to achieve 96.4 percent GEMM operation utilization, 96.2 percent multi-chip scaling, and 2.4 TB/s of multi-chip communication bandwidth, with power consumption under 210W.

Spring Crest will support bfloat16, an emerging industrywide numerical format for neural networks. Intel expects to soon extend bfloat16 support across its AI product lines, including Intel Xeon processors and Intel FPGAs.
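To make the format concrete, here is a minimal sketch (illustrative code, not Intel’s implementation) of how bfloat16 relates to float32: the 16-bit format keeps float32’s sign bit and 8-bit exponent and truncates the mantissa to 7 bits, preserving dynamic range while giving up precision.

```python
import numpy as np

def float32_to_bfloat16_bits(x):
    """Truncate float32 to bfloat16 by keeping only the top 16 bits."""
    bits = np.asarray(x, dtype=np.float32).view(np.uint32)
    return (bits >> 16).astype(np.uint16)  # sign + 8-bit exponent + 7 mantissa bits

def bfloat16_bits_to_float32(b):
    """Widen bfloat16 bit patterns back to float32 (lost mantissa bits become 0)."""
    return (b.astype(np.uint32) << 16).view(np.float32)

x = np.array([3.14159265, 1e-20, 65504.0], dtype=np.float32)
roundtrip = bfloat16_bits_to_float32(float32_to_bfloat16_bits(x))
print(x)          # original float32 values
print(roundtrip)  # same range as float32, but only ~2-3 significant decimal digits
```

Because the exponent width matches float32, very small and very large gradient values survive the conversion, which is why the format is attractive for neural network training despite the reduced precision.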

Intel is a latecomer to the AI-dedicated processor market, eager to catch up with GPU market leaders Nvidia and AMD. The company is likely also keeping an eye on neighbor Google, which unveiled its third-generation Tensor Processing Unit (TPU 3.0) at its recent I/O developer conference.

Software optimization battle heats up

Apart from high-performance chips and frameworks, what AI developers really want are effective software tools that simplify workflows and save time and money. That is why Intel and other tech giants such as Nvidia, Google, and Microsoft are all pushing the frontier of their software tools.

Jason Knight, head of software products at Intel AI, showcased nGraph, a deep learning compiler and runtime system for running models across a variety of frameworks and hardware. nGraph relieves developers of tedious compiling and optimization work when they want to move a trained model to another framework or onto upgraded hardware.

The nGraph core creates a strongly-typed and device-neutral stateless graph representation of computations; a framework bridge acts as an intermediary between the nGraph core and the framework; and a transformer plays a similar role between the nGraph core and devices.
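The layering can be pictured with a small, purely illustrative Python sketch (hypothetical code, not the actual nGraph API): a device-neutral core graph of typed operations, a bridge that emits core nodes from framework-level layers, and a transformer that lowers the graph onto one backend.

```python
import numpy as np

# --- device-neutral core: a typed, stateless graph of operations -----------
class Node:
    def __init__(self, op, inputs, shape, dtype=np.float32):
        self.op, self.inputs, self.shape, self.dtype = op, inputs, shape, dtype

def parameter(shape):               # graph input placeholder
    return Node("Parameter", [], shape)

def matmul(a, b):
    return Node("MatMul", [a, b], (a.shape[0], b.shape[1]))

def relu(a):
    return Node("Relu", [a], a.shape)

# --- "framework bridge": translate a framework-level layer into core nodes --
def dense_layer(x, w):              # what a TensorFlow/MXNet bridge might emit
    return relu(matmul(x, w))

# --- "transformer": lower the core graph onto one backend (here: NumPy) -----
def run_numpy(node, feed):
    if node.op == "Parameter":
        return feed[node]
    args = [run_numpy(i, feed) for i in node.inputs]
    if node.op == "MatMul":
        return args[0] @ args[1]
    if node.op == "Relu":
        return np.maximum(args[0], 0.0)
    raise NotImplementedError(node.op)

x, w = parameter((2, 3)), parameter((3, 4))
out = dense_layer(x, w)
print(run_numpy(out, {x: np.ones((2, 3), np.float32),
                      w: np.ones((3, 4), np.float32)}))
```

The point of the split is that a new framework only needs a new bridge, and a new device only needs a new transformer; the core graph in the middle stays the same.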

nGraph Compiler poster

nGraph was open-sourced this March, and now supports frameworks including Apache MXNet, Neon, PaddlePaddle, TensorFlow, and ONNX. Facebook has a similar machine learning compiler, Glow, which also accelerates framework performance on different hardware platforms, but Glow is currently available only for PyTorch. On the hardware side, nGraph supports Xeon, Nervana NNPs, GPUs, and various inference engines.

Another announcement was Intel’s Open Visual Inference & Neural Network Optimization (OpenVINO) toolkit, software for deploying visual inference and neural network optimization to edge devices such as cameras and IoT devices. OpenVINO’s inference engine runs inference on CPUs, GPUs, Intel Movidius VPUs, or FPGAs without changes to the algorithms or the deep learning network.
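As an illustration of that workflow, the sketch below follows the pattern OpenVINO uses: the Model Optimizer first converts a trained model into an intermediate representation (an .xml/.bin pair), and the Inference Engine then loads it onto a device chosen by name. The Python calls shown (IECore, read_network, load_network) come from a later OpenVINO release than the toolkit announced here, and the file paths and model name are placeholders.

```python
import numpy as np
from openvino.inference_engine import IECore  # Inference Engine Python API (later release)

ie = IECore()

# IR files produced beforehand by the Model Optimizer -- placeholder paths
net = ie.read_network(model="face_detect.xml", weights="face_detect.bin")

# Switching hardware is just a different device string: "CPU", "GPU",
# "MYRIAD" (Movidius), or "HETERO:FPGA,CPU" -- the network itself is unchanged.
exec_net = ie.load_network(network=net, device_name="CPU")

input_name = next(iter(net.input_info))             # first (and only) input blob
n, c, h, w = net.input_info[input_name].input_data.shape

frame = np.random.rand(n, c, h, w).astype(np.float32)  # stand-in for a camera frame
result = exec_net.infer(inputs={input_name: frame})
print({name: blob.shape for name, blob in result.items()})
```

The device string is the only line that changes when moving the same model from a development CPU to a Movidius stick or an FPGA card at the edge.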

Knight also introduced an integration between Google TensorFlow and the Intel Math Kernel Library for Deep Neural Networks (MKL-DNN), an open source performance library for accelerating deep learning applications and frameworks on Intel architecture. The integration can triple AI inference speed on Broadwell and Skylake processors and achieve 94 percent efficiency when scaling training across a 64-node cluster.
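In practice, picking up the MKL-DNN kernels requires no model changes: a developer runs ordinary TensorFlow code on an MKL-enabled TensorFlow build and tunes a handful of threading knobs. Below is a minimal TensorFlow 1.x sketch with the commonly recommended settings; the specific thread counts and environment values are placeholders to tune per machine.

```python
import os
import numpy as np
import tensorflow as tf   # assumes a TensorFlow build compiled with MKL-DNN support

# OpenMP settings commonly recommended for Intel CPUs (machine-dependent values)
os.environ["OMP_NUM_THREADS"] = "8"
os.environ["KMP_BLOCKTIME"] = "1"
os.environ["KMP_AFFINITY"] = "granularity=fine,compact,1,0"

config = tf.ConfigProto(
    intra_op_parallelism_threads=8,   # threads used inside one op (e.g. a convolution)
    inter_op_parallelism_threads=2,   # ops that may run concurrently
)

# Ordinary TF 1.x graph code -- the MKL-enabled build swaps in optimized kernels
x = tf.placeholder(tf.float32, shape=[None, 224, 224, 3])
y = tf.layers.conv2d(x, filters=64, kernel_size=3, activation=tf.nn.relu)

with tf.Session(config=config) as sess:
    sess.run(tf.global_variables_initializer())
    out = sess.run(y, feed_dict={x: np.zeros((1, 224, 224, 3), np.float32)})
    print(out.shape)   # (1, 222, 222, 64)
```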

Adding partnerships: C3 IoT, Tokyo 2020 Olympics

Intel has been aggressively seeking alliances across a variety of industries, and announced today that it will partner with billion-dollar software and AI company C3 IoT to address the market for AI and IoT enterprise software applications.

C3 IoT Chairman and CEO Thomas Siebel (left) speaks with Naveen Rao at Intel AI DevCon today in San Francisco.

The collaboration includes the C3 IoT AI Appliance, powered by Intel AI, and a go-to-market program for joint marketing, sales, training, and rapid prototyping initiatives to accelerate customer success with AI and IoT application development. C3 IoT is also joining the Intel AI Builders Program, which provides partners with resources and support to accelerate the adoption of their Intel-based AI platforms.

Intel also announced it will be an official AI platform partner of the Tokyo 2020 Summer Olympics, following up on the company’s promise earlier this year to build a large-scale 5G network for the games. Intel successfully showcased a number of emerging technologies, including drone light shows, virtual reality, and 5G communication, at February’s PyeongChang Winter Olympics.

What to expect next?

The two-day Intel AIDevCon will host dozens of talks, hands-on labs, and demos across different industries. Andrew Ng, one of AI’s most prominent figures and the founder of Deeplearning.ai, Landing.ai, and AI Fund, will deliver a closing keynote address tomorrow. Synced will continue to update readers with reports from the conference.


Journalist: Tony Peng | Editor: Michael Sarazen
