GTC 2019 | Highlights & Disappointments at NVIDIA’s Annual Conference

NVIDIA’s annual GPU Technology Conference (GTC) attracted some 9,000 developers, buyers and innovators to San Jose, California this week. CEO and Co-Founder Jensen Huang’s two-and-a-half-hour keynote speech showcased GPU-based innovations in domains ranging from graphic design to autonomous driving.

My first GTC was in 2017, when Huang wowed the crowd with NVIDIA’s new Volta GPU architecture, which introduced Tensor Cores to deliver powerful deep learning performance. Last year, the energetic tech entrepreneur in his trademark black leather jacket paraded out the DGX-2, billed as the world’s largest GPU.

The keynote announcements at GTC this year were less dazzling, focusing instead on concrete issues. Wall Street analysts praised NVIDIA’s efforts to promote its ray-tracing rendering technology and its AI ecosystem, and by noon EDT today the company’s stock price had edged up 4.5 percent.

Below are Synced’s highlights and letdowns from Day One at the NVIDIA GTC.

Amazing ray tracing demos


Last year NVIDIA introduced RTX, which brings real-time ray tracing, a cutting-edge rendering technique, to its latest GPU architecture, Turing. Ray tracing renders realistic lighting effects in a visual scene by simulating the actual physical behavior of light.
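Turing’s dedicated RT cores perform this at massive scale in hardware, but the core operation, computing where a ray intersects scene geometry, fits in a few lines. The sketch below is a toy illustration of that idea (a single ray-sphere intersection test in plain Python), not NVIDIA’s implementation:

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance t to the nearest intersection, or None on a miss.

    Solves |origin + t*direction - center|^2 = radius^2, a quadratic in t.
    """
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    a = dx * dx + dy * dy + dz * dz
    b = 2.0 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                      # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / (2 * a)  # nearer of the two roots
    return t if t > 0 else None

# Cast one ray per "pixel" straight down the z-axis at a sphere at z = 5.
center, radius = (0.0, 0.0, 5.0), 1.0
for x in (0.0, 0.5, 2.0):
    t = ray_sphere_hit((x, 0.0, 0.0), (0.0, 0.0, 1.0), center, radius)
    print(f"pixel x={x}: " + (f"hit at t={t:.2f}" if t is not None else "miss"))
```

A real renderer traces millions of such rays per frame and bounces them through reflections and refractions, which is why doing this in dedicated hardware matters.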

A keynote video demo that garnered a loud round of applause yesterday was Quake II RTX, a 1990s first-person shooter video game re-engineered with RTX. Quake II RTX runs on a Vulkan renderer with support for Linux, and all its realistic effects, such as daylighting, refraction on water and glass, shadows and VFX, are ray-traced by GPUs. NVIDIA will share the Quake II engine source code in April.

Another well-received demo was a ray-traced digital video ad for the new 2019 BMW 8 Series Coupe. Huang displayed two side-by-side visuals, one of a physical car and the other a real-time rendered car, and challenged the audience to tell the difference. Most failed the test, as the ray-traced images looked so convincing.

A key selling point of RTX is rendering speed. An artist from computer animation company Pixar told Synced that GPU ray tracing can cut the time for a single render from five minutes to a few seconds. Huang said 80 percent of today’s leading tool makers, including Adobe Studio, Unreal Engine and Unity, have adopted NVIDIA RTX. It’s projected there will be nine million creators worldwide relying on RTX by the end of this year. If true, that would be a remarkable market-cornering success for NVIDIA.

Omniverse collaborates with global artists

What I liked most from Huang’s keynote was Omniverse, NVIDIA’s effort to create a collaborative tool for global studios. Omniverse is an open collaboration platform wherein artists and game developers can work simultaneously in a single workflow. Operating like a Google Docs for real-time 3D graphic design, Omniverse includes portals — two-way tunnels that maintain live connections between industry-standard applications such as Autodesk Maya, Adobe Photoshop and Epic Games’ Unreal Engine.

The NVIDIA blog provides a sample use case: An artist using Maya with a portal to Omniverse can collaborate with another artist using UE4 and both will see live updates of each other’s changes in their application — a very useful gift to artists and content creators.

CUDA-X AI accelerates AI workloads by 50X

I believe NVIDIA’s various integration efforts will nurture the open-source spirit of the AI community, maintain consistency throughout the evolving AI development process, and reduce friction when users move tasks between otherwise isolated platforms and tools. That’s why I love the idea of CUDA-X AI.

CUDA-X AI is an end-to-end platform that combines all NVIDIA libraries into one bundle to streamline and accelerate data science workflows by as much as 50 times. It is designed to pack dozens of NVIDIA GPU-acceleration libraries, including cuDNN (a GPU-accelerated library of primitives for DNNs) and TensorRT (a GPU-accelerated neural network inference library), into a one-stop shop.

Huang has coined a snappy acronym for the innovation: PRADA (Programmable Acceleration of multiple Domains with one Architecture).

“Wherever in the stack you want to code, that’s great; if you want to use domain-specific libraries, or AI frameworks and software packages, it’s all good for us,” VP and General Manager of NVIDIA Accelerated Computing Ian Buck told Synced.

A key component of CUDA-X AI is RAPIDS, a GPU-acceleration platform for data science and machine learning which enables end-to-end data science and analytics pipelines running entirely on GPUs. Incubated by NVIDIA for years, RAPIDS features low-level compute optimization, GPU parallelism and high-bandwidth memory speed.

Also announced today was Microsoft Azure Cloud Service’s adoption of NVIDIA RAPIDS. The advantage is obvious: Microsoft claims an impressive 20-times speedup in model training using four NVIDIA GPUs and RAPIDS compared with traditional CPU solutions. Another early adopter is Walmart, which uses RAPIDS to improve the accuracy of its forecasts.

Everyone loves Jetson Nano

Jetson is NVIDIA’s line of embedded AI computers, and Huang announced a new addition to the family: Jetson Nano, a 70 mm × 45 mm module that delivers robotic capabilities in planning, perception and reasoning.

The smallest Jetson device ever, Jetson Nano features a Maxwell-architecture GPU with 128 NVIDIA CUDA cores (down from the 256 Maxwell cores on Jetson TX1) and a quad-core ARM Cortex CPU, with 4 GB of 64-bit LPDDR4 memory and 16 GB of flash storage. Jetson Nano delivers 472 gigaflops of compute performance and consumes as little as five watts (in its constrained low-power mode).
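For perspective, the two headline figures above imply a notable efficiency number. A quick back-of-the-envelope calculation, using only the specs quoted here:

```python
# Back-of-the-envelope efficiency from the quoted Jetson Nano specs.
peak_gflops = 472.0      # quoted peak compute performance
min_power_watts = 5.0    # quoted minimum power consumption

efficiency = peak_gflops / min_power_watts
print(f"~{efficiency:.1f} GFLOPS per watt at the 5 W power budget")
# prints "~94.4 GFLOPS per watt at the 5 W power budget"
```

Roughly 94 GFLOPS per watt is what makes a US$99 board plausible for battery- and thermally-constrained robots and IoT devices.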

Priced at US$99, Jetson Nano is no doubt a cost-effective development kit option for entry-level hobbyists, DIYers and students who want to build and experiment with robotic devices. An additional US$129 production-ready module targets a wider range of embedded IoT applications, including entry-level network video recorders (NVRs), home robots, and intelligent gateways.

Jetson Nano can run a wide variety of neural network models to perform AI tasks such as object detection and image classification on major machine learning frameworks such as TensorFlow, PyTorch, Caffe/Caffe2, Keras, and MXNet. The product shows compelling performance compared with other platforms such as the Raspberry Pi 3 (US$35), Intel Neural Compute Stick 2 (US$79), and Google Edge TPU Coral Dev Board (US$150).

Social media hailed the rollout of Jetson Nano. Facebook Research Scientist and PyTorch Inventor Soumith Chintala tweeted “pretty excited about NVIDIA’s Jetson Nano. 5W, $99 and a 128-core Maxwell sounds pretty great. This year is so good for embedded deep learning!”

Disappointment: No new graphics cards or architecture

After all the impressive rollouts at recent GTCs, I was hoping to see a new graphics card, or even the rumoured (and long-awaited) 7nm GPU architecture. Neither appeared. The only new chip Huang pulled out of his pocket was the small Jetson Nano development kit.

The most-asked question I heard at GTC this year was “How do you think NVIDIA will do in the next couple of years?” No one would have worried about the company’s health last year or the year before, when NVIDIA stock was growing at a breakneck pace. However, in large part due to the cryptocurrency collapse and a relative cooling in AI, NVIDIA stock has fallen by over 40 percent since its peak last October. The company’s largest investor, SoftBank, has sold its entire US$3.6 billion NVIDIA stake. With its future prospects now in question, it is difficult to say whether GTC 2019 instilled sufficient confidence in NVIDIA’s ability to recover.

NVIDIA faces harsh challenges from its competitors. Rival AMD recently unveiled 7nm CPUs for data centres and 7nm GPUs for consumer graphics cards, pressuring NVIDIA to speed up its next-generation semiconductor designs.

In the AI chip market, tech giants are still purchasing large quantities of NVIDIA GPUs for data centres and to consolidate their cloud businesses. Huang announced yesterday that NVIDIA T4 Tensor Core GPUs will be deployed to Amazon Web Services and Google Cloud in the coming weeks. However, most are also moving to reduce their dependence on NVIDIA. Players such as Google, Amazon, Facebook and Alibaba have all stepped up efforts to design and build their own AI chips.

NVIDIA’s recent acquisition of Mellanox was seen as a smart move, though at a staggering price: the US$6.9 billion deal is the largest acquisition in NVIDIA history. Mellanox’s InfiniBand technology can significantly improve data interconnection between different devices and computers, which is exactly what NVIDIA needs. Mellanox CEO Eyal Waldman was invited onstage at GTC and reaffirmed the company’s joint efforts with NVIDIA to improve data centre compute.

NVIDIA has made measurable progress in emerging business sectors such as robotics and automotive. In his keynote, Huang announced that the NVIDIA DRIVE Constellation autonomous vehicle simulation platform is now available, and named the world’s number one carmaker, Toyota, as its first adopter. Using the simulation engine, self-driving cars can be trained over millions of virtual miles across scenarios ranging from routine driving to rare and dangerous situations. Both sectors, however, are long-term investments and are not expected to boost the company’s fortunes immediately.

GTC 2019 is still the place to be this week in Silicon Valley, and the conference signals NVIDIA’s ambitious roadmap for various industries. However, in the highly dynamic and competitive world of AI, what investors want to see is a major breakthrough. And many are still waiting.

Journalist: Tony Peng | Editor: Michael Sarazen
