Last December some 9,000 attendees packed a single venue in Montreal for a week-long academic conference. NeurIPS was completely sold out, the latest indication of just how hot AI is nowadays. As AI and machine learning continue to ignite discussion across a wide variety of disciplines, novel approaches to the underlying technology are also garnering interest.
Just a week after NeurIPS closed, the first TVM and Deep Learning Compiler Conference kicked off in Seattle. Researchers at the University of Washington's SAMPL group (a collaboration between Sampa, Syslab, MODE, and PLSE) developed the TVM open-source deep learning compiler stack for CPUs, GPUs, and specialized accelerators to close the gap between deep learning frameworks and hardware backends. The team introduced TVM in 2017 and published the paper "TVM: An Automated End-to-End Optimizing Compiler for Deep Learning" last February.
TVM – A NOVEL OPEN-SOURCE COMPILER
Machine learning is being applied across an ever-wider variety of hardware devices. Current frameworks, however, depend on vendor-specific operator libraries and optimize for only a limited range of server-class GPUs. Deploying workloads to other platforms such as mobile phones, embedded devices, and accelerators including FPGAs and ASICs therefore requires an enormous amount of manual effort.
“TVM can be applied to a wide spectrum of applications. In particular, it automates deep learning deployments on all devices including CPUs, GPUs, and future ASICs. We are also already on track in supporting more devices,” Project Lead Tianqi Chen told Synced. Chen is a UW PhD student, a member of the SAMPL and MODE labs, and one of the TVM conference organizers.
THE FIRST TVM CONFERENCE
The University of Washington’s TVM conference opened on December 12 and welcomed 180 registered attendees from the TVM community to discuss recent advances in frameworks, compilers, systems and architecture support, security, training, and hardware acceleration. Professor Luis Ceze from the UW Computer Science Department explained that a preparatory TVM workshop had been held last June with around 45 people attending, and now “we have 180 people, so we expect the next conference will be like 600 people or so!” Ceze said researchers and engineers are seeing growing demand for machine learning, and “the huge challenge here is how to deploy deep learning everywhere efficiently.”
The one-day conference covered cloud computing, edge computing, AI chips, ultra-low precision quantization, differentiable programming, machine learning for systems, privacy protection, and other topics.
In the event’s keynotes, AWS, Huawei, and Facebook respectively presented TVM’s role in deep learning cloud optimization services, accelerator support, and automatic optimization on mobile devices. Qualcomm revealed plans to support TVM on its DSPs (Digital Signal Processors); speakers from Microsoft, Intel, and Xilinx shared their related deep learning frameworks; and developers from NTT Japan elaborated on their exploration of automatic FPGA code generation and its future applications.
The UW SAMPL team traced the origins of TVM and its current development in their keynote. Talks from various universities covered topics including automatic optimization of deep learning workloads (AutoTVM; a tuning sketch follows below), differentiable programming and high-level optimization (Relay), automatic AI chip design (VTA), distributed machine learning (PHub), ultra-low precision quantization, and how to use TVM to build a privacy-safe machine learning system that supports complex operators (hybrid scripts, etc.). Shanghai Jiao Tong University, Berkeley, UCLA, Cornell, and other universities also shared their insights on TVM at the conference.
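For readers unfamiliar with AutoTVM, the sketch below shows roughly how its Python tuning interface is typically driven. This is a minimal sketch rather than code from any conference talk: the `mod` and `params` objects are assumed to come from one of Relay's frontend importers (as in the compilation sketch further below), and details such as trial counts and the log file name are illustrative.

```python
import tvm
from tvm import autotvm, relay

# Assumption: `mod` (a Relay module) and `params` were obtained from a
# frontend importer such as relay.frontend.from_mxnet.
target = "llvm"  # generic CPU target; "cuda", "opencl", etc. also work

# Extract the tunable operator tasks (e.g. conv2d, dense) from the program.
tasks = autotvm.task.extract_from_program(mod, target=target, params=params)

# Build and time candidate schedules locally; the numbers are illustrative.
measure_option = autotvm.measure_option(
    builder=autotvm.LocalBuilder(),
    runner=autotvm.LocalRunner(number=10))

# Search each task's schedule space with the XGBoost-based cost model tuner.
for task in tasks:
    tuner = autotvm.tuner.XGBTuner(task)
    tuner.tune(n_trial=100,
               measure_option=measure_option,
               callbacks=[autotvm.callback.log_to_file("tuning.log")])

# Re-compile the model, applying the best configurations found during tuning.
with autotvm.apply_history_best("tuning.log"):
    with tvm.transform.PassContext(opt_level=3):
        lib = relay.build(mod, target=target, params=params)
```

The design point worth noting is that tuning results are just a log of measured configurations, so they can be collected once on the target hardware and reused across later builds.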
Said Chen: “The exciting thing is that the industry and academia are working together as an open source community.” Chen believes TVM has applications spanning servers, IoT workloads, and specialized accelerators, and that participation from multiple parties suggests the TVM stack is already production-ready for accelerating deep learning workloads.
FUTURE APPLICATIONS
The TVM project is now driven by an open-source community spanning multiple industry and academic institutions, and has adopted an Apache-style, merit-based governance model.
TVM’s long-term goal is to bridge the gap between productivity-focused deep learning frameworks and performance- or efficiency-oriented hardware backends. The TVM website outlines its main features (a compilation sketch follows the list):
- Compilation of deep learning models in Keras, MXNet, PyTorch, TensorFlow, CoreML, and DarkNet into minimal deployable modules on diverse hardware backends.
- Infrastructure to automatically generate and optimize tensor operators on more backends with better performance.
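As a rough illustration of the first feature, the sketch below compiles a pretrained model into a deployable module with TVM’s Relay API and runs it through the graph executor. It is a minimal sketch under stated assumptions: it presumes `tvm` and `mxnet` (with its Gluon model zoo) are installed, and names such as `graph_executor` and `tvm.transform.PassContext` have shifted across TVM releases, so treat the exact API surface as illustrative.

```python
import numpy as np
import tvm
from tvm import relay
from tvm.contrib import graph_executor
from mxnet.gluon.model_zoo import vision

# Import a pretrained MXNet model into Relay, TVM's high-level IR.
block = vision.resnet18_v1(pretrained=True)
shape_dict = {"data": (1, 3, 224, 224)}
mod, params = relay.frontend.from_mxnet(block, shape_dict)

# Compile for a generic CPU target; "cuda", "opencl", etc. retarget
# the same model to other backends.
target = "llvm"
with tvm.transform.PassContext(opt_level=3):
    lib = relay.build(mod, target=target, params=params)

# Run the compiled module with the graph executor on the local CPU.
dev = tvm.cpu(0)
module = graph_executor.GraphModule(lib["default"](dev))
module.set_input("data",
                 np.random.uniform(size=(1, 3, 224, 224)).astype("float32"))
module.run()
out = module.get_output(0).numpy()
print(out.shape)  # (1, 1000) class scores
```

Swapping the `target` string is all it takes to retarget the same model to a different backend, which is precisely the portability the feature list describes.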
Chen told Synced that the TVM project team plans to hold the conference annually, in the hope that deeper collaboration between academia and industry will surface more use cases and help advance cutting-edge techniques in deep learning compilation.
The TVM conference videos and slides have been posted on SAMPL’s website.
Journalist: Fangyu Cai | Editor: Michael Sarazen