A new paper from Julia Computing Co-Founder and CTO Keno Fischer and Senior Research Engineer Elliot Saba introduces a method and implementation for offloading sections of machine learning models written in the Julia programming language to TPUs.
Tensor Processing Units (TPUs) are Google's custom-developed Application-Specific Integrated Circuits (ASICs) used to accelerate machine learning workloads. Google Cloud TPUs are a cutting-edge hardware architecture for training today's computationally demanding deep learning and machine learning models. A Google Cloud TPU machine learning accelerator was first made available to the public in 2017. Fischer and Saba's method works by leveraging the lower-level XLA (Accelerated Linear Algebra) compiler that Google released in August 2018.
Mapping Julia Semantics to XLA
In order to offload Julia code to a TPU, Julia code must be compiled to XLA code. To achieve this, the Julia compiler needs to bridge the gap between the dynamic semantics of the language and the static semantics of the LLVM (Low Level Virtual Machine) representation. If Julia code can be converted to XLA's "High Level Optimizer" (HLO) input language, then Julia programs can run on TPUs.
Julia programs are written in terms of the functions and abstractions provided by Julia's base library, and the language's multiple dispatch mechanism makes it possible to express these operations in terms of HLO operations. A few examples of this are shown below:
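The original article presents these examples as screenshots, which are not reproduced here. The sketch below reconstructs the idea from the paper's description: the XRTArray/XRTMatrix types and Hlo* operation wrappers follow the paper's XLA.jl prototype, but the exact signatures are approximations, not the released API.

```julia
# Sketch: mapping Julia base-library functions to HLO operations via
# multiple dispatch (names follow the paper's XLA.jl prototype; the
# exact signatures below are approximations).

# Matrix-matrix multiplication lowers to an HLO Dot operation, with
# the contraction dimensions spelled out explicitly (0-based, as HLO expects).
function Base.:*(A::XRTMatrix, B::XRTMatrix)
    ddots = DimNums((1,), (0,), (), ())  # contract dim 2 of A with dim 1 of B
    HloDot(ddots)(A, B)
end

# Transposition lowers to an HLO Transpose with an explicit permutation.
Base.transpose(A::XRTMatrix) = HloTranspose((1, 0))(A)

# Elementwise map lowers to an HLO Map, parameterized by the type of the
# mapped function so it can be specialized at compile time.
Base.map(f, A::XRTArray) = HloMap{Core.Typeof(f)}()(f, A)
```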
The paper also provides implementations of higher-level array abstractions, in particular mapreduce and broadcast. The broadcast implementation is around 20 lines of code and is omitted for space, but the implementation of mapreduce is simply:
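The listing itself does not survive in this version of the article; the following is a close paraphrase of the paper's mapreduce definition. The dims_tuple helper and the zero-based dimension adjustment are assumptions about details the paper elides.

```julia
# Approximate reconstruction of the paper's mapreduce implementation:
# a reduction becomes an HLO Reduce applied to the result of an HLO Map,
# seeded with the neutral element of the reducing operator.

# Helper assumed here: normalize `dims` to a 0-based tuple as HLO expects.
dims_tuple(A, ::Colon) = tuple((0:ndims(A)-1)...)
dims_tuple(A, dims)    = tuple((dims .- 1)...)

function Base.mapreduce(f, op, A::XRTArray; dims = :)
    HloReduce{Core.Typeof(op)}(dims_tuple(A, dims))(
        op,
        HloMap{Core.Typeof(f)}()(f, A),
        # Identity element for `op`, lifted onto the device as a scalar array.
        XRTArray(Base.mapreduce_empty(f, op, eltype(A))),
    )
end
```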
Evaluation on TPUs
To demonstrate that the Julia compiler works on TPUs with no major issues, the paper evaluates examples such as the forward and backward passes of VGG19 (the Visual Geometry Group's 19-layer convolutional neural network architecture), reporting timing results with accompanying notes.
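For context, offloading a model in the authors' XLA.jl prototype looks roughly like the sketch below. The @tpu_compile macro name, the Metalhead model loading, and the parameter-conversion step are assumptions based on the paper's description and the project's public examples, not a verified API.

```julia
# Hypothetical sketch of offloading a VGG19 forward pass to a TPU.
# Names flagged as assumptions above may differ from the actual
# XLA.jl / Metalhead / Flux APIs of the time.
using XLA, Metalhead, Flux

# Load a pretrained VGG19 and convert its parameters to the XRTArray
# type that the compiler can lower to HLO.
model = Metalhead.VGG19()
xrt_model = Flux.mapleaves(XRTArray, model)

# A dummy ImageNet-sized input batch (224x224 RGB, batch size 1).
x = XRTArray(rand(Float32, 224, 224, 3, 1))

# Trace the forward pass, compile it to an XLA executable for the TPU,
# and run it on the device.
compiled = @tpu_compile xrt_model(x)
y = run(compiled, xrt_model, x)
```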


The new method has been welcomed by ML researchers and garnered praise from Google AI Lead Jeff Dean, who tweeted “Julia + TPUs = fast and easily expressible ML computations!”
The paper Automatic Full Compilation of Julia Programs and ML Models to Cloud TPUs is on arXiv.
Author: Robert Tian | Editor: Michael Sarazen