The team behind MLPerf has announced the machine learning benchmark’s first set of results.
MLPerf is a broad machine learning benchmark designed to measure the best performance each participant can achieve on a specific task using its own resources. It was launched this May with the support of researchers and engineers from more than 30 companies, including Intel, Nvidia, Baidu, and Google, as well as researchers at seven universities. Facebook and Microsoft have now joined the list, both announcing their support for MLPerf today.
MLPerf is used to compare the speed of various major machine learning (ML) hardware platforms, including Google TPUs, Intel CPUs, and Nvidia GPUs. The results released today also reflect the speed of ML software frameworks such as TensorFlow, PyTorch, and MXNet. The work can help researchers and decision makers in the assessment of existing offerings, as well as in their ML development strategies.
Nvidia captured the lead spot in six benchmarks: image classification, object detection (heavy weight), object detection (light weight), translation (recurrent) GNMT, translation (non-recurrent) Transformer, and recommendation NCF. “MLPerf demonstrates the importance of innovating in scale-up computing as well as at all levels of the computing stack — from hardware architecture to software and optimizations across multiple frameworks,” said Nvidia’s Vice President and General Manager of Accelerated Computing Ian Buck.
Intel, meanwhile, said the results demonstrate that its Xeon Scalable processors can be an effective and cost-reducing choice for data scientists running multiple workloads on their infrastructure, as they would not have to invest in dedicated hardware.
Facebook did not participate in the MLPerf tests, but said it would contribute to MLPerf and open-source Mask R-CNN2Go. For image classification, Facebook will provide an implementation of the current best ShuffleNet model.
More information is available on the MLPerf website.
Author: Herin Zhao | Editor: Michael Sarazen