

NVIDIA, Stanford & Microsoft Propose Efficient Trillion-Parameter Language Model Training on GPU Clusters

A research team from NVIDIA, Stanford University and Microsoft Research proposes a novel pipeline parallelism schedule that improves training throughput by more than 10 percent with a comparable memory footprint, demonstrating that such strategies can sustain high aggregate throughput when training language models with up to a trillion parameters.
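The gain comes from an interleaved pipeline schedule: each GPU holds several non-contiguous model chunks (virtual stages) rather than one contiguous block of layers, which shrinks the pipeline "bubble" of idle time at the start and end of each batch, at the cost of extra communication. The sketch below works through the paper's bubble-fraction arithmetic; the function name, the efficiency derivation, and the example parameter values are illustrative choices, not taken verbatim from the paper.

```python
def bubble_fraction(p: int, m: int, v: int = 1) -> float:
    """Pipeline bubble time as a fraction of ideal compute time.

    p: number of pipeline stages (devices)
    m: number of microbatches per batch
    v: interleaved model chunks per device; v=1 corresponds to the
       standard 1F1B schedule, v>1 to the interleaved schedule.

    Each chunk's forward/backward time scales by 1/v, so the idle
    bubble shrinks from (p - 1) / m to (p - 1) / (v * m).
    """
    return (p - 1) / (v * m)


if __name__ == "__main__":
    p, m = 8, 32  # illustrative: 8-stage pipeline, 32 microbatches
    for v in (1, 2, 4):
        frac = bubble_fraction(p, m, v)
        # total time = ideal * (1 + frac), so efficiency = 1 / (1 + frac)
        print(f"v={v}: bubble = {frac:.3f} of ideal time "
              f"-> ~{1 / (1 + frac):.3f} pipeline efficiency")
```

With these example numbers, doubling the interleaving factor halves the bubble fraction (0.219 at v=1 versus 0.109 at v=2), which is the mechanism behind the reported throughput improvement.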


AI Chip Duel: Apple A12 Bionic vs Huawei Kirin 980

Apple has unveiled the latest iteration of its smartphone chip: the A12 Bionic SoC (system-on-a-chip). The company made the announcement yesterday at its annual product showcase event in Cupertino, California, hailing the A12 as the industry's first 7nm chip (currently the smallest transistor process in production). It will power Apple's new iPhone XR, XS, and XS Max.