EU Antitrust Regulators Halt Investigation Into NVIDIA, ARM Multibillion-Dollar Deal
On December 6, EU antitrust regulators temporarily paused their investigation into NVIDIA’s multibillion-dollar acquisition of UK chip design company ARM.
AI Technology & Industry Review
On November 22, the NVIDIA blog introduced an interactive demo app that generates photorealistic landscape images in real time from text descriptions.
On October 11, Microsoft introduced the largest and “most powerful monolithic transformer language model” trained to date: a 530-billion-parameter, GPT-3-style generative language model.
A research team from the Technical University of Munich, Google, NVIDIA and LMU München proposes CodeTrans, an encoder-decoder transformer model that achieves state-of-the-art performance on six tasks in the software engineering domain, including Code Documentation Generation, Source Code Summarization, and Code Comment Generation.
A new study by NVIDIA, University of Toronto, McGill University and the Vector Institute introduces an efficient neural representation that enables real-time rendering of high-fidelity neural SDFs for the first time while delivering SOTA geometry reconstruction quality.
The NVIDIA blog introduced the company’s latest NeurIPS presentation: applying a novel neural network training technique, adaptive discriminator augmentation, to the popular NVIDIA StyleGAN2 model.
What if, instead of hard-coding road rules into self-driving algorithms, AI agents were free to come up with their own ways of safely and efficiently sharing the road?
The approach dramatically reduces bandwidth requirements by sending only a keypoint representation [of faces] and reconstructing the source video on the receiver side with the help of generative adversarial networks (GANs) to synthesize the talking heads.
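To make the bandwidth claim concrete, here is a back-of-envelope comparison between transmitting a raw video frame and transmitting only a sparse keypoint representation. The frame size and keypoint count below are illustrative assumptions, not figures from NVIDIA’s system:

```python
# Illustrative bandwidth comparison (assumed numbers, not NVIDIA's):
# a raw 512x512 RGB frame versus a sparse facial-keypoint representation.
frame_bytes = 512 * 512 * 3          # uncompressed 8-bit RGB frame
keypoints = 10                       # assumed number of facial keypoints
keypoint_bytes = keypoints * 2 * 4   # (x, y) coordinates as 32-bit floats

ratio = frame_bytes / keypoint_bytes
print(f"{ratio:.1f}x fewer bytes per frame")  # → 9830.4x fewer bytes per frame
```

Even under these rough assumptions, the keypoint stream is orders of magnitude smaller than the raw pixels, which is why the receiver-side GAN reconstruction pays off.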
Researchers introduced a modular primitive that uses existing, highly optimized hardware graphics pipelines to deliver performance superior to previous differentiable rendering systems.
NVIDIA, Mass General Brigham and 20 global hospitals launch federated learning initiative EXAM to build AI model for COVID-19 patient oxygen need prediction
The GTC kicked off with the release of nine NVIDIA keynote videos containing major announcements in data centers, edge AI, collaboration tools and healthcare.
Imaginaire is a universal PyTorch library designed for various GAN-based tasks and methods.
On September 13th, NVIDIA announced the acquisition of UK-based semiconductor startup ARM for US$40 billion, the largest acquisition deal in the history of the chip industry.
Nvidia CEO Jensen Huang today unveiled the company’s new GeForce RTX 30 Series GPUs.
The industry-standard MLPerf benchmark today released the results of the third round of its ongoing ML Training Systems competition.
The University of Florida (UF) and NVIDIA announced on Tuesday a plan to build the world’s fastest AI supercomputer in academia, providing 700 petaflops of AI performance.
According to Bloomberg, Nvidia is eyeing the possibility of acquiring Arm, the chip design subsidiary of Japan’s Softbank Group.
NVIDIA researchers propose a novel vid2vid framework that utilizes all past generated frames during rendering.
GameGAN, a generative model that learns to visually imitate video game environments by ingesting screenplay and keyboard actions during training.
The A100 represents the largest leap in performance across the company’s eight GPU generations - a boost of up to 20x over its predecessors.
The US chip giant now appears to be exploring the terrain beyond GPU architecture, raising the stakes on other dedicated electronic circuits.
A research team from NVIDIA, Oak Ridge National Laboratory (ORNL), and Uber has introduced new techniques that enabled them to train a fully convolutional neural network on the world’s fastest supercomputer, Summit, with up to 27,600 NVIDIA GPUs.
Ascend 910 delivers performance of up to 256 teraFLOPS under FP16 and 512 teraOPS under INT8, with a declared max power consumption of 310 W.
Synced Global AI Weekly August 18th
Synced Global AI Weekly July 14th
Forerunners Google and NVIDIA announced today that they have set new AI training time records on the MLPerf benchmark competition.
The top semiconductor and hardware companies have arrived in Taipei, Taiwan for Computex 2019. The world’s biggest computer show is a preferred stage for the likes of AMD, Intel, NVIDIA, and Arm to showcase their latest chip designs.
Synced Global AI Weekly May 19th
NVIDIA has opened a fun online AI platform (https://nvlabs.github.io/FUNIT/petswap.html) that can swap pet faces onto other animals. Simply upload a photo of your Spot or Sylvester, draw a rectangle around its head, click on “Translate” and voila!
Synced Global AI Weekly May 12th
Thanks to the CUDA architecture [1] developed by NVIDIA, developers can exploit GPUs’ parallel computing power to perform general computation without extra effort. Our objective is to evaluate the performance achieved by TensorFlow, PyTorch, and MXNet on Titan RTX.
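A minimal sketch of the kind of micro-benchmark such an evaluation relies on, here using PyTorch to time a matrix multiply and derive effective throughput. The matrix size, iteration count, and the `benchmark_matmul` helper are illustrative assumptions, not the study’s actual methodology; the code falls back to CPU when no CUDA device is available:

```python
import time
import torch

def benchmark_matmul(n=2048, iters=50,
                     device="cuda" if torch.cuda.is_available() else "cpu"):
    """Time repeated (n x n) matmuls and return effective TFLOPS."""
    a = torch.randn(n, n, device=device)
    b = torch.randn(n, n, device=device)
    # Warm-up so one-time initialization costs do not skew the timing.
    for _ in range(3):
        torch.matmul(a, b)
    if device == "cuda":
        torch.cuda.synchronize()  # CUDA kernels launch asynchronously
    start = time.perf_counter()
    for _ in range(iters):
        torch.matmul(a, b)
    if device == "cuda":
        torch.cuda.synchronize()  # wait for all queued kernels to finish
    elapsed = time.perf_counter() - start
    flops = 2 * n**3 * iters      # each multiply-add counts as 2 operations
    return flops / elapsed / 1e12

print(f"~{benchmark_matmul():.2f} effective TFLOPS")
```

Note the explicit `torch.cuda.synchronize()` calls: because GPU kernels execute asynchronously, timing without synchronization would measure only kernel launch overhead, a classic pitfall in framework benchmarking.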
At the recent NVIDIA GPU Technology Conference (GTC) 2019, Synced reported on a ‘magical brush’ app that could transform simple line drawings and sketches into realistic landscapes.
10 AI News You Must Know from April Weeks 1–2
It’s a fanciful little one-piece in shimmering green and aquamarine with bold fuchsia shoulder accents — perfect for a night out on the town. Is this a new dress from a Milan or Tokyo collection? Nope, it was designed by an AI-powered machine, and produced by a couple of MIT graduates.
“CityFlow,” a city-scale traffic camera dataset paper from NVIDIA researchers, has been accepted by CVPR 2019 as an Oral Session, earning two “Strong Accepts” and one “Accept” from reviewers.
Synced Global AI Weekly March 24th
While there is nothing wrong with viewing autonomous driving as an emerging sector in the car manufacturing industry, big tech companies are also playing a central role, and have been making huge investments in automotive technologies.
Preferred Networks (PFN) is completing a new private supercomputer, MN-2, which the Japanese AI startup expects to have operational in July 2019.
NVIDIA CEO and Co-Founder Jensen Huang says a rumored next-generation GPU architecture is not a priority for the company, and that he remains optimistic about clearing the chip inventory built up for cryptocurrency mining. Huang made the remarks in a press conference Tuesday at the GPU Technology Conference (GTC) in Santa Clara.
NVIDIA’s annual GPU Technology Conference (GTC) attracted some 9,000 developers, buyers and innovators to San Jose, California this week. CEO and Co-Founder Jensen Huang’s two-and-a-half hour keynote speech fused GPU-based innovations in domains ranging from graphic design to autonomous driving.