GTC 2019 | New NVIDIA One-Stop AI Framework Accelerates Workflows by 50x
No wow moments, no bells, and no whistles. Jensen Huang has delivered some groundbreaking keynote speeches in his years at the helm of NVIDIA, but today’s was not among them.
AI Technology & Industry Review
At the NVIDIA GPU Technology Conference (GTC) which kicked off today, NVIDIA unveiled its latest image processing research effort — GauGAN, a generative adversarial network-based technique capable of transforming segmentation maps into realistic photos.
GTC 2019 runs next Monday through Thursday (March 18–21), and while we can only speculate what surprises NVIDIA CEO Jensen Huang might have in store for us, we can get some sense of where the company is headed by looking at what it’s been up to for the last 12 months.
A new GitHub project, PyTorch Geometric (PyG), is attracting attention across the machine learning community. PyG is a geometric deep learning extension library for PyTorch dedicated to processing irregularly structured input data such as graphs, point clouds, and manifolds.
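The core primitive PyG implements efficiently is message passing: each node updates its feature by aggregating its neighbors’ features. As a rough, dependency-free sketch of the idea (not PyG’s actual API; the tiny graph and features below are invented for illustration):

```python
# Toy mean-aggregation step, the basic message-passing primitive that
# libraries like PyTorch Geometric implement efficiently on GPUs.
# The graph and node features below are made up for illustration.

def mean_aggregate(features, edges):
    """One round of neighbor aggregation: each node's new feature
    is the mean of its neighbors' features (including its own)."""
    n = len(features)
    neighbors = {i: [i] for i in range(n)}  # self-loop for each node
    for src, dst in edges:
        neighbors[dst].append(src)
        neighbors[src].append(dst)  # treat the graph as undirected
    return [
        sum(features[j] for j in neighbors[i]) / len(neighbors[i])
        for i in range(n)
    ]

# Triangle graph with scalar node features
feats = [0.0, 3.0, 6.0]
edges = [(0, 1), (1, 2), (0, 2)]
print(mean_aggregate(feats, edges))  # every node sees all three -> [3.0, 3.0, 3.0]
```

Real geometric deep learning layers replace the fixed mean with learned transformations of the messages, but the neighborhood-aggregation structure is the same.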
In December Synced reported on a hyperrealistic face generator developed by US chip giant NVIDIA. The GAN-based model performs so well that most people can’t distinguish the faces it generates from real photos. This week NVIDIA announced that it is open-sourcing the nifty tool, which it has dubbed “StyleGAN”.
Synced Global AI Weekly – CES Day One: AI Is Everywhere – CES 2019: What we’ve seen, and are still seeing.
The world’s largest and most exciting technology show, CES, officially kicks off this morning in Las Vegas, USA. With an abundance of products to present, tech giants such as Nvidia, Qualcomm, Samsung, and Intel started revving up their engines two days ago, hosting press conferences to showcase their “New Year’s Resolutions.”
NVIDIA researchers have developed a deep learning-based system which can produce high-quality slow-motion video from a standard (30 fps) video clip. Compared with manually produced slow-motion results, the NVIDIA demonstration video is far smoother.
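The task here is frame interpolation: synthesizing new frames between the captured ones. The simplest possible baseline, sketched below, is a linear blend of adjacent frames; it is not NVIDIA’s method, which instead predicts optical flow with a CNN and warps frames along it, precisely to avoid the ghosting that naive blending produces on moving objects.

```python
# Naive slow-motion baseline: linearly blend two adjacent frames to
# synthesize intermediates. Frames are modeled as flat lists of pixel
# intensities; real systems (like NVIDIA's) use learned optical flow
# instead of this simple blend.

def blend_frames(frame_a, frame_b, t):
    """Interpolate pixel-wise between frame_a (t=0) and frame_b (t=1)."""
    return [(1 - t) * a + t * b for a, b in zip(frame_a, frame_b)]

def slow_motion(frames, factor):
    """Insert factor-1 blended frames between each consecutive pair."""
    out = []
    for a, b in zip(frames, frames[1:]):
        for k in range(factor):
            out.append(blend_frames(a, b, k / factor))
    out.append(frames[-1])
    return out

# Two tiny 'frames' at 30 fps -> 4x slow motion yields 5 frames total
clip = [[0, 0, 0, 0], [8, 8, 8, 8]]
print(len(slow_motion(clip, 4)))  # 5: the two originals plus 3 intermediates
```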
The team behind MLPerf has announced the machine learning benchmark’s first set of results. MLPerf is a broad machine learning benchmark designed to measure the best performance of each participant with its own resources on a specific task.
The NVIDIA paper proposes an alternative generator architecture for GANs that draws on insights from style transfer techniques. The system learns to separate different aspects of an image without supervision, and enables intuitive, scale-specific control of the synthesis.
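The mechanism the style-based generator uses to inject style at each scale is adaptive instance normalization (AdaIN): normalize a feature map to zero mean and unit variance, then rescale and shift it with parameters derived from the style. A toy scalar sketch of that operation (feature values and style parameters invented for illustration; the real model applies this per channel to convolutional feature maps):

```python
# Toy adaptive instance normalization (AdaIN), the operation the
# style-based generator uses to inject style information.
# Feature values and style parameters are invented for illustration.
import math

def adain(features, style_scale, style_bias):
    """Normalize a feature map to zero mean / unit variance, then
    rescale and shift it with style-derived parameters."""
    mean = sum(features) / len(features)
    var = sum((x - mean) ** 2 for x in features) / len(features)
    std = math.sqrt(var) or 1.0  # guard against a constant map
    return [style_scale * (x - mean) / std + style_bias for x in features]

feats = [1.0, 3.0]  # one tiny feature map
styled = adain(feats, style_scale=2.0, style_bias=5.0)
print(styled)  # [3.0, 7.0]: normalized map rescaled by the style
```

Because each scale of the generator gets its own style parameters, coarse styles (pose, face shape) and fine styles (hair texture, color) can be controlled independently.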
American chip company Xilinx is an industry pioneer, with 34 years of experience and counting. And yet the company remains a niche supplier lacking the star power of chip giants like Intel, whose processors are found in most PCs; or Nvidia, whose GPUs are the choice for most AI applications.
Synced surveyed a number of 2019 AI residency programs that may be of interest to readers.
Chip giant Nvidia today announced the opening of its new AI research centre in Toronto.
Nvidia Director of AI Sanja Fidler will lead the AI Research Lab. The University of Toronto Assistant Professor previously worked at the Toyota Technological Institute at Chicago as a research assistant professor.
Nvidia and the MIT Computer Science & Artificial Intelligence Laboratory (CSAIL) have open-sourced their video-to-video synthesis model. By using a generative adversarial learning framework, the method can generate high-resolution, photorealistic and temporally coherent results with various input formats, including segmentation masks, sketches, and poses.
Nvidia Founder and CEO Jensen Huang today unveiled the company’s long-awaited next-generation graphics processing unit (GPU) cards for gaming, GeForce RTX.
At the prestigious SIGGRAPH (Special Interest Group on Computer GRAPHics and Interactive Techniques) conference in Vancouver yesterday, Nvidia CEO Jensen Huang announced Turing, an eighth-generation GPU architecture introducing ray tracing and AI capability to real-time graphics.
Facebook is working with a select group of advertisers to create augmented reality ads for its News Feed. When users activate an ad’s tap-to-try AR capability it can display for example how a pair of glasses would look on their face via the user’s webcam and screen. Facebook says it also intends to expand shopping support in Instagram Stories.
Salesforce announces a deeper data sharing partnership with Google. Consumer insights from Salesforce’s Marketing Cloud and Google Analytics 360 will be merged into one dashboard for either platform. Marketing Cloud data can be used to create a more customized web experience.
The NVIDIA DeepStream Software Development Kit (SDK) was originally released in 2017 to simplify the deployment of scalable intelligent video analytics (IVA) powered by deep learning. Developers can use DeepStream to process, understand and categorize video frames in real time and within stringent throughput and latency requirements.
The US Department of Energy’s Oak Ridge National Laboratory in Tennessee today introduced the world’s fastest supercomputer Summit, whose computing power reaches 200 petaflops or 200 million billion calculations per second.
“The world of computing has changed,” announced Nvidia founder and CEO Jensen Huang this week as he unveiled the new NVIDIA HGX-2 Cloud Server Platform at his company’s GPU Technology Conference in Taiwan.
Singapore’s Ngee Ann Polytechnic (NP) and London’s Centre for Finance, Technology and Entrepreneurship (CFTE) have launched an AI in Finance course. The course will provide an overview of AI knowledge in the fintech and digital finance fields.
Google introduces starter kits designed to help people learn and experiment with AI solutions.
IBM announces a four-year program in collaboration with Calgary-based Natural Resources Solutions Center to help oil and gas companies with sustainability and efficiency.
Huang and his NVIDIA team pioneered the graphics processing unit (GPU) in 1999, revolutionizing the visual performance of device displays. But even Huang never dreamt that his GPU would one day become a driving force in the arena of AI.
Now social and behavioral scientists can use the TuringBox platform to study artificial intelligence algorithms. AI contributors can upload existing and novel algorithms for review, gaining a reputation in their community.
Chip giant NVIDIA Founder and CEO Jensen Huang created a bit of a stir at yesterday’s GPU Technology Conference in Santa Clara, USA, when he dismissed the suitability of FPGAs for autonomous vehicle system development: “FPGA is not the right answer,” he said.
At the GPU Technology Conference in Santa Clara, USA today, Huang unveiled the world’s largest GPU — a binary beast packed with 16 Tesla V100 GPUs, each with memory doubled to 32 GB.
Embedded AI can transform a tabletop speaker into a personal assistant; give a robot brains and dexterity; and turn a smartphone into a smart camera, music player, or game console. Traditional processors, however, lack the computational power to support many of these intelligent features.
The staffless Amazon Go opens to the public in downtown Seattle. Just like a normal convenience store, this intelligent store sells pre-cooked food, snacks, drinks, etc. — but there’s no cashier scanning the goods.
NVIDIA Announces Delivery Target For Self Driving Processor Xavier; Facebook Plans to Shut Down Its Personal Assistant “M” on January 19th; Amazon Alexa Is Integrating With Windows 10; A US Media Team Plans to Launch Vital Intelligence Data Live
The annual NVIDIA GTC conference opened in Beijing on September 26th.
09/05 — Google Updates Its Street View Cameras with AI and Machine Learning. Google just upgraded its Street View cameras.
RACECAR is a powerful platform for robotics research and teaching, and the basis for one of the leading university courses in the field.
In this report, we will touch on some of the recent technologies, trends and studies on deep neural network inference acceleration and continuous training in the context of production systems.
A practical way to build an autonomous vehicle is not to program the car to drive in every environment, but to show it how to drive and let it learn by itself. NVIDIA created a system of this kind, named PilotNet.
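The idea is end-to-end learning: map raw sensor input directly to a steering command by fitting the map to human demonstrations. A drastically simplified stand-in for that idea is sketched below; where PilotNet trains a CNN on camera frames, this toy fits a linear model from a single invented feature (lane offset) to a steering angle, on synthetic data, by gradient descent. None of the names or numbers here come from NVIDIA’s system.

```python
# Drastically simplified stand-in for end-to-end driving: instead of a
# CNN on camera frames (PilotNet), fit steering = w * offset + b to
# synthetic human demonstrations by stochastic gradient descent.
# Data, model, and hyperparameters are invented for illustration.

def train_steering(samples, lr=0.1, epochs=200):
    """Fit steering = w * offset + b by minimizing squared error."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for offset, target in samples:
            pred = w * offset + b
            err = pred - target          # gradient of 0.5 * err**2
            w -= lr * err * offset
            b -= lr * err
    return w, b

# Synthetic demonstrations: the driver steers twice the lane offset
demos = [(-1.0, -2.0), (-0.5, -1.0), (0.5, 1.0), (1.0, 2.0)]
w, b = train_steering(demos)
print(round(w, 2), round(b, 2))  # converges close to w=2, b=0
```

The point of the end-to-end framing is that nothing in the training loop encodes driving rules; the mapping is recovered entirely from demonstrations.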
At this year’s GPU Technology Conference, Nvidia CEO and Founder Jensen Huang unveiled a new generation GPU architecture called Volta.
In this article we provide some insights into Intel’s recent deep learning products.
This talk focuses on the future potential of deep learning with the NVIDIA Deep Learning SDK and GPU hardware families.
In a tech talk at the University of Toronto, NVIDIA shared updates on its self-driving car research and end-to-end learning.