Thanks to the CUDA architecture developed by NVIDIA, developers can exploit GPUs’ parallel computing power to perform general computation without extra effort. Our objective is to evaluate the performance achieved by TensorFlow, PyTorch, and MXNet on Titan RTX.
It’s a fanciful little one-piece in shimmering green and aquamarine with bold fuchsia shoulder accents — perfect for a night out on the town. Is this a new dress from a Milan or Tokyo collection? Nope, it was designed by an AI-powered machine, and produced by a couple of MIT graduates.
NVIDIA CEO and Co-Founder Jensen Huang says a rumored next-generation GPU architecture is not a priority for the company, and that he remains optimistic about clearing the chip inventory built up for cryptocurrency mining. Huang made the remarks in a press conference Tuesday at the GPU Technology Conference (GTC) in Santa Clara.
NVIDIA’s annual GPU Technology Conference (GTC) attracted some 9,000 developers, buyers and innovators to San Jose, California this week. CEO and Co-Founder Jensen Huang’s two-and-a-half-hour keynote speech showcased GPU-based innovations in domains ranging from graphic design to autonomous driving.
GTC 2019 runs next Monday through Thursday (March 18–21), and while we can only speculate what surprises NVIDIA CEO Jensen Huang might have in store for us, we can get some sense of where the company is headed by looking at what it’s been up to for the last 12 months.
In December Synced reported on a hyperrealistic face generator developed by US chip giant NVIDIA. The GAN-based model performs so well that most people can’t distinguish the faces it generates from real photos. This week NVIDIA announced that it is open-sourcing the nifty tool, which it has dubbed “StyleGAN”.
The world’s largest and most exciting technology show, CES, officially kicks off this morning in Las Vegas, USA. With an abundance of products to present, tech giants such as Nvidia, Qualcomm, Samsung, and Intel started revving up their engines two days ago, hosting press conferences to showcase their “New Year’s Resolutions.”
American chip company Xilinx is an industry pioneer, with 34 years of experience and counting. And yet the company remains a niche supplier lacking the star power of chip giants like Intel, whose processors are found in most PCs; or Nvidia, whose GPUs are the choice for most AI applications.
Chip giant Nvidia today announced the opening of its new AI research centre in Toronto.
Nvidia Director of AI Sanja Fidler will lead the AI Research Lab. The University of Toronto Assistant Professor previously worked at the Toyota Technological Institute in Chicago as a research assistant professor.
Nvidia and the MIT Computer Science & Artificial Intelligence Laboratory (CSAIL) have open-sourced their video-to-video synthesis model. By using a generative adversarial learning framework, the method can generate high-resolution, photorealistic and temporally coherent results with various input formats, including segmentation masks, sketches, and poses.
At the prestigious SIGGRAPH (Special Interest Group on Computer GRAPHics and Interactive Techniques) conference in Vancouver yesterday, Nvidia CEO Jensen Huang announced Turing, an eighth-generation GPU architecture introducing ray tracing and AI capability to real-time graphics.
Facebook is working with a select group of advertisers to create augmented reality ads for its News Feed. When users activate an ad’s tap-to-try AR capability, it can display, for example, how a pair of glasses would look on their face via the user’s webcam and screen. Facebook says it also intends to expand shopping support in Instagram Stories.
Salesforce announces a deeper data sharing partnership with Google. Consumer insights from Salesforce’s Marketing Cloud and Google Analytics 360 will be merged into one dashboard for either platform. Marketing Cloud data can be used to create a more customized web experience.
The NVIDIA DeepStream Software Development Kit (SDK) was originally released in 2017 to simplify the deployment of scalable intelligent video analytics (IVA) powered by deep learning. Developers can use DeepStream to process, understand and categorize video frames in real time and within stringent throughput and latency requirements.