Parsing fine-grained temporal actions is vital in applications that require an understanding of detailed, precise operations over long time spans, such as daily activity understanding, surgical robotics, human motion analysis, and animal behavior analysis.
Thanks to the CUDA architecture developed by NVIDIA, developers can exploit GPUs' parallel computing power for general-purpose computation without extra effort. Our objective is to evaluate the performance achieved by TensorFlow, PyTorch, and MXNet on the Titan RTX.
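To make the evaluation concrete, here is a minimal sketch of the kind of timing harness such a comparison might use, written for PyTorch and assuming a CUDA-capable GPU is available. The model choice, batch size, and iteration counts are illustrative assumptions, not the actual benchmark methodology.

```python
import time
import torch
import torchvision.models as models

def benchmark(model, batch_size=64, warmup=10, iters=50):
    """Measure average forward-pass throughput on the GPU."""
    device = torch.device("cuda")
    model = model.to(device).eval()
    x = torch.randn(batch_size, 3, 224, 224, device=device)
    with torch.no_grad():
        for _ in range(warmup):       # warm-up runs exclude one-time setup costs
            model(x)
        torch.cuda.synchronize()      # wait for all queued GPU kernels to finish
        start = time.perf_counter()
        for _ in range(iters):
            model(x)
        torch.cuda.synchronize()      # GPU work is asynchronous, so sync before timing
        elapsed = time.perf_counter() - start
    return batch_size * iters / elapsed  # images per second

if __name__ == "__main__":
    print(f"ResNet-50: {benchmark(models.resnet50()):.1f} img/s")
```

The synchronize calls matter: CUDA kernel launches return immediately, so timing without them would measure only the cost of queuing work, not executing it.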
Researchers from Facebook, the National University of Singapore, and the Qihoo 360 AI Institute have jointly proposed OctConv (Octave Convolution), a promising alternative to the traditional convolution operation. Akin to a "compressor" for convolutional neural networks (CNNs), OctConv saves computational resources while boosting accuracy.
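The core idea is to split feature maps into a high-frequency group stored at full resolution and a low-frequency group stored at half resolution, with four convolution paths exchanging information between them, so the low-frequency half costs a quarter of the usual compute. Below is a minimal single-layer sketch of that idea in PyTorch; the class name and the alpha channel-split parameter are illustrative, and this is not the authors' released implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OctaveConv(nn.Module):
    """Single octave convolution: channels are split into a full-resolution
    high-frequency group and a half-resolution low-frequency group."""
    def __init__(self, in_ch, out_ch, kernel_size=3, alpha=0.5):
        super().__init__()
        in_lo = int(alpha * in_ch)        # low-frequency input channels
        out_lo = int(alpha * out_ch)      # low-frequency output channels
        in_hi, out_hi = in_ch - in_lo, out_ch - out_lo
        p = kernel_size // 2
        # Four paths: high->high, high->low, low->high, low->low
        self.hh = nn.Conv2d(in_hi, out_hi, kernel_size, padding=p)
        self.hl = nn.Conv2d(in_hi, out_lo, kernel_size, padding=p)
        self.lh = nn.Conv2d(in_lo, out_hi, kernel_size, padding=p)
        self.ll = nn.Conv2d(in_lo, out_lo, kernel_size, padding=p)

    def forward(self, x_hi, x_lo):
        # low->high output is computed at half resolution, then upsampled
        hi = self.hh(x_hi) + F.interpolate(self.lh(x_lo), scale_factor=2)
        # high->low input is downsampled before convolving at half resolution
        lo = self.ll(x_lo) + self.hl(F.avg_pool2d(x_hi, 2))
        return hi, lo

# Usage: a 64-channel input split 50/50 between the two frequency groups
conv = OctaveConv(64, 64, alpha=0.5)
x_hi = torch.randn(1, 32, 32, 32)   # high-frequency, full resolution
x_lo = torch.randn(1, 32, 16, 16)   # low-frequency, half resolution
y_hi, y_lo = conv(x_hi, x_lo)
```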
In a scene that looks straight out of a sci-fi movie, a YouTube video posted today by robotics company Boston Dynamics shows a large, ostrich-like robot called "Handle" wheeling around while deftly moving boxes in a warehouse. The video has garnered over 138,000 views in less than four hours.
NVIDIA CEO and co-founder Jensen Huang says a rumored next-generation GPU architecture is not a priority for the company, and that he remains optimistic about clearing the chip inventory built up for cryptocurrency mining. Huang made the remarks at a press conference Tuesday at the GPU Technology Conference (GTC) in Santa Clara.
The dearth of AI talent capable of manually designing neural architectures such as AlexNet and ResNet has spurred research into automatic architecture design. Google's Cloud AutoML is an example of a system that enables developers with limited machine learning expertise to train high-quality models. The trade-off, however, is AutoML's high computational cost.
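To see where that cost comes from, consider a toy random-search sketch over architectures, which is not Google's actual AutoML method but illustrates the bottleneck: every sampled candidate must be fully trained and evaluated before its quality is known, so total cost scales with the number of candidates. The search space, synthetic data, and step counts below are all illustrative assumptions.

```python
import random
import torch
import torch.nn as nn

# Toy search space: depth and width of a fully connected classifier.
SPACE = {"depth": [1, 2, 3], "width": [16, 32, 64]}

def build(cfg, in_dim=20, out_dim=2):
    """Construct a network from a sampled configuration."""
    layers, d = [], in_dim
    for _ in range(cfg["depth"]):
        layers += [nn.Linear(d, cfg["width"]), nn.ReLU()]
        d = cfg["width"]
    layers.append(nn.Linear(d, out_dim))
    return nn.Sequential(*layers)

def evaluate(cfg, steps=200):
    """Train briefly on synthetic data and return held-out accuracy.
    This full train-and-score loop runs once per candidate, which is
    exactly what makes architecture search so expensive."""
    x, y = torch.randn(512, 20), torch.randint(0, 2, (512,))
    model = build(cfg)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss_fn(model(x[:400]), y[:400]).backward()
        opt.step()
    with torch.no_grad():
        return (model(x[400:]).argmax(1) == y[400:]).float().mean().item()

# Sample and score eight candidates; real systems explore thousands.
best = max(
    ({k: random.choice(v) for k, v in SPACE.items()} for _ in range(8)),
    key=evaluate,
)
print("best architecture:", best)
```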