In an effort to benchmark deep learning on smartphones in the Android ecosystem, researchers from the ETH Zurich Computer Vision Lab last year developed an AI Benchmark application that measures AI readiness, with in-the-wild results collected from more than 200 Android devices and 100 mobile SoCs.
Although natural language processing (NLP) has been around for decades, the recent rapid rise of deep learning algorithms, together with the increasing availability of massive amounts of text data, is creating appealing new opportunities for the technology across many industry sectors, including the investment world.
The two-day RE•WORK Deep Learning Summit Boston 2019 gathered more than 60 speakers from top AI labs such as MIT CSAIL, Uber AI Labs, and Adobe Research, along with experts from the AI healthcare industry, who delivered technical deep learning talks and industry application insights.
Google’s deep learning platform TensorFlow has added differentiable graphics layers via TensorFlow Graphics, a library that combines computer graphics and computer vision. Google says TensorFlow Graphics can ease data labeling challenges for complex 3D vision tasks by enabling a self-supervised training approach.
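The core idea behind differentiable graphics layers — writing a rendering-style transform so gradients can flow from image-space error back to scene parameters — can be sketched with a toy example. The sketch below is our own illustration in NumPy, not TensorFlow Graphics itself; the names `rotate` and `loss_and_grad`, the 2D setup, and the hand-derived gradient are all assumptions made for the demo.

```python
import numpy as np

def rotate(points, theta):
    """Toy differentiable 'graphics layer': rotate 2D points by angle theta."""
    c, s = np.cos(theta), np.sin(theta)
    return points @ np.array([[c, -s], [s, c]]).T

def loss_and_grad(points, observed, theta):
    """Squared reprojection error and its analytic gradient w.r.t. theta."""
    c, s = np.cos(theta), np.sin(theta)
    pred = points @ np.array([[c, -s], [s, c]]).T
    dpred = points @ np.array([[-s, -c], [c, -s]]).T  # d(pred)/d(theta)
    resid = pred - observed
    return np.sum(resid ** 2), 2.0 * np.sum(resid * dpred)

# Self-supervised recovery of an unknown scene parameter: "render" with a
# guessed angle, compare against observations, and follow the gradient.
rng = np.random.default_rng(0)
pts = rng.normal(size=(50, 2))
observed = rotate(pts, 0.7)          # observations produced by the true angle

theta = 0.0
for _ in range(300):
    _, grad = loss_and_grad(pts, observed, theta)
    theta -= 0.05 * grad / len(pts)  # gradient descent on the angle

print(f"recovered angle: {theta:.3f}")  # close to the true 0.7
```

Because the transform is differentiable, no labeled angle is ever needed — the supervision signal comes entirely from comparing the rendered output with the observation, which is the self-supervised loop the announcement describes.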
Google has achieved a milestone in machine learning research that will boost the company’s broader ambitions in healthcare. In a paper published today in Nature Medicine, Google researchers present an end-to-end deep learning model that can detect lung cancer with performance comparable to or better than that of human radiologists.
Traditional methods for estimating 3D structure and camera motion in videos rely heavily on hand-crafted assumptions such as continuity and planarity. Google researchers have now presented an alternative deep learning method that learns these priors directly from unlabelled video.
By improving productivity and accuracy and enabling a more personalized experience, AI is revolutionizing medical imaging. According to Signify Research, the world market for AI in medical imaging — comprising software for automated detection, quantification, decision support, and diagnosis — will reach US$2 billion by 2023.
Thanks to the CUDA architecture developed by NVIDIA, developers can exploit GPUs’ parallel computing power to perform general-purpose computation without extra effort. Our objective is to evaluate the performance achieved by TensorFlow, PyTorch, and MXNet on the Titan RTX.
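A minimal version of the timing methodology such a framework benchmark needs — warm-up runs, repeated measurements, and a noise-robust statistic — can be sketched framework-agnostically. This is our own sketch, with NumPy standing in for the TensorFlow/PyTorch/MXNet op under test; the matrix size and repeat counts are arbitrary choices.

```python
import time
import statistics
import numpy as np

def benchmark(fn, warmup=3, repeats=10):
    """Time fn() with warm-up and repetition; return the median in ms.

    On a real GPU benchmark, each timed call must be followed by a device
    synchronize (e.g. torch.cuda.synchronize()) because CUDA kernel launches
    are asynchronous; the NumPy stand-in here is synchronous.
    """
    for _ in range(warmup):          # warm-up: allocator, caches, lazy init
        fn()
    times = []
    for _ in range(repeats):
        start = time.perf_counter()
        fn()
        times.append((time.perf_counter() - start) * 1e3)
    return statistics.median(times)  # median is robust to scheduling noise

a = np.random.rand(512, 512).astype(np.float32)
b = np.random.rand(512, 512).astype(np.float32)
ms = benchmark(lambda: a @ b)
print(f"512x512 float32 matmul: {ms:.2f} ms (median of 10 runs)")
```

The same harness applies to each framework by swapping the timed callable, which keeps the comparison across TensorFlow, PyTorch, and MXNet consistent.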
Researchers from Facebook, the National University of Singapore, and the Qihoo 360 AI Institute have jointly proposed OctConv (Octave Convolution), a promising new alternative to traditional convolution operations. Akin to a “compressor” for Convolutional Neural Networks (CNN), the OctConv method saves computational resources while boosting effectiveness.
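The claimed compute savings can be illustrated with a back-of-the-envelope multiply-add count. The sketch below is our own, assuming OctConv’s four-path formulation in which a fraction alpha of channels is kept as a low-frequency map at half resolution (high→high runs at full resolution; the other three paths run at half resolution); the layer dimensions are arbitrary examples, not figures from the paper.

```python
def conv_flops(h, w, k, c_in, c_out):
    """Multiply-adds of a vanilla k x k convolution over an h x w feature map."""
    return h * w * k * k * c_in * c_out

def octconv_flops(h, w, k, c_in, c_out, alpha=0.5):
    """Multiply-adds of an octave convolution with low-frequency ratio alpha."""
    c_in_l, c_out_l = int(alpha * c_in), int(alpha * c_out)
    c_in_h, c_out_h = c_in - c_in_l, c_out - c_out_l
    hh = conv_flops(h, w, k, c_in_h, c_out_h)            # high -> high, full res
    hl = conv_flops(h // 2, w // 2, k, c_in_h, c_out_l)  # high -> low (after pooling)
    lh = conv_flops(h // 2, w // 2, k, c_in_l, c_out_h)  # low -> high (before upsampling)
    ll = conv_flops(h // 2, w // 2, k, c_in_l, c_out_l)  # low -> low, half res
    return hh + hl + lh + ll

# Example layer: 56x56 map, 3x3 kernel, 256 -> 256 channels, alpha = 0.5.
vanilla = conv_flops(56, 56, 3, 256, 256)
octave = octconv_flops(56, 56, 3, 256, 256, alpha=0.5)
ratio = octave / vanilla
print(f"OctConv cost relative to vanilla conv: {ratio:.4f}")  # 0.4375
```

Under these assumptions the octave layer needs well under half the multiply-adds of the vanilla convolution, which is the "compressor" effect described above; the exact ratio depends on alpha and the layer shape.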