In a joint effort with the Perimeter Institute for Theoretical Physics and Alphabet (Google) X, Google AI researchers recently announced a new open source library, TensorNetwork, which can greatly improve the efficiency of tensor calculations for tensor network algorithms.
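The core operation such libraries accelerate is tensor contraction: summing over the shared "bond" indices that connect tensors in a network. As a minimal sketch (using plain NumPy rather than the TensorNetwork API itself, and with hypothetical shapes), contracting a small three-tensor chain looks like this:

```python
import numpy as np

# Hypothetical example: a chain of three tensors joined by shared bond
# indices a and b, as in a matrix-product-state-style network. Libraries
# like TensorNetwork automate choosing an efficient contraction order,
# which dominates the cost of larger networks.
A = np.random.rand(2, 3)      # free index i, bond index a
B = np.random.rand(3, 4, 3)   # bond a, free index j, bond b
C = np.random.rand(3, 2)      # bond b, free index k

# Sum over the shared bonds a and b, leaving the free indices (i, j, k).
T = np.einsum('ia,ajb,bk->ijk', A, B, C)
print(T.shape)  # (2, 4, 2)
```

The `einsum` string makes the network structure explicit: repeated letters are contracted away, and only the free indices survive in the result.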
Google’s TensorFlow deep learning platform has added TensorFlow Graphics, a library of differentiable graphics layers that combines computer graphics and computer vision. Google says TensorFlow Graphics can address data labeling challenges in complex 3D vision tasks by leveraging a self-supervised training approach.
Google has achieved a milestone in machine learning research that will boost the company’s broader ambitions in healthcare. In a paper published today in Nature Medicine, Google researchers present an end-to-end deep learning model that can predict lung cancer as well as or better than human radiologists.
Designing accurate and efficient CNNs for mobile devices is challenging due to the large design space and the high computational cost of exploring it. Although many mobile CNNs are available for developers to train and deploy to mobile devices, existing architectures may not achieve the best results for every task on mobile devices.
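One concrete reason mobile-oriented architectures differ from server-side ones is parameter and compute budget. A common mobile building block (used, for example, in MobileNet-family models) replaces a standard convolution with a depthwise convolution plus a 1x1 pointwise convolution. A quick, illustrative parameter count shows the gap; the layer sizes below are assumptions for the sake of the arithmetic:

```python
def conv_params(k, c_in, c_out):
    # Parameters in a standard k x k convolution (ignoring bias terms).
    return k * k * c_in * c_out

def depthwise_separable_params(k, c_in, c_out):
    # k x k depthwise convolution (one filter per input channel)
    # followed by a 1 x 1 pointwise convolution across channels.
    return k * k * c_in + c_in * c_out

# Hypothetical layer: 3x3 kernel, 128 input and 128 output channels.
print(conv_params(3, 128, 128))                 # 147456
print(depthwise_separable_params(3, 128, 128))  # 17536
```

The roughly 8x reduction at this layer size illustrates why the design space for mobile CNNs is so large: each block offers trade-offs between accuracy, latency, and model size that are expensive to explore exhaustively.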
Google today announced the release of a new and improved landmark recognition dataset. Google-Landmarks-v2 includes over 5 million images, doubling the number in the landmark recognition dataset the tech giant released last year. The dataset now covers more than 200,000 different landmarks, a sevenfold increase over the first version.
According to a Fuji Research Laboratory report, the Japanese smart home market is expected to top JP¥4.2 trillion (US$38 billion) in 2025, up 36.3 percent from 2017. The market is being driven by smart devices including smartphones, which already account for more than half the market and are continuing to grow due to their ability to conveniently connect IoT devices.
Baidu has released ERNIE (Enhanced Representation through kNowledge IntEgration), a new knowledge integration language representation model which outperforms Google’s state-of-the-art BERT (Bidirectional Encoder Representations from Transformers) in Chinese language tasks.
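ERNIE's key idea is masking at the level of entities and phrases rather than individual tokens, so the model must use world knowledge to recover whole units. The sketch below is a conceptual illustration only, not ERNIE's actual implementation; the `entity_spans` input is a hypothetical list of pre-identified entity positions:

```python
import random

def entity_level_mask(tokens, entity_spans, mask_token="[MASK]"):
    """Mask a whole entity span at once (ERNIE-style knowledge masking)
    rather than masking tokens independently (BERT-style).

    entity_spans: hypothetical list of (start, end) index pairs
    marking entities found by an external tagger."""
    tokens = list(tokens)
    start, end = random.choice(entity_spans)
    for i in range(start, end):
        tokens[i] = mask_token
    return tokens

tokens = ["Harry", "Potter", "is", "a", "series", "by", "J.", "K.", "Rowling"]
spans = [(0, 2), (6, 9)]  # "Harry Potter" and "J. K. Rowling"
print(entity_level_mask(tokens, spans))
```

Masking "Harry Potter" as a unit forces the model to predict the entity from context, instead of trivially completing "Harry [MASK]" from local co-occurrence statistics.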
Google yesterday announced a new program, Season of Docs, that aims to make a substantive contribution to open source software development. The eight-month project will assemble a team of technical writers to work on improving documentation for various open source projects.
TensorFlow is the world’s most popular open source machine learning library. Since its initial release in 2015, the Google Brain product has been downloaded over 41 million times. At this week’s 2019 TensorFlow Dev Summit, Google announced a major upgrade to the framework, the TensorFlow 2.0 Alpha version.
Natural language processing has made significant progress in the past year, but few frameworks focus directly on NLP or sequence modeling. Google Brain recently released Lingvo, a deep learning framework based on TensorFlow. Synced invited Ni Lao, Chief Science Officer at Mosaix, to share his thoughts on Lingvo.
Having notched impressive victories over human professionals in Go, Atari games, and most recently StarCraft 2, Google’s DeepMind team has now turned its formidable research efforts to soccer. In a paper released last week, the UK AI company demonstrates a novel machine learning method that trains a team of AI agents to play a simulated version of “the beautiful game.”
In 2017 Google introduced Federated Learning (FL), “a specific category of distributed machine learning approaches which trains machine learning models using decentralized data residing on end devices such as mobile phones.” A new Google paper has now proposed a scalable production system for federated learning that can handle growing workloads through the addition of resources such as compute, storage, and bandwidth.
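At the heart of such systems is federated averaging: each device trains on its local data and sends back a model update, and the server combines the updates weighted by how much data each client holds. A minimal NumPy sketch of that aggregation step (client weights and sizes here are made-up values for illustration):

```python
import numpy as np

def federated_average(client_weights, client_sizes):
    """Federated averaging (FedAvg): combine client model parameters,
    weighted by the number of local training examples on each client.
    Raw data never leaves the devices; only parameters are shared."""
    total = sum(client_sizes)
    return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

# Three hypothetical clients, each with locally trained parameters.
clients = [np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])]
sizes = [10, 30, 60]
print(federated_average(clients, sizes))  # [4. 5.]
```

Because aggregation is a simple weighted sum, the server side parallelizes naturally, which is what lets the production system scale by adding compute, storage, and bandwidth.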