Google Brain Software Engineer Martin Wicke says a preview version of TensorFlow 2.0 will be released later this year. To cope with dramatic changes in both users and use-cases, TensorFlow 2.0 will shift its focus to “ease of use.” Wicke made the announcement yesterday in a Google Groups post.
Artificial intelligence can now match or outperform human experts in diagnosis and referral for eye diseases, suggests a new paper from DeepMind. The UK-based, Google-owned research lab today released joint research results with the UK’s Moorfields Eye Hospital and the UCL Institute of Ophthalmology, presenting a new AI technique for interpreting optical coherence tomography (OCT) scans. The paper was published on Nature Medicine’s website.
Nvidia’s paper Large Scale Language Modeling: Converging on 40GB of Text in Four Hours introduces a model that uses mixed precision arithmetic and a 32k batch size distributed across 128 Nvidia Tesla V100 GPUs to improve scalability and transfer in Recurrent Neural Networks (RNNs) for natural language tasks.
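The core idea behind mixed precision training can be shown with a toy example. Below is a minimal NumPy sketch of the general technique (an illustration, not Nvidia's implementation): master weights stay in float32, the forward pass runs in float16, and the gradient is scaled up before the float16 cast so small values don't underflow.

```python
import numpy as np

# Minimal sketch of mixed-precision training with loss scaling on a
# one-parameter linear model (y = w * x). Illustrative only.

def train_step(w_master, x, y, lr=0.1, loss_scale=1024.0):
    w16 = w_master.astype(np.float16)        # half-precision copy for compute
    x16 = x.astype(np.float16)
    pred = x16 * w16                         # forward pass in fp16
    err = pred.astype(np.float32) - y        # loss terms back in fp32
    # Scale the gradient so tiny fp16 values don't underflow to zero,
    # then unscale before the fp32 master-weight update.
    grad16 = ((2 * err * x) * loss_scale).astype(np.float16)
    grad = grad16.astype(np.float32).mean() / loss_scale
    return np.float32(w_master - lr * grad)

w = np.float32(0.0)
x = np.array([1.0, 2.0], dtype=np.float32)
y = np.array([2.0, 4.0], dtype=np.float32)   # true weight is 2.0
for _ in range(200):
    w = train_step(w, x, y)
print(float(w))
```

In practice frameworks automate this loop (e.g. automatic loss scaling), but the division of labor is the same: fp16 for throughput-critical math, fp32 wherever small-magnitude accumulation matters.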
Benjamin Sanchez-Lengeling from Harvard University and Alán Aspuru-Guzik from the University of Toronto have successfully applied machine learning models to speed up the materials discovery process. Their paper Inverse molecular design using machine learning: Generative models for matter engineering was published July 27 in Science Vol. 361.
The Internet is woven into our everyday lives. We access massive amounts of data through our laptops, smartphones and tablets. This free flow of information, however, has prompted attempts to filter content that may be inappropriate, for example for young people. One such new effort from Brazil puts virtual bikinis on nudes.
The Technical University of Munich (TUM) lured Cheng from a Japanese research institute. At TUM he founded the Institute of Cognitive Systems (ICS). With eight employees in a central office at Karlstraße 45, Cheng set to work on his arduous task: recreating the complexities of human skin and wiring it all to a brain.
Neural networks are notoriously difficult to debug, but a Google Brain research team believes it may have come up with a novel solution. A paper by Augustus Odena and Ian Goodfellow introduces Coverage-Guided Fuzzing (CGF) methods for neural networks. The team also announced an open source software library for CGF, TensorFuzz.
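The general shape of a coverage-guided fuzzing loop can be sketched in a few lines. The example below is a hedged illustration in the spirit of CGF, not TensorFuzz's actual API: inputs are mutated from a corpus, and any mutant that reaches a previously unseen "coverage" state (for a neural network, a new activation pattern) is kept for further mutation.

```python
import random

# Toy coverage-guided fuzzing loop. The "network" is a stand-in
# function; its bucketed output plays the role of an activation
# signature that defines coverage cells.

def activation_signature(x):
    # Bucket a toy function's output to simulate coverage cells.
    return round(x * x - 3 * x, 1)

def fuzz(seed_inputs, iterations=1000, rng=random.Random(0)):
    corpus = list(seed_inputs)
    seen = {activation_signature(v) for v in corpus}
    for _ in range(iterations):
        parent = rng.choice(corpus)
        mutant = parent + rng.gauss(0, 0.5)   # small random mutation
        sig = activation_signature(mutant)
        if sig not in seen:                   # new coverage -> keep it
            seen.add(sig)
            corpus.append(mutant)
    return corpus, seen

corpus, seen = fuzz([0.0])
print(len(corpus), len(seen))
```

The payoff is that inputs exercising rare internal states accumulate in the corpus, making it more likely the fuzzer eventually triggers bugs (e.g. NaNs) that random sampling would miss.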
Since 2010, the annual ImageNet Large-Scale Visual Recognition Challenge has been the most widely recognized benchmark for testing image recognition algorithms. Tencent Machine Learning picks up the challenge with its new paper Highly Scalable Deep Learning Training System with Mixed-Precision: Training ImageNet in Four Minutes.
The dearth of AI talent capable of manually designing neural architectures such as AlexNet and ResNet has spurred research in automatic architecture design. Google’s Cloud AutoML is an example of a system that enables developers with limited machine learning expertise to train high quality models. The trade-off, however, is AutoML’s high computational cost.
From Hayao Miyazaki’s Spirited Away to Satoshi Kon’s Paprika, Japanese anime has made it okay for adults everywhere to enjoy cartoons again. Now, a team of Tsinghua University and Cardiff University researchers has introduced CartoonGAN, an AI-powered technology that simulates the styles of Japanese anime masters from snapshots of real-world scenery.
At the recent IEEE International Conference on Robotics and Automation (ICRA) in Brisbane, Australia, the Best Student Paper award went to ETH Zurich Autonomous Systems Laboratory (ASL)’s Miguel de la Iglesia Valls et al. for Design of an Autonomous Racecar: Perception, State Estimation and System Integration.
IBM today announced it will release the world’s largest facial attribute dataset in order to fight bias in artificial intelligence systems used to recognize human faces. The dataset was built by IBM research scientists and contains one million images, five times the image count of the current largest facial attribute dataset. It will be publicly available this fall.
The NVIDIA DeepStream Software Development Kit (SDK) was originally released in 2017 to simplify the deployment of scalable intelligent video analytics (IVA) powered by deep learning. Developers can use DeepStream to process, understand and categorize video frames in real time and within stringent throughput and latency requirements.
Earlier this week the Association for Computational Linguistics (ACL) 2018 announced its two Best Short Papers, neither of which had yet been published. Today the AI community got its first look at one of the winners when Know What You Don’t Know: Unanswerable Questions for SQuAD was released on arXiv.
GcForest, a decision tree ensemble approach that is much easier to train than deep neural networks, has received a lot of attention from researchers since it was introduced by Prof. Zhihua Zhou and his student Ji Feng last year. Based on their previous work, Zhou, Feng and Nanjing University colleague Yang Yu have now proposed Multi-layered Gradient Boosting Decision Trees (mGBDTs).
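The building block that mGBDTs stack into layers is ordinary gradient boosting over decision trees. Below is a minimal sketch of plain gradient boosting with regression stumps on one-dimensional data (the standard technique, not the paper's multi-layered method): each round fits a stump to the current residual, which is the negative gradient of the squared loss.

```python
import numpy as np

# Plain gradient boosting with depth-1 regression trees (stumps).

def fit_stump(x, residual):
    # Choose the split threshold minimizing squared error of piecewise means.
    best = None
    for t in np.unique(x):
        left, right = residual[x <= t], residual[x > t]
        if len(left) == 0 or len(right) == 0:
            continue
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if best is None or sse < best[0]:
            best = (sse, t, left.mean(), right.mean())
    _, t, lv, rv = best
    return lambda q: np.where(q <= t, lv, rv)

def gbdt_fit(x, y, n_rounds=50, lr=0.3):
    pred = np.zeros_like(y)
    stumps = []
    for _ in range(n_rounds):
        residual = y - pred          # negative gradient of squared loss
        s = fit_stump(x, residual)
        pred = pred + lr * s(x)
        stumps.append(s)
    return lambda q: sum(lr * s(q) for s in stumps)

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0])   # a step function
model = gbdt_fit(x, y)
print(model(np.array([1.0, 4.0])))
```

The mGBDT contribution is propagating training signals through stacked layers of such ensembles without backpropagation, giving a deep model whose units are trees rather than differentiable neurons.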
Synced recently spoke with Delian Capital Senior Vice President Xuesong Fan about the current status of automated driving startups. Fan graduated from Harbin Engineering University and worked for years on Chinese satellite engineering. In 2015 he came down to earth, applying his experience to self-driving car investments.
To boost learning research aimed at endowing robots with better generalization capabilities, Yi Wu from UC Berkeley and Yuxin Wu, Georgia Gkioxari, and Yuandong Tian from Facebook AI Research recently published the paper Building Generalizable Agents with a Realistic and Rich 3D Environment.