‘Snip’ Converts Math Screenshots Into LaTeX
A new math tool called “Snip” is creating a buzz on Twitter. Thousands of netizens are sharing the tool, which is being heralded as a “life changer” for scientific writing.
Facebook AI Research has announced it is open-sourcing PyTorch-BigGraph (PBG), a tool that can easily process and produce graph embeddings for extremely large graphs. PBG can also process multi-relation graph embeddings where a model is too large to fit in memory.
Google Brain researchers have proposed LAMB (Layer-wise Adaptive Moments optimizer for Batch training), a new optimizer which reduces training time for its NLP training model BERT (Bidirectional Encoder Representations from Transformers) from three days to just 76 minutes.
10 AI News Stories You Must Know from March Weeks 3–4
It is no secret that deep neural networks (DNNs) can achieve state-of-the-art performance in a wide range of complicated tasks. DNN models such as BigGAN, BERT, and GPT 2.0 have proved the high potential of deep learning. Deploying DNNs on mobile devices, consumer devices, drones and vehicles however remains a bottleneck for researchers.
GTC 2019 runs next Monday through Thursday (March 18 — 21), and while we can only speculate what surprises NVIDIA CEO Jensen Huang might have in store for us, we can get some sense of where the company is headed by looking at what it’s been up to for the last 12 months.
Last November Synced ran an interview with Yoshua Bengio, in which the deep learning maverick, Université de Montréal Professor and MILA Scientific Director discussed his research and commented on the current state of deep learning and AI.
Natural language processing has made significant progress in the past year, but few frameworks focus directly on NLP or sequence modeling. Google Brain recently released Lingvo, a deep learning framework based on TensorFlow. Synced invited Ni Lao, Chief Science Officer at Mosaix, to share his thoughts on Lingvo.
A paper recently accepted for ICLR 2019 challenges this with a novel optimizer — AdaBound — that authors say can train machine learning models “as fast as Adam and as good as SGD.” Basically, AdaBound is an Adam variant that employs dynamic bounds on learning rates to achieve a gradual and smooth transition to SGD.
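The dynamic-bound mechanism can be sketched in a few lines. The schedule below follows the paper's general form, but the function name, the `gamma` hyperparameter, and the constants are illustrative assumptions, not the authors' tuned defaults:

```python
import numpy as np

def adabound_clip(adam_lr, step, final_lr=0.1, gamma=1e-3):
    """Clip Adam's per-parameter learning rate into bounds that
    converge toward final_lr (an SGD-like rate) as training
    progresses. gamma controls how fast the bounds tighten; these
    values are assumptions for illustration."""
    lower = final_lr * (1 - 1 / (gamma * step + 1))  # rises from ~0 toward final_lr
    upper = final_lr * (1 + 1 / (gamma * step))      # falls from very large toward final_lr
    return float(np.clip(adam_lr, lower, upper))
```

Early in training the bounds are loose, so the optimizer behaves like Adam; as `step` grows they pinch toward `final_lr`, and every update effectively becomes an SGD step.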
Google this week introduced GPipe, an open-source library that dramatically improves training efficiency for large-scale neural network models.
Microsoft Research Asia and University of Science and Technology of China have jointly released a new human pose estimation model which has set records on three COCO benchmarks.
Synced spoke with AI pioneer Professor Yoshua Bengio at the Computing in the 21st Century Conference in Beijing, where he discussed his recent research and the current state of AI.
Synced is proud to present Gary Marcus as the last installment in our Lunar New Year Project — a series of interviews with AI experts reflecting on AI development in 2018 and looking ahead to 2019. (Read the previous articles on Clarifai CEO Matt Zeiler and Google Brain Researcher Quoc Le.)
Uber has unveiled Ludwig, a new TensorFlow-based toolkit that enables users to train and test deep learning models without writing any code. The toolkit will help non-experts understand models and accelerate their iterative development by simplifying the prototyping process and data processing.
NVIDIA researchers have developed a deep learning-based system which can produce high-quality slow-motion video from a standard (30 fps) video clip. In comparison with manual slow motion results, the NVIDIA demonstration video shows far superior smoothness.
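NVIDIA's system predicts intermediate frames with a learned, flow-based model; the naive linear cross-fade below (a hypothetical helper, not NVIDIA's method) shows the basic idea of synthesizing the extra frames a slow-motion clip needs:

```python
import numpy as np

def interpolate_frames(f0, f1, n):
    """Return n evenly spaced intermediate frames between frames f0
    and f1 by linear cross-fade. A simplified stand-in for the
    learned, flow-based interpolation in NVIDIA's system."""
    ts = np.linspace(0.0, 1.0, n + 2)[1:-1]  # drop the endpoints f0 and f1
    return [(1 - t) * f0 + t * f1 for t in ts]
```

Inserting, say, 7 frames between every pair turns 30 fps footage into 240 fps playback material; the learned model replaces this cross-fade with motion-compensated pixels, which is why its output looks far smoother.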
Reinforcement learning (RL) has been delivering spectacular achievements: Atari games, AlphaGo, AlphaGo Zero, AlphaZero, DeepStack, Libratus, OpenAI Five, Dactyl, DeepMimic, Capture the Flag, learning to dress, data center cooling, chemical syntheses, drug design, and more.
Japanese AI startup Preferred Networks (PFN) has developed a new processor dedicated to deep learning. The company unveiled the MN-Core chip, board, and server last week at SEMICON Japan 2018 in Tokyo.
In 2016, Google’s DeepMind stunned the world when its Go-playing AI AlphaGo secured a historic victory over South Korean grandmaster Lee Sedol. Yesterday the UK’s top AI team delivered its latest “wow moment” as its AI system AlphaFold topped the Critical Assessment of protein Structure Prediction (CASP) competition.
Japanese global trading giant Mitsui & Co. and leading deep learning startup Preferred Networks (PFN) have announced a joint venture in the US to provide Biomedical/Healthcare Solutions, including Cancer Diagnostic Services, based on deep learning technology.
OpenCV, the open-source computer vision library originally developed at Intel, has released the first stable version in its 4.0 line. The community has waited more than 3.5 years for the update.
DARCCC (Detecting Adversaries by Reconstruction from Class Conditional Capsules) is a technique that reconstructs an input image from its predicted class’s capsules and uses a similarity metric to compare the reconstruction with the original input; a large mismatch signals that the input was adversarial and the system was under attack.
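The detection step reduces to thresholding a reconstruction distance. The sketch below uses an L2 distance and a fixed threshold as simplifying assumptions, and `reconstruct` is a stand-in for the class-conditional capsule decoder:

```python
import numpy as np

def looks_adversarial(x, reconstruct, threshold):
    """Flag input x as adversarial when its reconstruction from the
    predicted class's capsules is too far from x itself. The L2
    distance and fixed threshold are assumed simplifications of the
    paper's similarity metric."""
    err = np.linalg.norm(x - reconstruct(x))
    return bool(err > threshold)
```

The intuition: an adversarial perturbation flips the predicted class, but a reconstruction conditioned on that wrong class no longer resembles the input, so the distance spikes.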
Researchers from Japanese electronics giant Sony have trained the ResNet-50 neural network model on ImageNet in a record-breaking 224 seconds — 43.4 percent better than the previous fastest time for the benchmark task.
Deep Learning has become an essential toolbox which is used in a wide variety of applications, research labs, industries, etc. In this tutorial given at NIPS 2017, the speakers provide a set of guidelines which will help newcomers to the field understand the most recent and advanced models and their application to diverse data modalities.
One of the top minds in machine learning, Andrew Ng is having an increasingly profound impact on AI education. Ng’s machine learning course at Stanford University remains the most popular on Coursera, the world-leading online education platform he co-founded in 2012.
RE•WORK organized the second Canadian edition of its RE•WORK Global DL Summit Series in Toronto on October 25 – 26. The event attracted over 600 attendees from more than 20 countries who joined conversations with leading AI and Deep Learning experts.
DeepMind announced today that it has opened its Graph Nets (GN) library to the public, enabling the use of graph networks in TensorFlow and Sonnet. Graph Nets is a machine learning framework that was published by DeepMind, Google Brain, MIT and the University of Edinburgh on June 15.
Founded in 1999, Tokyo-based DeNA has developed popular platforms and services for gaming, E-commerce, automotive, healthcare and entertainment content distribution. As AI continues transforming all things digital, DeNA is expanding its deep learning tech capabilities to support R&D on new techniques.
Last month’s ReWork Deep Learning Summit in London provided a peek at recent research progress and future trends in artificial intelligence technologies. The two-day event featured top scientists and engineers from Facebook, MIT Media Lab, DeepMind and other leading institutes.
The computational power of smartphones and tablets has skyrocketed to the point where they approach the level of desktop computers on the market not long ago. Although it’s easy for mobile devices to run all the standard smartphone apps, today’s artificial intelligence algorithms can be too compute-heavy for even high-end devices to handle.
UC Berkeley researchers have published a paper demonstrating how Deep Reinforcement Learning can be used to control dexterous robot hands for complicated tasks. Learning Complex Dexterous Manipulation with Deep Reinforcement Learning and Demonstrations proposes a low-cost and high-efficiency control method that uses demonstration and simulation techniques to accelerate the learning process.
In conjunction with yesterday’s release of open source AI software framework PyTorch 1.0, leading deep learning course developer Fast.ai has announced its first open source library for deep learning — fastai v1.
Nadja Rhodes is enamoured with artificial intelligence. A Seattle-based Microsoft software developer unpracticed in AI techniques such as deep learning, Rhodes had applied to a number of tech company sponsored AI residency initiatives, but to no avail. And so she was thrilled to be accepted by OpenAI Scholars.
Deep Reinforcement Learning has shown impressive performance in a wide range of applications, including video games. StarCraft II, one of the most challenging Real Time Strategy (RTS) games, has however remained unsolved. Until now.
Last week’s RE•WORK AI in Finance Summit featured 50 speakers and drew 250 technologists to the Westin New York at Times Square to explore the intersection between AI and Fintech.
Chinese AI chip startup Suiyuan Technology announced today that it has completed a nearly CN¥340 million (US$50 million) Pre-A funding round, led by Tencent Holdings Ltd, Zhen Fund, Delta Capital, Yunhe Partners and Summitview Capital.
DeepMind’s 2018 AlphaGo Zero requires 300,000 times more computing power than AlexNet did in 2013. With larger-than-ever datasets and increasingly demanding models, compute requirements continue to grow.
A new research paper demonstrating the first Automatic Grammatical Error Correction system to reach human-level performance has been published on arXiv by Tao Ge, Furu Wei and Ming Zhou from the Natural Language Computing Group, Microsoft Research Asia.
The 2018 Conference on Computer Vision and Pattern Recognition (CVPR) opened today in Salt Lake City, USA. The CVPR organizing committee used the occasion to announce its coveted Best Paper and Best Student Paper selections.
The Baidu Silicon Valley Artificial Intelligence Lab has released a paper which proposes a neural conditional random field (NCRF) process for cancer metastasis detection on Whole Slide Images (WSIs).
The US Department of Energy’s Oak Ridge National Laboratory in Tennessee today introduced the world’s fastest supercomputer Summit, whose computing power reaches 200 petaflops or 200 million billion calculations per second.