Deep Learning Pioneers Yann LeCun and Yoshua Bengio Elected as AAAI-20 Fellows
The AAAI has announced the election of ten AAAI 2020 Fellows — including AI pioneers and 2018 Turing Award winners Yann LeCun and Yoshua Bengio.
AI Technology & Industry Review
There is increasing attention on machine learning, deep learning, IoT and computer vision technologies in attempts to reduce the damage done by alcohol and improve the safety of drinkers.
This research demonstrated that deep learning can contribute to traditional disciplines, delivering much better performance than existing methods.
A Google researcher has released a deep neural network model that makes animating a VTube persona a little easier.
Researchers from The Chinese University of Hong Kong, Tencent AI Lab and University of Macau have proposed a new neuron interaction based representation composition for NMT.
In fact, few-shot learning methods can deliver highly accurate results, both with and without labeled data.
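The core idea behind many few-shot approaches can be illustrated with a minimal nearest-centroid sketch (in the spirit of prototypical networks): average the handful of labeled examples per class into a prototype, then assign a query to the closest prototype. The feature vectors and labels below are made up for illustration only.

```python
# Toy nearest-centroid few-shot classifier. Real systems would apply this
# in the embedding space of a trained network, not on raw 2-D features.

def centroid(vectors):
    """Element-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def sq_dist(a, b):
    """Squared Euclidean distance between two feature vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def classify(query, support):
    """support maps each class label to its few labeled vectors ("shots")."""
    protos = {label: centroid(vecs) for label, vecs in support.items()}
    return min(protos, key=lambda label: sq_dist(query, protos[label]))

# Two classes, three shots each, in a toy 2-D feature space.
support = {
    "cat": [[1.0, 1.0], [1.2, 0.9], [0.8, 1.1]],
    "dog": [[4.0, 4.0], [3.8, 4.2], [4.1, 3.9]],
}
print(classify([1.1, 1.0], support))  # → cat
print(classify([3.9, 4.1], support))  # → dog
```

With only three examples per class, the prototypes already separate the two clusters cleanly, which is why this simple baseline is surprisingly competitive in few-shot benchmarks.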
In an effort to benchmark deep learning on smartphones in the Android ecosystem, researchers from the ETH Zurich Computer Vision Lab last year developed an AI Benchmark application to measure the AI readiness of more than 200 Android devices and 100 mobile SoCs collected in the wild.
Featuring 100+ models, new mobile inference engine, and frameworks for graph & federated learning
Exciting new research from Duke University introduces ProtoPNet, a deep learning network that can explain how it distinguishes a pigeon from a partridge in real time.
With deep learning emerging as something of a panacea in the world of science, AI researchers and seismologists alike are leveraging the tech in pursuit of better aftershock forecast solutions.
The latest version, PyTorch 1.3, includes PyTorch Mobile, quantization, and Google Cloud TPU support. The release was announced today at the PyTorch Developer Conference in San Francisco.
On September 22 about 300 artificial intelligence (AI) and deep learning enthusiasts gathered at the University of Toronto Bahen Centre for Information Technology for the AI Squared Forum 2019.
Researchers from Two Six Labs and Stanford Schnitzer Lab have developed a deep learning system designed to explore the workings of the mouse mind and predict behavior by processing brain-based electrical activity with a neural network.
Now, researchers from the Victoria University of Wellington School of Engineering and Computer Science have introduced the HSIC (Hilbert-Schmidt independence criterion) bottleneck as an alternative to backpropagation for finding good representations.
Synced Global AI Weekly July 21st
The developers behind TabNine have introduced a new deep learning model, “Deep TabNine,” which significantly improves suggestion quality.
Although natural language processing (NLP) has been around for decades, the recent and rapid rise of deep learning algorithms together with the increasing availability of massive amounts of text data are creating new and appealing opportunities for the tech across many industry sectors, including in the investment world.
Synced Global AI Weekly July 7th
Baidu’s homegrown deep learning framework PaddlePaddle will empower Huawei’s Kirin smartphone chips, the company announced at the Baidu Create 2019 AI Developer Conference that kicked off today in Beijing.
In the late 2000s Fortune Global 500 healthcare companies ramped up AI deployment in the industry, from in-hospital diagnosis and treatment to drug supply chain and out-of-hospital scenarios.
Citadel Chief AI Officer Li Deng has been named a Fellow of the Canadian Academy of Engineering (CAE) in recognition of his notable achievements in deep learning and speech recognition.
Deep learning model performance has taken huge strides, allowing researchers to tackle tasks which were simply not possible for machines less than a decade ago.
The two-day RE•WORK Deep Learning Summit Boston 2019 gathered more than 60 speakers from top AI labs such as MIT CSAIL, Uber AI Labs, Adobe Research and other experts from the AI healthcare industry who provided high-level deep learning technical discussions and industry application insights.
Google AI has introduced a deep learning based approach that generates depth prediction from videos where both camera and subject are in motion.
Google’s deep learning TensorFlow platform has added Differentiable Graphics Layers with TensorFlow Graphics, a combination of computer graphics and computer vision. Google says TensorFlow Graphics can solve data labeling challenges for complex 3D vision tasks by leveraging a self-supervised training approach.
Google has achieved a milestone in machine learning research that will boost the company’s broader ambitions in healthcare. In a paper published today in Nature Medicine, Google researchers present an end-to-end deep learning model that can predict lung cancer as well as or better than human radiologists.
Artificial intelligence is closing the gap on humans. Machines are rapidly honing their skills in object recognition and natural language interaction, and advanced AI agents have already beaten human champions in board and video games and even debates.
Traditional methods used to estimate 3D structure and camera motion in videos rely heavily on manual assumptions such as continuity and planarity. Google researchers have now presented an alternative deep learning method which is able to obtain these assumptions from unlabelled video.
Researchers from Sri Lanka’s University of Moratuwa and the University of Sydney in Australia have proposed a technique for generating new handwritten character training samples from existing samples.
The non-profit organization behind the popular worldwide library of computer vision programming functions, OpenCV (Open Source Computer Vision), is launching a Kickstarter campaign to raise funds for a series of summer 2019 AI courses.
New research from Harvard Medical Group shows AI-powered autonomous device navigation is possible in minimally invasive surgical procedures, and even in heart surgery.
With its improved productivity and accuracy and more personalized experience, AI is revolutionizing medical imaging. According to Signify Research, the world market for AI in medical imaging — comprising software for automated detection, quantification, decision support, and diagnosis — will reach US$2 billion by 2023.
Thanks to the CUDA architecture [1] developed by NVIDIA, developers can exploit GPUs’ parallel computing power to perform general computation without extra effort. Our objective is to evaluate the performance achieved by TensorFlow, PyTorch, and MXNet on Titan RTX.
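Framework benchmarks of this kind typically follow a warmup-then-time pattern: discard the first few runs (which absorb kernel compilation and cache warmup), then report the mean over repeated timed runs. The sketch below shows that pattern with a stand-in pure-Python matrix multiply as the workload; an actual benchmark would time TensorFlow, PyTorch, or MXNet operations on the GPU instead, and the function names here are illustrative, not from any benchmark suite.

```python
import time

def matmul(a, b):
    """Naive matrix multiply, used here only as a measurable workload."""
    cols_b = list(zip(*b))
    return [[sum(x * y for x, y in zip(row, col)) for col in cols_b]
            for row in a]

def benchmark(fn, warmup=2, runs=5):
    """Call fn() `warmup` times untimed, then return mean seconds over `runs`."""
    for _ in range(warmup):
        fn()
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        fn()
        times.append(time.perf_counter() - start)
    return sum(times) / runs

n = 40
a = [[float(i + j) for j in range(n)] for i in range(n)]
mean_s = benchmark(lambda: matmul(a, a))
print(f"mean over 5 runs: {mean_s * 1e3:.2f} ms")
```

One design note: GPU benchmarks additionally need an explicit device synchronization before reading the clock, since GPU kernels launch asynchronously; omitting it measures only launch overhead.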
The Stanford ML Group led by Andrew Ng has released its MRNet Dataset, which contains more than 1,000 annotated knee MRI scans, and announced an associated public model competition.
Researchers from Facebook, the National University of Singapore, and the Qihoo 360 AI Institute have jointly proposed OctConv (Octave Convolution), a promising new alternative to traditional convolution operations. Akin to a “compressor” for Convolutional Neural Networks (CNN), the OctConv method saves computational resources while boosting effectiveness.
In a YouTube video released today, Spot looks a lot more like a workhorse, harnessed up and hauling a box truck up a hill.
The recent RE•WORK Deep Learning in Finance Summit in London, UK, featured 46 top scientists and professors from world-leading institutions, who presented their research progress and provided a glimpse into emerging trends in the field of artificial intelligence and fintech.
After eight months of development effort, “OpenAI Five” exacted its revenge today against one of the world’s top teams in a highly anticipated best-of-three 5v5 Dota 2 showdown in San Francisco.
This morning the NAACL 2019 conference committee announced its best paper awards. Synced has prepared a summary of the winning papers.
A collaboration between researchers from China’s Beihang University and Microsoft Research Asia has produced TableBank, a new image-based dataset for table detection and recognition built with novel weak supervision from Word and LaTeX documents on the Internet.