Fei-Fei Li at Google Cloud NEXT ’17: Announcing the Google Cloud Video Intelligence API and More Cloud Machine Learning Updates
Between March 8-10, Synced was invited as media guest to attend the Google Cloud NEXT ’17 conference in San Francisco.
Graham Taylor from the University of Guelph gave a talk at the University of Toronto, summarizing current techniques used to address the issue of insufficient labeled data.
The team of Rob Fergus, currently a research scientist at Facebook AI Research, has devised two neural net models for handling unstructured data.
Summaries and recommendations of peer-reviewed papers that discuss various aspects of machine learning.
On March 6, 2017, IBM announced in New York the company’s initiative to build the first universal quantum computing system for commercial use.
At RE•WORK Summits, invited speakers present advances from the world’s leading innovators and showcase opportunities in the emerging healthcare industry.
This talk describes a dialog system architecture and explains its three main components: understanding, generation, and the dialog manager, along with the challenges each poses for machine learning.
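For readers unfamiliar with that pipeline, here is a minimal sketch of the understanding → dialog manager → generation flow; the toy keyword rules, templates, and function names are illustrative assumptions, not the system described in the talk.

```python
# Minimal sketch of a three-stage dialog architecture:
# understanding -> dialog manager -> generation.
# All rules and templates here are hypothetical illustrations.

def understand(utterance: str) -> dict:
    """NLU: map raw text to a structured intent (toy keyword rules)."""
    if "weather" in utterance.lower():
        return {"intent": "get_weather"}
    return {"intent": "unknown"}

def manage(state: dict, intent: dict) -> dict:
    """Dialog manager: update dialog state and choose the next system action."""
    state["turns"] = state.get("turns", 0) + 1
    action = "report_weather" if intent["intent"] == "get_weather" else "clarify"
    return {"action": action, "state": state}

def generate(decision: dict) -> str:
    """NLG: realize the chosen action as a surface string."""
    templates = {
        "report_weather": "It looks sunny today.",
        "clarify": "Sorry, could you rephrase that?",
    }
    return templates[decision["action"]]

state: dict = {}
print(generate(manage(state, understand("What's the weather like?"))))
```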
If thinking can be understood as the step-by-step process that it is, then we can build artificial intelligences with the potential to be as conscious as we are.
A glance at state-of-the-art research shows that neural networks still serve us well, while artificial general intelligence is not yet in sight.
Youichiro Miyake presented an initial concept for creating artificial awareness in a game AI system.
Baidu will now take the lead in building China’s National Engineering Laboratory of Deep Learning Technology and Application.
This talk summarizes the limitations of RNNs (including LSTM and GRU) from both an empirical perspective and a computational-hierarchy perspective.
This talk focuses on the future potential of deep learning with the NVIDIA Deep Learning SDK and GPU hardware families.
This paper presents a method for synthesizing a frontal, neutral-expression image of a person’s face given an input facial photograph.
Chief Scientist Lei Li of Toutiao discusses how to apply machine learning to natural language understanding and to generating machine-written news articles.
The neurosurgeons and pathologists at Michigan Medicine recently combined a powerful imaging technique with a deep learning algorithm for automatic tumor diagnosis during brain surgery.
This paper applies deep learning to a large-scale EHR dataset to extract robust patient descriptors that can be used to predict future patient diseases.
Professor Yann LeCun discussed “Predicting Under Uncertainty: The Next Frontier in AI” during a lecture at the University of Edinburgh.
There are three ways to combine DL and RL, based on three different principles: value-based, policy-based, and model-based approaches with planning.
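As a rough illustration of those three principles, the sketch below shows a tabular Q-learning update (value-based), a REINFORCE-style softmax policy update (policy-based), and value-iteration planning over a learned model (model-based); the tiny MDP sizes and step sizes are made-up assumptions, not taken from the talk.

```python
# Hedged sketch of the three principles, in plain NumPy.
import numpy as np

gamma, alpha = 0.9, 0.1
n_states, n_actions = 3, 2

# 1) Value-based: Q-learning update toward the bootstrapped target.
Q = np.zeros((n_states, n_actions))
def q_update(s, a, r, s_next):
    target = r + gamma * Q[s_next].max()
    Q[s, a] += alpha * (target - Q[s, a])

# 2) Policy-based: REINFORCE-style gradient on softmax action preferences.
theta = np.zeros((n_states, n_actions))
def policy(s):
    z = np.exp(theta[s] - theta[s].max())
    return z / z.sum()
def pg_update(s, a, G):
    grad = -policy(s)
    grad[a] += 1.0                      # gradient of log pi(a|s) w.r.t. theta[s]
    theta[s] += alpha * G * grad

# 3) Model-based: learn P(s'|s,a) and R(s,a), then plan with value iteration.
P = np.full((n_states, n_actions, n_states), 1.0 / n_states)
R = np.zeros((n_states, n_actions))
def plan(iters=50):
    V = np.zeros(n_states)
    for _ in range(iters):
        V = (R + gamma * P @ V).max(axis=1)
    return V

q_update(0, 1, r=1.0, s_next=2)
pg_update(0, 1, G=1.0)
print(Q[0], policy(0), plan())
```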
This review will go over some of the current methods that are used to visualize and understand deep neural networks.
Sample-based Monte Carlo Localization is notable for its accuracy, efficiency, and ease of use in global localization and position tracking.
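The sketch below illustrates the predict / weight / resample cycle behind sample-based Monte Carlo Localization in a toy 1-D world; the motion and sensor noise models are invented for illustration only.

```python
# Minimal 1-D Monte Carlo Localization sketch (particle filter).
import numpy as np

rng = np.random.default_rng(0)
n_particles, world_size = 500, 100.0
true_pos = 20.0
particles = rng.uniform(0, world_size, n_particles)   # global localization: uniform prior

def sense(pos, noise=2.0):
    """Noisy range measurement to a landmark at x = 0."""
    return pos + rng.normal(0, noise)

for _ in range(10):
    # 1) Predict: apply the (noisy) motion model to every particle.
    true_pos = (true_pos + 1.0) % world_size
    particles = (particles + 1.0 + rng.normal(0, 0.5, n_particles)) % world_size

    # 2) Weight: score particles by how well they explain the measurement.
    z = sense(true_pos)
    weights = np.exp(-0.5 * ((particles - z) / 2.0) ** 2)
    weights /= weights.sum()

    # 3) Resample: keep particles in proportion to their weights.
    particles = particles[rng.choice(n_particles, n_particles, p=weights)]

print(f"true={true_pos:.1f}  estimate={particles.mean():.1f}")
```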
In a tech talk at the University of Toronto, NVIDIA shared updates on its self-driving car research and end-to-end learning.
A University of Toronto Ph.D. student named Hang Chu recently published his project, a song completely composed and vocalized by Artificial Intelligence.
The 2017 award will be given to the most influential paper from the Sixteenth National Conference on Artificial Intelligence, held in 1999 in Orlando, USA.
The recipient of this year’s Outstanding Paper Award utilizes prior domain knowledge to constrain the output space to a specific learning structure rather than a simple mapping from input to output.
Synced had a special interview with Andrew Ng to learn more about the progress of AI research at Baidu, and why and how he became an AI expert.
This study is the first to perform extensive personal iPOP of an individual through healthy and diseased states. This paper was published in Cell, 2012.
The future of AI belongs to scalable methods, search, and learning, as presented by Richard Sutton in seminars at the University of Toronto.
A computer science graduate student named Kaheer Suleman founded a company called Maluuba, with an intelligent program he invented as its product.
The Machine Learning Advances and Applications Seminar at the University of Toronto addresses how to use fast weights to effectively store temporary memories.
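A minimal sketch of the fast-weights idea, assuming the decaying outer-product memory A ← λA + ηhhᵀ from Ba et al.’s “Using Fast Weights to Attend to the Recent Past”; the dimensions and rates below are illustrative, not the seminar’s settings.

```python
# Fast weights as a rapidly decaying associative memory over recent hidden states.
import numpy as np

d, lam, eta = 8, 0.95, 0.5
A = np.zeros((d, d))                 # fast weights: temporary outer-product memory

def write(h):
    """Store hidden state h into the fast-weight matrix (decay, then add)."""
    global A
    A = lam * A + eta * np.outer(h, h)

def read(query):
    """Retrieve from memory by multiplying the query with the fast weights."""
    return A @ query

rng = np.random.default_rng(1)
h1, h2 = rng.normal(size=d), rng.normal(size=d)
write(h1)
write(h2)
# A recently stored pattern is recalled much more strongly than a random query.
print(read(h2) @ h2, read(rng.normal(size=d)).sum())
```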
Lukasz Kaiser, Senior Research Scientist at Google Brain, gives a presentation on developments in natural language processing techniques at the 2017 AI Frontier Conference.
2016 was full of ups, downs, and surprises: for the world and for AI.
Human-level control through deep reinforcement learning: The theory of reinforcement learning provides a normative account, deeply rooted in psychological and …
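For context, the control setting in that paper rests on the Q-learning target with a periodically synced target network, y = r + γ·maxₐ′ Q_target(s′, a′); the toy linear “networks” below are a stand-in sketch, not the paper’s convolutional architecture.

```python
# Minimal sketch of the DQN-style TD target with a separate target network.
import numpy as np

gamma = 0.99
rng = np.random.default_rng(0)
W_online = rng.normal(size=(4, 2))   # Q(s, .) = s @ W, 4-dim state, 2 actions
W_target = W_online.copy()           # frozen copy, synced every C steps

def q(W, s):
    return s @ W

def td_target(r, s_next, done):
    return r if done else r + gamma * q(W_target, s_next).max()

s, a, r, s_next = rng.normal(size=4), 0, 1.0, rng.normal(size=4)
y = td_target(r, s_next, done=False)
loss = (y - q(W_online, s)[a]) ** 2   # squared TD error minimized by SGD
print(loss)
```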
Professor Richard Sutton is considered to be one of the founding fathers of modern computational reinforcement learning. He made several significant contributions to the field, including temporal difference learning, policy gradient methods, and the Dyna architecture.
This talk focuses on engineering techniques for large-scale NMT systems. It helps us understand how GPUs work and …
A summary of the 2016 panel with Yoshua Bengio, Geoffrey Hinton, Richard Sutton, and Ruslan Salakhutdinov: I was at the 2016 “Machine Learning …
Born in Oakland, California on October 29, 1949, John Markoff grew up in Palo Alto, California and graduated from Whitman …