In an exclusive interview with Synced at NeurIPS, members of the University of Toronto and Vector Institute team led by Assistant Professor David Duvenaud discussed their Best Paper Award-winning submission Neural Ordinary Differential Equations — a math-based approach to designing deep learning models that is stimulating discussion across the machine learning community.
A founding member of Google Brain and the mind behind AutoML, Quoc Le is an AI natural: he loves machine learning and loves automating things. As a PhD student at Stanford University in 2011, Le used millions of YouTube thumbnails to develop an unsupervised learning system that recognized cats.
As Chinese Internet giant Baidu has expanded from search to mobile apps, cloud services, and emerging business sectors like autonomous driving and voice assistants, it has correspondingly beefed up its research efforts, particularly in AI, to keep pace with growing security threats.
Robert S. Warren, MD is a Professor of Surgery and a specialist in gastrointestinal and liver cancer. Dr. Warren joined UCSF Medical Center in 1988. Highly respected by his peers, Dr. Warren was named to the list of U.S. News “America’s Top Doctors,” a distinction reserved for the top 1% of physicians in the nation for a given specialty.
MORE Health is a Silicon Valley-based company that provides access to top international physicians for patients facing critical illnesses such as cancer or heart disease. The company was founded in 2013, and recently took a leap forward by partnering with Houston-based Melax Technologies…
Personal computers and mobile devices are in their heyday. Researchers are flocking to standalone AI, focusing on how to build self-learning intelligent systems. The interfaces for wearables, meanwhile, are evolving from smart screens to gesture commands, like those often seen in AR and VR commercials.
Professor Richard Sutton is considered one of the founding fathers of modern computational reinforcement learning. He has made several significant contributions to the field, including temporal difference learning, policy gradient methods, and the Dyna architecture.
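To give a flavor of the first of those contributions: temporal difference learning updates a value estimate toward a bootstrapped one-step target after every transition, rather than waiting for an episode's final outcome. The sketch below is a minimal illustrative example, not code from Sutton's work; the toy random-walk environment, step size, and episode count are all hypothetical choices.

```python
# Tabular TD(0) value estimation on a toy 5-state random walk.
# All parameters (ALPHA, GAMMA, episode count) are illustrative.
import random

N_STATES = 5   # states 0..4; each episode starts in the middle
ALPHA = 0.1    # step size
GAMMA = 1.0    # undiscounted episodic task

def run_episode(V):
    s = N_STATES // 2
    while True:
        s_next = s + random.choice([-1, 1])  # move left or right
        if s_next < 0:             # left terminal state, reward 0
            r, done = 0.0, True
        elif s_next >= N_STATES:   # right terminal state, reward 1
            r, done = 1.0, True
        else:
            r, done = 0.0, False
        # TD(0): nudge V[s] toward the bootstrapped target r + gamma * V[s']
        target = r if done else r + GAMMA * V[s_next]
        V[s] += ALPHA * (target - V[s])
        if done:
            return
        s = s_next

random.seed(0)
V = [0.0] * N_STATES
for _ in range(5000):
    run_episode(V)
print([round(v, 2) for v in V])  # estimates drift toward 1/6, 2/6, ..., 5/6
```

Because each update uses the current estimate of the next state's value as part of its target, TD learning can improve its predictions online, one step at a time.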