Machine Learning & Data Science Research Talk Review

2020 in Review With Brian Tse

Synced has invited Mr. Brian Tse to share his insights about the current development and future trends of artificial intelligence.

In 2020, Synced covered many memorable moments in the AI community: the current situation of women in AI, the birth of GPT-3, AI's fight against COVID-19, heated debates around AI bias, MT-DNN surpassing human baselines on GLUE, AlphaFold cracking a 50-year-old biology challenge, and more. To close the chapter on 2020 and look forward to 2021, we are introducing a year-end special issue, following Synced's tradition of looking back at recent AI achievements and exploring possible future trends with leading AI experts.


Meet Brian Tse

Brian Tse focuses on researching and improving cooperation between great powers on AI safety, governance, and stability. He is a Policy Affiliate at the University of Oxford’s Center for the Governance of AI, Coordinator at the Beijing AI Academy’s AI4SDGs Cooperation Network, and Senior Advisor at the Partnership on AI. He has advised organizations such as Google DeepMind, OpenAI, Baidu, Tencent’s WeBank, the Carnegie Endowment for International Peace, and Tsinghua University’s World Peace Forum. Brian has served on the program committees of the SafeAI workshops at AAAI and IJCAI, the AI and Formal Methods workshop at ICFEM, and the IEEE P2894 XAI Working Group on privacy-preserving machine learning.

The Best AI Technology Developed in the Past 3 to 5 Years: “AlphaFold”

I define the goodness of AI technologies in terms of whether they enable widely shared benefits and minimize negative impacts on the long-term trajectory of our global civilization. AlphaFold demonstrates how AI research can drive new scientific discoveries and produce significant benefits for healthcare and environmental protection. The system was developed by the Science team at DeepMind.

As Linus Pauling said in 1960: “we shall be able to get a more thorough understanding of the nature of disease in general by investigating the molecules that make up the human body.” Scientists have long been interested in determining the structures of proteins because a protein’s form is thought to dictate its function. AI methods, in particular deep learning, help predict a protein’s shape computationally from its genetic code alone, rather than determining it through costly experimentation.

Apart from benefits to healthcare, protein design can enable advances in biodegradable enzymes, which could help manage pollutants like plastic and oil and break down waste in ways that are more friendly to our environment.

The Most Promising AI Technology in the Next 1 to 3 Years: “Natural Language Processing & Bayesian Approaches”

There is significant excitement about the promise of natural language processing, prompted by the development of transformer models such as OpenAI's GPT-3 and BAAI/Tsinghua's Qingyuan CPM. These models can generate text that human evaluators have difficulty distinguishing from text written by humans. Beyond the many beneficial applications, the authors of GPT-3 also called for research on mitigating risks such as misinformation, spam, and phishing.

Another development I am excited about is Bayesian approaches, which might offer advantages over deep learning in the longer term. For example, probabilistic programming language (PPL) systems can be provided with prior knowledge about subjects such as physics and linguistics so that they can learn faster. They can also be safer, because they calibrate uncertainty and can better detect distributional shift. In other words, this can help ensure that AI systems reliably function as intended and that possible risks are mitigated. Logic and probability are ancient, foundational disciplines whose unification holds significant potential for the field of AI. However, many Bayesian models do not currently work well for most applications, so PPLs could also remain more of a niche research area that would only see a payoff on research investment over the timescale of decades.
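The Bayesian ideas above can be made concrete with a minimal sketch (not a full PPL): prior knowledge is encoded as a prior distribution, and the posterior carries calibrated uncertainty that shrinks as evidence accumulates. The Beta-Bernoulli model here is a standard textbook example, not drawn from any specific system mentioned in the article, and the numbers are illustrative assumptions.

```python
def beta_posterior(alpha, beta, observations):
    """Update a Beta(alpha, beta) prior with a list of 0/1 observations.

    Conjugacy makes the update exact: add successes to alpha,
    failures to beta.
    """
    heads = sum(observations)
    tails = len(observations) - heads
    return alpha + heads, beta + tails

def posterior_mean_and_var(alpha, beta):
    """Mean and variance of a Beta(alpha, beta) distribution."""
    mean = alpha / (alpha + beta)
    var = (alpha * beta) / ((alpha + beta) ** 2 * (alpha + beta + 1))
    return mean, var

# Encode prior knowledge: we believe the process is roughly fair.
prior_alpha, prior_beta = 10.0, 10.0

a, b = beta_posterior(prior_alpha, prior_beta, [1, 1, 0, 1, 0, 1, 1, 1])
mean, var = posterior_mean_and_var(a, b)
# The posterior mean moves toward the data, while the variance
# quantifies remaining uncertainty -- unlike a bare point estimate,
# which is what a standard deep network would report.
```

A calibrated posterior like this is also what enables distributional-shift detection in principle: observations with very low predictive probability under the model signal that the world has changed.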

The Biggest Challenge in the Field of AI: “Next-Generation Paradigm of Deep Learning”

The biggest challenge is finding a next-generation paradigm of deep learning that is safer, more robust, and more interpretable than the current one. There are now more than 1,000 incident reports related to the use of AI systems, broadly defined. The most obvious symptom of the insufficiency of deep learning is adversarial examples.
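To see why adversarial examples are such a telling symptom, consider a minimal sketch of the fast gradient sign method (FGSM) applied to a toy linear classifier. The weights and inputs below are illustrative assumptions; the point is only that a small, bounded perturbation in the direction of the loss gradient's sign can flip the model's decision.

```python
def predict(w, x, bias):
    """Linear score: positive -> class 1, negative -> class 0."""
    return sum(wi * xi for wi, xi in zip(w, x)) + bias

def fgsm_perturb(w, x, y, epsilon):
    """Shift each feature by epsilon in the direction that increases
    the loss for true label y (the sign of the loss gradient).

    For a margin loss and label y in {0, 1}, the gradient w.r.t. x_i is
    proportional to -sign * w_i, where sign = +1 for y=1 and -1 for y=0.
    """
    sign = 1 if y == 1 else -1
    return [xi + epsilon * (-1 if sign * wi > 0 else 1)
            for xi, wi in zip(x, w)]

w = [2.0, -3.0, 1.0]
x = [0.5, 0.1, 0.4]
score_clean = predict(w, x, bias=0.0)          # 1.1 -> confidently class 1
x_adv = fgsm_perturb(w, x, y=1, epsilon=0.5)
score_adv = predict(w, x_adv, bias=0.0)        # negative -> flipped to class 0
```

Deep networks are far more complex than this linear model, but the same mechanism applies: the gradient tells an attacker exactly which small input change does the most damage.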

More fundamentally, the majority of AI research is based on the oversimplified idea that there is a fixed, known objective to be optimised. For example, in standard deep learning we define a loss function that fixes the cost of making each type of error. This and other optimisation practices have led to well-observed flaws: Google's algorithm classifying a person as a gorilla, social media platforms optimising for click-through rate, increased safety risks for food-delivery workers, self-driving car accidents caused by false positives in the model, and more. Emerging research communities seek to mitigate these issues by focusing, for instance, on fairness, safety, and interpretability. In his latest book Human Compatible, Prof. Stuart Russell of UC Berkeley recommends the principle that “the machine is initially uncertain about what the human preferences are” and should actively learn those preferences. At the World AI Conference in 2020, Prof. Andrew Yao of Tsinghua University endorsed a similar principle and encouraged the exploration of theoretical areas such as probability theory and game theory.
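The "fixed, known objective" point can be illustrated with a minimal sketch: once a loss hard-codes the cost of each error type, the optimal decision threshold follows mechanically from those costs, so getting the costs wrong silently misaligns every downstream decision. The cost numbers below are illustrative assumptions.

```python
def optimal_threshold(cost_false_positive, cost_false_negative):
    """Bayes-optimal threshold on p = P(y=1 | x).

    Predict 1 exactly when the expected cost of a false negative
    exceeds that of a false positive:
        p * cost_fn > (1 - p) * cost_fp
    which rearranges to p > cost_fp / (cost_fp + cost_fn).
    """
    return cost_false_positive / (cost_false_positive + cost_false_negative)

# If we (implicitly) fix both error costs as equal, we get the
# familiar 0.5 threshold baked into most classifiers.
symmetric = optimal_threshold(1.0, 1.0)    # 0.5

# If false negatives are actually 9x costlier (e.g. missed diagnoses),
# the same model should act at a much lower threshold.
asymmetric = optimal_threshold(1.0, 9.0)   # 0.1
```

Russell's proposal inverts this picture: instead of the designer fixing the costs once and for all, the system remains uncertain about them and learns them from human behaviour.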

The Latest Noteworthy Development: “Causal Influence Diagrams for AI Safety, Learning from Human Feedback, ML for Climate Change, and More”

In the past several years, promising approaches and techniques have emerged for ensuring that safe and beneficial AI can be developed:

  • The use of causal influence diagrams for understanding agent incentives and modelling AI safety frameworks. Causal influence diagrams are a well-established type of graphical model for representing decision-making problems, and graphical criteria on these diagrams can be used to determine both an agent's observation incentives and its intervention incentives. (Everitt, T., Carey, R., Langlois, E., Ortega, P., & Legg, S. (2021). Agent Incentives: A Causal Perspective.)
  • Much recent work has focused on how an agent can learn what to do from human feedback. Shah et al. explore the paradigm of assistance, with simple experiments showing desirable qualitative behaviours of assistive agents that cannot be obtained by agents based on reward learning. (Shah, R., Freire, P., Alex, N., Freedman, R., Krasheninnikov, D., Chan, L., Dennis, M., Abbeel, P., Dragan, A., & Russell, S. Benefits of Assistance over Reward Learning.)
  • The development of Safety Gym, a suite of environments and tools for measuring progress towards reinforcement learning agents that respect safety constraints while training. (Ray, A., Achiam, J., & Amodei, D. (2019). Benchmarking safe exploration in deep reinforcement learning. arXiv preprint arXiv:1910.01708.)
  • The increasing popularity of federated learning and its applications in a variety of sectors. (Yang, Q., Liu, Y., Cheng, Y., Kang, Y., Chen, T., & Yu, H. (2019). Federated learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 13(3), 1-207.)
  • The use of machine learning for reducing greenhouse gas emissions, with applications ranging from smart grids to disaster management. (Rolnick, D., Donti, P. L., Kaack, L. H., Kochanski, K., Lacoste, A., Sankaran, K., … & Luccioni, A. (2019). Tackling climate change with machine learning. arXiv preprint arXiv:1906.05433.)


