The importance of artificial intelligence (AI) technology does not need to be re-emphasized. However, while new technologies have historically been shaped by the market, society needs to proactively contemplate AI regulation, simply because the technology's social impact may otherwise quickly grow beyond our control. To address such pressing issues, the International Telecommunication Union (ITU) and the XPRIZE Foundation, in partnership with concerned UN agencies, organized the "AI for Good Global Summit," held June 7-9, 2017 in Geneva, Switzerland. The gathering sought to initiate discussions on the deployment of AI with regard to the 17 Sustainable Development Goals (SDGs).
Prominent researchers like Peter Norvig, Director of Research at Google; Stuart Russell, Director of the UC Berkeley Artificial Intelligence Lab; Yoshua Bengio, Head of the Montreal Institute for Learning Algorithms (MILA); Fei-Fei Li, Director of the Stanford Artificial Intelligence Lab; Jürgen Schmidhuber, Co-Director of the Dalle Molle Institute for Artificial Intelligence; and Gary Marcus, Professor of Psychology at NYU, were joined by cross-sector panelists and attendees from government, industry, UN agencies, civil society and the research community.
The discussions convened around the question "How can we make AI an effective tool for social good?" This is a grand proposition, and while participants tried to avoid overgeneralization, countless open-ended questions emerged. What do we mean by "artificial intelligence"? Will intelligent systems and robots cause humanity to progress or regress? How can an unpredictable and exponentially growing technology be made to stay in step with human values?
To Start, We First Need to Bridge Gaps
AI technology is not the most self-explanatory thing in the world. Thus arises a triple challenge: translating the technology for multi-sector stakeholders, engaging the interested public, and educating those who lack information on AI. Nicholas Davis, Head of Society and Innovation at the World Economic Forum, reminded the audience: "Not everyone has access to the discussions here in Geneva. There are people who don't know these discussions are going on, but they will be affected."
In addition to bridging the gap in understanding, there is also the challenge of aligning end-goals for AI technology. Academics are research-driven and focus on solving the AI problems at hand; industry is competition-driven and sees the exponential development of AI as a means to become more competitive; meanwhile, many governments remain disoriented by the technology. Even in 2016, the European Parliament was still referencing science-fiction writer Isaac Asimov's three laws of robotics, and "considering transferring human rights to robots [once] artificial intelligence (AI) becomes so powerful that droids end up thinking for themselves."
Stuart Russell, the well-known computer scientist who co-authored the standard AI curriculum textbook, Artificial Intelligence: A Modern Approach, considers much of the contemporary media coverage of AI technology to be “misinforming” or simply wrong.
Asymmetrical information is a serious problem, especially when it comes to planning AI governance. Governments will need to regulate the deployment of AI technology in a global commercial setting, encompassing the impact of automation on jobs, the use of autonomous weapons, cyber security, trade tariffs, cross-border data access and much more.
Lan Xue, Professor and Dean at Tsinghua University's School of Public Policy and Management, addressed the issue with years of policy expertise: "[There are three challenges for governments:] 1) regime complex: AI is such a big problem that it is very difficult for multiple, sometimes hierarchical, levels of government to manage; 2) cycle gaps: due to the structural complexity of political regimes, AI technology develops much faster than governance cycles can process; 3) fragmentation of ethics: it is difficult to align moral and ethical values in a global context."
The problems at hand cannot be solved by linear logic. Xue suggested "adaptivity" and called on attendees to accept ambiguity and gaps in understanding. We must listen to one another, even if the messages are not yet fully translatable across sectors and industries, or even among individuals.
Artificial Intelligence Can Help or Hurt Humanity
AI is a tool with the potential to help humanity solve many problems. The summit emphasized 17 SDGs in the following fields:
- No poverty: Map poverty with predictive big data analytics
- Zero Hunger: Increase agricultural productivity
- Good Health and Well Being: Analyze vast amounts of healthcare data
- Quality Education: Revolutionize classrooms with individualized learning
- Gender Equality: Pinpoint gender inequality, drive balanced hiring
- Clean Water and Sanitation: Improve the efficiency of clean-water provision
- Affordable and Clean Energy: Improve photovoltaic energy capture
- Decent Work and Economic Growth: Increase productivity through intelligent automation
- Industry, Innovation and Infrastructure: Help drive industry innovation
- Reduced Inequalities: Build a more inclusive society (e.g. disability robotics)
- Sustainable Cities and Communities: Power urban-planning decisions through sensor data
- Responsible Consumption and Production: Predict optimal production levels to reduce waste
- Climate Action: Model climate change to predict disasters
- Life Below Water: Track illegal fishing through pattern-recognition software
- Life on Land: Outwit poachers and monitor species’ health
- Peace, Justice and Strong Institutions: Reduce discrimination and corruption in government
- Partnerships for the Goals: Multi-sectoral collaboration is essential
AI can help 21st-century entrepreneurs and governments drive and expedite progressive social change, for example by promoting AI-augmented mobile phones for health monitoring, said World Health Organization Director-General Margaret Chan in her keynote speech.
However, there is also a downside narrative. The coming decades will be especially difficult for underprivileged children in developing countries. Christopher Fabian, Co-Founder of UNICEF Innovation, addressed the audience with a sense of urgency: "We are facing increasing challenges following the advance of automation, the world is becoming increasingly complicated, and disparity will rise further due to technological advancement." UNICEF is a UN agency that helps children living in the bottom quintiles of the population, and Fabian believes technical innovations like machine learning and predictive analytics can help its work. This is not about new-age philanthropy, he said; rather, it is about maintaining a baseline for human survival.
Concerns were also raised regarding a number of more insidious scenarios. These included dangers arising from AI in political propaganda, direct digital payment theft, information warfare, and issues as immediate as AI-generated targeted marketing that peddles alcohol to someone whose social media profile suggests depression.
"Deliberate misuse of AI is potentially a much worse problem than malware," said Stuart Russell, who is especially concerned about the use of AI in lethal autonomous weapons systems (LAWS). According to UC Berkeley, these are systems that "select and engage targets without human intervention [and] might include, for example, armed quadcopters that can search for and eliminate enemy combatants in a city, but do not include cruise missiles or remotely piloted drones for which humans make all targeting decisions."
At the UN's third Convention on Certain Conventional Weapons (CCW) in Geneva, Russell called for a ban on autonomous weapons. He also initiated the "Autonomous Weapons: An Open Letter from AI & Robotics Researchers" appeal, which has been endorsed by 20,000 researchers and others, including Stephen Hawking, Elon Musk, and Steve Wozniak.
Trend: Individual Entrepreneurs as Problem Solvers
In his keynote speech, XPRIZE Foundation CEO Marcus Shingles identified a new type of problem solver: the individual entrepreneur. To Shingles, our greatest challenge is the struggle between "linearly wired culture" and the "exponential" trends in AI development. According to renowned AI and deep learning researcher Yoshua Bengio, even if we were to stop AI research today, humanity would still have an estimated 20 years "for reaping the benefits of current technology." Many question the ability of standardized social processes to deal with the complexity of today's world, and believe that we are approaching an inflection point.
Many contemporary entrepreneurs were born in the digital age, and have solid industry experience and global exposure. Philanthropic organizations such as the Bill & Melinda Gates Foundation or the Chan Zuckerberg Initiative have impact comparable to political influence, garnering high exposure on social media and attracting technical experts eager to work for a cause. For example, Meta, a Toronto-based startup recently acquired by the Chan Zuckerberg Initiative, is working on data transparency within the scientific community.
Organizational Information: For Further Reading
Digital Geneva Convention: An initiative proposed by Microsoft that calls on the UN to create regulations for the security of cyberspace. It harkens back to the spirit of the Fourth Geneva Convention, which protects civilians during war.
AI4All: A US nonprofit organization initiated by Fei-Fei Li and students of Stanford University, working to increase diversity in AI. The organization supports educational programs around the country that give underrepresented high school students early exposure to humanistic AI, or AI that addresses societal problems. Participating universities include Stanford, Princeton, UC Berkeley, and Carnegie Mellon.
IEEE Code of Conduct: Protocols regulating the professional and ethical conduct of technologists, including AI researchers: “We, the members of IEEE, in recognition of the importance of our technologies in affecting the quality of life throughout the world, and in accepting a personal obligation to our profession, its members and the communities we serve, do hereby commit ourselves to the highest ethical and professional conduct.”
Data Collection Tax: A possibility proposed by an AI for Good Global Summit audience member, seeking to discourage large corporations from hoarding individual citizens' data.
Journalist: Meghan Han | Editor: Michael Sarazen