
Google Researchers Hail New AI Principles Designed to Halt AI Weaponization

Since becoming Google CEO in 2015, Sundar Pichai has written just two Google blog posts, each seemingly seeking to quell criticism of the company. The first responded to last August’s controversial memo from a senior company engineer condemning Google’s diversity initiatives. The latest, published yesterday, introduces ethical principles for Google’s artificial intelligence research and product development, including a commitment to prohibit the weaponization of the company’s AI research.

Over the past three months, criticism and protests have been mounting over Google’s participation in Project Maven, a Pentagon pilot program to build machine learning models to detect and categorize objects in drone footage provided by the US Department of Defense.

Over 3,100 Google employees — including dozens of senior engineers — signed a petition protesting the company’s involvement in Project Maven: “We believe that Google should not be in the business of war. Therefore we ask that Project Maven be cancelled.”

Last week, Google Cloud CEO Diane Greene announced the company would withdraw from Project Maven when the current contract ends in 2019. The news triggered speculation that Google was also drawing up a set of ethical principles related to its AI research and development.

So here we are. Pichai yesterday introduced seven objectives to guide Google’s approach to AI applications, namely that they should:

  • Be socially beneficial;
  • Avoid creating or reinforcing unfair bias;
  • Be built and tested for safety;
  • Be accountable to people;
  • Incorporate privacy design principles;
  • Uphold high standards of scientific excellence;
  • Be made available for uses that accord with these principles.

Pichai also stressed that Google will not design or deploy AI in technologies that can cause “overall harm,” injure people, gather or use information for surveillance, or contravene international law or human rights.

“We want to be clear that while we are not developing AI for use in weapons,” wrote Pichai, “we will continue our work with governments and the military in many other areas.” Pichai cited Google’s continuing collaborations with state players in areas such as cybersecurity, training, military recruitment, veterans’ healthcare, and search and rescue.

Sundar Pichai speaks at the Google I/O developer conference on May 8 at Shoreline Amphitheater in Mountain View, California.

Google’s commitment to AI ethics was favourably received and eased concerns within the company. Many Google staff and respected AI researchers praised the blog post on Twitter and Facebook.

Google Cloud Chief Scientist Fei-Fei Li referred to the AI principles as “an opportunity to reflect on our values and a reminder of our responsibility to develop technology that makes a positive impact on everyone, including our users, our community and the world.”

Google AI head Jeff Dean tweeted: “As AI is applied to more and more problems through society, it’s important to think carefully about the principles by which this is done. We’ve just shared Google’s AI principles, which shows how we’ve been thinking about these issues.” Dean also introduced Google’s Responsible AI Practices, a practical guide to help AI developers implement the new principles.

However, not everyone applauded Google’s exit from Project Maven. In a piece published in Nature this week, Gregory C. Allen, an Adjunct Fellow with the Technology and National Security Program, argued that it is inevitable that advanced AI will be adopted for military use, and that tech giants like Google should therefore use their capabilities to help: “The ethical choice for Google’s artificial intelligence engineers is to engage in select national-security projects, not to shun them all,” wrote Allen.

The New York Times and ZDNet, meanwhile, warned that even if Google says it will do no harm, its open-sourced AI research could still be misused by others for malicious purposes. As Dr. Li suggested in her tweet, “this is only the beginning of this journey, not the conclusion. Significant challenges and unsolved problems remain. It will take all of our participation and effort to ensure that AI is a human-centered technology.”


Journalist: Tony Peng | Editor: Michael Sarazen
