A Look at Google’s Efforts to Earn Public Trust Through ML Fairness and Responsible AI

For years, we’ve been hearing about major AI initiatives at international enterprises. Companies afraid of being left behind in the AI revolution increased their AI implementation by a whopping 270 percent from 2015 to 2019, according to a Gartner report that surveyed more than 3,000 executives in 89 countries. Beyond industry, AI is also enabling our modern smart homes and has even found its way into gaming and leisure activities.

AI’s increasing presence has attracted no small amount of criticism, often with good reason. Last year, an AI-powered “DeepNude” web project that enabled users to remove people’s clothing from images (trained mostly on images of women) drew sharp criticism and was taken down by its developers. A few months ago “Genderify,” an AI-powered tool designed to identify a person’s gender by analyzing their name, username or email address, triggered a backlash on social media and was also shut down. Algorithms used in loan and job applications and in predictive policing have also come under attack.

Tech giants, in response to growing public concerns over AI systems invading privacy and perpetuating racial and gender biases, have in recent years worked to address fairness issues not only in datasets but also in algorithm and model architecture design. In addition to its research efforts to reduce model bias, Google has implemented a company-wide ML Fairness and Responsible AI ecosystem.

“Trust in AI systems is becoming, if not already, the biggest barrier for enterprises — as they start to move from exploring AI or potentially piloting or doing some proof of concept works into deploying AI into a production system,” said Tracy Pizzo Frey, director of strategy for Google Cloud AI & Industry Solutions, in an ML Fairness press briefing this week.

Tracy Pizzo Frey: Director of Strategy, Google Cloud AI & Industry Solutions

Frey says the first step to systematically applying all the critical components of AI responsibility, and sometimes contradictory ethical theories, to Google’s AI development and applications was to build a set of company-wide AI principles. Google researchers began developing the company’s AI Principles in the summer of 2017, spent more than a year crafting and iterating on them, and published them in June 2018.

The AI Principles’ first part comprises seven commitments — that AI should be socially beneficial, avoid creating or reinforcing unfair biases, be built and tested for safety, be accountable to people, incorporate privacy design principles, uphold high standards of scientific excellence, and that the AI technologies developed “be made available for uses that accord with these principles.”

In the second part, Google designates four areas where it will not design or deploy AI. These include technologies that cause or are likely to cause overall harm, weapons or other technologies whose principal purpose or implementation is to cause or directly facilitate injury to people, technologies that gather or use information for surveillance violating internationally accepted norms, and technologies whose purpose contravenes widely accepted principles of international law and human rights.

“Our AI Principles serve as the universal standard for how we approach the development of the technology,” Frey told Synced. “Anything that we feel may be in conflict with those principles will undergo an AI Principle review.”

To ensure the principles are actually implemented, Google ML Fairness and Responsible AI Product Lead Tulsee Doshi and her team maintain centralized AI Principles knowledge, expertise, guidance and educational content, which they use to guide projects and advise research and product teams.

A hub of Google’s Responsible AI ecosystem, Doshi’s team regularly reports to and seeks counsel from senior executives and advisors on sensitive cases and topics. Her team’s responsibilities also include ethics advisory, societal research, technical research engagements and foundational ML research.

The ultimate goal of the Fairness and Responsible AI team is to share their expertise across all Google product areas, and to this end they constantly collect feedback from product teams. After the team shares their expertise, the product teams then take ownership of whether and how to implement and manage any AI Principles processes.

“Any Googler can actually request an AI Principle review for a new product, for a research paper, for a partnership, or any sort of idea that they have,” Doshi explained at the press briefing.

During the process, a central review team will identify the relevant AI principles and Google experts who have expertise in that area. After identifying the potential benefits and harms of a particular project, the review team and product team will determine whether the project should launch, and if so, what ethical and thoughtful practices should be taken into consideration.

The Cloud team adopted a review process similar to the AI principles review even before the official release of the Google AI Principles, Frey says. “We’ve been at this work now for two and a half years, and we’ve learned a lot about both what we are able to do, the limits of it, and how we can iterate over time.”

The Cloud team has some unique and layered considerations, Frey explains. They don’t necessarily build a product and offer it directly to consumers. Instead, they sell AI technologies that may be packaged or pieced together by an enterprise into a product designed to meet its own needs. The Cloud team therefore needed to build its own governance processes based on the AI Principles.

In a bid to provide rigorous evaluations in the often flexible environments where new technologies take shape, Frey’s team created two connected but purposefully distinct review bodies. The first covers early-stage customer engagements, and the second serves as a thorough product review which performs risk and opportunity assessments across each principle.

Last year, a Googler ran an image of themselves through the Cloud Vision API and was misgendered in the results. The issue was raised with the Cloud Vision team, which launched an AI Principles review and investigation and concluded that the result violated the second AI Principle: “avoid creating or reinforcing unfair bias.”

This year, the Cloud team decided to remove gender labels from the API altogether. Frey says they believe “the impact of misidentification exacerbates or creates unfair assumptions that can restrict or harm those who do not look stereotypically male or female, or who are a gender non-conforming person.”
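For developers, the change surfaces in the annotations the API returns: a detected person is now labelled simply as a person rather than as a man or woman. Below is a minimal sketch, assuming the google-cloud-vision Python client library and a hypothetical local image file, of how a caller would observe this:

```python
# Minimal sketch: requesting label annotations from the Cloud Vision API.
# Assumes the google-cloud-vision library and valid API credentials;
# "portrait.jpg" is a hypothetical local image.
from google.cloud import vision

def detect_labels(path: str) -> list[str]:
    """Return the label descriptions Cloud Vision assigns to a local image."""
    client = vision.ImageAnnotatorClient()
    with open(path, "rb") as f:
        image = vision.Image(content=f.read())
    response = client.label_detection(image=image)
    return [label.description for label in response.label_annotations]

# After the change, a portrait photo yields non-gendered labels such as
# "Person", "Face" or "Smile" rather than "Man" or "Woman".
print(detect_labels("portrait.jpg"))
```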

We can expect to see more such changes, which reflect the dynamic nature of society and our understanding. In April, Google AI announced a new approach that uses a dramatically different paradigm to address gender bias in translation, rewriting or post-editing an initial translation; and a Google AI research paper published earlier this month studied gender correlations in the language models BERT and ALBERT and formulated a related series of best practices.
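The paper’s exact methodology isn’t reproduced here, but one common way to probe such correlations, sketched below with the Hugging Face transformers library (an assumption; the template sentence and occupation list are purely illustrative), is to compare the probabilities a masked language model such as BERT assigns to gendered pronouns in otherwise identical sentences:

```python
# Illustrative sketch, not the Google paper's code: probe gender correlations
# in BERT by comparing pronoun probabilities in an occupation template.
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

for occupation in ["nurse", "engineer", "teacher", "mechanic"]:
    # Restrict predictions to the two pronouns and compare their scores.
    results = fill(f"[MASK] works as a {occupation}.", targets=["he", "she"])
    scores = {r["token_str"]: round(r["score"], 4) for r in results}
    print(occupation, scores)

# Strongly skewed he/she scores across occupations are one signal of the
# gender correlations such studies aim to measure and mitigate.
```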

“It’s so important that we are able to then step back and say this result no longer serves us, and it no longer serves our community. And so it’s imperative that we change it,” Frey stresses.


Reporter: Yuan Yuan | Editor: Michael Sarazen


Synced Report | A Survey of China’s Artificial Intelligence Solutions in Response to the COVID-19 Pandemic — 87 Case Studies from 700+ AI Vendors

This report offers a look at how China has leveraged artificial intelligence technologies in the battle against COVID-19. It is also available on Amazon Kindle. Along with this report, we also introduced a database covering an additional 1428 artificial intelligence solutions across 12 pandemic scenarios.

Click here to find more reports from us.


We know you don’t want to miss any news or research breakthroughs. Subscribe to our popular newsletter Synced Global AI Weekly to get weekly AI updates.
