
AI-Powered ‘Genderify’ Platform Shut Down After Bias-Based Backlash


Just hours after making waves and triggering a backlash on social media, Genderify — an AI-powered tool designed to identify a person’s gender by analyzing their name, username or email address — has been completely shut down.

Launched last week on the new-product showcase website Product Hunt, the platform was pitched as a “unique solution that’s the only one of its kind available in the market,” enabling businesses to “obtain data that will help you with analytics, enhancing your customer data, segmenting your marketing database, demographic statistics,” according to Genderify creator Arevik Gasparyan.

Spirited criticism of Genderify quickly took off on Twitter, with many decrying what they perceived as built-in biases. Entering the word “scientist,” for example, returned a 95.7 percent probability that the person was male and only a 4.3 percent chance they were female. Ali Alkhatib, a research fellow at the Center for Applied Data Ethics, tweeted that when he typed in “professor,” Genderify predicted a 98.4 percent probability of male, while the word “stupid” returned a 61.7 percent female prediction. In other cases, adding a “Dr” prefix to frequently used female names resulted in male-skewed assessments.


The Genderify website included a section explaining how it collected its data from sources such as governmental records and social network information. Before the shutdown, the Genderify team tweeted: “Since AI trained on the existing data, this is an excellent example to show how bias is the data available around us.”


Issues surrounding gender and other biases in machine learning (ML) systems are not new and have raised concerns as more and more potentially biased systems are being turned into real-world applications. AI Now Institute Co-founder Meredith Whittaker seemed shocked that Genderify had made it to a public release, tweeting, “No fucking way. Are we being trolled? Is this a psyop meant to distract the tech+justice world? Is it cringey tech April fool’s day already? Or, is it that naming the problem over and over again doesn’t automatically fix it if power and profit depend on its remaining unfixed?”

Last month, NVIDIA Director of Machine Learning Research and California Institute of Technology Professor Anima Anandkumar tweeted her concerns after San Francisco-based research institute OpenAI released an API running GPT-3 models, which she said produced texts that were “shockingly biased.”

OpenAI responded that “generative models can display both overt and diffuse harmful outputs, such as racist, sexist, or otherwise pernicious language,” and that “this is an industry-wide issue, making it easy for individual organizations to abdicate or defer responsibility.” The company stressed that “OpenAI will not,” and released API usage guidelines with heuristics for safely developing applications. The OpenAI team also pledged to review applications before they go live.

There is an adage in the computer science community: “garbage in, garbage out.” Models fed by biased data will tend to produce biased predictions, and the concern is that many such flawed models may be turned into applications and brought to market without proper review.
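The “garbage in, garbage out” dynamic behind tools like Genderify is easy to reproduce. Below is a minimal, purely illustrative sketch (the corpus, function name, and counts are all hypothetical inventions for this example, not Genderify’s actual data or method): a naive predictor that estimates gender probabilities from raw label frequencies will faithfully copy whatever skew exists in its training data.

```python
from collections import Counter

# Hypothetical toy corpus whose labels reflect a skewed data source,
# not reality. A model trained on it can only echo that skew.
corpus = [
    ("scientist", "male"), ("scientist", "male"), ("scientist", "male"),
    ("scientist", "female"),
    ("nurse", "female"), ("nurse", "female"), ("nurse", "female"),
    ("nurse", "male"),
]

def gender_probability(word, data):
    """Naive frequency estimate: P(gender | word) from raw counts."""
    counts = Counter(g for w, g in data if w == word)
    total = sum(counts.values())
    if total == 0:
        return {}  # word unseen in training data
    return {g: c / total for g, c in counts.items()}

# The 75/25 split below is not a fact about scientists -- it is the
# bias of the toy corpus, reproduced verbatim by the model.
print(gender_probability("scientist", corpus))
```

The point is that nothing in the pipeline is “wrong” in a mechanical sense; the predictions are an accurate summary of a biased dataset, which is precisely why review of training data, not just model code, matters before deployment.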

In the wake of the Genderify debacle, many in the ML community are reflecting on what went wrong and how to fix it. University of Southern California Research Programmer Emil Hvitfeldt launched a GitHub project, Genderify Pro, that argues “assigning genders is inherently inaccurate” and “if it is important to know someone’s gender, ask them.”


Reporter: Yuan Yuan | Editor: Michael Sarazen


Synced Report | A Survey of China’s Artificial Intelligence Solutions in Response to the COVID-19 Pandemic — 87 Case Studies from 700+ AI Vendors

This report offers a look at how the Chinese government and business owners have leveraged artificial intelligence technologies in the battle against COVID-19. It is also available on Amazon Kindle.




