
Yann LeCun Quits Twitter Amid Acrimonious Exchanges on AI Bias

Turing Award Winner and Facebook Chief AI Scientist Yann LeCun has announced his exit from Twitter after getting involved in a long and often acrimonious dispute regarding racial biases in AI.


Unlike most other artificial intelligence researchers, LeCun has often aired his political views on social media platforms, and has previously engaged in public feuds with colleagues such as Gary Marcus. This time, however, LeCun’s penchant for debate saw him run afoul of what he termed “the linguistic codes of modern social justice.”

It all started on June 20 with a tweet regarding the new Duke University PULSE AI photo recreation model, which had depixelated a low-resolution input image of Barack Obama into a photo of a white male. Penn State University Associate Professor Brad Wyble tweeted, “This image speaks volumes about the dangers of bias in AI.” LeCun responded, “ML systems are biased when data is biased. This face upsampling system makes everyone look white because the network was pretrained on FlickFaceHQ [Flickr-Faces-HQ, FFHQ], which mainly contains white people pics. Train the *exact* same system on a dataset from Senegal, and everyone will look African.”
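
LeCun’s argument, in other words, is that a generative model’s output distribution tracks its training distribution. For readers unfamiliar with how such a skew is even measured, here is a minimal sketch of a dataset demographic audit; the `audit_dataset` helper and the classifier passed into it are hypothetical stand-ins, not part of any PULSE or FFHQ tooling:

```python
from collections import Counter
from pathlib import Path

def audit_dataset(image_dir, classify_group):
    """Tally a demographic attribute across a face dataset.

    classify_group is any callable mapping an image path to a group
    label (e.g. an automated skin-tone estimator or a lookup into
    human annotations).
    """
    counts = Counter()
    for path in Path(image_dir).glob("**/*.png"):
        counts[classify_group(path)] += 1
    return counts

# Hypothetical usage: if ~70% of training faces fall in one group,
# a generative prior fit to that set will reproduce the skew.
# counts = audit_dataset("ffhq/", my_skin_tone_classifier)
# total = sum(counts.values())
# for group, n in counts.most_common():
#     print(f"{group}: {n / total:.1%}")
```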

Timnit Gebru, a research scientist, co-founder of the “Black in AI” group, and technical co-lead of the Ethical Artificial Intelligence Team at Google, tweeted in response, “I’m sick of this framing. Tired of it. Many people have tried to explain, many scholars. Listen to us. You can’t just reduce harms caused by ML to dataset bias.” She added, “Even amidst of world wide protests people don’t hear our voices and try to learn from us, they assume they’re experts in everything. Let us lead her and you follow. Just listen. And learn from scholars like @ruha9 [Ruha Benjamin, Associate Professor of African American Studies at Princeton University]. We even brought her to your house, your conference.” (This was a reference to ICLR 2020, where LeCun served as president and Benjamin presented the talk “2020 Vision: Reimagining the Default Settings of Technology & Society.”)

Known for her work on racial and gender bias in facial recognition systems and other AI algorithms, Gebru has been advocating for fairness and ethics in AI for years. The Gender Shades project she leads with MIT Media Lab computer scientist Joy Buolamwini revealed that commercial facial recognition software was more likely to misclassify darker-skinned females, and was less accurate on them than on lighter-skinned men.
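
The methodology behind that finding is disaggregated evaluation: error rates are computed separately for each intersectional subgroup rather than folded into one aggregate score. A minimal sketch of the idea, with illustrative labels and numbers rather than the project’s actual data:

```python
from collections import defaultdict

def error_rates_by_group(records):
    """Misclassification rate per subgroup.

    Each record is (true_label, predicted_label, subgroup), where a
    subgroup may be intersectional, e.g. "darker female".
    """
    errors = defaultdict(int)
    totals = defaultdict(int)
    for true, pred, group in records:
        totals[group] += 1
        errors[group] += int(pred != true)
    return {g: errors[g] / totals[g] for g in totals}

# Illustrative numbers only: a strong aggregate accuracy can hide
# the kind of per-group gap Gender Shades measured.
records = (
    [("F", "F", "lighter male")] * 98 + [("F", "M", "lighter male")] * 2
    + [("F", "F", "darker female")] * 65 + [("F", "M", "darker female")] * 35
)
print(error_rates_by_group(records))
# -> {'lighter male': 0.02, 'darker female': 0.35}
```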

Gebru’s CVPR 2020 talk Computer vision in practice: who is benefiting and who is being harmed? again addressed the role of bias in AI: “I think that now a lot of people have understood that we need to have more diverse datasets, but unfortunately I felt like that’s kind of where the understanding has stopped. It’s like ‘let’s diversify our datasets. And that’s kind of ethics and fairness, right?’ But you can’t ignore social and structural problems.”

LeCun replied that his comment had targeted the particular case of the Duke model and dataset. “The consequences of bias are considerably more dire in a deployed product than in an academic paper,” he continued in a lengthy thread of tweets suggesting that it is engineers building deployed products, not ML researchers, who need to be more careful in selecting data.

“Again. UNBELIEVABLE. What does it take? If tutorials at your own conference, books and books and talks and talks from experts coming to YOU, to your own house, feeding it to you, Emily and I even cover issues with how the research community approaches data. Nope. Doesn’t matter,” Gebru replied. “This is not even people outside the community, which we say people like him should follow, read, learn from. This is us trying to educate people in our own community. Its a depressing time to be sure. Depressing.”

Others from the AI and activist communities joined the fray, with far too many simply attacking either LeCun or Gebru. On June 25 LeCun offered an olive branch: “I very much admire your work on AI ethics and fairness. I care deeply about working to make sure biases don’t get amplified by AI and I’m sorry that the way I communicated here became the story.” Gebru replied, “We’re often told things like ‘I’m sorry that’s how it made you feel.’ That doesn’t really own up to the actual thing. I hope you understand *why* *how* you communicated became the story. It became the story because its a pattern of marginalization.”

The week-long back-and-forth between LeCun and Gebru attracted thousands of likes, comments, and retweets, with a number of high-profile AI researchers expressing dissatisfaction with LeCun’s explanations. Google Research scientist David Ha commented, “I respectfully disagree w/Yann here. As long as progress is benchmarked on biased data, such biases will also be reflected in the inductive biases of ML systems. Advancing ML with biased benchmarks and asking engineers to simply ‘retrain models with unbiased data’ is not helpful.” Canada CIFAR AI Chair Nicolas Le Roux tweeted, “Yann, I know you mean well. I saw many people act like you just did in good faith, and get defensive when people pointed that this was not the proper response, until one day they stopped to listen and reflect and ultimately change their behaviour.”

Amid the heated debate, the Duke PULSE research team updated their paper, adding: “Overall, it seems that sampling from StyleGAN yields white faces much more frequently than faces of people of color.” The researchers referenced an April 2020 paper on demographic bias in artificially generated facial pictures by Salminen et al.: “Results indicate a racial bias among the generated pictures, with close to three-[fourths] (72.6%) of the pictures representing White people. Asian (13.8%) and Black (10.1%) are considerably less frequent, while Indians represent only a minor fraction of the pictures (3.4%).”
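
Estimates like these come from sampling the generator and labeling the outputs. A rough sketch of that measurement loop, assuming stand-in `generate_face` and `label_race` callables rather than the authors’ actual pipeline (Salminen et al. labeled their samples manually):

```python
from collections import Counter

def sampled_race_distribution(generate_face, label_race, n=1000):
    """Estimate a generative model's demographic distribution by
    drawing n samples and labeling each one."""
    counts = Counter(label_race(generate_face()) for _ in range(n))
    return {group: count / n for group, count in counts.items()}

# Hypothetical usage with a StyleGAN wrapper and a labeling step;
# the paper's reported estimate was roughly White 72.6%,
# Asian 13.8%, Black 10.1%, Indian 3.4%.
# dist = sampled_race_distribution(stylegan.sample, annotate_race)
```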

The team also added a “model card” to their study. Gebru was part of a team that introduced the model card framework in 2019 to “provide benchmarked evaluation in a variety of conditions, such as across different cultural, demographic, or phenotypic groups (e.g., race, geographic location, sex, Fitzpatrick skin type) and intersectional groups (e.g., age and race, or sex and Fitzpatrick skin type) that are relevant to the intended application domains.”
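
As a concrete illustration of the framework, a minimal model card can be little more than a structured record of intended use plus per-group metrics. The sketch below paraphrases the framework’s headings with placeholder values; it is not the card the Duke team published:

```python
# Placeholder values throughout; section headings paraphrase the
# model card framework, and this is not the Duke team's actual card.
pulse_style_model_card = {
    "model_details": "GAN-prior face upsampler, research prototype",
    "intended_use": "Super-resolution research; not identification, "
                    "not surveillance",
    "evaluation_data": "Held-out faces sampled across demographic groups",
    "disaggregated_metrics": {
        # One row per relevant demographic, phenotypic, or
        # intersectional group; values filled from benchmark runs.
        ("Fitzpatrick I-III", "female"): None,
        ("Fitzpatrick I-III", "male"): None,
        ("Fitzpatrick IV-VI", "female"): None,
        ("Fitzpatrick IV-VI", "male"): None,
    },
    "ethical_considerations": "Outputs mirror training-set demographics",
}
```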

(Image: slides from Gebru’s CVPR 2020 tutorial “Computer vision in practice: who is benefiting and who is being harmed?”)

The artificial intelligence community has made a number of moves in recent years to encourage diversity and inclusivity, such as the “AI for All” initiative launched by Gebru’s Stanford supervisor Fei-Fei Li, and the scheduling of the major AI conference ICLR 2020 in Ethiopia (the conference went virtual due to COVID-19). This year, NeurIPS, the world’s most prestigious AI conference, required authors to include a statement on the potential broader impact of their submitted papers, “including its ethical aspects and future societal consequences. Authors should take care to discuss both positive and negative outcomes.”

LeCun signed off Twitter on Sunday with the message “I’d like to ask everyone to please stop attacking each other via Twitter or other means. In particular, I’d like everyone to please stop attacking @timnitGebru and everyone who has been critical of my posts… Farewell everyone.”


Journalist: Fangyu Cai | Editor: Michael Sarazen


Comments on “Yann LeCun Quits Twitter Amid Acrimonious Exchanges on AI Bias”

  1. Bobby Beans

    I am not well-educated in ML. What would be an example of a dataset problem, and what would be an example of a social and/or structural problem?

    • As I understand it, the social and structural problems are that AI learns the social and structural biases that are already present.

      When someone says this AI misclassifies black people at an alarmingly high rate, and someone else says that’s because it learned from a dataset that is skewed towards mostly representing white people, the problem of the AI being biased is not actually solved, only justified.

      So suppose you’re an ML researcher, and you publish a paper showing how awesome your AI is at recognising white people. Someone says: but it’s terrible at identifying black people! And your reaction is: “oh sure, but that’s only because I did not care to train it with a dataset that also allows it to identify black people fairly; it doesn’t matter for my research. As long as I show it’s good, whoever uses it next can put in the effort of not making it a racist prick.”

      Well, this is now a social and structural problem. Because the first person in the pipeline, the researcher, says they do not care to make their AI ethical and leave it up to the next person. In turn, the next person also takes the easy way out, and eventually the AI gets in the hands of the police and is used to make arrests. No one ever takes the time to address the fact that, from the very beginning, it failed to properly identify black people. And no one is to blame. The researcher blames the police force for not retraining it on an unbiased dataset, the police blame the company they bought it from, the company blames the researcher, or says that they don’t have such a dataset to work with in the first place so it is not possible for them to do so, etc. Everyone feels good, and even though the AI was never improved with regards to its bias, we all move along and start using it on a daily basis.

    • John Smith

      I am fairly educated in the field and I agree with LeCun that most of the bias in deep-learning-based systems comes from data. Gebru, on the other hand, is a rude individual who hasn’t learned how to argue for her position (which is not even clear) or how to support it.

      • David Knowles

        LeCun is intent on improving the technology and creating better technology.

        Gebru seems only interested in pointing out the flaws in the technology and isn’t working on how to improve it.

      • Why do you believe she’s a rude person who can’t argue? Because she’s a black woman? That’s the only possible reason you could think that, considering how polite and well-argued her position was.

  2. So ML algorithms also think Black people all look the same? OK, I agree that’s weird!

  3. I definitely appreciate the point that you made here. But I’m a little torn between what you said and the need in research to make incremental progress, and frankly the need to present incomplete systems, as it would be difficult in any field to publish if the restrictions were onerous enough. As a person who does circuits research, we don’t have any societal considerations, so it is fine if our published results are incomplete and fail important corner cases as long as the fundamental premise is sound. We rightly, in my opinion, leave that to industry to resolve when they deploy the idea.

    In the machine learning space, and especially with face reconstruction/recognition, the social issues are far more loaded, and I think therein lies the conflict.

  4. I recognize I am a minority voice here amidst your global audience, but as a neutral observer from halfway across the globe, I have been observing with concern, for the past few years, as more of the social sciences agenda acquires centre stage in STEM fields, particularly computers & systems. This episode just serves to remind us again that no matter who you are, you may never be woke enough for the mob.

    • Rasheed

      Haha yep. Yann LeCun is a Marxist agitator who is also using the “woke” crowd to push communism in the US. I want him deported to Venezuela.

    • Yeah sure, let’s just sell facial recognition tech to governments and police forces without making it actually work properly. I’m sure that will end perfectly! Who cares, it’s only minorities that get screwed after all!

  5. Skin color is a spectrum…and it changes. Anyone in a northern climate can watch a white person’s skin turn colors/tan in the sun when outside for months in the summer and then go back to pasty white in the winter. It’s crazy to think AI needs to be smart enough for the mob. They are so hypocritical. You can’t code hypocrisy into algorithms. It gets confused. Kind of like the cognitive dissonance going on in society today!

    • Michael Ernest

      If you can incorrectly define a value that represents true or false in a statement — something programmers do every day — it hardly matters whether it’s intentional (“hypocritical”) or not. The result is still wrong. It is reasonable to ask how it could happen and it is the heart of open inquiry to allow for any tenable hypothesis.

      LeCun chose to assign guilt to the data and model and to excuse the math/algorithm from further discussion.

      He then defended his point by reducing its context to just the example at hand (the Duke model and dataset), which is to say he offered to throw just a subset of his original target under the same bus. Then he expressed a conditional regret for any hurt feelings and called for peace. Not getting any love for that, he withdrew altogether.

      Is that not a textbook exercise of how systemic marginalization and gaslighting works? Is that not a clear attempt at bargaining for any context in which he could claim to have been right all along?

      That’s unfortunate. I for one hope LeCun gets to take a few deep breaths and consider his own social conditioning. Did it lead him to repeat the very pattern of marginalization Gebru is asking him to see and identify for what it is? Maybe, maybe not, but it’s a bad situation right now.

      Even so, it isn’t nearly so bad, and certainly not as perverse, as anyone suggesting that a white person getting a summer tan is practically the same thing as being a person of color. That’s some deeply-rooted and proudly ignorant racism and I doubt there’s much more to be done about it than point it out.

  6. Good. Yann is a disgusting little Marxist agitator, and the real reason I think he stepped down is that BLM activists were demanding his resignation at Facebook, so now he’s closing his Twitter account to hide from the mob he once supported. Hilarious.

  7. Rasheed

    Good! Now the racist Yann LeCun needs to step down from Facebook and give his job to a woman of color.
