Joseph Redmon, creator of the popular object detection algorithm YOLO (You Only Look Once), tweeted last week that he had ceased his computer vision research to avoid enabling potential misuse of the tech — citing in particular “military applications and privacy concerns.”
His comment emerged from a Twitter discussion on last Wednesday’s announcement of revised NeurIPS 2020 paper submission guidelines, which now ask authors to add a section on the broader impact of their work “including possible societal consequences — both positive and negative.”
University of Cambridge probabilistic machine learning PhD student Maria Skoularidou tweeted, “I think that broader impacts statements might also help authors rethink/realise whether their work is worth submitting.” That prompted University of Toronto Assistant Professor of Computer Science and Vector Institute Co-founder Roger Grosse to challenge Skoularidou to provide “an example of a situation where you think someone should decide not to submit their paper due to Broader Impacts reasons?”
That’s where Redmon stepped in to offer his own experience. Despite enjoying his work, Redmon tweeted, he had stopped his CV research because he found that the related ethical issues “became impossible to ignore.”
A current graduate student at the University of Washington’s programming languages and software engineering lab, Redmon proposed the YOLO model in a CVPR 2016 paper that won the OpenCV People’s Choice Award. YOLO was hailed as a milestone in object detection research and led to faster and more accurate computer vision algorithms. Redmon’s updated YOLO9000 earned a Best Paper Honorable Mention at CVPR 2017, and he was also part of the team that proposed XNOR-Net, which uses binary convolutional neural networks for ImageNet classification.
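For readers unfamiliar with what made YOLO distinctive: it frames detection as a single regression pass over an S×S grid, with each cell predicting bounding boxes directly rather than relying on a separate region-proposal stage. The sketch below illustrates only the grid-to-pixel decoding idea with made-up numbers; the function name and values are ours for illustration, not taken from Redmon’s code.

```python
# Minimal sketch of YOLO-style grid decoding (illustrative only, not
# Redmon's actual implementation). Each cell of an S x S grid predicts
# a box as center offsets (x, y) within the cell plus width/height
# relative to the whole image.

S = 7                     # grid size used in the original YOLO paper
IMG_W, IMG_H = 448, 448   # input resolution used in the original paper

def decode_box(row, col, x, y, w, h):
    """Convert one cell's relative prediction to pixel coordinates."""
    cx = (col + x) / S * IMG_W   # box center, x (pixels)
    cy = (row + y) / S * IMG_H   # box center, y (pixels)
    bw = w * IMG_W               # box width (pixels)
    bh = h * IMG_H               # box height (pixels)
    return cx, cy, bw, bh

# Example: cell (3, 3), box centered mid-cell, half the image wide/tall.
print(decode_box(3, 3, 0.5, 0.5, 0.5, 0.5))  # -> (224.0, 224.0, 224.0, 224.0)
```

Because every cell’s boxes are predicted in one forward pass, this single-shot design is what made YOLO so much faster than the two-stage detectors of its time.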
Grosse argued that predicting the societal impacts of AI is a tough area that requires expertise and should be dealt with by professional researchers and organizations instead of the paper authors themselves. This drew a prompt Redmon counter: “‘We shouldn’t have to think about the societal impact of our work because it’s hard and other people can do it for us’ is a really bad argument.”
Stanford Computer Science Master’s student and former Google Brain intern Kevin Zakka meanwhile chimed in that rather than abandoning his research out of fear of potential misuse, Redmon might have used his respected position in the CV community to raise awareness. Others suggested Redmon confine his work to less fraught areas such as the medical imaging domain.
Redmon said he felt a certain degree of humiliation for ever believing “science was apolitical and research objectively moral and good no matter what the subject is.” He said he had come to realize that facial recognition technologies have more downside than upside, and that they would not have been developed if enough researchers had thought through the broader impact of their enormous downside risks.
Ethical discussions around AI are not new and will undoubtedly intensify as the technologies move from labs to the streets. The new attention from a high-profile conference like NeurIPS and Redmon’s recent revelation suggest that experts in particular fields will join the broader ML community and the general public in this ongoing process.
Journalist: Yuan Yuan | Editor: Michael Sarazen