Now a group of researchers from the Seattle-based Allen Institute for Artificial Intelligence (AI2) has shown how trigger words and phrases can “inflict targeted errors” on natural language processing (NLP) model outputs, prompting them to generate racist and hostile content.
For an AI system to acquire knowledge the way humans generally do, it would need to interact with its surroundings and extract information through its own attention and analysis choices. That’s the idea behind a new paper from Microsoft Research, Polytechnique Montreal, Mila, and the University of Montreal.
In collaboration with Partnership on AI, Microsoft, and academics from top universities, Facebook today announced the Deepfake Detection Challenge (DFDC) with the aim of finding innovative deepfake detection solutions to help the media industry spot videos that have been morphed by AI models.
A recently released Chinese deepfake mobile application, “ZAO,” enables just about anyone to easily swap faces with popular characters such as Sheldon Cooper in The Big Bang Theory, Marilyn Monroe’s Lorelei Lee in Gentlemen Prefer Blondes, Jack Dawson in Titanic, and so on.