Now a group of researchers from the Seattle-based Allen Institute for Artificial Intelligence (AI2) have shown how trigger words and phrases can “inflict targeted errors” on natural language processing (NLP) model outputs, prompting them to generate racist and hostile content.
For an AI system to acquire knowledge the way humans generally do, it would need to interact with its surroundings and extract information through its own attention and analysis choices. That’s the idea behind a new paper from Microsoft Research, Polytechnique Montreal, MILA, and the University of Montreal.
A recently released Chinese deepfake mobile application, “ZAO,” enables just about anyone to easily swap faces with popular characters such as Sheldon Cooper in The Big Bang Theory, Marilyn Monroe’s Lorelei Lee in Gentlemen Prefer Blondes, Jack Dawson in Titanic, and so on.
Israeli research company AI21 Labs today published the paper SenseBERT: Driving Some Sense into BERT, which proposes a new model that significantly improves lexical disambiguation abilities and has achieved state-of-the-art results on the challenging Word-in-Context (WiC) language task.