Now, a group of NLP researchers and enthusiasts, including graduates of Tsinghua University, Peking University, and Zhejiang University, has introduced ChineseGLUE, a benchmark designed to encourage the development and assessment of Chinese language models.
Now a group of researchers from the Seattle-based Allen Institute for Artificial Intelligence (AI2) has shown how trigger words and phrases can “inflict targeted errors” on natural language processing (NLP) models, prompting them to generate racist and hostile content.
Since Google Research introduced Bidirectional Encoder Representations from Transformers (BERT) in 2018, the model has gained unprecedented popularity among researchers. Now, a group of researchers from National Cheng Kung University in Tainan, Taiwan, is challenging BERT’s efficacy.
Although natural language processing (NLP) has been around for decades, the recent and rapid rise of deep learning algorithms, together with the increasing availability of massive amounts of text data, is creating new and appealing opportunities for the technology across many industry sectors, including the investment world.
If we ask one of today’s AI-powered voice assistants such as Alexa or Siri to tell a joke, it might very well come up with something that puts a smile on our face. If, however, we then ask “Why do you think that joke is funny?”, the bot will be stuck for a response. AI researchers want to change that.
Thanks to the CUDA architecture developed by NVIDIA, developers can exploit GPUs’ parallel computing power to perform general-purpose computation without extra effort. Our objective is to evaluate the performance achieved by TensorFlow, PyTorch, and MXNet on the Titan RTX.
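To illustrate the kind of measurement such an evaluation involves, here is a minimal sketch that times a single GPU operation in PyTorch. This is not the benchmark suite used in the evaluation; the matrix size, iteration count, and warm-up count are arbitrary assumptions chosen for illustration.

```python
import time
import torch

def time_matmul(size=4096, iters=50, device="cuda"):
    """Return the average wall-clock time (seconds) of one float32 matmul."""
    a = torch.randn(size, size, device=device)
    b = torch.randn(size, size, device=device)
    # Warm-up runs so one-off CUDA initialization cost is not measured.
    for _ in range(5):
        torch.mm(a, b)
    # GPU kernels launch asynchronously; synchronize before and after timing.
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(iters):
        torch.mm(a, b)
    torch.cuda.synchronize()
    return (time.perf_counter() - start) / iters

if __name__ == "__main__":
    if torch.cuda.is_available():
        avg = time_matmul()
        print(f"Average matmul time: {avg * 1000:.2f} ms "
              f"on {torch.cuda.get_device_name(0)}")
    else:
        print("No CUDA device found.")
```

The explicit `torch.cuda.synchronize()` calls matter: without them, the timer would measure only kernel launch overhead rather than actual GPU execution time, a common pitfall when comparing frameworks.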