Now, a group of researchers from the Seattle-based Allen Institute for Artificial Intelligence (AI2) has shown how trigger words and phrases can “inflict targeted errors” on natural language processing (NLP) model outputs, prompting models to generate racist and hostile content.
Since Google Research introduced its Bidirectional Encoder Representations from Transformers (BERT) in 2018, the model has gained unprecedented popularity among researchers. Now, a group of researchers from National Cheng Kung University in Tainan, Taiwan, is challenging BERT’s efficacy.
Although natural language processing (NLP) has been around for decades, the recent and rapid rise of deep learning algorithms, together with the increasing availability of massive amounts of text data, is creating new and appealing opportunities for the technology across many industry sectors, including the investment world.
If we ask one of today’s AI-powered voice assistants such as Alexa or Siri to tell a joke, it might very well come up with something that puts a smile on our face. If, however, we then ask “Why do you think that joke is funny?” the bot will be stuck for a response. AI researchers want to change that.
Thanks to the CUDA architecture developed by NVIDIA, developers can exploit GPUs’ parallel computing power to perform general computation without extra effort. Our objective is to evaluate the performance achieved by TensorFlow, PyTorch, and MXNet on Titan RTX.
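The measurement pattern behind a framework comparison like this is generic: warm up, time many repeated runs, and report the average. A minimal sketch of that pattern in Python, using a NumPy matrix multiplication as a stand-in workload (the harness, sizes, and workload here are assumptions for illustration, not the authors' actual Titan RTX setup):

```python
import time
import numpy as np

def benchmark(fn, warmup=3, runs=10):
    """Time fn(), excluding warm-up iterations; returns mean seconds per run."""
    for _ in range(warmup):  # warm-up: exclude one-time allocation/compilation costs
        fn()
    start = time.perf_counter()
    for _ in range(runs):
        fn()
    return (time.perf_counter() - start) / runs

# Stand-in workload: dense matmul. In a real GPU benchmark this would be a
# framework op (e.g. a convolution or a transformer layer) on the device.
a = np.random.rand(256, 256).astype(np.float32)
b = np.random.rand(256, 256).astype(np.float32)
mean_s = benchmark(lambda: a @ b)
print(f"mean matmul time: {mean_s * 1e3:.3f} ms")
```

On a GPU, the same harness would additionally need a device synchronization call before reading the clock, since GPU kernels launch asynchronously.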
Baidu has released ERNIE (Enhanced Representation through kNowledge IntEgration), a new knowledge integration language representation model which outperforms Google’s state-of-the-art BERT (Bidirectional Encoder Representations from Transformers) in Chinese language tasks.
Natural language processing has made significant progress in the past year, but few frameworks focus directly on NLP or sequence modeling. Google Brain recently released Lingvo, a deep learning framework based on TensorFlow. Synced invited Ni Lao, Chief Science Officer at Mosaix, to share his thoughts on Lingvo.
The Conference on Computer Vision and Pattern Recognition (CVPR) is one of the world’s top computer vision (CV) conferences. CVPR 2019 runs June 15 through June 21 in Long Beach, California, and the list of accepted papers for the prestigious gathering has now been released.
Papers With Code is a unique and useful resource that presents trending ML research along with the code to implement it. The site was created by Atlas ML CEO Robert Stojnic, aka “rstoj” on Reddit’s machine learning board. The latest version of Papers With Code has added 950+ unique machine learning tasks, 500+ state-of-the-art result leaderboards, and 8500+ papers with code.
The amount of news information a person can routinely access these days would have been unimaginable a hundred years ago. But we still have just 24 hours in a day and only a single pair of eyes to read with, so the question arises: how can we get as much valuable news as possible in a limited time?
Natural Language Processing (NLP) is a hot research area in artificial intelligence and computer science. The technology teaches machines to understand human language so they can communicate with us more effectively. NLP research draws on linguistics, context analysis, and semantics.
Tencent AI Lab has announced an open-source NLP dataset comprising vector representations for eight million Chinese words and phrases. The dataset aims to provide large-scale and high-quality support for deep learning-based Chinese language NLP research in both academic and industrial applications.
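Embedding releases like this typically ship in the standard word2vec text format: a header line with the vocabulary size and dimensionality, then one token per line followed by its vector. A minimal loader plus cosine-similarity check can be written with the Python standard library alone; the tiny three-dimensional sample below is purely illustrative, not real Tencent data:

```python
import io
import math

def load_vectors(fh):
    """Parse word2vec text format: header 'count dim', then 'token v1 v2 ...'."""
    count, dim = map(int, fh.readline().split())
    vecs = {}
    for line in fh:
        parts = line.rstrip().split(" ")
        vecs[parts[0]] = [float(x) for x in parts[1:]]
    return vecs

def cosine(u, v):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Toy sample in the same format (illustrative values only).
sample = io.StringIO("2 3\n猫 0.1 0.9 0.2\n狗 0.2 0.8 0.3\n")
vecs = load_vectors(sample)
print(round(cosine(vecs["猫"], vecs["狗"]), 3))
```

For the real eight-million-entry file, one would stream lines the same way rather than loading the whole dataset into a dict at once if memory is a concern.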
MORE Health is a Silicon Valley-based company that provides access to top international physicians for patients faced with critical illnesses such as cancer or heart disease. The company was founded in 2013, and recently took a leap forward by partnering with Houston-based Melax Technologies…
At the annual Google Cloud Next conference, which kicked off July 24 in San Francisco, the company unveiled a series of AI-based product releases and enhancements for its analytics and machine learning tools, additional applications on G Suite, and new IoT products.
China is aiming to build the world’s top hub for AI innovation and talent, and yesterday prestigious Tsinghua University issued a progress report. The 2018 China Artificial Intelligence Development Report provides an insightful, data-intensive overview of the current status of AI in China.