ProtTrans Delivers SOTA Pretrained Language Models for Proteins
Researchers launched the ProtTrans Project, which provides state-of-the-art pretrained language models for proteins.
PLATO-2, an open-domain chatbot model, can talk about anything in Chinese and English and engage in deep conversations.
Researchers add syntactic biases to determine whether and where they can help BERT achieve better understanding.
OpenAI announced the upgraded GPT-3 with a whopping 175 billion parameters.
The Google Research team proposes BLEURT, an automatic evaluation metric based on Google's highly successful language model BERT.
Artificial intelligence (AI) technologies are now widely used in tasks such as reimbursement specifications, automated financial statement generation, and content extraction.
This is the first chatbot to blend a diverse set of conversational skills — including empathy, knowledge, and personality — together in one system.
Researchers “posit that the universes of knowledge and experience available to NLP models can be defined by successively larger world scopes: from a single corpus to a fully embodied and social context.”
XTREME is a multi-task benchmark that evaluates the cross-lingual generalization capabilities of multilingual representations across 40 languages and nine tasks.
Researchers from Bocconi University have prepared an online overview of the commonalities and differences between language-specific BERT models and mBERT.
A recent Google Brain paper looks into Google’s hugely successful transformer network — BERT — and how it represents linguistic information internally.
Deep learning models are getting larger and larger to meet the demand for better and better performance. Meanwhile, the time…
One of a new breed of open-domain chatbots designed to engage in conversations on any topic, Meena has free and natural conversational abilities that are closing the gap on human performance.
A recent paper published by Microsoft researchers proposes a new vision-language pretrained model for image-text joint embedding, ImageBERT, which achieves SOTA performance on both the MSCOCO and Flickr30k datasets.
A recent paper accepted by ICLR 2020 proposes a new transformer model called “Reformer” which achieves impressive performance even when running on only a single GPU.
Pryzant and other Stanford researchers partnered with researchers from Kyoto University and Georgia Institute of Technology to develop a novel natural language model that can identify and neutralize biased framings, presuppositions, attitudes, etc. in text.
Synced invited Samuel R. Bowman, an Assistant Professor at New York University who works on artificial neural network models for natural language understanding, to share his thoughts on the “Text-to-Text Transfer Transformer” (T5) framework.
Today, the research division of the Chinese search giant released their updated ERNIE 2.0, a pretrained language understanding model with significant improvements.
The annual meeting of the Association for Computational Linguistics (ACL) is the world's leading conference in the field of natural language processing. Yesterday, conference organizers sent out author notifications on accepted papers for the 57th ACL gathering, which will take place in Florence, Italy from July 28 to August 2.
“The Internet is just an appetizer, whereas AI is the real entrée. The latter is not a part of the internet, not the second stage of the internet; it is a technological revolution comparable to the industrial revolution.” – Robin Li, CEO of Baidu.