New Study Suggests Self-Attention Layers Could Replace Convolutional Layers on Vision Tasks
Inspired by the performance of attention mechanisms in NLP, researchers have explored the possibility of applying them to vision tasks.
A group of researchers from KU Leuven (Katholieke Universiteit Leuven) and the Technical University of Berlin recently introduced RobBERT, a Dutch RoBERTa-based language model.
In the conclusion to our year-end series, Synced spotlights ten datasets that were open-sourced in 2019.
Due to its nuanced character choices and other unique literary and aesthetic characteristics, automatic generation of Chinese poetry is challenging for AI, and high-quality poems can hardly be generated by end-to-end methods.
ERNIE has achieved new state-of-the-art performance on GLUE, becoming the world’s first model to score over 90 on the macro-average (90.1).
Researchers from The Chinese University of Hong Kong, Tencent AI Lab and the University of Macau have proposed a new neuron-interaction-based representation composition for NMT.
In a blog post today, OpenAI announced the final staged release of its 1.5 billion parameter language model GPT-2, along with all associated code and model weights.
Now, a group of NLP researchers and enthusiasts, including graduates from Tsinghua University, Peking University, and Zhejiang University, have introduced ChineseGLUE, a benchmark designed to encourage the development and assessment of Chinese language models.
Synced Global AI Weekly October 27th
The recent rapid development of pretrained language models has produced significant performance improvements on downstream NLP tasks.
Researchers from the Huazhong University of Science and Technology and Huawei Noah’s Ark Lab have introduced TinyBERT, a smaller and faster version of Google’s popular large-scale pre-trained language processing model BERT.
The Re•Work AI in Insurance Summit in New York City was held September 5-6 and saw 60 speakers from insurance-related companies cover a wide range of topics — from detecting claims fraud to applying machine learning to underwriting and maximizing revenue.
Now a group of researchers from the Seattle-based Allen Institute for Artificial Intelligence (AI2) have shown how trigger words and phrases can “inflict targeted errors” on natural language processing (NLP) model outputs, prompting them to generate racist and hostile content.
Synced Global AI Weekly August 18th
The traditional retail industry is facing challenges as the rapid development and continuous improvement of AI tools and techniques ushers in the era of New Retail.
Since Google Research introduced BERT (Bidirectional Encoder Representations from Transformers) in 2018, the model has gained unprecedented popularity among researchers. Now, a group of researchers from National Cheng Kung University in Tainan, Taiwan, is challenging BERT’s efficacy.
Although natural language processing (NLP) has been around for decades, the recent and rapid rise of deep learning algorithms together with the increasing availability of massive amounts of text data are creating new and appealing opportunities for the tech across many industry sectors, including in the investment world.
In the late 2000s Fortune Global 500 healthcare companies ramped up AI deployment in the industry, from in-hospital diagnosis and treatment to drug supply chain and out-of-hospital scenarios.
While it is exhilarating to see AI researchers pushing the performance of cutting-edge models to new heights, the costs of such processes are also rising at a dizzying rate.
A team of researchers from Carnegie Mellon University and Google Brain has now proposed XLNet, a new language model that outperforms BERT on 20 language tasks, including SQuAD, GLUE, and RACE, and achieves SOTA results on 18 of them.
The traditional retail industry is undergoing a significant reinvention and upgrade as more and more brick-and-mortar stores boost business by adopting e-commerce platforms powered by cutting-edge tech.
Citadel Chief AI Officer Li Deng has been named a Fellow of the Canadian Academy of Engineering (CAE) in recognition of his notable achievements in deep learning and speech recognition.
Instead of having users simply swipe through headshots, many new dating apps and online platforms are leveraging artificial intelligence to introduce a variety of novel approaches to smart matchmaking.
Researchers from Tsinghua University and Huawei Noah’s Ark Lab recently proposed a new model that incorporates knowledge graphs (KG) into training on large-scale corpora for language representation.
Microsoft Research Asia (MSRA) has been dubbed the “Whampoa Academy for AI” in reference to the elite Chinese military school. MSRA is a bootcamp for NLP research and has trained more than 500 interns, 20 PhDs and 20 postdocs over the past two decades.
10 AI News You Must Know from April Weeks 3–4
If we ask one of today’s AI-powered voice assistants, such as Alexa or Siri, to tell a joke, it might very well come up with something that puts a smile on our face. If, however, we then ask “Why do you think that joke is funny?” the bot would be stuck for a response. AI researchers want to change that.
Thanks to the CUDA architecture [1] developed by NVIDIA, developers can exploit GPUs’ parallel computing power to perform general computation without extra effort. Our objective is to evaluate the performance achieved by TensorFlow, PyTorch, and MXNet on a Titan RTX.
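Benchmarks like this typically time a standard dense operation over many repetitions and convert the result to throughput. As a minimal sketch of that methodology (using NumPy on CPU as a stand-in, not the article’s actual TensorFlow/PyTorch/MXNet harness; the function name and sizes are illustrative assumptions):

```python
import time
import numpy as np

def time_matmul(n=512, repeats=10):
    """Time a dense n x n matrix multiply, a common micro-benchmark
    for comparing framework and hardware throughput."""
    a = np.random.rand(n, n).astype(np.float32)
    b = np.random.rand(n, n).astype(np.float32)
    a @ b  # warm-up run, excluded from timing
    start = time.perf_counter()
    for _ in range(repeats):
        a @ b
    elapsed = (time.perf_counter() - start) / repeats
    # A dense matmul costs roughly 2 * n^3 floating-point operations
    gflops = 2 * n**3 / elapsed / 1e9
    return elapsed, gflops

avg_s, gflops = time_matmul()
print(f"avg {avg_s * 1e3:.2f} ms per matmul, ~{gflops:.1f} GFLOP/s")
```

A GPU version would follow the same pattern, with the added caveat that device kernels launch asynchronously, so a synchronization call is needed before reading the clock.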
Baidu has released ERNIE (Enhanced Representation through kNowledge IntEgration), a new knowledge integration language representation model which outperforms Google’s state-of-the-art BERT (Bidirectional Encoder Representations from Transformers) in Chinese language tasks.
AI-empowered technologies such as natural language processing (NLP) are increasingly active in the labour-intensive world of call centres — concentrated offices used for sending or receiving a large volume of requests by telephone.
Natural language processing has made significant progress in the past year, but few frameworks focus directly on NLP or sequence modeling. Google Brain recently released Lingvo, a deep learning framework based on TensorFlow. Synced invited Ni Lao, Chief Science Officer at Mosaix, to share his thoughts on Lingvo.
The Conference on Computer Vision and Pattern Recognition (CVPR) is one of the world’s top computer vision (CV) conferences. CVPR 2019 runs June 15 through June 21 in Long Beach, California, and the list of accepted papers for the prestigious gathering has now been released.
Synced spoke with AI pioneer Professor Yoshua Bengio at the Computing in the 21st Century Conference in Beijing, where he discussed his recent research and the current state of AI.
Microsoft researchers have released technical details of an AI system that combines multi-task learning and language model pre-training. The new Multi-Task Deep Neural Network (MT-DNN) is a natural language processing (NLP) model that outperforms Google BERT in nine of eleven benchmark NLP tasks.
Welcome to the Year of the Pig! Lunar New Year is China’s biggest holiday, with this year’s celebrations picking up during the “Little Year” period in late January, peaking February 4 for New Year’s Eve, and continuing through February 19.
Papers With Code is a unique and useful resource that presents trending ML research along with the code to implement it. The site was created by Atlas ML CEO Robert Stojnic, aka “rstoj” on Reddit’s machine learning board. The latest version of Papers With Code has added 950+ unique machine learning tasks, 500+ state-of-the-art result leaderboards and 8,500+ papers with code.
Researchers from NLP startup Shannon.AI have published a study proposing Glyce — a set of Chinese glyph-vectors for Chinese character representations. Glyce has already achieved state-of-the-art results on 13 core Chinese natural language processing tasks.
Synced Global AI Weekly February 3rd
The amount of news information a person can routinely access these days would have been unimaginable a hundred years ago. But we still have just 24 hours in a day, and only a single pair of eyes to read with, so the question arises: how can we get as much valuable news as possible in a limited time?