Weekly Papers | Quoc V. Le and Kaiming He Look at Vision; Intelligence, Psychology and AI; Evolving the Hearthstone Meta and More!

Synced has surveyed last week’s crop of papers in the fields of machine learning, computer vision, computation and language, and beyond, and identified seven studies that we believe may be of special interest to our readers.

Quoc V. Le and Kaiming He have each published exciting new computer vision research evaluated on ImageNet. Meanwhile François Chollet, creator of the Keras open-source neural network library, is exploring approaches for defining and evaluating intelligence in artificial systems, proposing the Abstraction and Reasoning Corpus (ARC) as a benchmark for AI systems’ humanlikeness.

We have also seen some intriguing studies involving niche tasks, such as leveraging deep learning for stock selection and using an evolutionary algorithm for representing balance changes in the strategic game Hearthstone.

  1. Self-training with Noisy Student improves ImageNet classification
  2. A Comparative Analysis of XGBoost
  3. Momentum Contrast for Unsupervised Visual Representation Learning
  4. Deep Learning for Stock Selection Based on High Frequency Price-Volume Data
  5. Evolving the Hearthstone Meta
  6. The Measure of Intelligence
  7. Emerging Cross-lingual Structure in Pretrained Language Models

Paper One

Paper: Self-training with Noisy Student improves ImageNet classification

Authors: Qizhe Xie, Minh-Thang Luong, Quoc V. Le from Google Research Brain Team, and Eduard Hovy from Carnegie Mellon University

Abstract: We present a simple self-training method that achieves 87.4% top-1 accuracy on ImageNet, which is 1.0% better than the state-of-the-art model that requires 3.5B weakly labeled Instagram images. On robustness test sets, it improves ImageNet-A top-1 accuracy from 16.6% to 74.2%, reduces ImageNet-C mean corruption error from 45.7 to 31.2, and reduces ImageNet-P mean flip rate from 27.8 to 16.1.
To achieve this result, we first train an EfficientNet model on labeled ImageNet images and use it as a teacher to generate pseudo labels on 300M unlabeled images. We then train a larger EfficientNet as a student model on the combination of labeled and pseudo labeled images. We iterate this process by putting back the student as the teacher. During the generation of the pseudo labels, the teacher is not noised so that the pseudo labels are as good as possible. But during the learning of the student, we inject noise such as data augmentation, dropout, stochastic depth to the student so that the noised student is forced to learn harder from the pseudo labels.
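The teacher-student loop described above can be sketched in miniature. Below is an illustrative toy version, not the authors’ code: a nearest-centroid classifier on 2-D Gaussian blobs stands in for EfficientNet, and Gaussian input noise stands in for data augmentation, dropout, and stochastic depth.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for EfficientNet: a nearest-centroid classifier.
def train(X, y):
    return np.stack([X[y == c].mean(axis=0) for c in (0, 1)])

def predict(centroids, X):
    d = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return d.argmin(axis=1)

# Labeled data: two Gaussian blobs; unlabeled data drawn from the same blobs.
X_lab = np.concatenate([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y_lab = np.array([0] * 50 + [1] * 50)
X_unlab = np.concatenate([rng.normal(-2, 1, (200, 2)), rng.normal(2, 1, (200, 2))])

teacher = train(X_lab, y_lab)
for _ in range(3):  # iterate: the student becomes the next teacher
    # The teacher is NOT noised when generating pseudo labels.
    pseudo = predict(teacher, X_unlab)
    # The student IS noised (input noise stands in for augmentation/dropout).
    X_all = np.concatenate([X_lab, X_unlab])
    y_all = np.concatenate([y_lab, pseudo])
    X_noised = X_all + rng.normal(0, 0.5, X_all.shape)
    teacher = train(X_noised, y_all)

acc = (predict(teacher, X_lab) == y_lab).mean()
```

The essential structure matches the abstract: clean teacher inference for pseudo labels, noised student training on the combined set, then role swap and repeat.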

Paper Two

Paper: A Comparative Analysis of XGBoost

Authors: Candice Bentéjac from University of Bordeaux, Anna Csörgő from Pázmány Péter Catholic University, Gonzalo Martínez-Muñoz from Universidad Autónoma de Madrid

Abstract: XGBoost is a scalable ensemble technique based on gradient boosting that has demonstrated to be a reliable and efficient machine learning challenge solver. This work proposes a practical analysis of how this novel technique works in terms of training speed, generalization performance and parameter setup. In addition, a comprehensive comparison between XGBoost, random forests and gradient boosting has been performed using carefully tuned models as well as using the default settings. The results of this comparison may indicate that XGBoost is not necessarily the best choice under all circumstances. Finally an extensive analysis of XGBoost parametrization tuning process is carried out.
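The default-settings leg of such a comparison is easy to reproduce in outline. The sketch below uses scikit-learn’s random forest and gradient boosting on a synthetic task as a stand-in for the paper’s benchmark suite; an `xgboost.XGBClassifier()` would slot into the same dictionary, assuming the xgboost package is installed.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic binary classification task (a stand-in for a real benchmark).
X, y = make_classification(n_samples=500, n_features=20, n_informative=10,
                           random_state=0)

# Default-settings comparison across ensemble methods.
models = {
    "random_forest": RandomForestClassifier(random_state=0),
    "gradient_boosting": GradientBoostingClassifier(random_state=0),
}
scores = {name: cross_val_score(m, X, y, cv=5).mean()
          for name, m in models.items()}
```

The paper’s point is that this kind of side-by-side run, both at defaults and after careful tuning, is what justifies (or undercuts) a default choice of XGBoost.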

Paper Three

Paper: Momentum Contrast for Unsupervised Visual Representation Learning

Authors: Kaiming He, Haoqi Fan, Yuxin Wu, Saining Xie, Ross Girshick from Facebook AI Research (FAIR)

Abstract: We present Momentum Contrast (MoCo) for unsupervised visual representation learning. From a perspective on contrastive learning as dictionary look-up, we build a dynamic dictionary with a queue and a moving-averaged encoder. This enables building a large and consistent dictionary on-the-fly that facilitates contrastive unsupervised learning. MoCo provides competitive results under the common linear protocol on ImageNet classification. More importantly, the representations learned by MoCo transfer well to downstream tasks. MoCo can outperform its supervised pre-training counterpart in 7 detection/segmentation tasks on PASCAL VOC, COCO, and other datasets, sometimes surpassing it by large margins. This suggests that the gap between unsupervised and supervised representation learning has been largely closed in many vision tasks.
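The two mechanisms named in the abstract, the FIFO queue of keys and the momentum-averaged key encoder, can be sketched with toy linear “encoders” (an illustrative outline of the update rules, not the FAIR implementation, which uses deep networks and backpropagates an InfoNCE cross-entropy loss):

```python
import numpy as np

rng = np.random.default_rng(0)
dim, K, m = 8, 16, 0.999  # feature dim, queue size, momentum coefficient

theta_q = rng.normal(size=(dim, dim))  # query encoder (updated by the optimizer)
theta_k = theta_q.copy()               # key encoder (updated by momentum only)
queue = rng.normal(size=(K, dim))      # dynamic dictionary of keys, FIFO

def encode(theta, x):
    z = x @ theta
    return z / np.linalg.norm(z, axis=-1, keepdims=True)  # L2-normalize

for step in range(5):
    x_q, x_k = rng.normal(size=(4, dim)), rng.normal(size=(4, dim))  # two views
    q, k = encode(theta_q, x_q), encode(theta_k, x_k)
    # Contrastive look-up: positive logit q·k, negative logits q·queue.
    logits = np.concatenate([(q * k).sum(1, keepdims=True), q @ queue.T], axis=1)
    # Stand-in for a gradient step on the InfoNCE loss w.r.t. theta_q:
    theta_q += 0.01 * rng.normal(size=theta_q.shape)
    theta_k = m * theta_k + (1 - m) * theta_q   # momentum update of key encoder
    queue = np.concatenate([queue[len(k):], k]) # dequeue oldest, enqueue new keys
```

The slow momentum update keeps the keys in the queue consistent with each other, which is what lets the dictionary grow much larger than a single mini-batch.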

Paper Four

Paper: Deep Learning for Stock Selection Based on High Frequency Price-Volume Data

Authors: Junming Yang, Yaoqi Li, Xuanyu Chen, Jiahang Cao, Kangkang Jiang from Likelihood Technology, University of California Irvine, and Sun Yat-sen University

Abstract: Training a practical and effective model for stock selection has been a greatly concerned problem in the field of artificial intelligence. Even though some of the models from previous works have achieved good performance in the U.S. market by using low-frequency data and features, training a suitable model with high-frequency stock data is still a problem worth exploring. Based on the high-frequency price data of the past several days, we construct two separate models, a Convolutional Neural Network and a Long Short-Term Memory network, which can predict the expected return rate of stocks on the current day, and select the stocks with the highest expected yield at the opening to maximize the total return. In our CNN model, we propose improvements on the CNNpred model presented by E. Hoseinzade and S. Haratizadeh in their paper which deals with low-frequency features. Such improvements enable our CNN model to exploit the convolution layer’s ability to extract high-level factors and avoid excessive loss of original information at the same time. Our LSTM model utilizes Recurrent Neural Networks’ advantages in handling time series data. Despite considerable transaction fees due to the daily changes of our stock position, the annualized net rate of return is 62.27% for our CNN model, and 50.31% for our LSTM model.
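Whatever model produces the expected returns, the selection rule itself is simple: rank predictions at the open, hold the top k equal-weighted, and pay fees for the daily rotation. A hedged sketch with random stand-in data (the `pred` array is a hypothetical placeholder for the paper’s CNN/LSTM outputs, and the fee level is illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_days, n_stocks, k = 60, 30, 3

# Hypothetical model outputs: predicted next-day return per stock.
pred = rng.normal(0, 0.01, (n_days, n_stocks))
# Realized returns, weakly correlated with the predictions.
real = 0.3 * pred + rng.normal(0, 0.01, (n_days, n_stocks))

fee = 0.001  # per-day transaction cost from rotating the whole position
daily = []
for d in range(n_days):
    picks = np.argsort(pred[d])[-k:]           # k highest expected returns at open
    daily.append(real[d, picks].mean() - fee)  # equal-weight the selected stocks

total_return = float(np.prod(1 + np.array(daily)) - 1)
```

This makes concrete why the paper stresses fees: the position turns over completely every day, so costs are charged on every trading day regardless of signal quality.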

Paper Five

Paper: Evolving the Hearthstone Meta

Authors: Fernando de Mesentier Silva, Scott Lee, and Matthew C. Fontaine are independent researchers; Julian Togelius is from New York University, and Amy K. Hoover is from New Jersey Institute of Technology

Abstract: Balancing an ever-growing strategic game of high complexity, such as Hearthstone, is a complex task. The target of making strategies diverse and customizable results in a delicate intricate system. Tuning over 2000 cards to generate the desired outcome without disrupting the existing environment becomes a laborious challenge. In this paper, we discuss the impacts that changes to existing cards can have on strategy in Hearthstone. By analyzing the win rate on match-ups across different decks, being played by different strategies, we propose to compare their performance before and after changes are made to improve or worsen different cards. Then, using an evolutionary algorithm, we search for a combination of changes to the card attributes that cause the decks to approach equal, 50% win rates. We then expand our evolutionary algorithm to a multi-objective solution to search for this result, while making the minimum amount of changes, and as a consequence disruption, to the existing cards. Lastly, we propose and evaluate metrics to serve as heuristics with which to decide which cards to target with balance changes.
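The single-objective search can be illustrated with a tiny evolutionary loop. In this sketch a fixed linear map squashed through a sigmoid stands in for the expensive step of actually playing out deck match-ups; everything here (the map `W`, the population size, the mutation scale) is an illustrative assumption, not the paper’s setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: 4 "card attribute" deltas determine 3 deck match-up win rates.
W = rng.normal(size=(3, 4))

def win_rates(deltas):
    return 1 / (1 + np.exp(-W @ deltas))

def fitness(deltas):
    # Total deviation from balanced 50% win rates (lower is better).
    return np.abs(win_rates(deltas) - 0.5).sum()

# Simple (mu + lambda)-style evolution over attribute changes.
pop = [rng.normal(0, 1, 4) for _ in range(20)]
for gen in range(100):
    children = [p + rng.normal(0, 0.1, 4) for p in pop]
    pop = sorted(pop + children, key=fitness)[:20]  # keep the 20 fittest

best = pop[0]
```

The paper’s multi-objective extension adds a second criterion, the number and size of card changes, so the search prefers balanced metas reachable with minimal disruption.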

Hearthstone’s gaming interface

Paper Six

Paper: The Measure of Intelligence

Authors: François Chollet from Google

Abstract: To make deliberate progress towards more intelligent and more human-like artificial systems, we need to be following an appropriate feedback signal: we need to be able to define and evaluate intelligence in a way that enables comparisons between two systems, as well as comparisons with humans. Over the past hundred years, there has been an abundance of attempts to define and measure intelligence, across both the fields of psychology and AI. We summarize and critically assess these definitions and evaluation approaches, while making apparent the two historical conceptions of intelligence that have implicitly guided them. We note that in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to “buy” arbitrary levels of skills for a system, in a way that masks the system’s own generalization power. We then articulate a new formal definition of intelligence based on Algorithmic Information Theory, describing intelligence as skill-acquisition efficiency and highlighting the concepts of scope, generalization difficulty, priors, and experience. Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans.

Paper Seven

Paper: Emerging Cross-lingual Structure in Pretrained Language Models

Authors: Shijie Wu from Johns Hopkins University, Alexis Conneau, Haoran Li, Luke Zettlemoyer, Veselin Stoyanov from Facebook AI

Abstract: We study the problem of multilingual masked language modeling, i.e. the training of a single model on concatenated text from multiple languages, and present a detailed study of several factors that influence why these models are so effective for cross-lingual transfer. We show, contrary to what was previously hypothesized, that transfer is possible even when there is no shared vocabulary across the monolingual corpora and also when the text comes from very different domains. The only requirement is that there are some shared parameters in the top layers of the multi-lingual encoder. To better understand this result, we also show that representations from independently trained models in different languages can be aligned post-hoc quite effectively, strongly suggesting that, much like for non-contextual word embeddings, there are universal latent symmetries in the learned embedding spaces. For multilingual masked language modeling, these symmetries seem to be automatically discovered and aligned during the joint training process.
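The post-hoc alignment finding can be illustrated with orthogonal Procrustes: if two embedding spaces differ by a latent rotation, the best-fitting orthogonal map is recoverable in closed form via an SVD. The sketch below uses synthetic embeddings where the hidden rotation is planted by construction; it shows the alignment technique in miniature, not the paper’s actual experimental procedure.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: embeddings from "language A", and "language B" embeddings that
# are a hidden rotation of them plus noise (simulating independently trained
# models whose spaces share a latent symmetry).
A = rng.normal(size=(100, 16))
Q_true, _ = np.linalg.qr(rng.normal(size=(16, 16)))  # hidden orthogonal map
B = A @ Q_true + rng.normal(0, 0.01, (100, 16))

# Orthogonal Procrustes: the rotation W minimizing ||A @ W - B||_F is U @ Vt
# from the SVD of A^T B.
U, _, Vt = np.linalg.svd(A.T @ B)
W = U @ Vt

err = np.linalg.norm(A @ W - B) / np.linalg.norm(B)
```

A small relative error after fitting a single linear map is exactly the kind of evidence the paper cites for universal latent symmetries across independently learned embedding spaces.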


Journalist: Fangyu Cai | Editor: Michael Sarazen
