Tag: Deep Learning

AI Machine Learning & Data Science Research

HPC-AI Tech Raises US$22 Million in Series A Funding to Fuel Team Expansion and Business Growth

Singapore – HPC-AI Tech, a company specializing in efficient training of large AI models, has announced the completion of its Series A funding round, raising a total of US$22 million.

AI Machine Learning & Data Science Research

Princeton U’s DataMUX Enables DNNs to Simultaneously and Accurately Process up to 40 Input Instances With Limited Computational Overhead

In the new paper DataMUX: Data Multiplexing for Neural Networks, a Princeton University research team proposes Data Multiplexing (DataMUX). The novel technique enables neural networks to process multiple inputs simultaneously and generate accurate predictions, increasing model throughput with minimal additional memory requirements.
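
To make the idea concrete, here is a minimal PyTorch sketch of the multiplexing scheme, not the authors' implementation: each of N inputs gets its own fixed random projection, the projected representations are averaged into a single sequence, the shared backbone runs once, and per-instance heads demultiplex the result. The names (DataMUXSketch, mux, demux) are ours.

```python
import torch
import torch.nn as nn

class DataMUXSketch(nn.Module):
    """Toy data multiplexing: N inputs share a single backbone pass."""

    def __init__(self, dim, num_instances, backbone):
        super().__init__()
        self.backbone = backbone  # any model mapping (B, T, dim) -> (B, T, dim)
        # One fixed random projection per multiplexed instance (kept frozen).
        self.mux = nn.Parameter(
            torch.randn(num_instances, dim, dim) / dim ** 0.5, requires_grad=False)
        # One head per instance to demultiplex the shared representation.
        self.demux = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_instances))

    def forward(self, xs):  # xs: (N, B, T, dim), one slice per input instance
        mixed = torch.stack([x @ P for x, P in zip(xs, self.mux)]).mean(0)
        h = self.backbone(mixed)                 # a single forward pass covers all N
        return [head(h) for head in self.demux]  # N per-instance outputs
```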

AI Machine Learning & Data Science Research

Introducing Alpa: A Compiler Architecture for Automated Model-Parallel Distributed Training That Outperforms Hand-Tuned Strategies

A research team from UC Berkeley, Amazon Web Services, Google, Shanghai Jiao Tong University and Duke University proposes Alpa, a compiler system for distributed deep learning on GPU clusters. Alpa automatically generates parallelization plans that match or outperform hand-tuned model-parallel training systems, even on the models those systems were designed for.
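
In practice, Alpa exposes this through a decorator on an ordinary JAX training step. The sketch below follows the interface shown in the project's documentation; the model state and batch are assumed to be a Flax-style train state and a dict of arrays.

```python
import alpa
import jax
import jax.numpy as jnp

# Decorating a plain JAX train step lets Alpa's compiler search inter- and
# intra-operator parallelization plans for the cluster automatically.
@alpa.parallelize
def train_step(state, batch):
    def loss_fn(params):
        preds = state.apply_fn(params, batch["x"])
        return jnp.mean((preds - batch["y"]) ** 2)
    grads = jax.grad(loss_fn)(state.params)
    return state.apply_gradients(grads=grads)
```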

AI Machine Learning & Data Science Research

Less is More: Understanding Neural Network Decisions via Simplified Yet Informative Inputs

A research team from University Medical Center Freiburg, ML Collective, and Google Brain introduces SimpleBits — an information-reduction method that learns to synthesize simplified inputs that contain less information yet remain informative for the task, providing a new approach for exploring the basis of network decisions.
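
A rough sketch of the objective, assuming a frozen classifier `model` and some differentiable `bits_per_dim` estimator standing in for the paper's learned compression model: the simplified input is optimized to stay predictive while shedding information.

```python
import torch
import torch.nn.functional as F

def simplify_input(x, y, model, bits_per_dim, lam=1.0, steps=200, lr=0.05):
    """Optimize a simplified version of x: low information, same prediction.

    `model` is a frozen task network; `bits_per_dim` is any differentiable
    information estimate (the paper uses a learned compression model).
    """
    s = x.clone().requires_grad_(True)  # simplified input, initialized at x
    opt = torch.optim.Adam([s], lr=lr)
    for _ in range(steps):
        loss = F.cross_entropy(model(s), y) + lam * bits_per_dim(s)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return s.detach()
```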

AI Machine Learning & Data Science Natural Language Tech Research

Peng Cheng Laboratory & Baidu Release PCL-BAIDU Wenxin: The World’s First Knowledge-Enhanced 100-Billion-Scale Pretrained Language Model

Peng Cheng Laboratory (PCL) and Baidu release PCL-BAIDU Wenxin, the world’s first knowledge-enhanced 100-billion-scale pretrained language model and, at 260 billion parameters, the largest Chinese-language monolithic model. PCL-BAIDU Wenxin achieves state-of-the-art results on more than 60 tasks and significantly advances the state of the art on more than 30 zero-shot and few-shot learning benchmarks.

AI Machine Learning & Data Science Research

Washington U & Google Study Reveals How Attention Matrices Are Formed in Encoder-Decoder Architectures

In the new paper Understanding How Encoder-Decoder Architectures Attend, researchers from the University of Washington, Google Blueshift Team and Google Brain Team propose a method for decomposing hidden states over a sequence into temporal- and input-driven components, revealing how attention matrices are formed in encoder-decoder networks.
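
The simplest version of such a decomposition can be written in a few lines. This sketch approximates the temporal component by averaging hidden states over many inputs at each position; the paper fits the two components more carefully, but the intuition is the same.

```python
import torch

def decompose_hidden_states(hidden):  # hidden: (num_inputs, seq_len, dim)
    """Split hidden states into a position-driven (temporal) part, shared
    across inputs, and an input-driven residual that varies with content."""
    temporal = hidden.mean(dim=0, keepdim=True)  # same for every input
    input_driven = hidden - temporal             # what the content adds
    return temporal, input_driven
```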

AI Machine Learning & Data Science Research

Deeper Is Not Necessarily Better: Princeton U & Intel’s 12-Layer Parallel Networks Achieve Performance Competitive With SOTA Deep Networks

In the new paper Non-deep Networks, a research team from Princeton University and Intel Labs argues it is possible to achieve high performance with “non-deep” neural networks, presenting ParNet (Parallel Networks), a novel 12-layer architecture that achieves performance competitive with its state-of-the-art deep counterparts.
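
The flavor of the design is easy to sketch in PyTorch: depth stays fixed while capacity comes from branches that run side by side and are fused at the end. This toy version keeps every stream at one resolution; the actual ParNet streams operate at different scales with RepVGG-style blocks.

```python
import torch
import torch.nn as nn

def stream(channels, n_blocks):
    """A shallow stack of conv blocks; several such streams run in parallel."""
    def block():
        return nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.SiLU())
    return nn.Sequential(*[block() for _ in range(n_blocks)])

class ParNetSketch(nn.Module):
    def __init__(self, channels=64, num_classes=10):
        super().__init__()
        self.stem = nn.Conv2d(3, channels, 3, stride=2, padding=1)
        self.streams = nn.ModuleList(stream(channels, 4) for _ in range(3))
        self.head = nn.Linear(channels, num_classes)

    def forward(self, x):
        x = self.stem(x)
        fused = sum(s(x) for s in self.streams)   # parallel branches, fused by sum
        return self.head(fused.mean(dim=(2, 3)))  # global average pool, classify
```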

AI Machine Learning & Data Science Research

100+ Stanford Researchers Publish 200+ Page Paper on the AI Paradigm Shift Introduced by Large-Scale Models

In a 200+ page paper, Percy Liang, Fei-Fei Li, and over 100 other researchers from the Stanford University Center for Research on Foundation Models (CRFM) systematically describe the opportunities and risks of large-scale pretrained “foundation” models. The study aims to provide a clearer understanding of how these models work, when and how they fail, and the capabilities that arise from their emergent properties.

AI Machine Learning & Data Science Research

Logic Explained Deep Neural Networks: A General Approach to Explainable AI

A research team from Università di Firenze, Università di Siena, the University of Cambridge and Université Côte d’Azur proposes a general approach to explainable artificial intelligence (XAI) for neural architectures, designing interpretable deep learning models called Logic Explained Networks (LENs). The novel approach yields better performance than established white-box models while providing more compact and meaningful explanations.
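
As a toy illustration of the kind of explanation a LEN produces, one can booleanize concept activations and read off the conjunction that best tracks a class. The real LENs learn such first-order logic rules end-to-end with entropy-based layers rather than via the simple correlation trick below; all names here are ours.

```python
import torch

def toy_rule(concepts, labels, names, k=2):
    """Read a k-term conjunction off booleanized concept activations.

    concepts: (N, C) activations in [0, 1]; labels: (N,) binary class labels;
    names: list of C human-readable concept names.
    """
    c = (concepts > 0.5).float()
    centered_labels = labels.float() - labels.float().mean()
    cov = ((c - c.mean(0)) * centered_labels.unsqueeze(1)).mean(0)
    top = cov.abs().topk(k).indices.tolist()
    terms = [names[i] if cov[i] > 0 else f"NOT {names[i]}" for i in top]
    return " AND ".join(terms)  # e.g. "has_wings AND NOT has_fur"
```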

AI Machine Learning & Data Science Research

DeepMind’s Perceiver IO: A General Architecture for a Wide Variety of Inputs & Outputs

A DeepMind research team proposes Perceiver IO, a single network that can easily integrate and transform arbitrary information for arbitrary tasks while scaling linearly with both input and output sizes. The general architecture achieves outstanding results on tasks with highly structured output spaces, such as natural language and visual understanding.
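
The core trick is that attention never pairs inputs with outputs directly; everything flows through a fixed-size latent array, so cost grows linearly in input length M and output length O. Below is a minimal sketch using stock PyTorch attention, our own simplification that omits the paper's positional encodings and feed-forward blocks.

```python
import torch
import torch.nn as nn

class PerceiverIOSketch(nn.Module):
    def __init__(self, dim=256, num_latents=128, heads=8, depth=6):
        super().__init__()
        self.latents = nn.Parameter(torch.randn(num_latents, dim))
        self.encode = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.process = nn.ModuleList(
            nn.MultiheadAttention(dim, heads, batch_first=True)
            for _ in range(depth))
        self.decode = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, inputs, queries):  # inputs: (B, M, dim), queries: (B, O, dim)
        z = self.latents.unsqueeze(0).expand(inputs.size(0), -1, -1)
        z, _ = self.encode(z, inputs, inputs)   # read inputs into latents: O(M)
        for attn in self.process:
            z = z + attn(z, z, z)[0]            # latent self-attention: O(latents^2)
        out, _ = self.decode(queries, z, z)     # produce outputs from queries: O(O)
        return out
```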

AI Machine Learning & Data Science Popular Research

ETH Zürich Identifies Priors That Boost Bayesian Deep Learning Models

A research team from ETH Zürich presents an overview of priors for (deep) Gaussian processes, variational autoencoders and Bayesian neural networks. The researchers argue that well-chosen priors can deliver desirable theoretical and empirical properties, such as reliable uncertainty estimation, principled model selection and optimal decision support, and they offer guidance on how to choose such priors.
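
For Bayesian neural networks, the role of the prior is easiest to see in the unnormalized log posterior. The sketch below uses the common default of an isotropic Gaussian over weights; the survey's point is precisely that this default can be swapped for better-suited choices, such as heavier-tailed or correlated priors.

```python
import torch
from torch import distributions as D

def log_posterior(model, x, y, prior_std=1.0):
    """Unnormalized log posterior of a classifier BNN: likelihood + prior.

    `model` maps inputs x to class logits; y holds integer class labels.
    Changing `prior` is exactly the design choice the overview studies.
    """
    log_lik = D.Categorical(logits=model(x)).log_prob(y).sum()
    prior = D.Normal(0.0, prior_std)
    log_prior = sum(prior.log_prob(p).sum() for p in model.parameters())
    return log_lik + log_prior
```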