Tag: GPT

AI Machine Learning & Data Science Research

AI Needs a Therapist: Columbia U & IBM’s SafeguardGPT Leverages Psychotherapy & RL to Build Healthy AI Systems

In the new paper Towards Healthy AI: Large Language Models Need Therapists Too, a team from Columbia University and IBM Research proposes SafeguardGPT, a framework that incorporates psychotherapy and reinforcement learning to correct the potentially harmful behaviours of AI chatbots.

AI Machine Learning & Data Science Natural Language Tech Research

OpenAI, OpenResearch & UPenn Paper Considers How GPTs Will Impact the US Labour Market

In the new paper GPTs are GPTs: An Early Look at the Labor Market Impact Potential of Large Language Models, a research team from OpenAI, OpenResearch, and the University of Pennsylvania investigates the potential impact of LLMs like GPT on the US labour market, shedding light on the economic, social, and policy implications.

AI Machine Learning & Data Science Research

Columbia U’s ViperGPT Solves Complex Visual Queries via Python Execution

In the new paper ViperGPT: Visual Inference via Python Execution for Reasoning, a Columbia University research team presents ViperGPT, a framework that solves complex visual queries by composing vision models with code-generation models via a Python interpreter. The proposed approach requires no further training and achieves state-of-the-art results.

AI Machine Learning & Data Science Research

Introducing SpikeGPT: UCSC & Kuaishou’s LLM With Spiking Neural Networks Slashes Language Generation Costs

In the new paper SpikeGPT: Generative Pre-trained Language Model with Spiking Neural Networks, a research team from the University of California, Santa Cruz and Kuaishou Technology presents SpikeGPT, the first generative spiking neural network language model. The team’s largest, 260M-parameter version achieves DNN-level performance while maintaining the energy efficiency of spike-based computations.

AI Machine Learning & Data Science Natural Language Tech Popular Research

MIT, Northeastern & Technion Propose ROME for Efficient Locating and Editing of Factual Associations in GPT Models

In the new paper Locating and Editing Factual Associations in GPT, a research team from MIT CSAIL, Northeastern University and Technion IIT examines how information flows during knowledge recall in large autoregressive transformers and introduces Rank-One Model Editing (ROME), a simple, principled zero-shot model editor capable of locating and editing factual associations in such models.