AI Machine Learning & Data Science Research

Revolutionizing Optimization: DeepMind Leverages Large Language Models as Intelligent Optimizers

In the new paper Large Language Models as Optimizers, a Google DeepMind research team introduces Optimization by PROmpting (OPRO), a method that leverages large language models (LLMs) as optimizers: the LLM generates candidate solutions conditioned on a natural-language description of the optimization task.

Optimization plays a pivotal role in a diverse array of real-world applications. Nevertheless, traditional optimization algorithms often demand substantial manual intervention to tailor them to specific tasks, grappling with the intricacies posed by the decision space and performance landscape.

To tackle this challenge, the Google DeepMind team proposes Optimization by PROmpting (OPRO), introduced in their recent paper “Large Language Models as Optimizers.” OPRO harnesses large language models (LLMs) as optimizers, generating optimization solutions from a natural-language description of the task at hand.

The ability of LLMs to comprehend natural language opens up the possibility of generating optimization solutions directly from a problem’s verbal description. Rather than following the traditional approach of defining the optimization problem formally and employing a programmed solver to derive update steps, the researchers guide the optimization process by instructing the LLM to iteratively generate new solutions based on the natural-language problem description and previously discovered solutions.

At the core of the OPRO framework is a meta-prompt that contains both a description of the optimization problem and previously evaluated solutions along with their scores. This meta-prompt serves as input to the LLM, which generates new candidate solutions conditioned on the provided information. The newly generated solutions are then evaluated and added back into the meta-prompt for the next optimization iteration. This process continues until the LLM can no longer propose solutions with higher scores or the maximum number of optimization steps is reached. For prompt optimization in particular, the objective is to find an instruction that maximizes task accuracy.
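The iterative loop described above can be sketched in a few lines of Python. Everything in this sketch is illustrative rather than taken from the paper’s implementation: `score` stands in for the task evaluator (e.g., accuracy on a held-out training split), and `llm_propose` stands in for an actual LLM call that reads the meta-prompt and returns new candidate instructions.

```python
def score(prompt: str) -> int:
    """Toy stand-in for the task evaluator (e.g., accuracy on GSM8K).
    A real evaluator would run the prompt on training examples."""
    return sum(ord(c) for c in prompt) % 100


def llm_propose(meta_prompt: str) -> list[str]:
    """Toy stand-in for the optimizer LLM. A real call would send the
    meta-prompt to a model and parse its proposed instructions."""
    return [
        "Let's think step by step.",
        "Take a deep breath and work on this problem step-by-step.",
        "Break the problem into parts and solve each part.",
    ]


def opro(task_description: str, steps: int = 5, keep_top: int = 20) -> str:
    """Minimal OPRO-style loop: build a meta-prompt from the task
    description and the best solutions seen so far, ask the LLM for new
    candidates, evaluate them, and repeat."""
    solutions: list[tuple[int, str]] = []  # (score, prompt) pairs
    for _ in range(steps):
        history = "\n".join(
            f"{p!r}: {s}" for s, p in sorted(solutions)
        )
        meta_prompt = (
            f"{task_description}\n\n"
            f"Previously evaluated prompts (score):\n{history}\n\n"
            "Propose a new prompt that achieves a higher score."
        )
        for candidate in llm_propose(meta_prompt):
            solutions.append((score(candidate), candidate))
        # Keep only the highest-scoring solutions in the meta-prompt.
        solutions = sorted(solutions, reverse=True)[:keep_top]
    return solutions[0][1]  # best instruction found
```

One detail worth noting: keeping scored solutions sorted in the meta-prompt lets the LLM observe the trajectory of improvement, which the paper identifies as important for the optimizer to propose better candidates.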

In their empirical investigation, the research team evaluated the OPRO framework across various LLMs, including text-bison, PaLM 2-L, gpt-3.5-turbo, and gpt-4. On small-scale traveling salesman problems, OPRO performed on par with hand-crafted heuristic algorithms, and on GSM8K and Big-Bench Hard the prompts it optimized surpassed human-designed prompts by a substantial margin, in some cases by over 50%.

The paper Large Language Models as Optimizers is available on arXiv.

Author: Hecate He | Editor: Chain Zhang

