Synced

Can GRPO Be 10x More Efficient? Kwai AI Suggests Yes with SRPO

The remarkable success of OpenAI’s o1 series and DeepSeek-R1 has unequivocally demonstrated the power of large-scale reinforcement learning (RL) in eliciting sophisticated reasoning behaviors and significantly enhancing the capabilities of large language models (LLMs).

However, the core training methodologies behind these groundbreaking reasoning models often remain veiled in their technical reports. Recent community efforts have predominantly focused on mathematical reasoning, leaving the challenge of cross-domain generalization largely unexplored. Furthermore, standard Group Relative Policy Optimization (GRPO) training is plagued by common issues such as performance bottlenecks, inefficient sample utilization, and difficulty cultivating specialized reasoning skills on mixed-domain datasets. These challenges complicate the effective scaling of RL methods for LLMs.

Addressing these limitations, researchers from the Kwaipilot team at Kuaishou have introduced a novel reinforcement learning framework: Two-Staged history-Resampling Policy Optimization (SRPO). This innovative approach is designed to systematically tackle the aforementioned training challenges across multiple dimensions. The team has publicly released a technical report detailing the intricacies of their training method and has also open-sourced the SRPO-Qwen-32B model.

Notably, this work marks the first instance of achieving DeepSeek-R1-Zero-level performance concurrently in both mathematical and code domains. By leveraging the same base model as DeepSeek (Qwen2.5-32B) and employing a purely reinforcement learning training approach, SRPO has achieved impressive results on the AIME24 (50) and LiveCodeBench (41.6) benchmarks, surpassing the performance of DeepSeek-R1-Zero-32B.

Even more remarkably, SRPO achieves this level of performance with only one-tenth of the training steps required by R1-Zero.

Challenges with Vanilla GRPO

In their initial explorations, the Kwaipilot team experimented with the standard GRPO algorithm. However, they quickly encountered bottlenecks that prevented the model from reaching the desired R1-Zero performance levels, including conflicts between the mathematical and code domains and groups of rollouts with identical rewards that contributed little gradient signal.
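The identical-reward problem follows directly from GRPO's group-relative advantage, which normalizes each rollout's reward by its group's statistics. A minimal sketch (not the Kwaipilot implementation) of why a zero-variance group yields no learning signal:

```python
from statistics import mean, pstdev

def group_advantages(rewards, eps=1e-8):
    """GRPO-style group-relative advantages: normalize each rollout's
    reward by the mean and standard deviation of its sampled group."""
    mu = mean(rewards)
    sigma = pstdev(rewards)
    return [(r - mu) / (sigma + eps) for r in rewards]

# A group with mixed outcomes carries a useful signal...
print(group_advantages([1.0, 0.0, 1.0, 0.0]))
# ...but a group where every rollout earns the same reward produces
# all-zero advantages, so the policy-gradient update is uninformative.
print(group_advantages([1.0, 1.0, 1.0, 1.0]))
```

When the model consistently solves (or consistently fails) every rollout of a question, every advantage in that group is zero, which is exactly the wasted computation History Resampling later targets.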

Two-Staged Training

To address the inherent response-length conflicts between the mathematical and code domains, the Kwaipilot team implemented a two-stage training paradigm: the first stage trains on mathematical data alone to elicit long-form reasoning, and the second stage introduces code data to extend those reasoning abilities to programming tasks.
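The staged data schedule can be sketched as follows; the function name and the simple concatenation in stage two are illustrative assumptions, since the report does not specify the exact mixing ratio:

```python
def build_stage_dataset(math_data, code_data, stage):
    """Two-staged training schedule (illustrative sketch).

    Stage 1: mathematical data only, to elicit long-form reasoning.
    Stage 2: code data is added so the learned reasoning transfers
    to programming tasks.
    """
    if stage == 1:
        return list(math_data)
    return list(math_data) + list(code_data)
```

Keeping code out of stage one avoids the length conflict the article describes: code rewards would otherwise pull against the growth of long mathematical reasoning chains early in training.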

Comparative Analysis of Training Strategies

The team also analyzed how different training-data strategies affect response length, comparing mixed-domain training against the staged approach.

History Resampling

The Kwaipilot team observed that during the mid-to-late stages of training, nearly 50% of the sampled groups within a batch produced identical rewards. This often occurred when the model consistently succeeded on easier problems, leading to minimal reward variance and ineffective gradient updates.

To address this inefficiency and improve the quality of the gradient signal, they introduced History Resampling. During training, they recorded the reward outcomes of all rollouts within each epoch. At the end of an epoch, they reconstructed the dataset for the next epoch, filtering out samples whose rollouts were uniformly correct (and thus carried no reward variance) while retaining those that still produced an informative gradient signal.

Compared to the Dynamic Sampling method proposed in DAPO, History Resampling significantly improved computational efficiency and resulted in more stable response length growth.
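The epoch-end resampling step can be sketched as below. Dropping all-correct samples is grounded in the article's observation about easy problems; keeping consistently failed samples (on the assumption they may become solvable as the policy improves) is an illustrative choice, as are the function and field names:

```python
def resample_history(epoch_records):
    """History Resampling (illustrative sketch): rebuild the next
    epoch's dataset from the rewards recorded for each sample's
    rollouts during the finished epoch.

    epoch_records: dict mapping sample id -> list of rollout rewards.
    """
    next_epoch = []
    for sample_id, rewards in epoch_records.items():
        if len(set(rewards)) > 1:
            # Mixed outcomes -> nonzero reward variance -> informative
            # group-relative advantages; keep the sample.
            next_epoch.append(sample_id)
        elif not any(r > 0 for r in rewards):
            # Consistently failed samples are retained here on the
            # assumption they become solvable later; uniformly correct
            # (too easy) samples are the ones dropped.
            next_epoch.append(sample_id)
    return next_epoch
```

Because filtering happens once per epoch from already-recorded rewards, it adds no extra rollouts at sampling time, which is the computational advantage over on-the-fly schemes such as DAPO's Dynamic Sampling.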

Data

The Kwaipilot team performed meticulous cleaning and filtering on publicly available code and math datasets. They applied heuristic rules to filter out irrelevant URLs and formatting noise, and verified the completeness of core fields (question and ground-truth answer) in the original data. Following the data cleaning approach of PRIME for mathematical data, they removed multi-part questions, pure proof-based problems, and those requiring image or table understanding. For code data, they excluded problems dependent on specific environments, file I/O, or network interactions, focusing on algorithmic logic.
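A filter in the spirit of these heuristics might look like the following; the exact regexes, field names, and rules are illustrative assumptions, not Kwaipilot's published pipeline:

```python
import re

URL_RE = re.compile(r"https?://\S+")

def keep_math_sample(sample):
    """Heuristic cleaning filter (illustrative sketch) mirroring the
    rules described above for mathematical data."""
    q = sample.get("question")
    a = sample.get("answer")
    if not q or not a:                        # core fields must be complete
        return False
    if URL_RE.search(q):                      # irrelevant URL noise
        return False
    if re.search(r"\bprove\b", q, re.IGNORECASE):
        return False                          # pure proof-based problem
    if "(a)" in q and "(b)" in q:             # crude multi-part detection
        return False
    if re.search(r"\b(figure|table|diagram)\b", q, re.IGNORECASE):
        return False                          # needs image/table understanding
    return True
```

Filters like these matter in a pure-RL setting because the reward is computed against the ground-truth answer: a sample with a missing, unverifiable, or proof-style answer cannot yield a reliable reward.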

Before data ingestion, they conducted correctness verification for both math and code problems to ensure the accuracy and solvability of the answers, discarding those with incorrect or ambiguous solutions. Subsequently, they assessed the difficulty of each problem, categorizing them into easy, medium, and hard levels based on their pass rate (Pass@k).
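The difficulty categorization can be sketched as a simple bucketing over the empirical pass rate; the 0.7 and 0.2 cutoffs below are assumptions for illustration, as the report does not specify its thresholds:

```python
def difficulty_bucket(num_passed, k, easy_cut=0.7, hard_cut=0.2):
    """Categorize a problem by the fraction of k sampled solutions
    that pass verification.  Threshold values are illustrative."""
    pass_rate = num_passed / k
    if pass_rate >= easy_cut:
        return "easy"
    if pass_rate >= hard_cut:
        return "medium"
    return "hard"

difficulty_bucket(15, 16)  # high pass rate -> "easy"
```

Grading difficulty this way, from the model's own pass rate rather than human labels, keeps the labels calibrated to the actual policy being trained.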

Experimental Results

This section details the experimental results obtained using the SRPO method. The Kwaipilot team focused on observing the changes in reward and metrics such as response length during training.

Training Process

The figure above illustrates the complete reward curve and response length curve during SRPO training. After the initial reward growth began to plateau, the training transitioned into the second stage. At the beginning of the second stage, the overall reward decreased due to the model’s prior lack of training on code, followed by a steady increase in reward during subsequent training. Integrating code data did not significantly increase the response length, which aligned with their expectations. Simultaneously, benchmark results indicated a continuous and stable improvement in both the mathematical and coding abilities of the model, demonstrating the effectiveness of the new method.

Specifically, History Resampling ensured that gradient updates remained effective at each training step, directly increasing the proportion of informative gradients. This enhanced sampling efficiency led to stable reward growth, clearly showcasing the improved training efficiency achieved by the resampling strategy.

Reasoning Behaviors

The Kwaipilot team identified three representative reflective patterns: recheck, hesitation, and exploration. They statistically analyzed responses containing these patterns and recorded the average response length for each. During RL training, they observed a gradual increase in the frequency of the model’s self-reflection, correction, and backtracking, indicating the emergence of a “self-verification” ability. They posit that this emergence of “reflection,” akin to human cognitive processes, is an adaptive behavior arising from the policy optimization process rather than something explicitly rewarded.
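A keyword-based tally of this kind can be sketched as below; the marker phrases are illustrative guesses, since the report does not publish its exact matching rules:

```python
from statistics import mean

# Marker phrases per reflective pattern (illustrative, not Kwaipilot's).
PATTERNS = {
    "recheck": ["let me recheck", "double-check", "verify again"],
    "hesitation": ["wait,", "hmm", "actually,"],
    "exploration": ["alternatively", "another approach", "let's try"],
}

def reflection_stats(responses):
    """Count responses containing each reflective pattern and record
    the average length of the matching responses."""
    stats = {}
    for name, markers in PATTERNS.items():
        hits = [r for r in responses
                if any(m in r.lower() for m in markers)]
        stats[name] = {
            "count": len(hits),
            "avg_len": mean(len(r) for r in hits) if hits else 0.0,
        }
    return stats
```

Tracking these counts over training checkpoints is what reveals the trend the article describes: the frequency of reflective markers rises as RL progresses.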

As shown in the figure above, the model exhibited almost no proactive checking and reflection of previous reasoning steps in the early stages of training. However, as training progressed, the model displayed significant reflective and backtracking behaviors, forming response patterns such as step-by-step reasoning, numerical substitution, step-by-step verification, and self-optimization.

Interestingly, they also discovered that the model learned to spontaneously use program code for verification when solving mathematical problems. It would first provide a solution process through mathematical reasoning and then proactively write program code to verify the correctness of the solution. These instances demonstrated the model’s ability to leverage procedural thinking for self-correction and multiple attempts, further indicating that in the later stages of training, the model had mastered broad thinking and the integrated application of various code-based reasoning approaches for problem-solving.

The paper “SRPO: A Cross-Domain Implementation of Large-Scale Reinforcement Learning on LLM” is on arXiv.

Try the SRPO-Qwen-32B model on Hugging Face.
