Abstract
ReBalance is a training-free framework that balances reasoning in large models by using confidence indicators to detect and correct overthinking and underthinking behaviors through dynamic steering vectors.
Large Reasoning Models (LRMs) have shown remarkable reasoning capabilities, yet they often suffer from overthinking, expending redundant computational steps on simple problems, or underthinking, failing to explore sufficient reasoning paths despite having the inherent capability. These issues lead to inefficiencies and potential inaccuracies, limiting practical deployment in resource-constrained settings. Existing methods to mitigate overthinking, such as suppressing reflective keywords or adjusting reasoning length, may inadvertently induce underthinking, compromising accuracy. We therefore propose ReBalance, a training-free framework that achieves efficient reasoning with balanced thinking. ReBalance leverages confidence as a continuous indicator of reasoning dynamics, identifying overthinking through high confidence variance and underthinking through consistent overconfidence. By aggregating hidden states from a small-scale dataset into reasoning mode prototypes, we compute a steering vector to guide LRMs' reasoning trajectories. A dynamic control function modulates this vector's strength and direction based on real-time confidence, pruning redundancy during overthinking and promoting exploration during underthinking. Extensive experiments on four models ranging from 0.5B to 32B, across nine benchmarks in math reasoning, general question answering, and coding tasks, demonstrate that ReBalance effectively reduces output redundancy while improving accuracy, offering a general, training-free, and plug-and-play strategy for efficient and robust LRM deployment. Code is available at https://github.com/yu-lin-li/ReBalance.
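The pipeline the abstract describes can be sketched as follows. This is a minimal illustration, not the paper's implementation: all function names, the mean-pooling aggregation, and the threshold values are assumptions, and the exact definitions of the prototypes, confidence signal, and control function come from the paper itself.

```python
import numpy as np

def reasoning_mode_prototypes(hidden_over, hidden_under):
    """Aggregate hidden states (n_samples x d) collected from a small
    calibration set into per-mode prototypes via mean pooling (illustrative
    choice), and derive a steering vector between the two reasoning modes."""
    p_over = hidden_over.mean(axis=0)    # overthinking-mode prototype
    p_under = hidden_under.mean(axis=0)  # underthinking-mode prototype
    return p_over - p_under              # direction from under- to overthinking

def dynamic_strength(conf_history, var_thresh=0.02, conf_thresh=0.9, base=1.0):
    """Toy control function over real-time confidence:
    high confidence variance  -> overthinking  -> steer negatively (prune);
    consistently high confidence -> underthinking -> steer positively (explore).
    Thresholds are placeholders, not values from the paper."""
    conf = np.asarray(conf_history, dtype=float)
    if conf.var() > var_thresh:
        return -base   # suppress redundant reflection
    if conf.min() > conf_thresh:
        return +base   # encourage further exploration
    return 0.0         # balanced regime: leave the trajectory unchanged

def steer(hidden_state, steering_vec, conf_history):
    """Add the (signed, scaled) steering vector to the current hidden state."""
    return hidden_state + dynamic_strength(conf_history) * steering_vec
```

In a real deployment this kind of intervention is typically applied inside the model via forward hooks on selected layers; the sketch above only shows the arithmetic.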
Community
⚖️ Balanced Thinking: Mitigating Overthinking without Inducing Underthinking in Large Reasoning Models
Ever noticed how large reasoning models (LRMs) sometimes get lost in their thoughts, allocating redundant reasoning steps to simple problems? 🤔 This phenomenon, known as overthinking, has prompted recent efforts to shorten reasoning chains. However, we empirically reveal that many of these approaches inadvertently give rise to the opposite issue, underthinking, where models fail to sufficiently explore valid reasoning paths despite possessing the inherent capability to solve the problem. 😱
In this work, we introduce a fresh and interesting perspective called balanced thinking. By closely monitoring model confidence, we discovered a way to dynamically balance LRMs between overthinking and underthinking. Imagine a reasoning "GPS" that senses when the model drifts into redundant thoughts or rushes past critical details, smoothly guiding it back to optimal reasoning paths! 💡
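The "GPS" signal here is model confidence, tracked step by step during decoding. One common, simple proxy (an illustrative assumption, not necessarily the paper's exact definition) is the softmax probability of the sampled token at each step:

```python
import math

def token_confidence(logits):
    """Softmax probability of the most likely next token -- a simple
    per-step confidence proxy. Computed in a numerically stable way by
    subtracting the max logit before exponentiating."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    return max(exps) / sum(exps)
```

Tracking this value over the reasoning chain gives the confidence trajectory whose variance and level distinguish overthinking from underthinking.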
Our method, ReBalance, effectively reduces output redundancy while improving accuracy, as demonstrated on four models (0.5B–32B) across nine benchmarks spanning math reasoning, general QA, and coding tasks, offering a general, training-free, and plug-and-play strategy for efficient and robust LRM deployment.
Dive into our findings and see how balanced thinking is shaping smarter, faster, and more efficient AI reasoning!
Excited to discuss, hear your thoughts, and answer questions!