RL for Reasoning by Adaptively Revealing Rationales
Authors: Mohammad Hossein Amani†, Aryo Lotfi†, Nicolas Mario Baldwin†, Samy Bengio, Mehrdad Farajtabar, Emmanuel Abbé*, Robert West*†
We propose that reinforcement learning (RL) from partial expert demonstrations is not merely a training heuristic, but a promising framework for solving complex sequence generation tasks. Supervised fine-tuning (SFT) relies on dense ground-truth labels, which become increasingly costly as sequence length grows. RL, on the other hand, struggles with sparse rewards and a combinatorially large output space. We address this by introducing adaptive backtracking (AdaBack), a per-sample curriculum learning algorithm that reveals only a partial prefix of the target output during training. The supervision length is adjusted dynamically for each sample based on the model’s past reward signal, allowing it to incrementally learn to complete reasoning chains by conditioning on correct partial solutions. We investigate this intermediate regime between SFT and RL and argue that per-sample curriculum learning is more than a trade-off between efficiency and generality; it can succeed in tasks with long sequences of latent dependencies where both SFT and RL fail to generalize. Using a synthetic task with latent parity constraints, we show that our adaptive curriculum over partial answers reliably solves problems that are otherwise intractable. On mathematical reasoning benchmarks (MATH, GSM8k), we find that curriculum learning enables models to solve problems that RL alone cannot, acquiring new reasoning capabilities through incremental exposure to partial solutions.
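To make the mechanism concrete, here is a minimal Python sketch of a per-sample adaptive-backtracking curriculum in the spirit of the abstract above. The class name, the starting reveal fraction, and the fixed step-size update rule are illustrative assumptions, not the paper's exact algorithm.

```python
class AdaBackCurriculum:
    """Minimal sketch of a per-sample adaptive-backtracking curriculum.

    Each training sample tracks the fraction of the ground-truth rationale
    revealed as a prompt prefix. When the model earns reward on a sample,
    less of the answer is revealed next time (harder); when it fails, more
    is revealed (easier). The update rule below is an illustrative
    assumption, not the paper's exact algorithm.
    """

    def __init__(self, num_samples: int, step: float = 0.05):
        # Start by revealing the full target output (SFT-like regime).
        self.reveal_frac = [1.0] * num_samples
        self.step = step  # how fast the revealed prefix shrinks or grows

    def make_prompt(self, idx: int, question: str, target: str) -> str:
        # Condition on the question plus a partial expert rationale;
        # the model must complete the remaining suffix on its own.
        k = int(len(target) * self.reveal_frac[idx])
        return question + target[:k]

    def update(self, idx: int, reward: float) -> None:
        if reward > 0:  # solved: backtrack, reveal a shorter prefix
            self.reveal_frac[idx] = max(0.0, self.reveal_frac[idx] - self.step)
        else:           # failed: ease off, reveal a longer prefix
            self.reveal_frac[idx] = min(1.0, self.reveal_frac[idx] + self.step)
```

In this simplified view, a sample whose reveal fraction reaches 0.0 is trained in the pure-RL regime, while 1.0 matches SFT on full demonstrations, so the curriculum interpolates between the two independently for every sample.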
Interleaved Reasoning for Large Language Models via Reinforcement Learning
May 28, 2025 · Research areas: Knowledge Bases and Search; Speech and Natural Language Processing
Long chain-of-thought (CoT) significantly enhances the reasoning capabilities of large language models (LLMs). However, the extensive reasoning traces lead to inefficiencies and an increased time-to-first-token (TTFT). We propose a novel training paradigm that uses reinforcement learning (RL) to guide reasoning LLMs to interleave thinking and answering for multi-hop questions. We observe that models inherently possess the ability to perform interleaved…
Dynamic Curriculum Learning Via Data Parameters for Noise Robust Keyword Spotting
June 1, 2021 · Research area: Speech and Natural Language Processing · Conference: ICASSP
We propose dynamic curriculum learning via data parameters for noise-robust keyword spotting. Data parameter learning was recently introduced for image processing: scalar weights, so-called data parameters, are attached to target classes and instances and optimized along with model parameters. The data parameters scale logits and control the importance of classes and instances during training, which enables automatic curriculum…
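As a rough illustration of the logit-scaling mechanism described above, the PyTorch sketch below attaches a learnable temperature to each class and each training instance and divides the logits by it before the cross-entropy loss. The parameterization here (directly learned sigmas combined additively, rather than a log-space or other constrained form) is a simplifying assumption, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

# Toy sizes for the sketch; real values depend on the dataset and model.
num_classes, num_instances, dim = 10, 1000, 32

model = torch.nn.Linear(dim, num_classes)
# One learnable temperature per class and per training instance
# (the "data parameters"), optimized jointly with model weights.
class_sigma = torch.nn.Parameter(torch.ones(num_classes))
inst_sigma = torch.nn.Parameter(torch.ones(num_instances))

opt = torch.optim.SGD([*model.parameters(), class_sigma, inst_sigma], lr=0.1)

def loss_fn(x, y, idx):
    logits = model(x)
    # Scale logits by the combined temperature: a larger sigma flattens
    # the softmax, down-weighting hard or noisy samples early in training
    # and inducing an automatic curriculum. (Keeping sigma positive is
    # left implicit in this simplified sketch.)
    temp = class_sigma[y] + inst_sigma[idx]
    scaled = logits / temp.unsqueeze(1)
    return F.cross_entropy(scaled, y)

# One illustrative training step on random data.
x = torch.randn(8, dim)
y = torch.randint(0, num_classes, (8,))
idx = torch.randint(0, num_instances, (8,))
loss = loss_fn(x, y, idx)
loss.backward()
opt.step()
```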