GRAD: Generative Retrieval-Aligned Demonstration Sampler for Efficient Few-Shot Reasoning

Published in Findings of EMNLP 2025

Abstract

Large Language Models (LLMs) achieve strong performance across diverse tasks, but their effectiveness often depends on the quality of the provided context. Retrieval-Augmented Generation (RAG) enriches prompts with external information, but its reliance on static databases constrains adaptability and can result in irrelevant demonstrations. In this work, we propose the Generative Retrieval-Aligned Demonstrator (GRAD), a dynamic demonstration-based approach in which an LLM is trained to generate concise, input-specific demonstrations. By tailoring demonstrations to each input, our method offers better contextual support than traditional RAG approaches.

We demonstrate the superiority of GRAD under budget constraints, limiting both the number of tokens used per demonstration and the number of tokens used for the final output. Trained solely on a math dataset, GRAD consistently outperforms strong baselines on Qwen2.5-14B across mathematical reasoning and advanced STEM questions, highlighting its robust generalization to out-of-distribution (OOD) domains such as physics, chemistry, and computer science. Furthermore, we show that demonstrations generated by smaller trained models can effectively guide larger target models, reducing training costs while maintaining competitive accuracy.

Overall, this work introduces a scalable demonstration generator model and presents a first step toward a dynamic few-shot learning paradigm in resource-constrained settings. We release the code used for the project.
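
Below is a minimal, illustrative sketch of the inference-time pipeline described in the abstract: a small, trained demonstration generator produces a concise, input-specific demonstration under a token budget, and that demonstration is prepended to the target model's prompt before answering under an output token budget. It assumes a Hugging Face transformers setup; the generator checkpoint name (grad-demo-generator-1.5b), prompt templates, and budget values are hypothetical placeholders, not the released implementation.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

def load(name):
    """Load a tokenizer/model pair (assumes a causal LM checkpoint)."""
    tok = AutoTokenizer.from_pretrained(name)
    model = AutoModelForCausalLM.from_pretrained(name, torch_dtype=torch.bfloat16).to(DEVICE)
    return tok, model

def generate(tok, model, prompt, max_new_tokens):
    """Greedy generation truncated to a fixed token budget."""
    inputs = tok(prompt, return_tensors="pt").to(DEVICE)
    out = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False)
    # Keep only the newly generated continuation, not the echoed prompt.
    return tok.decode(out[0][inputs["input_ids"].shape[1]:], skip_special_tokens=True)

# Small trained demonstration generator (hypothetical checkpoint name) and the
# larger target model evaluated in the paper.
demo_tok, demo_model = load("grad-demo-generator-1.5b")   # assumption
tgt_tok, tgt_model = load("Qwen/Qwen2.5-14B-Instruct")    # target model

question = "A train travels 120 km in 1.5 hours. What is its average speed in km/h?"

# Step 1: generate a concise, input-specific demonstration under a demonstration budget.
DEMO_BUDGET = 128   # tokens per demonstration (assumed value)
demo_prompt = f"Write one short worked example similar to this problem.\nProblem: {question}\nExample:"
demo = generate(demo_tok, demo_model, demo_prompt, DEMO_BUDGET)

# Step 2: prepend the demonstration to the target prompt and answer under an output budget.
OUTPUT_BUDGET = 256  # tokens for the final answer (assumed value)
answer = generate(tgt_tok, tgt_model,
                  f"{demo}\n\nQuestion: {question}\nAnswer:", OUTPUT_BUDGET)
print(answer)
```

In this sketch the demonstration generator can be much smaller than the target model, which is what makes the cross-model transfer result practical: only the generator needs to be trained.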

Key Contributions

  • Dynamic Demonstrations: GRAD generates input-specific demonstrations tailored to each query, offering better contextual support than static RAG approaches.
  • Budget-Constrained Performance: Demonstrates superior performance under token constraints for both demonstrations and final outputs.
  • Strong OOD Generalization: Trained on math datasets, GRAD generalizes effectively to physics, chemistry, and computer science domains.
  • Cross-Model Transfer: Smaller trained models can guide larger target models, reducing training costs while maintaining accuracy.
  • Scalable Few-Shot Learning: Introduces a practical framework for dynamic few-shot learning in resource-constrained settings.

Publication Details