Researchers have uncovered a paradox in supervised fine-tuning (SFT) of large reasoning models: when models are fine-tuned on Chain-of-Thought (CoT) trajectories from different sources, lower training loss does not guarantee better generalization. The finding matters for developers because it points to the reasoning patterns in the data, rather than raw training efficiency, as the driver of downstream performance, and it suggests that divergent exploration within a trajectory can impede model effectiveness. Filtering inefficient trajectory branches out of `DeepSeek-R1-0528` data improves generalization by up to 5.1% on certain benchmarks, a practical approach to enhancing model performance.
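
To make the filtering idea concrete, here is a minimal sketch of trajectory-level filtering applied before SFT. The paper's actual selection criterion is not described in this summary; the `BRANCH_MARKERS` heuristic, the `branch_density` score, and the `max_density` threshold below are all illustrative assumptions, not the authors' method.

```python
import re

# Hypothetical markers of divergent exploration in a CoT trace; the
# criterion used in the paper may differ.
BRANCH_MARKERS = re.compile(
    r"\b(?:wait|alternatively|let me try again|on second thought)\b",
    re.IGNORECASE,
)

def branch_density(trace: str) -> float:
    """Exploration markers per 100 whitespace-delimited tokens."""
    tokens = trace.split()
    if not tokens:
        return 0.0
    return 100.0 * len(BRANCH_MARKERS.findall(trace)) / len(tokens)

def filter_trajectories(examples: list[dict], max_density: float = 1.0) -> list[dict]:
    """Keep trajectories whose branch density stays below the threshold.

    Each example is assumed to carry its raw Chain-of-Thought text under
    a "cot" key; the default threshold is illustrative, not from the paper.
    """
    return [ex for ex in examples if branch_density(ex["cot"]) <= max_density]

# Usage: the surviving examples feed a standard SFT pipeline unchanged.
data = [
    {"cot": "Compute 17 * 4. 17 * 4 = 68, so the answer is 68.", "answer": "68"},
    {"cot": "17 * 4... wait, let me try again. Alternatively, 17 * 4 = 68? "
            "On second thought, yes, 68.", "answer": "68"},
]
print(len(filter_trajectories(data)))  # -> 1; the divergent trace is dropped
```

Only the data-curation step changes under this scheme; the fine-tuning loop itself is untouched, which is what makes trajectory filtering cheap to adopt.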
Read the full paper on arXiv (cs.CL).