Research
Reusing LLM Reasoning Traces
Zhengxi Li*, Fuyuan Lyu*, Qiyuan Zhang, Ye Yuan, Haolun Wu, Xue Liu.
In ICLR 2026 Third Workshop on Test-Time Updates.
Studied whether small language models (SLMs) can reliably reuse the reasoning steps of large language models (LLMs), using accuracy and token usage as the core metrics across varying reasoning lengths and prompting mechanisms. Designed and implemented pipelines that supply an SLM with varying amounts of LLM reasoning steps and measure the resulting accuracy, running experiments across multiple models and datasets to analyze the performance-cost trade-off curve.
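The pipeline above can be sketched as follows. This is a minimal illustration, not the project's actual code: the function names (`truncate_trace`, `build_prompt`, `evaluate`) and the `slm_answer` callback standing in for a real model call are all hypothetical, and token usage is approximated by whitespace word count.

```python
def truncate_trace(steps, fraction):
    """Keep the first `fraction` of the LLM's reasoning steps (rounded down)."""
    k = int(len(steps) * fraction)
    return steps[:k]

def build_prompt(question, partial_trace):
    """Prepend the reused LLM steps so the SLM continues from them."""
    trace = "\n".join(partial_trace)
    return f"Question: {question}\nPartial reasoning:\n{trace}\nContinue and answer:"

def evaluate(question, gold, full_trace, fractions, slm_answer):
    """Collect (reuse fraction, correct?, prompt tokens) triples.

    `slm_answer` is a placeholder for querying a small language model;
    token usage is crudely proxied by word count.
    """
    results = []
    for f in fractions:
        prefix = truncate_trace(full_trace, f)
        prompt = build_prompt(question, prefix)
        answer = slm_answer(prompt)
        tokens = len(prompt.split())
        results.append((f, answer == gold, tokens))
    return results
```

Sweeping `fractions` from 0 to 1 and plotting accuracy against prompt tokens yields the performance-cost trade-off curve the entry describes.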