Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Critical Tokens Matter: Token-Level Contrastive Estimation Enhances LLM’s Reasoning Capability

Authors: Zicheng Lin, Tian Liang, Jiahao Xu, Qiuzhi Liu, Xing Wang, Ruilin Luo, Chufan Shi, Siheng Li, Yujiu Yang, Zhaopeng Tu

ICML 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Through extensive experiments on datasets such as GSM8K and MATH500, we show that identifying and replacing critical tokens significantly improves model accuracy. Experimental results on GSM8K and MATH500 benchmarks with the widely used models Llama-3 (8B and 70B) and Deepseek-math (7B) demonstrate the effectiveness of the proposed approach, cDPO."
Researcher Affiliation | Collaboration | "1 Tsinghua University, 2 Tencent."
Pseudocode | No | The paper describes methods and a pipeline diagram (Figure 3) but does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository for the methodology described.
Open Datasets | Yes | "We used two widely recognized math reasoning datasets: GSM8K (Cobbe et al., 2021) and MATH (Hendrycks et al., 2021)."
Dataset Splits | Yes | "For training, we sampled from all questions in the training set to generate the data. For evaluation, we utilized the MATH500 subset, which is uniformly sampled and has a distribution of difficulty levels and subjects that matches the full MATH test set, as demonstrated in Lightman et al. (2023)."
Hardware Specification | No | The paper mentions the models used (e.g., Llama-3-8B, Deepseek-math-7B) but does not provide specific details about the hardware (e.g., GPU models, CPU types) on which the experiments were conducted.
Software Dependencies | No | The paper mentions using LoRA adapters for training and references the implementation of TDPO, but does not provide version numbers for any software dependencies such as programming languages, libraries, or frameworks used in the experiments.
Experiment Setup | Yes | "We trained both positive and negative models for 1 epoch with a learning rate of 3e-4. For preference optimization training, we set γ = 1.0 and trained for 3 epochs with a learning rate of 2e-5 for all baseline methods. For our cDPO approach, since the token-level scores range between 0 and 1 (whereas in DPO, the scores were all 1), we simply increased the learning rate to 4e-5. For our proposed cDPO, each problem was sampled N = 64 times, selecting the top p = 50% of incorrect trajectories to train the negative model q(·). During estimation, the hyperparameter β was set to 1.0."
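The quoted experiment setup can be collected into a single configuration sketch. The sketch below is illustrative only: the field names, the dictionary structure, and the `token_contrastive_score` function are our own, and the score function is a plausible sigmoid-of-log-ratio form consistent with "token-level scores range between 0 and 1", not the authors' exact formula. The numeric values are taken directly from the quoted setup.

```python
import math

# Hedged sketch: hyperparameter values quoted from the paper's experiment
# setup; field names and grouping are illustrative, not the authors' code.
CDPO_SETUP = {
    "contrastive_models": {          # positive and negative token-level models
        "epochs": 1,
        "learning_rate": 3e-4,       # trained with LoRA adapters per the paper
    },
    "preference_optimization": {
        "gamma": 1.0,
        "epochs": 3,
        "learning_rate_baselines": 2e-5,
        "learning_rate_cdpo": 4e-5,  # raised because token scores lie in [0, 1]
    },
    "negative_model_data": {
        "samples_per_problem": 64,   # N = 64 sampled trajectories per problem
        "top_p_incorrect": 0.50,     # keep top 50% of incorrect trajectories
    },
    "estimation": {
        "beta": 1.0,                 # β used during token-score estimation
    },
}


def token_contrastive_score(logp_pos: float, logp_neg: float,
                            beta: float = 1.0) -> float:
    """Illustrative token-level contrastive score in (0, 1).

    NOT the authors' exact formula: a sigmoid of the β-scaled log-likelihood
    gap, higher when the negative model q(·) prefers the token relative to
    the positive model, which is one way a "critical token" could be flagged.
    """
    return 1.0 / (1.0 + math.exp(-beta * (logp_neg - logp_pos)))
```

A token assigned equal log-likelihood by both models receives a score of 0.5 under this sketch; tokens the negative model favors score closer to 1.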