Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Synthetic Text Generation for Training Large Language Models via Gradient Matching

Authors: Dang Nguyen, Zeman Li, Mohammadhossein Bateni, Vahab Mirrokni, Meisam Razaviyayn, Baharan Mirzasoleiman

ICML 2025

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Experiments on various classification tasks confirm the effectiveness of our proposed approach. Our code is available at https://github.com/BigML-CS-UCLA/GRADMM. |
| Researcher Affiliation | Collaboration | (1) Department of Computer Science, University of California, Los Angeles; (2) Google Research; (3) Department of Industrial & Systems Engineering, University of Southern California. Correspondence to: Dang Nguyen <EMAIL>. |
| Pseudocode | Yes | Algorithm 1 GRADient matching w. ADMM (GRADMM) |
| Open Source Code | Yes | Experiments on various classification tasks confirm the effectiveness of our proposed approach. Our code is available at https://github.com/BigML-CS-UCLA/GRADMM. |
| Open Datasets | Yes | We apply GRADMM to different text classification datasets including SST-2 movie reviews (Socher et al., 2013), Tweet emotions (Mohammad et al., 2018), and Rotten tomatoes (Pang & Lee, 2005b). |
| Dataset Splits | No | The paper describes generating synthetic data from a small number of validation examples (e.g., "generating 100 synthetic examples based on only 5, 10, 20, 50 examples randomly selected from the validation data"), but it does not specify the train/validation/test splits for the original benchmark datasets used (SST-2, Tweet emotions, Rotten tomatoes, IMDB, Sentence polarity). |
| Hardware Specification | Yes | At the same time, it reduces the generation memory by 2.6x (from 44.6G to 17.3G) and reduces the generation time by 2.3x (from 4.6 hours to 2 hours on one H100 GPU). |
| Software Dependencies | No | The paper mentions using the "Phi model" and the "Adam optimizer (Kingma, 2014)" but does not provide version numbers for any software libraries or dependencies used in the implementation. |
| Experiment Setup | Yes | We fine-tune each model for 200 steps with Adam optimizer (Kingma, 2014) and batch size of 16. The learning rate follows a linear scheduler with the initial learning rate selected from {7e-6, 1e-5, 1.5e-5}. |
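The "Pseudocode" row refers to GRADMM's core idea: optimizing synthetic examples so that their training gradients match those of real data. The paper's actual ADMM procedure over discrete text is not reproduced here; the sketch below only illustrates a generic gradient-matching objective on a continuous logistic-regression surrogate, with all names, shapes, and the cosine-distance choice being illustrative assumptions rather than the paper's implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def logreg_grad(w, X, y):
    """Gradient of the average logistic loss w.r.t. weights w on batch (X, y)."""
    preds = sigmoid(X @ w)
    return X.T @ (preds - y) / len(y)

def gradient_matching_loss(w, X_real, y_real, X_syn, y_syn):
    """Cosine distance between the gradient on a real batch and the gradient
    on a synthetic batch. A gradient-matching method drives this toward 0 by
    optimizing the synthetic data (GRADMM does so with ADMM over text)."""
    g_real = logreg_grad(w, X_real, y_real)
    g_syn = logreg_grad(w, X_syn, y_syn)
    cos = g_real @ g_syn / (np.linalg.norm(g_real) * np.linalg.norm(g_syn) + 1e-12)
    return 1.0 - cos

# Toy data; dimensions are arbitrary choices for illustration.
rng = np.random.default_rng(0)
w = rng.normal(size=8)
X = rng.normal(size=(16, 8))
y = (rng.random(16) > 0.5).astype(float)
X_syn = rng.normal(size=(16, 8))
y_syn = (rng.random(16) > 0.5).astype(float)

# Identical batches produce identical gradients, so the loss is near 0;
# unrelated batches typically give a larger value in [0, 2].
print(gradient_matching_loss(w, X, y, X, y))
print(gradient_matching_loss(w, X, y, X_syn, y_syn))
```

In the full method, this matching objective would be minimized with respect to the synthetic batch while the model weights are held fixed or resampled, which is what lets a small synthetic set stand in for the real training data.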