Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Better than Your Teacher: LLM Agents that Learn from Privileged AI Feedback

Authors: Sanjiban Choudhury, Paloma Sodhi

ICLR 2025 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental We evaluate LEAP on multiple decision-making benchmarks, including text-based games (ALFWorld), web navigation (WebShop), and interactive coding (InterCode Bash). Our experiments show that LEAP (1) outperforms behavior cloning and ReAct baselines, (2) enables weak student models (e.g., Llama3-8B) to exceed the performance of strong teacher models (GPT-4o), and (3) allows weak models to self-improve using privileged versions of themselves.
Researcher Affiliation Academia Sanjiban Choudhury¹, Paloma Sodhi¹. ¹Cornell University, NY, USA. Equal contribution. Correspondence to: Sanjiban <EMAIL>, Paloma <EMAIL>
Pseudocode Yes Algorithm 1 LEAP: Iterative Learning with Privileged Expert Teacher
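To make the row above concrete, here is a toy Python sketch of an iterative learning loop with a privileged teacher, in the spirit of what Algorithm 1's title describes. This is not the paper's algorithm: the function names, the table-lookup "policy", and the overwrite-as-fine-tuning step are all illustrative stand-ins (the real method fine-tunes an LLM student on feedback from a teacher that observes privileged state).

```python
# Toy sketch of iterative learning from a privileged teacher.
# All names and mechanics here are hypothetical simplifications,
# not the LEAP implementation.

def teacher_correction(state, student_action, privileged_info):
    """Toy teacher: with privileged info it can name the optimal action."""
    return privileged_info[state]

def rollout(policy, env_states):
    """Collect (state, action) pairs by running the student policy."""
    return [(s, policy(s)) for s in env_states]

def train_privileged_teacher_sketch(student_policy, env_states,
                                    privileged_info, iterations=3):
    # Student policy modeled as a state -> action lookup table.
    policy = dict(student_policy)
    for _ in range(iterations):
        trajectory = rollout(lambda s: policy.get(s, "noop"), env_states)
        # Relabel each visited state with the teacher's corrected action,
        # then "fine-tune" the student on those labels (here: overwrite).
        for state, action in trajectory:
            policy[state] = teacher_correction(state, action, privileged_info)
    return policy
```

The key structural point the sketch preserves is the iteration: the student acts, the privileged teacher corrects, and the student is retrained on the corrections before the next round of rollouts.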
Open Source Code Yes Our code is available at https://leap-llm.github.io.
Open Datasets Yes Experimental validation on diverse interactive decision-making benchmarks: ALFWorld (Shridhar et al., 2020b), WebShop (Yao et al., 2022a), and InterCode (Yang et al., 2024).
Dataset Splits Yes ALFWorld ... 139 in-distribution and 134 out-of-distribution games. WebShop ... 12,086 training and 500 test tasks. InterCode Bash ... we train on the first two and test on the next two
Hardware Specification Yes All training runs were on machines with either 2 or 4 RTX A6000 GPUs, each with 48 GB of memory per GPU.
Software Dependencies No The paper mentions software components such as LoRA, AdamW, and a cosine LR scheduler, and models such as Llama3-8B and GPT-4o. It also references the 'Python package implementing LEAP' and 'OpenAI model calls'. However, it does not provide version numbers for any of these software dependencies (e.g., Python version, PyTorch version, OpenAI API SDK version).
Experiment Setup Yes Tables 8 and 9 contain hyperparameters for SFT training and DPO/KTO training using LoRA for the different datasets. For inference using Llama3-8B and Llama3-70B, we use a temperature setting of 0.3 and a maximum token length of 256.
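The reported decoding settings can be captured in a small configuration sketch. The key names below are illustrative, not taken from the LEAP codebase; only the values (temperature 0.3, 256-token cap) come from the row above.

```python
# Hypothetical inference configuration mirroring the reported settings.
# Field names are assumptions; values follow the paper's stated setup.

def make_inference_config(model_name):
    return {
        "model": model_name,      # e.g. "Llama3-8B" or "Llama3-70B"
        "temperature": 0.3,       # reported sampling temperature
        "max_tokens": 256,        # reported maximum token length
        "do_sample": True,        # nonzero temperature implies sampling
    }

config = make_inference_config("Llama3-8B")
```

A reproduction would pass equivalent values to whatever generation API is in use (e.g. a `GenerationConfig` in Hugging Face transformers), alongside the LoRA and optimizer hyperparameters in Tables 8 and 9.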