Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
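The validation step mentioned above can be illustrated with a minimal sketch: comparing LLM-assigned labels against manual labels, one variable at a time. All names and data here are hypothetical and for illustration only; the actual pipeline and metrics are those described in [1].

```python
# Hypothetical sketch: per-variable agreement between LLM labels and a
# manually labeled validation set. Data below is illustrative only.
def per_variable_accuracy(llm_labels, manual_labels):
    """Fraction of papers where the LLM label matches the manual label,
    computed separately for each reproducibility variable."""
    acc = {}
    for var in manual_labels:
        pairs = list(zip(llm_labels[var], manual_labels[var]))
        matches = sum(1 for llm, manual in pairs if llm == manual)
        acc[var] = matches / len(pairs)
    return acc

llm = {"Open Source Code": ["No", "Yes", "No", "No"]}
manual = {"Open Source Code": ["No", "Yes", "Yes", "No"]}
print(per_variable_accuracy(llm, manual))  # {'Open Source Code': 0.75}
```

This treats each variable independently; a full validation would also report class-conditional metrics, since "Yes"/"No" base rates differ sharply across variables.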
LARM: Large Auto-Regressive Model for Long-Horizon Embodied Intelligence
Authors: Zhuoling Li, Xiaogang Xu, Zhenhua Xu, Ser-Nam Lim, Hengshuang Zhao
ICML 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Sections: 5. Experiments; 5.1. Environment; 5.2. Main Results; 5.3. Ablation Study. Table 1: Performance comparison with previous methods based on MineDojo. Table 2: Performance comparison based on Mineflayer. Table 3: Ablation Study on Reward Design. |
| Researcher Affiliation | Academia | 1 The University of Hong Kong, 2 The Chinese University of Hong Kong, 3 Tsinghua University, 4 University of Central Florida |
| Pseudocode | Yes | Algorithm 1 Referee RL |
| Open Source Code | No | The paper provides a project webpage URL (https://lizhuoling.github.io/LARM_webpage/) but does not explicitly state that source code for the methodology is released or provide a direct link to a code repository. |
| Open Datasets | Yes | "We validate our method in both MineDojo (Fan et al., 2022) and Mineflayer (Prismarine JS., 2013) environments." ... "We have tried pre-training LARM using a 34G webpage dataset crawled from Wiki (Fan et al., 2022)" |
| Dataset Splits | No | "To compute success rates, we test LARM for 30 times on every task. The experimental results are reported in Table 1." ... "To reduce randomness, we run LARM for 30 times." |
| Hardware Specification | Yes | "Evaluated with an RTX4090 GPU, LARM runs with a speed of 0.58 second per inference." ... "For learning to complete the most challenging task in this work (craft an enchanted diamond tool), about 42 hours of exploration is taken using a single RTX4090 GPU." |
| Software Dependencies | No | The paper mentions several frameworks and models (PPO, GPT-4, Tiny LLaVA, CLIP, LLaMA) but does not provide specific version numbers for any software dependencies required to reproduce the experiments. |
| Experiment Setup | No | "The training details like the optimizer choice follow PPO (Schulman et al., 2017)." ... "For learning to complete the most challenging task in this work (craft an enchanted diamond tool), about 42 hours of exploration is taken using a single RTX4090 GPU." |