Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
MetaUrban: An Embodied AI Simulation Platform for Urban Micromobility
Authors: Wayne Wu, Honglin He, Jack He, Yiran Wang, Chenda Duan, Zhizheng Liu, Quanyi Li, Bolei Zhou
ICLR 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive evaluation across mobile machines, demonstrating that heterogeneous mechanical structures significantly influence the learning and execution of AI policies. We perform a thorough ablation study, showing that the compositional nature of the simulated environments can substantially improve the generalizability and safety of the trained mobile agents. |
| Researcher Affiliation | Academia | Wayne Wu, Honglin He, Jack He, Yiran Wang, Chenda Duan, Zhizheng Liu, Quanyi Li, Bolei Zhou (University of California, Los Angeles) |
| Pseudocode | No | The paper describes processes like 'Hierarchical Layout Generation', 'Scalable Obstacle Retrieval', and 'Cohabitant Populating' narratively and with block diagrams (e.g., Figure 3), but it does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | MetaUrban will be made publicly available to provide research opportunities and foster safe and trustworthy embodied AI and micromobility in cities. The code and data have been released. |
| Open Datasets | Yes | Based on MetaUrban, we construct a large-scale dataset, MetaUrban-12K, that includes 12,800 training scenes and 1,000 test scenes... MetaUrban will be made publicly available to provide research opportunities and foster safe and trustworthy embodied AI and micromobility in cities. The code and data have been released. |
| Dataset Splits | Yes | Based on the MetaUrban simulator, we construct the MetaUrban-12K dataset, including 12,800 interactive urban scenes for training (MetaUrban-train) and 1,000 scenes for testing (MetaUrban-test). We further construct an unseen test set (MetaUrban-unseen) with 100 scenes for zero-shot experiments... we construct a training set of 1,000 scenes with the same distribution of MetaUrban-unseen, termed MetaUrban-finetune. |
| Hardware Specification | Yes | The total training time is 12 hours, and 5M environment steps for PointNav on a single Nvidia A5000 GPU... All experiments are conducted on a single Nvidia V100 GPU and in a single process. |
| Software Dependencies | No | MetaUrban uses PyBullet as its physics engine... MetaUrban uses Panda3D (Goslin & Mine, 2004) for rendering... MetaUrban fully supports ROS2. The paper names software components but does not provide specific version numbers for them. |
| Experiment Setup | Yes | Table 4: Hyper-parameters of RL and Safe RL for PointNav (PPO/PPO-Lag/PPO-ET): environmental horizon T = 1,000; learning rate = 5e-5; discount factor γ = 0.99; GAE parameter λ = 0.95; clip parameter ε = 0.2; train batch size = 25,600; SGD minibatch size = 256; value loss coefficient = 1.0; entropy loss coefficient = 0.0; cost limit = 1 |
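As a quick sanity check on the reported Table 4 settings, the hyper-parameters can be collected into a plain dictionary. This is a minimal sketch: the key names are illustrative and not tied to the authors' codebase or any specific RL framework; only the values come from the paper's table.

```python
# Hyper-parameters reported in Table 4 for PointNav (PPO / PPO-Lag / PPO-ET).
# Key names are hypothetical; values are as reported.
ppo_hparams = {
    "env_horizon_T": 1_000,
    "learning_rate": 5e-5,
    "discount_gamma": 0.99,
    "gae_lambda": 0.95,
    "clip_epsilon": 0.2,
    "train_batch_size": 25_600,
    "sgd_minibatch_size": 256,
    "value_loss_coeff": 1.0,
    "entropy_loss_coeff": 0.0,
    "cost_limit": 1,  # used only by the Safe RL variants (PPO-Lag / PPO-ET)
}

# Consistency check: the train batch divides evenly into SGD minibatches.
num_minibatches = ppo_hparams["train_batch_size"] // ppo_hparams["sgd_minibatch_size"]
print(num_minibatches)  # 25,600 / 256 = 100
```

The even division (100 minibatches per batch) suggests the batch sizes were chosen to make each SGD epoch cover the collected rollout exactly, a common PPO convention.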