Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
AdaFlow: Imitation Learning with Variance-Adaptive Flow-Based Policies
Authors: Xixi Hu, Qiang Liu, Xingchao Liu, Bo Liu
NeurIPS 2024 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our comprehensive empirical evaluation shows that AdaFlow achieves high performance with fast inference speed. |
| Researcher Affiliation | Academia | The University of Texas at Austin |
| Pseudocode | Yes | Algorithm 1 AdaFlow: Execution |
| Open Source Code | No | The paper does not provide an explicit statement or a link to open-source code for the described methodology. |
| Open Datasets | Yes | We conducted comprehensive experiments across decision-making tasks, including navigation and robot manipulation, utilizing benchmarks such as LIBERO [27] and RoboMimic [10]. |
| Dataset Splits | No | The paper does not explicitly provide training/test/validation dataset splits (percentages, sample counts, or specific predefined splits) for reproduction. |
| Hardware Specification | No | The paper mentions 'GPU hours' and 'resource-intensive' training but does not provide specific details about the GPU models, CPU, or other hardware specifications used for the experiments. |
| Software Dependencies | No | The paper mentions optimizer and learning rate scheduler types but does not provide specific version numbers for software libraries or dependencies like deep learning frameworks (e.g., PyTorch, TensorFlow) or specific Python packages. |
| Experiment Setup | Yes | Table 7: Hyperparameters used for training AdaFlow and baseline models. |
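The notice above describes validating LLM-based classifications against a manually labeled dataset. A minimal sketch of how per-variable agreement could be computed is shown below; the function name, variable names, and example labels are illustrative assumptions, not taken from this report or from [1].

```python
# Hypothetical sketch: per-variable agreement between LLM-derived labels
# and manual labels, as described in the notice. All names and data here
# are illustrative, not the actual validation pipeline.
from collections import defaultdict

def per_variable_accuracy(llm_labels, manual_labels):
    """Fraction of items where the LLM label matches the manual label,
    grouped by reproducibility variable. Inputs are parallel lists of
    (variable, label) pairs for the same items."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for (var, llm), (_, manual) in zip(llm_labels, manual_labels):
        total[var] += 1
        correct[var] += int(llm == manual)
    return {var: correct[var] / total[var] for var in total}

# Toy example with two reproducibility variables and four items.
llm = [("Open Source Code", "No"), ("Open Datasets", "Yes"),
       ("Open Source Code", "Yes"), ("Open Datasets", "Yes")]
manual = [("Open Source Code", "No"), ("Open Datasets", "Yes"),
          ("Open Source Code", "No"), ("Open Datasets", "Yes")]

print(per_variable_accuracy(llm, manual))
# {'Open Source Code': 0.5, 'Open Datasets': 1.0}
```

Reporting accuracy per variable rather than overall matters here, since agreement may be high for explicit signals (e.g. a named benchmark) and lower for judgment-heavy ones (e.g. whether a setup description suffices for reproduction).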