Incremental Learning of Retrievable Skills For Efficient Continual Task Adaptation

Authors: Daehee Lee, Minjong Yoo, Woo Kyung Kim, Wonje Choi, Honguk Woo

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate IsCiL and several adapter-based continual learning baselines across scenario variations based on complex, long-horizon tasks in the Franka-Kitchen and Meta-World environments to assess sample efficiency, task adaptation, and privacy considerations."
Researcher Affiliation | Academia | Sungkyunkwan University; Carnegie Mellon University; {dulgi7245, mjyoo2, kwk2696, wjchoi1995, hwoo}@skku.edu
Pseudocode | Yes | Algorithm 1: IsCiL (Skill Incremental Learning); a hedged sketch of the loop appears below the table.
Open Source Code | Yes | "Yes, we provide the code as supplementary material."
Open Datasets | Yes | "To investigate the sample efficiency and adaptation performance, we construct complex CiL scenarios using diverse long-horizon tasks [29, 30, 31]."
Dataset Splits | No | The paper mentions training and test data but does not explicitly describe a validation split or its use for hyperparameter tuning or early stopping; a generic split sketch appears below the table.
Hardware Specification | Yes | "Our experimental platform is powered by an AMD 5975wx CPU and 2x RTX 4090 GPUs."
Software Dependencies | Yes | "We utilized jax 0.4.24, jaxlib 0.4.19, and flax 0.8.2 for our implementation." (pinned versions are listed below the table)
Experiment Setup | Yes | Table 11 (pre-trained model configuration) and Table 12 (continual imitation learning default hyperparameters).
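
To make the pseudocode row concrete, here is a minimal sketch of one skill-incremental learning step in the spirit of Algorithm 1. It is a hedged reconstruction, not the authors' implementation: the prototype-based retriever, the residual adapter architecture, and every name (`retrieve_skill`, `adapter_forward`, `imitation_loss`) are assumptions; only the overall pattern of retrieving a skill and adapting it with an imitation objective follows the paper's framing.

```python
# Hedged sketch of a skill-incremental learning step (NOT the authors' code).
# Assumed design: retrieve the nearest skill prototype, then take an
# imitation-learning gradient step on that skill's adapter only.
import jax
import jax.numpy as jnp

def retrieve_skill(state_embedding, prototypes):
    """Return the index of the nearest skill prototype (hypothetical retriever)."""
    dists = jnp.linalg.norm(prototypes - state_embedding, axis=-1)
    return jnp.argmin(dists)

def adapter_forward(params, x):
    """Tiny residual adapter: x + relu(x @ w1) @ w2 (an assumed architecture)."""
    h = jax.nn.relu(x @ params["w1"])
    return x + h @ params["w2"]

def imitation_loss(params, states, expert_actions):
    """Behavior-cloning loss for one skill's adapter (assumed objective)."""
    pred = adapter_forward(params, states)
    return jnp.mean((pred - expert_actions) ** 2)

# --- Toy usage over one incremental batch of demonstrations ---
key = jax.random.PRNGKey(0)
dim, n_skills = 8, 4
prototypes = jax.random.normal(key, (n_skills, dim))
adapters = [{"w1": jnp.zeros((dim, dim)), "w2": jnp.zeros((dim, dim))}
            for _ in range(n_skills)]
states = jax.random.normal(key, (16, dim))
actions = jax.random.normal(key, (16, dim))

# Retrieve the skill for this batch, then update only its adapter.
k = int(retrieve_skill(states.mean(axis=0), prototypes))
grads = jax.grad(imitation_loss)(adapters[k], states, actions)
adapters[k] = jax.tree_util.tree_map(lambda p, g: p - 1e-2 * g, adapters[k], grads)
```

In the actual method, the prototype memory and per-skill adapters would grow as new tasks arrive; here a single gradient step stands in for the full adaptation loop.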
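On the dataset-splits gap, a reproducer could carve a held-out validation set from the demonstration trajectories themselves for hyperparameter tuning or early stopping. The helper below is a generic sketch; `split_demos` and its arguments are hypothetical and not from the paper.

```python
# Generic held-out split over demonstration trajectories (not from the paper).
import numpy as np

def split_demos(demos, val_fraction=0.1, seed=0):
    """Shuffle trajectory indices and reserve a fraction for validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(demos))
    n_val = max(1, int(len(demos) * val_fraction))
    val_idx, train_idx = idx[:n_val], idx[n_val:]
    return [demos[i] for i in train_idx], [demos[i] for i in val_idx]

train_demos, val_demos = split_demos([f"traj_{i}" for i in range(20)])
```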
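The quoted dependency versions can be pinned verbatim, e.g. in a requirements.txt; note that a GPU-enabled jaxlib wheel may require a CUDA-specific build, a detail the paper does not state.

```text
# requirements.txt pinned to the versions quoted above
jax==0.4.24
jaxlib==0.4.19
flax==0.8.2
```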