Few-Shot Continual Active Learning by a Robot

Authors: Ali Ayub, Carter Fendley

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our approach on the CORe-50 dataset and on a real humanoid robot for the object classification task.
Researcher Affiliation | Collaboration | Ali Ayub, University of Waterloo, Waterloo, ON N2L3G1, Canada, a9ayub@uwaterloo.ca; Carter Fendley, Capital One, New York, NY 10017, USA, ccf5164@psu.edu
Pseudocode | No | The paper describes the algorithms in prose and through diagrams, but does not include structured pseudocode or algorithm blocks.
Open Source Code | No | The paper explicitly announces the release of a dataset, but it provides no link to, or statement about the availability of, the source code for the described method.
Open Datasets | Yes | We evaluate our approach on the CORe-50 dataset and on a real humanoid robot for the object classification task... Finally, as a part of this work, we also release the object dataset collected by our robot as a benchmark for future evaluations for FoCAL (available here: https://tinyurl.com/2vuwv8ye).
Dataset Splits | Yes | Hyperparameters P and δ were chosen using cross-validation and were set to 0.2 and 0.7, respectively, for all increments (see the cross-validation sketch after the table).
Hardware Specification | No | The paper mentions using a "real humanoid robot" (Pepper) for experiments, but it does not specify the computing hardware (e.g., GPU or CPU models, or cloud resources) used to train the models or run the main experiments.
Software Dependencies | No | The paper mentions software such as the "PyTorch deep learning framework [24]", "ResNet-18 [25]", and "RetinaNet [29]", but it does not specify version numbers for these components or libraries.
Experiment Setup | Yes | Hyperparameters P and δ were chosen using cross-validation and were set to 0.2 and 0.7, respectively, for all increments... The shallow network was trained for 25 epochs using the cross-entropy loss optimized with stochastic gradient descent (with 0.9 as momentum). A fixed learning rate of 0.01 and minibatches of size 64 were used for training. (A training-configuration sketch matching this description follows the table.)
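
The hyperparameter selection quoted in the Dataset Splits row can be pictured as a standard grid search with k-fold cross-validation. The sketch below is only an illustration under stated assumptions: the paper's quoted text does not describe the role of P and δ or the exact protocol, so `evaluate_focal`, the candidate grids, and the fold count are hypothetical placeholders.

```python
import numpy as np
from sklearn.model_selection import KFold


def evaluate_focal(train_idx, val_idx, P, delta):
    # Hypothetical placeholder: run one FoCAL training pass on train_idx with
    # the given (P, delta) and return validation accuracy on val_idx.
    return 0.0


def select_hyperparameters(n_samples, P_grid=(0.1, 0.2, 0.3),
                           delta_grid=(0.5, 0.7, 0.9), k=5):
    # Grid search over (P, delta) scored by k-fold cross-validation.
    kf = KFold(n_splits=k, shuffle=True, random_state=0)
    indices = np.arange(n_samples)
    best_P, best_delta, best_score = None, None, -np.inf
    for P in P_grid:
        for delta in delta_grid:
            scores = [evaluate_focal(tr, va, P, delta)
                      for tr, va in kf.split(indices)]
            mean_score = float(np.mean(scores))
            if mean_score > best_score:
                best_P, best_delta, best_score = P, delta, mean_score
    return best_P, best_delta
```

According to the quoted text, this kind of procedure led the authors to P = 0.2 and δ = 0.7 for all increments.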
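
The Experiment Setup row, by contrast, is specific enough to translate almost directly into PyTorch. The snippet below is a minimal sketch of that configuration (cross-entropy loss, SGD with momentum 0.9, learning rate 0.01, minibatches of 64, 25 epochs); the shallow network's layers, the feature dimensions, and the placeholder data are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Placeholder data: 512-d feature vectors (e.g., from a frozen backbone)
# with 10 classes; substitute real feature tensors in practice.
features = torch.randn(1000, 512)
labels = torch.randint(0, 10, (1000,))
loader = DataLoader(TensorDataset(features, labels), batch_size=64, shuffle=True)

# A hypothetical shallow network; the quoted text does not define its layers.
model = nn.Sequential(nn.Linear(512, 256), nn.ReLU(), nn.Linear(256, 10))

criterion = nn.CrossEntropyLoss()  # cross-entropy loss
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

for epoch in range(25):  # 25 epochs, fixed learning rate
    for x, y in loader:
        optimizer.zero_grad()
        loss = criterion(model(x), y)
        loss.backward()
        optimizer.step()
```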