Knowledge Flow: Improve Upon Your Teachers

Authors: Iou-Jen Liu, Jian Peng, Alexander Schwing

ICLR 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate knowledge flow on a variety of tasks from reinforcement learning to fully-supervised learning.
Researcher Affiliation | Academia | University of Illinois at Urbana-Champaign
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not contain an explicit statement about the release of its source code or a link to a code repository.
Open Datasets | Yes | We evaluate knowledge flow on reinforcement learning using Atari games that were used by Rusu et al. (2016b); Fernando et al. (2017). For supervised learning, we use a variety of image classification benchmarks, including CIFAR-10 (Krizhevsky, 2009), CIFAR-100 (Krizhevsky, 2009), STL-10 (Coates et al., 2011), and EMNIST (Cohen et al., 2017).
Dataset Splits | Yes | The parameters λ1 for the dependent cost and λ2 for the KL cost are determined using the validation set of each dataset.
Hardware Specification | No | As A3C, we run 16 agents on 16 CPU cores in parallel. (This is not specific enough to meet the criteria for "Yes".)
Software Dependencies | No | The paper does not provide specific software names with version numbers for replication.
Experiment Setup | Yes | The learning rate is set to 10^-4 and gradually decreased to zero for all experiments. To select λ1 and λ2 in our framework, we follow progressive neural nets (Rusu et al., 2016b): randomly sample λ1 ∈ {0.05, 0.1, 0.5} and λ2 ∈ {0.001, 0.01, 0.05}.
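The experiment-setup row above can be sketched in code. This is a minimal illustration of the quoted protocol, not the authors' implementation: the grids for λ1 and λ2 and the base learning rate of 10^-4 come from the paper's text, while the function names and the linear-decay form of the schedule are assumptions (the paper only says the rate is "gradually decreased to zero").

```python
import random

# Hyperparameter grids quoted in the paper's setup section.
LAMBDA1_GRID = [0.05, 0.1, 0.5]      # weight of the dependent cost
LAMBDA2_GRID = [0.001, 0.01, 0.05]   # weight of the KL cost

def sample_lambdas(rng=random):
    """Randomly sample (lambda1, lambda2), following the progressive-nets
    selection protocol (Rusu et al., 2016b) referenced by the paper."""
    return rng.choice(LAMBDA1_GRID), rng.choice(LAMBDA2_GRID)

def learning_rate(step, total_steps, base_lr=1e-4):
    """Anneal the learning rate from base_lr toward zero over training.
    A linear schedule is assumed here for illustration."""
    return base_lr * max(0.0, 1.0 - step / total_steps)
```

A selected (λ1, λ2) pair would then be scored on each dataset's validation set, matching the "Dataset Splits" row above.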