A Unified Meta-Learning Framework for Dynamic Transfer Learning

Authors: Jun Wu, Jingrui He

IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on various image data sets demonstrate the effectiveness of the proposed L2E framework. ... Extensive experiments on public data sets confirm the effectiveness of our proposed L2E framework.
Researcher Affiliation | Academia | Jun Wu, Jingrui He, University of Illinois Urbana-Champaign, {junwu3, jingrui}@illinois.edu
Pseudocode | No | The paper describes the proposed framework using equations and textual explanations, and Figure 2 illustrates the framework, but no formal pseudocode or algorithm block is provided.
Open Source Code | Yes | https://github.com/jwu4sml/L2E
Open Datasets | Yes | We used three publicly available image data sets: Office-31 (with 3 tasks: Amazon, Webcam and DSLR), Image-CLEF (with 4 tasks: B, C, I and P) and Caltran.
Dataset Splits | No | The paper mentions splitting 'training data from every historical source or target task into one training set D^tr_k and one validation set D^val_k' for meta-training, which indicates the use of validation data. However, it does not specify the exact split percentages, sample counts, or the methodology for creating these splits to allow for reproduction.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory, or cloud computing resources used for the experiments.
Software Dependencies | No | The paper mentions 'We adopted the ResNet-18 [He et al., 2016] pretrained on ImageNet as the base network,' but does not list any specific software dependencies with version numbers (e.g., deep learning frameworks such as PyTorch or TensorFlow, or other libraries).
Experiment Setup | Yes | '... and set γ = 0.1 and p = 80 for all the experiments' (a hedged sketch of this setup follows the table).
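
For reference, below is a minimal sketch of how the reported setup might be reproduced. It assumes a PyTorch/torchvision stack, which the paper does not name, and the 80/20 split ratio, variable names, and helper function are illustrative assumptions. Only the ResNet-18 base network pretrained on ImageNet, the per-task training/validation split, and the values γ = 0.1 and p = 80 come from the paper.

```python
# Hedged sketch of the reported setup; framework choice (PyTorch/torchvision),
# split ratio, and all identifiers below are assumptions, not the authors' code.
import torch
import torch.nn as nn
from torch.utils.data import random_split
from torchvision import models

GAMMA = 0.1  # reported in the paper: gamma = 0.1 for all experiments
P = 80       # reported in the paper: p = 80 for all experiments

# Base network: ResNet-18 pretrained on ImageNet, as stated in the paper.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = nn.Identity()        # keep the 512-d feature extractor
classifier = nn.Linear(512, 31)    # e.g., 31 classes for Office-31

def split_task(dataset, val_fraction=0.2, seed=0):
    """Split one task's training data into D^tr_k and D^val_k.

    The paper describes such a per-task split for meta-training but gives
    no percentages; the 80/20 split here is an assumed placeholder.
    """
    n_val = int(len(dataset) * val_fraction)
    n_tr = len(dataset) - n_val
    gen = torch.Generator().manual_seed(seed)
    d_tr_k, d_val_k = random_split(dataset, [n_tr, n_val], generator=gen)
    return d_tr_k, d_val_k
```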