Active Multi-Task Representation Learning
Authors: Yifang Chen, Kevin Jamieson, Simon Du
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We also provide experiments on real-world computer vision datasets to illustrate the effectiveness of our proposed method on both linear and convolutional neural network representation classes." and "In this section, we empirically evaluate our active learning algorithm for multi-task by deriving tasks from the corrupted MNIST dataset (MNIST-C) proposed in Mu & Gilmer (2019)." |
| Researcher Affiliation | Academia | 1Paul G. Allen School of Computer Science & Engineering, University of Washington. Correspondence to: Yifang Chen <yifangc@cs.washington.edu>, Simon S. Du <ssdu@cs.washington.edu>, Kevin Jamieson <jamieson@cs.washington.edu>. |
| Pseudocode | Yes | "Algorithm 1 Multi-task sampling strategy with Known ν" and "Algorithm 2 Active Task Relevance Sampling" (A hedged sketch of such a sampling loop appears after the table.) |
| Open Source Code | No | No explicit statement or link regarding the release of source code for the described methodology was found. |
| Open Datasets | Yes | "In this section, we empirically evaluate our active learning algorithm for multi-task by deriving tasks from the corrupted MNIST dataset (MNIST-C) proposed in Mu & Gilmer (2019)." (A hedged data-loading sketch appears after the table.) |
| Dataset Splits | No | The paper mentions using a 'small, fixed amount of data' (e.g., 500 or 200 samples) from the target task. However, it does not specify how this target data is split into training, validation, and test sets, or if a dedicated validation set is used for hyperparameter tuning during the experiments. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments are provided in the paper. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., libraries, frameworks, or programming languages with their versions) are mentioned in the paper. |
| Experiment Setup | No | The paper states 'We run our algorithm iteratively for 4 epochs.' and describes the sample reuse strategy across epochs. However, it does not provide specific hyperparameters such as learning rates, batch sizes, optimizer settings, or details on model initialization for either the linear or CNN models. |
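
The "Open Datasets" row notes that source tasks are derived from MNIST-C. As a concrete starting point for reproduction, here is a minimal sketch of assembling those tasks. The folder layout and file names (`train_images.npy`, `train_labels.npy`) are assumed from the public MNIST-C release (github.com/google-research/mnist-c), not stated in the paper, and the corruption names below are examples from that release rather than the paper's chosen source tasks.

```python
import numpy as np
from pathlib import Path

# Assumed layout of the extracted MNIST-C archive: one sub-directory per
# corruption, each holding train/test images and labels as .npy arrays.
MNIST_C_ROOT = Path("mnist_c")

def load_task(corruption: str):
    """Load one MNIST-C corruption as a single task: (features, labels)."""
    task_dir = MNIST_C_ROOT / corruption
    images = np.load(task_dir / "train_images.npy")   # (N, 28, 28, 1) uint8
    labels = np.load(task_dir / "train_labels.npy")   # (N,)
    # Flatten and rescale so a linear representation can be fit directly.
    X = images.reshape(len(images), -1).astype(np.float32) / 255.0
    return X, labels

# Each corruption becomes one source task; the target task would be built
# the same way from whichever corruption is held out.
source_tasks = {c: load_task(c) for c in ("brightness", "fog", "stripe")}
```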
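
The "Pseudocode" row names an active task-relevance sampling procedure, and the "Experiment Setup" row reports running it iteratively for 4 epochs with samples reused across epochs. The sketch below shows one plausible shape of that loop: estimate each source task's relevance to the target from the current per-task heads, then allocate the next epoch's sampling budget accordingly. The least-squares relevance estimate and the squared-relevance allocation rule are illustrative assumptions, not the paper's exact Algorithm 2.

```python
import numpy as np

def estimate_relevance(task_heads: np.ndarray, target_head: np.ndarray) -> np.ndarray:
    """Least-squares estimate of nu: express the target task's head as a
    linear combination of the source tasks' heads (rows of task_heads)."""
    nu_hat, *_ = np.linalg.lstsq(task_heads.T, target_head, rcond=None)
    return nu_hat

def allocate_budget(nu_hat: np.ndarray, budget: int,
                    rng: np.random.Generator) -> np.ndarray:
    """Split one epoch's budget across source tasks in proportion to
    squared estimated relevance (an assumed rule, for illustration)."""
    weights = nu_hat ** 2 + 1e-12       # keep every task minimally sampled
    return rng.multinomial(budget, weights / weights.sum())

# Epoch loop mirroring the reported setup: 4 epochs, with counts from
# earlier epochs accumulated (i.e., earlier samples reused when refitting).
rng = np.random.default_rng(0)
num_tasks, dim, budget = 5, 16, 1000
pool = np.zeros(num_tasks, dtype=int)   # cumulative per-task sample counts
for epoch in range(4):
    # Placeholder heads; a real run would refit the shared representation
    # on all samples gathered so far and read the heads from that fit.
    task_heads = rng.normal(size=(num_tasks, dim))
    target_head = rng.normal(size=dim)
    pool += allocate_budget(estimate_relevance(task_heads, target_head),
                            budget, rng)
```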