Multi-task Learning with Labeled and Unlabeled Tasks

Authors: Anastasia Pentina, Christoph H. Lampert

ICML 2017

Reproducibility Variable Result LLM Response
Research Type Experimental We also illustrate the effectiveness of the algorithm by experiments on synthetic and real data.
Researcher Affiliation Academia Anastasia Pentina, Christoph H. Lampert (IST Austria). Correspondence to: Anastasia Pentina <apentina@ist.ac.at>.
Pseudocode Yes Algorithm 1: 1) estimate pairwise discrepancies between the tasks based on the unlabeled data; 2) choose the set I of tasks to be labeled (in the active case) and the weights α1, ..., αT by minimizing (17); 3) receive labels for the tasks in I; 4) for every task t, train a classifier by minimizing (3) using the obtained weights αt.
Open Source Code No The paper links to a dataset (http://cvml.ist.ac.at/productreviews/) but does not provide an explicit statement or link for the source code of the described methodology.
Open Datasets Yes We curate a multi-task dataset of product reviews [2] from the corpus of Amazon product data [3] (McAuley et al., 2015a;b). [2] http://cvml.ist.ac.at/productreviews/ [3] http://jmcauley.ucsd.edu/data/amazon/
Dataset Splits Yes Regularization constants for all methods were selected from the set {0} ∪ {10^-17, 10^-16, ..., 10^8} by 5×5-fold cross-validation.
Hardware Specification No The paper does not provide any specific hardware details such as CPU/GPU models, memory, or cloud instance types used for the experiments.
Software Dependencies No The paper mentions software components like 'Python', 'GloVe word embedding', and various algorithms, but it does not specify version numbers for any of these software dependencies.
Experiment Setup Yes We use n = 1000 unlabeled and m = 100 labeled examples per task. ... We use n = 500 unlabeled samples per task and label a subset of m = 400 examples for each of the selected tasks. ... Regularization constants for all methods were selected from the set {0} ∪ {10^-17, 10^-16, ..., 10^8} by 5×5-fold cross-validation.
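The four steps of Algorithm 1 quoted above can be sketched in code. This is a minimal illustrative sketch only, not the authors' implementation: the mean-distance proxy for the pairwise discrepancy and the inverse-distance rule for the weights α are hypothetical stand-ins for the paper's discrepancy estimator and for the minimization of objective (17), which are not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(0)

def pairwise_discrepancy(X_s, X_t):
    # Illustrative proxy only: the distance between the task means of the
    # unlabeled samples stands in for the discrepancy estimate of step 1.
    return float(np.linalg.norm(X_s.mean(axis=0) - X_t.mean(axis=0)))

def choose_weights(disc_row):
    # Hypothetical stand-in for minimizing (17): tasks with smaller
    # estimated discrepancy to the target task receive larger weight.
    inv = 1.0 / (1e-8 + disc_row)
    return inv / inv.sum()

# Toy setup: T = 3 tasks, n = 1000 unlabeled points in d = 5 dimensions,
# matching the "n = 1000 unlabeled examples per task" scale quoted above.
T, n, d = 3, 1000, 5
tasks = [rng.normal(loc=i, size=(n, d)) for i in range(T)]

# Step 1: estimate pairwise discrepancies from the unlabeled data.
disc = np.array([[pairwise_discrepancy(tasks[s], tasks[t])
                  for t in range(T)] for s in range(T)])

# Step 2: choose the weights alpha_1, ..., alpha_T (shown here for task 0).
alpha = choose_weights(disc[0])

# Steps 3-4 (omitted): receive labels for the selected tasks in I and train
# each per-task classifier by minimizing (3) on the alpha-weighted samples.
print(alpha)
```

As expected under this stand-in, the target task's own weight dominates because its self-discrepancy is zero; the paper's actual weighting comes from minimizing the bound-derived objective (17) instead.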