Online Multi-Task Learning via Sparse Dictionary Optimization

Authors: Paul Ruvolo, Eric Eaton

AAAI 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluated four algorithms for lifelong learning: (a) ELLA, as defined in Ruvolo and Eaton (2013); (b) ELLA-SVD; (c) ELLA Incremental; and (d) ELLA Dual Update. Each algorithm was tested on four multi-task data sets: Synthetic Regression Tasks, Student Exam Score Prediction, Land Mine Detection, and Facial Expression Recognition. The results of our evaluation are given in Figure 2.
Researcher Affiliation | Academia | Paul Ruvolo, Department of Engineering, Franklin W. Olin College of Engineering (paul.ruvolo@olin.edu); Eric Eaton, Computer and Information Science Department, University of Pennsylvania (eeaton@cis.upenn.edu)
Pseudocode | Yes | Algorithm 1: K-SVD (Aharon et al. 2006) and Algorithm 2: MTL-SVD
Open Source Code | No | The paper contains no unambiguous statement about releasing source code for the described methodology, and no link to a code repository.
Open Datasets | Yes | London Schools data set: "We use the same feature encoding as used by Kumar & Daumé (2012), where four school-specific and three student-specific categorical variables are encoded as a collection of binary features." Land Mine Detection: "The data set contains a total of 14,820 data instances divided into 29 different geographical regions. We treat each geographical region as a different task." (Xue et al. 2007) Facial Expression Recognition: "This data set is from a recent facial expression recognition challenge (Valstar et al. 2011). We use the same feature encoding as Ruvolo & Eaton (2013)."
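The binary encoding of categorical variables quoted above can be sketched as a one-hot expansion. This is an illustrative reconstruction, not the authors' actual preprocessing code; the field names and category values below are hypothetical stand-ins:

```python
from itertools import chain

def one_hot(value, categories):
    """Encode one categorical value as a list of binary indicators."""
    return [1 if value == c else 0 for c in categories]

def encode_record(record, schema):
    """Expand each categorical field of `record` into binary features.

    `schema` maps field name -> ordered list of possible categories.
    """
    return list(chain.from_iterable(
        one_hot(record[field], cats) for field, cats in schema.items()
    ))

# Hypothetical London Schools-style record: school-specific and
# student-specific categorical variables become binary features.
schema = {"school_year": ["1985", "1986", "1987"],
          "gender": ["M", "F"]}
record = {"school_year": "1986", "gender": "F"}
print(encode_record(record, schema))  # [0, 1, 0, 0, 1]
```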
Dataset Splits | No | For each task, the data was divided into a training set and a held-out test set (with 50% of the data designated for each). While a train/test split is described, no separate validation split is specified.
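The per-task 50/50 hold-out described above can be sketched as follows; this is a hedged reconstruction assuming a uniformly random split, which the paper does not state explicitly:

```python
import random

def split_task_data(instances, seed=0):
    """Randomly split one task's instances 50/50 into train and test,
    mirroring the per-task held-out split reported in the paper."""
    rng = random.Random(seed)
    shuffled = instances[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

# One split per task; note that no separate validation set is carved out.
tasks = {"region_1": list(range(10)), "region_2": list(range(8))}
splits = {name: split_task_data(data) for name, data in tasks.items()}
train, test = splits["region_1"]
print(len(train), len(test))  # 5 5
```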
Hardware Specification | No | The paper does not report the hardware used for its experiments (no CPU/GPU models, processor speeds, memory amounts, or other machine specifications).
Software Dependencies | No | The paper mentions using linear or logistic regression as the base learner and solving a subproblem "using the Lasso (Tibshirani 1996)", but it does not name specific software packages with version numbers (e.g., "Python 3.8", "PyTorch 1.9").
Experiment Setup | Yes | The λ and k parameters were independently selected for each method via a grid search over all combinations of λ ∈ {e⁻⁵, …, e⁵} and k ∈ {1, …, 10}; µ was fixed to e⁻⁵ for all algorithms.
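The reported hyperparameter sweep can be sketched as an exhaustive grid search. This is a minimal illustration, assuming the λ grid uses integer exponents from −5 to 5 (the paper's ellipsis does not spell this out) and treating `evaluate` as a hypothetical stand-in for training and scoring one method:

```python
import math
from itertools import product

# Assumed grids: lambda over e^-5 ... e^5 (integer exponents), k over
# 1 ... 10; mu is fixed at e^-5 for all algorithms, as reported.
LAMBDA_GRID = [math.exp(i) for i in range(-5, 6)]
K_GRID = list(range(1, 11))
MU = math.exp(-5)

def grid_search(evaluate):
    """Return the (lambda, k) pair with the best score under `evaluate`.

    `evaluate(lam=..., k=..., mu=...)` is a hypothetical per-method
    scoring routine (e.g., held-out accuracy); higher is better.
    """
    best, best_score = None, -math.inf
    for lam, k in product(LAMBDA_GRID, K_GRID):
        score = evaluate(lam=lam, k=k, mu=MU)
        if score > best_score:
            best, best_score = (lam, k), score
    return best, best_score
```

The search visits all 110 (λ, k) combinations, matching the "all combinations" phrasing; each of the four methods would run its own sweep.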