Generalized Dictionary for Multitask Learning with Boosting
Authors: Boyu Wang, Joelle Pineau
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on both synthetic and benchmark real-world datasets confirm the effectiveness of the proposed approach for multitask learning. |
| Researcher Affiliation | Academia | Boyu Wang and Joelle Pineau, School of Computer Science, McGill University, Montreal, Canada. boyu.wang@mail.mcgill.ca, jpineau@cs.mcgill.ca |
| Pseudocode | Yes | Algorithm 1: Generalized Dictionary for Multitask Learning. Input: {S_1, ..., S_T}, maxIter, the number of iterations K, the number of basis hypotheses M, regularization parameter µ |
| Open Source Code | No | The paper does not provide an explicit statement about releasing the source code for the described methodology or a direct link to a code repository. |
| Open Datasets | Yes | We now evaluate GDMTLB algorithm against several state-of-the-art algorithms on both synthetic and real-world datasets. Competitive methods include... London school data [Argyriou et al., 2007]; and two for classification: landmine data [Xue et al., 2007], and BCI Competition data (http://www.bbci.de/competition/iv/). |
| Dataset Splits | No | The paper mentions 'In all experiments, the hyper-parameters (e.g., M, µ, different dictionary initializations) are selected by cross-validation.' and 'Each dataset is evaluated by using 10 randomly generated 50/50 splits of the data between training and test set'. It does not specify a separate, explicit validation dataset split. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions 'Regression tree is used as the weak learner of GDMTLB for regression, and logistic regression is used as the weak learner for classification.' but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | No | The paper states 'In all experiments, the hyper-parameters (e.g., M, µ, different dictionary initializations) are selected by cross-validation.' but does not provide specific values for these or other training configurations in the main text. |
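
The Pseudocode row above quotes only the input signature of Algorithm 1, not its body. For orientation, the following is a minimal, hypothetical scaffold showing how those inputs (tasks S_1..S_T, maxIter, K, M, µ) might be wired into a shared-dictionary loop with task-specific combination weights; it does not reproduce the paper's actual GDMTLB update rules, and the ridge refit of the task weights is an assumption made purely for illustration.

```python
# Hypothetical scaffold only: mirrors the declared inputs of Algorithm 1,
# NOT the paper's actual GDMTLB update rules.
import numpy as np
from sklearn.tree import DecisionTreeRegressor


def gdmtlb_scaffold(tasks, max_iter, K, M, mu, seed=0):
    """tasks: list of (X_t, y_t) arrays, one pair per task S_1..S_T."""
    rng = np.random.default_rng(seed)
    T = len(tasks)

    # Shared dictionary of M weak hypotheses (regression trees, matching the
    # paper's stated weak learner for regression); each is fit on one
    # randomly drawn task here, which is an illustrative simplification.
    dictionary = []
    for _ in range(M):
        X_t, y_t = tasks[rng.integers(T)]
        dictionary.append(DecisionTreeRegressor(max_depth=3).fit(X_t, y_t))

    # Task-specific combination weights over the shared dictionary, refit by
    # ridge regression (an assumption for illustration). K is kept only to
    # mirror the declared inputs and is unused in this sketch.
    weights = np.zeros((T, M))
    for _ in range(max_iter):
        for t, (X_t, y_t) in enumerate(tasks):
            H = np.column_stack([h.predict(X_t) for h in dictionary])
            weights[t] = np.linalg.solve(H.T @ H + mu * np.eye(M), H.T @ y_t)
    return dictionary, weights
```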
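
The Dataset Splits, Software Dependencies, and Experiment Setup rows report that each dataset is evaluated over 10 randomly generated 50/50 train/test splits, with hyper-parameters selected by cross-validation and logistic regression used as the weak learner for classification; no explicit validation split or concrete hyper-parameter values are given. A minimal sketch of that evaluation protocol, assuming a placeholder estimator and parameter grid, could look like this.

```python
# Sketch of the reported protocol: 10 random 50/50 train/test splits, with
# hyper-parameters chosen by cross-validation on the training half. The
# estimator and parameter grid are placeholders, since the paper does not
# report concrete values.
import numpy as np
from sklearn.model_selection import ShuffleSplit, GridSearchCV
from sklearn.linear_model import LogisticRegression


def evaluate_protocol(X, y, param_grid=None):
    if param_grid is None:
        param_grid = {"C": [0.01, 0.1, 1.0, 10.0]}  # placeholder grid
    scores = []
    splitter = ShuffleSplit(n_splits=10, train_size=0.5, random_state=0)
    for train_idx, test_idx in splitter.split(X):
        # Cross-validated hyper-parameter selection on the training half only.
        search = GridSearchCV(LogisticRegression(max_iter=1000), param_grid, cv=5)
        search.fit(X[train_idx], y[train_idx])
        scores.append(search.score(X[test_idx], y[test_idx]))
    return np.mean(scores), np.std(scores)
```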