Encoding Tree Sparsity in Multi-Task Learning: A Probabilistic Framework

Authors: Lei Han, Yu Zhang, Guojie Song, Kunqing Xie

AAAI 2014

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Experiments conducted on both synthetic and real-world problems show the effectiveness of our model compared with state-of-the-art baselines." |
| Researcher Affiliation | Academia | (1) Key Laboratory of Machine Perception (Ministry of Education), EECS, Peking University, China; (2) Department of Computer Science, Hong Kong Baptist University, Hong Kong; (3) The Institute of Research and Continuing Education, Hong Kong Baptist University (Shenzhen) |
| Pseudocode | No | The paper describes the EM algorithm in detail, but in narrative form with mathematical equations rather than as a structured pseudocode or algorithm block. |
| Open Source Code | No | The paper neither provides access to source code for the described methodology nor states that code is made available. |
| Open Datasets | Yes | "We report results on microarray data (Wille et al. 2004). The data is a gene expression dataset..." (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC545783/). "In this problem, we test the performance of various methods on the Cifar-100 dataset. This dataset consists of 50000 color images belonging to 100 classes..." (http://www.cs.toronto.edu/~kriz/cifar.html). |
| Dataset Splits | Yes | "We generate n samples for training as well as n samples for testing. The tuning parameters in all models are selected via another validation set with n samples. We perform 10 random splits, each of which uses 60%, 20% and 20% samples for training, testing and validation separately." |
| Hardware Specification | No | The paper does not give specific hardware details (e.g., GPU/CPU models or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not list ancillary software with version numbers (e.g., library or solver names with versions) needed to replicate the experiments. |
| Experiment Setup | No | The paper states that tuning parameters are selected via cross-validation and that alpha is a tuning parameter for the MM algorithm, but gives no specific hyperparameter values (e.g., learning rates, batch sizes, number of epochs, or optimizer settings). |
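The split protocol quoted under Dataset Splits (10 random splits, each using 60%, 20%, and 20% of the samples for training, testing, and validation) can be sketched as follows. This is a minimal illustration only: the function name, seed handling, and sample count are assumptions, not details taken from the paper.

```python
import random

def three_way_split(n_samples, train_frac=0.6, test_frac=0.2, seed=0):
    """Randomly partition sample indices into train/test/validation sets.

    Mirrors a 60%/20%/20% protocol like the one quoted above; the
    function name and defaults are illustrative, not from the paper.
    """
    rng = random.Random(seed)
    idx = list(range(n_samples))
    rng.shuffle(idx)
    n_train = int(train_frac * n_samples)
    n_test = int(test_frac * n_samples)
    train = idx[:n_train]
    test = idx[n_train:n_train + n_test]
    val = idx[n_train + n_test:]  # remaining 20% goes to validation
    return train, test, val

# Repeat over 10 seeds, analogous to the paper's 10 random splits.
splits = [three_way_split(1000, seed=s) for s in range(10)]
```

Reporting the mean and standard deviation over such repeated splits is the usual way results like these are aggregated, though the paper's exact procedure beyond the quoted text is not specified.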