On Multiplicative Multitask Feature Learning
Authors: Xin Wang, Jinbo Bi, Shipeng Yu, Jiangwen Sun
NeurIPS 2014
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we empirically evaluate the performance of the proposed multiplicative MTFL with the four parameter settings listed in Table 1 on synthetic and real-world data for both classification and regression problems. |
| Researcher Affiliation | Collaboration | Xin Wang, Jinbo Bi, Jiangwen Sun: Dept. of Computer Science & Engineering, University of Connecticut, Storrs, CT 06269 ({wangxin,jinbo,javon}@engr.uconn.edu); Shipeng Yu: Health Services Innovation Center, Siemens Healthcare, Malvern, PA 19355 (shipeng.yu@siemens.com) |
| Pseudocode | Yes | Algorithm 1 Alternating optimization for multiplicative MTFL |
| Open Source Code | No | No explicit statement about providing open-source code for the methodology, or a link to a code repository, was found. |
| Open Datasets | Yes | Two benchmark data sets, the Sarcos [1] and the USPS data sets [10], were used for regression and classification tests respectively. The Sarcos data set has 48,933 observations and each observation (example) has 21 features. ... USPS handwritten digits data set has 2000 examples and 10 classes as the digits from 0 to 9. |
| Dataset Splits | Yes | We used respectively 25%, 33% and 50% of the available data in each data set for training and the remaining data for testing. We repeated the random split 15 times and reported the averaged performance. For each split, the regularization parameters of each method were tuned by 3-fold cross validation within the training data. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory amounts) used for running experiments were provided. |
| Software Dependencies | No | The paper mentions "CPLEX solvers" but does not provide a specific version number, which is required for reproducibility. |
| Experiment Setup | No | While the paper mentions that tuning parameters γ1, γ2 are used and tuned by 3-fold cross-validation, it does not provide the specific values of these parameters or other common experimental setup details like learning rates, batch sizes, or optimizer settings. |
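The pseudocode cited above (Algorithm 1, "Alternating optimization for multiplicative MTFL") decomposes each task's weight vector as an elementwise product w_t = c ∘ β_t, where c ≥ 0 is a feature-selection vector shared across tasks, and alternates between solving for the β_t's with c fixed and solving for c with the β_t's fixed. The following is a minimal sketch of that alternating scheme, assuming squared loss and squared-ℓ2 penalties on both factors so each subproblem has a closed-form ridge solution; the paper's Algorithm 1 covers more general (p, q)-norm regularizers, and the function and parameter names here are illustrative, not the authors' implementation.

```python
import numpy as np

def multiplicative_mtfl(Xs, ys, gamma1=0.1, gamma2=0.1, n_iter=20):
    """Alternating optimization sketch for a multiplicative MTFL-style model.

    Each task weight vector is w_t = c * beta_t (elementwise), with c >= 0
    shared across all tasks. Squared loss + squared-l2 penalties are assumed
    so that both subproblems reduce to ridge regressions (an assumption made
    for this sketch, not the paper's general (p, q)-norm setting).
    """
    d = Xs[0].shape[1]
    T = len(Xs)
    c = np.ones(d)                        # shared feature indicator vector
    betas = [np.zeros(d) for _ in range(T)]
    for _ in range(n_iter):
        # Step 1: fix c, solve an independent ridge problem per task in beta_t.
        for t in range(T):
            Xc = Xs[t] * c                # absorb c into the design matrix
            A = Xc.T @ Xc + gamma2 * np.eye(d)
            betas[t] = np.linalg.solve(A, Xc.T @ ys[t])
        # Step 2: fix all beta_t, solve one ridge problem in the shared c.
        A = gamma1 * np.eye(d)
        b = np.zeros(d)
        for t in range(T):
            Xb = Xs[t] * betas[t]         # absorb beta_t into the design
            A += Xb.T @ Xb
            b += Xb.T @ ys[t]
        c = np.maximum(np.linalg.solve(A, b), 0.0)  # project c onto c >= 0
    return c, [c * b for b in betas]
```

Because unimportant features drive the corresponding entries of c toward zero for every task simultaneously, the shared factor c is what produces joint feature selection across tasks in this family of models.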