Absent Multiple Kernel Learning
Authors: Xinwang Liu, Lei Wang, Jianping Yin, Yong Dou, Jian Zhang
AAAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments are conducted on five MKL benchmark data sets to compare the proposed algorithm with existing imputation-based methods. As observed, our algorithm achieves superior performance and the improvement is more significant with the increasing missing ratio. |
| Researcher Affiliation | Academia | Xinwang Liu, School of Computer, National University of Defense Technology, Changsha, China, 410073; Lei Wang, School of Computer Science and Software Engineering, University of Wollongong, NSW, Australia, 2522; Jianping Yin, Yong Dou, School of Computer, National University of Defense Technology, Changsha, China, 410073; Jian Zhang, Faculty of Engineering and Information Technology, University of Technology Sydney, NSW, Australia, 2007 |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described in this paper. |
| Open Datasets | Yes | The above algorithms are evaluated on five benchmark MKL data sets, including the psortPos, psortNeg, and plant data sets, the protein fold prediction data set, and Caltech101. |
| Dataset Splits | Yes | The regularization parameter C for each algorithm is chosen from an appropriately large range [10^{-1}, 1, ..., 10^4] by 5-fold cross-validation on the training data. |
| Hardware Specification | No | The paper does not provide specific hardware details used for running its experiments. |
| Software Dependencies | No | The paper mentions 'CVX (CVX Research 2012)' as a package used, but this is only one software component; it does not provide a comprehensive list of ancillary software (e.g., libraries, frameworks) with the version numbers necessary for replication. |
| Experiment Setup | Yes | The regularization parameter C for each algorithm is chosen from an appropriately large range [10^{-1}, 1, ..., 10^4] by 5-fold cross-validation on the training data. Specifically, we randomly generate a row of s and set its first round(ε₀ · m) smallest values as zeros and the rest as ones, respectively. We repeat this process for each row of s. The absent matrix on test data is generated in the same way. The parameter ε₀, termed missing ratio in this paper, will affect the performance of the above algorithms. |
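As a minimal illustration of the absence-pattern generation quoted in the Experiment Setup row, the Python sketch below draws a random row per sample and marks its round(ε₀ · m) smallest entries as absent, treating ε₀ as the missing ratio and m as the number of kernel channels. The function and variable names are ours, not the authors', and the C grid assumes powers of ten spanning the quoted range [10^{-1}, 1, ..., 10^4].

```python
import numpy as np

def generate_absence_matrix(n_samples, n_kernels, missing_ratio, seed=None):
    """Sketch of the per-row absence indicator s described in the paper:
    for each sample, generate a random row, set its round(missing_ratio * n_kernels)
    smallest values to zero (absent) and the rest to one (present).
    Names and the exact random draw are assumptions for illustration."""
    rng = np.random.default_rng(seed)
    s = np.ones((n_samples, n_kernels), dtype=int)
    n_absent = int(round(missing_ratio * n_kernels))
    for i in range(n_samples):
        scores = rng.random(n_kernels)              # random row of s
        absent_idx = np.argsort(scores)[:n_absent]  # smallest values -> zeros
        s[i, absent_idx] = 0
    return s

# Regularization grid for 5-fold cross-validation on the training data
# (assuming powers of ten over the quoted range).
C_grid = [10.0 ** p for p in range(-1, 5)]

if __name__ == "__main__":
    # The absent matrix for test data is generated the same way.
    S_train = generate_absence_matrix(n_samples=100, n_kernels=10, missing_ratio=0.3, seed=0)
    print(S_train[:5])
    print(C_grid)
```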