Rectify Heterogeneous Models with Semantic Mapping

Authors: Han-Jia Ye, De-Chuan Zhan, Yuan Jiang, Zhi-Hua Zhou

ICML 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results over both synthetic and real-world tasks with diverse feature configurations validate the effectiveness and practical utility of the proposed framework. Experiments validate the superiority of REFORM and its possession of learnware's properties. We apply our REFORM implementations to predict whether an Amazon user is high-quality or not given the user's interactions with items. We collect papers from the International Conference on Machine Learning.
Researcher Affiliation | Academia | National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China. Correspondence to: De-Chuan Zhan <zhandc@lamda.nju.edu.cn>.
Pseudocode | No | The paper provides iterative update steps for BADMM in Section 4.2 using bullet points and equations, but they are not presented in an explicitly labeled "Algorithm" or "Pseudocode" block. (A generic sketch of such an iterative update loop is given after the table.)
Open Source Code | No | The paper does not provide any explicit statements or links indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | We apply our REFORM implementations to predict whether an Amazon user is high-quality or not given the user's interactions with items, using the Amazon user-item click dataset (McAuley et al., 2015; He & McAuley, 2016) over the Movies and TV sub-category. We investigate the 10-class Fashion-MNIST (Xiao et al., 2017) dataset with the standard partition. (A minimal loading sketch for the standard Fashion-MNIST partition appears after the table.)
Dataset Splits | No | The paper mentions "parameter tuned by cross-validation" and provides details for train and test splits ("half of all examples construct the former task... 80% of examples are used for test"), but it does not specify an explicit validation dataset split with percentages or counts. (A sketch of such a split construction appears after the table.)
Hardware Specification | No | The paper does not specify any particular hardware components such as CPU or GPU models, memory, or specific computing clusters used for running the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python version, library versions) that would be needed to replicate the experiments.
Experiment Setup | No | The paper mentions that "default parameters are used for all methods" and describes some general aspects of the training process, but it does not provide concrete hyperparameter values (e.g., learning rate, batch size, number of epochs) or specific system-level training settings.
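
The BADMM update steps referenced in the Pseudocode row appear in the paper as equations rather than a labeled algorithm block. Purely as an illustration of what such an iterative update loop looks like, the following is a minimal sketch of standard scaled-form ADMM on a toy lasso problem; it is not the paper's BADMM derivation (Bregman ADMM replaces the quadratic penalty with a Bregman divergence), and all names are hypothetical.

```python
# Minimal sketch, NOT the paper's BADMM updates: a generic scaled-form ADMM loop
# for a toy lasso problem, min_x 0.5*||Ax - b||^2 + lam*||z||_1  s.t.  x = z.
# The paper's Bregman ADMM replaces the quadratic penalty with a Bregman
# divergence; the standard quadratic penalty is used here for simplicity.
import numpy as np

def soft_threshold(v, t):
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def admm_lasso(A, b, lam=0.1, rho=1.0, n_iters=200):
    n = A.shape[1]
    x, z, u = np.zeros(n), np.zeros(n), np.zeros(n)   # u is the scaled dual variable
    AtA, Atb = A.T @ A, A.T @ b
    M = np.linalg.inv(AtA + rho * np.eye(n))          # cached inverse for the x-update
    for _ in range(n_iters):
        x = M @ (Atb + rho * (z - u))                 # primal update for the smooth term
        z = soft_threshold(x + u, lam / rho)          # primal update for the l1 term
        u = u + x - z                                 # dual update on the constraint x = z
    return z

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 20))
    x_true = np.zeros(20)
    x_true[:3] = [1.0, -2.0, 0.5]
    b = A @ x_true + 0.01 * rng.standard_normal(50)
    print(admm_lasso(A, b)[:5])
```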
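
For the "standard partition" of Fashion-MNIST mentioned in the Open Datasets row, a minimal loading sketch with torchvision is shown below. It only illustrates the canonical 60,000/10,000 train/test split and is not taken from the paper, which does not state which loader was used.

```python
# Minimal sketch: loading Fashion-MNIST with its standard 60,000/10,000
# train/test partition via torchvision. Illustrative only.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()
train_set = datasets.FashionMNIST(root="./data", train=True, download=True, transform=to_tensor)
test_set = datasets.FashionMNIST(root="./data", train=False, download=True, transform=to_tensor)
print(len(train_set), len(test_set))  # 60000 10000
```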
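
The Dataset Splits row quotes a half/half task split and an 80% test fraction. A minimal sketch of how such splits could be constructed with scikit-learn is shown below, assuming a generic labeled dataset; the variable names, placeholder data, and stratification choice are hypothetical and not from the paper.

```python
# Minimal sketch (hypothetical names and data): split a labeled dataset into two
# tasks of equal size, then hold out 80% of the second task's examples for test,
# loosely following the split quotes in the Dataset Splits row. Not the authors' code.
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 30))    # placeholder features
y = rng.integers(0, 2, size=1000)      # placeholder binary labels

# Half of all examples construct the former (source) task.
X_src, X_tgt, y_src, y_tgt = train_test_split(
    X, y, test_size=0.5, random_state=0, stratify=y)

# In the current (target) task, 80% of examples are used for test.
X_train, X_test, y_train, y_test = train_test_split(
    X_tgt, y_tgt, test_size=0.8, random_state=0, stratify=y_tgt)

print(len(X_src), len(X_train), len(X_test))  # 500 100 400
```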