Modal Consistency based Pre-Trained Multi-Model Reuse
Authors: Yang Yang, De-Chuan Zhan, Xiang-Yu Guo, Yuan Jiang
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on synthetic and real-world datasets validate the effectiveness of PM2R compared with state-of-the-art ensemble/multi-modal learning methods under this more realistic setting. |
| Researcher Affiliation | Academia | Yang Yang, De-Chuan Zhan, Xiang-Yu Guo, Yuan Jiang; National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China; {yangy, zhandc, guoxy, jiangy}@lamda.nju.edu.cn |
| Pseudocode | Yes | Algorithm 1 The pseudo code of PM2Rone; Algorithm 2 The pseudo code of PM2R |
| Open Source Code | No | The paper does not provide any concrete access to source code, such as a specific repository link, an explicit code release statement, or mention of code in supplementary materials. |
| Open Datasets | Yes | Synthetic data are generated according to [Khetan and Oh, 2016]. CASPEAL [Gao et al., 2008] is constructed by Chinese Academy of Sciences (CAS). We utilize the WIKI [Rothe et al., 2015] dataset, which is also a face dataset with the same input size, for models pre-training, and predict with Pos., Exp., Acc. sets separately. |
| Dataset Splits | No | The paper states that 'The test sets are drawn from the users side data with bootstrap, and repeated for 30 times for each task,' but it does not specify exact split percentages or sample counts for training, validation, and test sets, nor does it cite predefined splits that would allow full reproducibility. |
| Hardware Specification | No | The paper does not provide any specific hardware details, such as GPU/CPU models, processor types, or memory amounts, used for running its experiments. It mentions only general concepts such as 'pre-trained models' and 'deep models'. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiment. |
| Experiment Setup | No | The paper mentions general aspects of model generation (e.g., 'train 12 random forest models with different number of trees, 24 support vector models with different kernel methods or costs using these features respectively, besides, 4 deep models are also included'), but it does not specify concrete hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed system-level training settings required for replication. |
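The model pool quoted in the Experiment Setup row (12 random forests with different tree counts, 24 SVMs with different kernels or costs, and 4 deep models) can be sketched as a configuration grid. This is a hypothetical illustration, not the authors' code: the concrete tree counts, kernel names, and cost values below are assumptions, since the paper does not report them.

```python
# Hypothetical sketch of the heterogeneous pre-trained model pool described
# in the paper. All concrete hyperparameter values are illustrative
# assumptions; the paper itself does not specify them.

def build_model_pool_configs():
    # 12 random forest configs: assumed tree counts 10, 20, ..., 120.
    rf_configs = [{"model": "random_forest", "n_trees": 10 * (i + 1)}
                  for i in range(12)]
    # 24 SVM configs: assumed 4 kernels x 6 cost values = 24 combinations.
    kernels = ["linear", "poly", "rbf", "sigmoid"]
    costs = [0.01, 0.1, 1.0, 10.0, 100.0, 1000.0]
    svm_configs = [{"model": "svm", "kernel": k, "C": c}
                   for k in kernels for c in costs]
    # 4 deep models: placeholders; architectures are not given in the paper.
    deep_configs = [{"model": f"deep_net_{i}"} for i in range(4)]
    return rf_configs + svm_configs + deep_configs

pool = build_model_pool_configs()
assert len(pool) == 12 + 24 + 4  # 40 pre-trained models in total
```

Such a grid makes explicit what a replication would still be missing: the actual kernel/cost pairs and deep architectures must be chosen by the reproducer, which is why the Experiment Setup variable is marked No.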