Handling Learnwares Developed from Heterogeneous Feature Spaces without Auxiliary Data
Authors: Peng Tan, Zhi-Hao Tan, Yuan Jiang, Zhi-Hua Zhou
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on real-world tasks validate the efficacy of our method. |
| Researcher Affiliation | Academia | Peng Tan, Zhi-Hao Tan, Yuan Jiang and Zhi-Hua Zhou, National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China. {tanp, tanzh, jiangy, zhouzh}@lamda.nju.edu.cn |
| Pseudocode | Yes | The overall procedure is sketched in Algorithms 1 and 2. |
| Open Source Code | Yes | https://github.com/LAMDA-TP/Heterogeneous-learnware-without-auxiliary-data |
| Open Datasets | Yes | We conduct empirical experiments on six heterogeneous learnware scenarios involving five real-world tasks: Mfeat [van Breukelen et al., 1998], Anuran [Colonna et al., 2012], Digits [Garris et al., 1997], Kddcup99 [Lippmann et al., 2000] and Covtype [Blackard and Dean, 1999]. |
| Dataset Splits | Yes | The dimension of subspace is chosen by cross validation. |
| Hardware Specification | No | The paper does not specify the hardware used for experiments, such as specific GPU or CPU models. |
| Software Dependencies | No | The paper mentions testing 'several model types like SVM and random forest' but does not specify software dependencies with version numbers (e.g., Python version, library versions). |
| Experiment Setup | Yes | In our experiment, parameters are set as follows: the reduced set size m_i is 10... For subspace learning, the trade-off parameters are set as α = 10^5, γ = 1, the max iteration is t = 500 and the learning rate is η = 10^-2. The dimension of subspace is chosen by cross validation. We test several model types like SVM and random forest. All experiments are repeated 50 times. |
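The Experiment Setup row above can be made concrete with a short sketch: the reported hyperparameters collected in one place, plus a generic k-fold routine of the kind one would use to choose the subspace dimension by cross-validation. This is illustrative only; the helper names (`kfold_indices`, `cv_select_dimension`, `score_fn`) are our own and not from the authors' released code, and the exponent signs on α and η were garbled in the extracted text, so they are best-guess reconstructions.

```python
# Hyperparameters as reported in the paper's experiment section.
# NOTE: the superscript signs on alpha and the learning rate were lost in
# text extraction; 1e5 and 1e-2 are plausible reconstructions, not certainties.
HYPERPARAMS = {
    "reduced_set_size_m_i": 10,  # reduced set size per learnware
    "alpha": 1e5,                # subspace-learning trade-off parameter
    "gamma": 1.0,                # second trade-off parameter
    "max_iter": 500,             # max iterations t for subspace learning
    "learning_rate": 1e-2,       # learning rate eta
    "n_repeats": 50,             # all experiments repeated 50 times
}

def kfold_indices(n_samples, k=5):
    """Split range(n_samples) into k disjoint, near-equal folds."""
    fold_sizes = [n_samples // k + (1 if i < n_samples % k else 0)
                  for i in range(k)]
    folds, start = [], 0
    for size in fold_sizes:
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def cv_select_dimension(candidates, score_fn, n_samples, k=5):
    """Pick the subspace dimension with the best mean cross-validation score.

    `score_fn(dim, train_idx, val_idx)` stands in for training a base model
    (e.g. SVM or random forest, as the paper tests) on a subspace of
    dimension `dim` and returning a validation score.
    """
    folds = kfold_indices(n_samples, k)
    best_dim, best_score = None, float("-inf")
    for dim in candidates:
        scores = []
        for i, val_idx in enumerate(folds):
            train_idx = [j for f in folds[:i] + folds[i + 1:] for j in f]
            scores.append(score_fn(dim, train_idx, val_idx))
        mean_score = sum(scores) / len(scores)
        if mean_score > best_score:
            best_dim, best_score = dim, mean_score
    return best_dim
```

With a toy scoring function that peaks at dimension 20, `cv_select_dimension([10, 20, 30], lambda d, tr, va: -abs(d - 20), n_samples=100)` returns 20, mirroring the "dimension chosen by cross validation" step.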