Semi-Supervised Partial Label Learning via Confidence-Rated Margin Maximization
Authors: Wei Wang, Min-Ling Zhang
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on synthetic as well as real-world data sets clearly validate the effectiveness of the proposed semi-supervised partial label learning approach. |
| Researcher Affiliation | Academia | Wei Wang and Min-Ling Zhang, School of Computer Science and Engineering, Southeast University, Nanjing 210096, China; Key Laboratory of Computer Network and Information Integration (Southeast University), Ministry of Education, China; {wang_w, zhangml}@seu.edu.cn |
| Pseudocode | Yes | Table 1: Pseudo-code of PARM. |
| Open Source Code | No | The paper does not provide a specific link to source code or explicitly state that the code for the methodology is being released or is publicly available. |
| Open Datasets | Yes | Table 2 summarizes characteristics of the experimental data sets used in this paper. Following the widely-used experimental protocol in partial label learning [6, 7, 8, 11], synthetic PL data sets are generated from multi-class UCI data sets with controlling parameter r (a data-generation sketch follows the table). Furthermore, five real-world PL data sets from different task domains have also been employed for experimental studies, including Lost [8], LYN10, LYN20 [12] for automatic face naming, Mirflickr [13] for web image classification, and Bird Song [4] for bird song classification. |
| Dataset Splits | Yes | On each data set, ten-fold cross-validation is performed, and the mean accuracy as well as the standard deviation are recorded for all comparing approaches (see the evaluation sketch after the table). |
| Hardware Specification | No | The paper does not provide specific hardware details (like CPU/GPU models or memory amounts) used for running the experiments. It only mentions: "We thank the Big Data Center of Southeast University for providing the facility support on the numerical calculations in this paper." |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, specific libraries) needed to replicate the experiment. |
| Experiment Setup | Yes | In this paper, σ, k and α are fixed to be 1, 8 and 0.95, respectively. ...the regularization parameters λ and µ for PARM are chosen among {0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10} via cross-validation on the training set, and γ = 0.01. |
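
To make the synthetic-data protocol noted in the Open Datasets row concrete, the following is a minimal sketch of how candidate label sets are typically built from a multi-class data set. It assumes, as is common in the partial label learning protocol the paper cites, that the controlling parameter r is the number of false-positive labels added to each instance's candidate set; the function name and random-number handling are illustrative and not taken from the paper, whose code is not released.

```python
import numpy as np

def make_partial_labels(y, num_classes, r, seed=None):
    """Turn ordinary multi-class labels into candidate label sets.

    Assumption: r is the number of false-positive labels added to each
    instance's candidate set alongside the ground-truth label, following
    the synthetic-PL protocol commonly used in the cited literature.
    """
    rng = np.random.default_rng(seed)
    n = len(y)
    candidates = np.zeros((n, num_classes), dtype=bool)
    candidates[np.arange(n), y] = True  # the ground-truth label is always a candidate
    for i in range(n):
        false_pool = np.setdiff1d(np.arange(num_classes), [y[i]])
        extra = rng.choice(false_pool, size=min(r, len(false_pool)), replace=False)
        candidates[i, extra] = True  # add r false-positive candidate labels
    return candidates

# Example: 5 classes, r = 2 false positives per instance
y = np.array([0, 3, 1, 4, 2])
S = make_partial_labels(y, num_classes=5, r=2, seed=0)
print(S.astype(int))  # each row has exactly r + 1 candidate labels
```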
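
The Dataset Splits and Experiment Setup rows together describe the evaluation protocol: ten-fold cross-validation with mean accuracy and standard deviation reported, and λ, µ selected from a fixed grid by cross-validation on each training fold. Below is a hedged sketch of that protocol, not the authors' implementation: `train_parm` and `predict_parm` are hypothetical placeholders (the PARM code is not public), and the five-fold inner split is an assumption, since the paper does not state how the training-set cross-validation is partitioned.

```python
import numpy as np
from itertools import product
from sklearn.model_selection import KFold

# Hypothetical placeholders: the authors' PARM implementation is not publicly released.
def train_parm(X, S, lam, mu, gamma=0.01):
    raise NotImplementedError("stand-in for the authors' PARM training routine")

def predict_parm(model, X):
    raise NotImplementedError("stand-in for the authors' PARM prediction routine")

GRID = [0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1, 5, 10]  # grid for lambda and mu

def ten_fold_evaluation(X, S, y_true, seed=0):
    """Ten-fold CV reporting mean accuracy and standard deviation;
    lambda and mu are tuned by an inner CV on each outer training fold."""
    outer = KFold(n_splits=10, shuffle=True, random_state=seed)
    fold_acc = []
    for tr, te in outer.split(X):
        # Inner cross-validation on the training fold (5 splits is an assumption).
        inner = KFold(n_splits=5, shuffle=True, random_state=seed)
        best_params, best_score = None, -np.inf
        for lam, mu in product(GRID, GRID):
            scores = []
            for itr, iva in inner.split(X[tr]):
                model = train_parm(X[tr][itr], S[tr][itr], lam, mu)
                pred = predict_parm(model, X[tr][iva])
                scores.append(np.mean(pred == y_true[tr][iva]))
            if np.mean(scores) > best_score:
                best_params, best_score = (lam, mu), np.mean(scores)
        model = train_parm(X[tr], S[tr], *best_params)
        fold_acc.append(np.mean(predict_parm(model, X[te]) == y_true[te]))
    return float(np.mean(fold_acc)), float(np.std(fold_acc))
```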