Partial Label Learning with Dissimilarity Propagation guided Candidate Label Shrinkage

Authors: Yuheng Jia, Fuchao Yang, Yongqiang Dong

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on artificial and real-world partial label data sets demonstrate the effectiveness of the proposed PLL method.
Researcher Affiliation | Academia | Yuheng Jia (1,3), Fuchao Yang (2), Yongqiang Dong (1). (1) School of Computer Science and Engineering, Southeast University, Nanjing 210096, China; (2) College of Software Engineering, Southeast University, Nanjing 210096, China; (3) Key Laboratory of New Generation Artificial Intelligence Technology and Its Interdisciplinary Applications (Southeast University), Ministry of Education, China. Emails: yhjia@seu.edu.cn, yangfc@seu.edu.cn, dongyq@seu.edu.cn
Pseudocode | Yes | Algorithm 1: The Pseudo Code of the Proposed Method
Open Source Code | Yes | The code is publicly available at https://github.com/Yangfc-ML/DPCLS.
Open Datasets | Yes | To demonstrate the effectiveness of the proposed model, we compared DPCLS with eight shallow PLL algorithms, each configured with the parameters suggested in the literature: CLPL [1], PL-SVM [12], PL-KNN [5], PL-DA [18], IPAL [22], AGGD [16], PL-CLA [13], and SDIM [2]. These methods were evaluated on 10 synthetic data sets and 7 real-world data sets, whose details can be found in Section C of the supplementary file.
Dataset Splits | Yes | Ten runs of 50%/50% random train/test splits were performed on each data set, and the average classification accuracy and the standard deviation were recorded. (A runnable sketch of this protocol appears after the table.)
Hardware Specification | No | The paper does not provide specific hardware details, such as exact GPU/CPU models, processor types, or memory amounts used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with version numbers, such as Python 3.8 or CPLEX 12.4) needed to replicate the experiment.
Experiment Setup | Yes | Hyper-parameter settings of our method: Parameter λ is used to control the model complexity. ... we set λ = 0.05 for our method. Parameters α and β control the importance of the adversarial term and the dissimilarity propagation term, respectively. According to a number of experiments, we fix β = 0.001 and select α from {0.001, 0.01}. Parameter k controls the number of k-nearest neighbors; following previous works [16, 22], we set k = 10. (The configuration sketch after the table collects these values.)
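
The Dataset Splits protocol (ten runs of random 50%/50% train/test splits, reporting mean accuracy and standard deviation) is straightforward to reproduce. Below is a minimal, self-contained sketch of that protocol. It uses scikit-learn's digits data with a synthetic candidate-label generator and a PL-KNN-style baseline purely as stand-ins; the paper's own experiments run DPCLS and the compared methods on the data sets listed above.

```python
import numpy as np
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors

def make_partial_labels(y, n_classes, n_false=2, seed=0):
    """Synthetic PLL labels: each instance keeps its true label and
    gains n_false randomly chosen false-positive candidates."""
    rng = np.random.default_rng(seed)
    Y = np.zeros((len(y), n_classes))
    Y[np.arange(len(y)), y] = 1.0
    for i, yi in enumerate(y):
        false = rng.choice([c for c in range(n_classes) if c != yi],
                           size=n_false, replace=False)
        Y[i, false] = 1.0
    return Y

def plknn_predict(X_tr, Y_tr, X_te, k=10):
    """PL-KNN-style disambiguation: average the candidate-label vectors
    of the k nearest training neighbors and predict the argmax."""
    _, idx = NearestNeighbors(n_neighbors=k).fit(X_tr).kneighbors(X_te)
    return Y_tr[idx].mean(axis=1).argmax(axis=1)

X, y = load_digits(return_X_y=True)
accs = []
for seed in range(10):  # ten runs of 50%/50% random splits, as in the paper
    Y = make_partial_labels(y, n_classes=10, seed=seed)
    X_tr, X_te, Y_tr, _, _, y_te = train_test_split(
        X, Y, y, test_size=0.5, random_state=seed)
    accs.append(np.mean(plknn_predict(X_tr, Y_tr, X_te) == y_te))

print(f"accuracy: {np.mean(accs):.4f} +/- {np.std(accs):.4f}")
```

Swapping the released DPCLS implementation and the paper's data sets into the same loop reproduces the reported mean/std protocol.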
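
The hyper-parameter values in the Experiment Setup row amount to a two-point search over α with everything else fixed. The sketch below records those values as a configuration grid; the variable names are our own and may not match those in the released repository.

```python
# Hyper-parameters as reported in the paper; the variable names below are
# ours and may differ from those used in the released DPCLS code.
FIXED = {
    "lam": 0.05,    # lambda: controls model complexity
    "beta": 0.001,  # weight of the dissimilarity propagation term
    "k": 10,        # number of nearest neighbors, following [16, 22]
}
ALPHA_GRID = [0.001, 0.01]  # weight of the adversarial term, chosen per data set

configs = [{**FIXED, "alpha": a} for a in ALPHA_GRID]
for cfg in configs:
    print(cfg)  # e.g., pass each as keyword arguments to the released trainer
```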