Partial Label Learning via Label Enhancement

Authors: Ning Xu, Jiaqi Lv, Xin Geng (pp. 5557-5564)

AAAI 2019

Reproducibility Variable / Result / LLM Response
Research Type: Experimental. "Extensive experiments show that PL-LE performs favorably against state-of-the-art partial label learning approaches."
Researcher Affiliation: Academia. "Ning Xu, Jiaqi Lv, Xin Geng. MOE Key Laboratory of Computer Network and Information Integration, China. School of Computer Science and Engineering, Southeast University, Nanjing 210096, China. {xning, lvjiaqi, xgeng}@seu.edu.cn"
Pseudocode: No. The paper contains mathematical equations and descriptions of procedures, but it does not include a clearly labeled 'Pseudocode' or 'Algorithm' block.
Open Source Code: No. The paper does not provide any links to open-source code for the described methodology, nor does it state that the code is available in supplementary materials or upon request.
Open Datasets: Yes. "Controlled UCI Data Sets: Table 1 summarizes the characteristics of six controlled UCI data sets (Bache and Lichman 2013). Real-World Data Sets: Table 3 summarizes the characteristics of real-world partial label data sets, which are collected from several application domains including FG-NET (Panis and Lanitis 2015)... Lost (Cour, Sapp, and Taskar 2011)... Soccer Player (Zeng et al. 2013) and Yahoo!News (Guillaumin, Verbeek, and Schmid 2010)... MSRCv2 (Liu and Dietterich 2012)... and Bird Song (Briggs, Fern, and Raich 2012)."
Dataset Splits: Yes. "Furthermore, pairwise t-test at 0.05 significance level is conducted based on the results of ten-fold cross-validation."
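The evaluation protocol quoted above (a pairwise t-test over ten-fold cross-validation results) can be sketched as follows. This is an illustrative reconstruction, not code from the paper: the per-fold accuracies are made-up values, `paired_t_statistic` is a hypothetical helper, and 2.262 is the standard two-tailed critical t value for 9 degrees of freedom at the 0.05 level.

```python
import math

def paired_t_statistic(scores_a, scores_b):
    """Paired t statistic over matched per-fold scores of two methods."""
    n = len(scores_a)
    diffs = [a - b for a, b in zip(scores_a, scores_b)]
    mean = sum(diffs) / n
    # Sample variance of the fold-wise differences (n - 1 denominator).
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n)

# Hypothetical per-fold accuracies over ten-fold cross-validation.
pl_le = [0.82, 0.85, 0.80, 0.84, 0.83, 0.81, 0.86, 0.84, 0.82, 0.85]
base  = [0.78, 0.80, 0.79, 0.81, 0.80, 0.77, 0.82, 0.80, 0.79, 0.81]

t = paired_t_statistic(pl_le, base)
# Two-tailed critical value for df = 9 at the 0.05 significance level.
print(abs(t) > 2.262)  # -> True for these toy scores
```

The test is paired because both methods are scored on the same ten folds, so fold-wise differences cancel out fold difficulty.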
Hardware Specification: No. The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory, or cloud instances) used to run the experiments.
Software Dependencies: No. The paper mentions parameters for its own model and configurations for comparative algorithms (e.g., 'SVM with squared hinge loss'), but it does not specify software dependencies such as programming languages, libraries, or frameworks with version numbers (e.g., Python 3.x, TensorFlow 2.x, PyTorch 1.x).
Experiment Setup: Yes. "For PL-LE, the parameter λ is set to 0.01 and the number of neighbors K is set to 20. The parameters C1 and C2 are set to 1 and 1, respectively. The kernel function in PL-LE is Gaussian kernel."
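Two ingredients of the reported setup, the Gaussian kernel and a K-nearest-neighbor lookup, can be sketched as below. This is a minimal illustration, not the authors' implementation: the kernel width `sigma`, the function names, and the toy data are assumptions (the paper reports K = 20 and λ = 0.01 but not a kernel width).

```python
import math

def gaussian_kernel(x, z, sigma=1.0):
    """Gaussian (RBF) similarity; sigma is an assumed width."""
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, z))
    return math.exp(-sq_dist / (2.0 * sigma ** 2))

def k_nearest_neighbors(X, i, K=20):
    """Indices of the K nearest points to X[i] by Euclidean distance."""
    dists = sorted(
        (sum((a - b) ** 2 for a, b in zip(X[i], X[j])), j)
        for j in range(len(X)) if j != i
    )
    return [j for _, j in dists[:K]]

# Toy usage; K=3 here only because the toy set is small (the paper uses K=20).
X = [[0.0, 0.0], [0.1, 0.0], [0.0, 0.2], [1.0, 1.0], [0.9, 1.1]]
print(gaussian_kernel(X[0], X[0]))      # identical points -> 1.0
print(k_nearest_neighbors(X, 0, K=3))   # -> [1, 2, 3]
```

In label-enhancement methods of this family, such a neighbor graph is typically used to smooth candidate-label confidences across nearby instances, with the kernel providing the pairwise weights.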