Partial Label Learning with a Partner

Authors: Chongjie Si, Zekun Jiang, Xuehui Wang, Yan Wang, Xiaokang Yang, Wei Shen

AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response

Research Type | Experimental | Extensive experiments demonstrate that the performance and disambiguation ability of several well-established stand-alone and deep-learning based PLL approaches can be significantly improved by coupling with this learning paradigm.

Researcher Affiliation | Academia | Chongjie Si¹, Zekun Jiang¹, Xuehui Wang¹, Yan Wang², Xiaokang Yang¹, Wei Shen¹* ¹MoE Key Lab of Artificial Intelligence, AI Institute, Shanghai Jiao Tong University ²Shanghai Key Lab of Multidimensional Information Processing, East China Normal University {chongjiesi, zkjiangzekun.cmu, wangxuehui, xkyang, wei.shen}@sjtu.edu.cn; ywang@cee.ecnu.edu.cn

Pseudocode | No | The paper describes the proposed approach using mathematical equations and descriptive text, but it does not include any explicitly labeled pseudocode or algorithm blocks in the main text.

Open Source Code | No | The paper does not provide an explicit statement about releasing source code, nor a link to a code repository for the described methodology.

Open Datasets | Yes | We conduct experiments on six real-world partial label data sets collected from several domains and tasks, including FG-NET (Panis et al. 2016) for facial age estimation, Lost (Cour et al. 2009), Soccer Player (Zeng et al. 2013) and Yahoo!News (Guillaumin, Verbeek, and Schmid 2010) for automatic face naming, MSRCv2 (Liu and Dietterich 2012) for object classification and Mirflickr (Huiskes and Lew 2008) for web image classification. ... We conduct experiments on two benchmarks CIFAR-10 and CIFAR-100 (Krizhevsky, Hinton et al. 2009)...

Dataset Splits | No | The paper states that "Ten runs of 50%/50% random train/test splits are performed", but it does not describe a separate validation split (see the protocol sketch after the table).

Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU or GPU models, memory, or cloud instance types) used to run the experiments.

Software Dependencies | No | The paper does not provide version numbers for any software dependencies, libraries, or programming languages used in the experiments.

Experiment Setup | Yes | For PLCP, we set λ = 0.05, α = 0.5, γ = 2 and k = 1, and the maximum iteration of mutual supervision is set to 5. For PL-CL, PL-AGGD, SURE, LALO and PL-SVM, the kernel function is Gaussian function, which is the same as we adopt. The hyperparameters of B are all set according to the original papers. (See the configuration sketch after the table.)
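
The split protocol quoted under Dataset Splits is simple enough to sketch. Below is a minimal Python reproduction of ten runs of 50%/50% random train/test splits with averaged test accuracy; `train_and_evaluate` is a hypothetical stand-in for fitting a PLL model on candidate label sets and scoring it against ground truth, since the authors release no code and report no random seeds.

    import numpy as np
    from sklearn.model_selection import train_test_split

    def evaluate_ten_runs(X, Y_candidate, y_true, train_and_evaluate, seed=0):
        """Ten runs of 50%/50% random train/test splits; returns mean and std accuracy."""
        accs = []
        for run in range(10):
            train_idx, test_idx = train_test_split(
                np.arange(len(X)), test_size=0.5, random_state=seed + run
            )
            accs.append(
                train_and_evaluate(
                    X[train_idx], Y_candidate[train_idx],  # train on candidate label sets
                    X[test_idx], y_true[test_idx],         # score against ground-truth labels
                )
            )
        return float(np.mean(accs)), float(np.std(accs))

Called as evaluate_ten_runs(X, Y_cand, y, run_plcp), this follows the stated protocol, though not the paper's exact numbers, since the seeds are unreported.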
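
Similarly, the Experiment Setup row can be collected into a configuration sketch. The dictionary below records the PLCP hyperparameters as reported, and the Gaussian (RBF) kernel is the kernel family the paper says is shared by PL-CL, PL-AGGD, SURE, LALO and PL-SVM. The names (PLCP_CONFIG, gaussian_kernel) and the sigma bandwidth are illustrative assumptions, not the authors' implementation.

    import numpy as np

    # PLCP hyperparameters as reported in the paper.
    PLCP_CONFIG = {
        "lambda": 0.05,  # λ
        "alpha": 0.5,    # α
        "gamma": 2.0,    # γ
        "k": 1,          # k
        "max_mutual_supervision_iters": 5,  # maximum mutual-supervision rounds
    }

    def gaussian_kernel(X, Y, sigma=1.0):
        """Gaussian (RBF) kernel; sigma is an assumed bandwidth (not given in the paper)."""
        # Pairwise squared Euclidean distances, clipped at zero for numerical safety.
        sq = (X**2).sum(1)[:, None] + (Y**2).sum(1)[None, :] - 2.0 * X @ Y.T
        return np.exp(-np.maximum(sq, 0.0) / (2.0 * sigma**2))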