Partial Label Learning via Label Influence Function
Authors: Xiuwen Gong, Dong Yuan, Wei Bao
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted on various datasets demonstrate the superiority of the proposed methods in terms of prediction accuracy, which in turn validates the effectiveness of the proposed PLL-IF framework. |
| Researcher Affiliation | Academia | Faculty of Engineering, The University of Sydney, NSW, Australia. |
| Pseudocode | Yes | Algorithm 1: CG Algorithm (a minimal CG sketch appears after this table). |
| Open Source Code | No | The paper does not contain any statement or link indicating that the source code for their methodology is publicly available. |
| Open Datasets | Yes | We conducted controlled experiments on synthetic PLL datasets configured from six UCI datasets... and We also conducted experiments on six real-world PLL datasets, which are summarized in Table 2. |
| Dataset Splits | Yes | For evaluation, we perform five-fold cross-validation on each dataset and report the mean accuracy with standard deviation for each method (see the cross-validation sketch after this table). |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments. |
| Software Dependencies | No | The paper mentions software like LIBLINEAR and PyTorch, but does not provide specific version numbers for these or other dependencies required for reproducibility. |
| Experiment Setup | Yes | Specifically, we employ a 3-layer neural network for the proposed PLL-IF+NN, using the LeakyReLU activation function in the two middle layers with 512 and 256 hidden units respectively, and a softmax function in the output layer. The optimizer is Adam (Kingma & Ba, 2015) with an initial learning rate of 0.0001. The mini-batch size is set to 32, and we train the model for 500 epochs with cross-entropy loss, updating the ground-truth label variable every epoch (see the PyTorch sketch after this table). |
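
The paper's Algorithm 1 is a conjugate-gradient (CG) routine. In influence-function methods, CG is typically used to solve the Hessian linear system using only Hessian-vector products, without ever materializing the Hessian; that role, and all names below, are assumptions rather than the authors' code. A minimal textbook CG sketch:

```python
import numpy as np

def conjugate_gradient(matvec, b, tol=1e-8, max_iter=100):
    """Solve A x = b for symmetric positive-definite A, given only
    a matrix-vector product matvec(v) = A @ v. In influence-function
    settings, matvec would be a Hessian-vector product."""
    x = np.zeros_like(b)
    r = b - matvec(x)   # residual
    p = r.copy()        # search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs_old / (p @ Ap)   # step size along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:   # converged
            break
        p = r + (rs_new / rs_old) * p  # next conjugate direction
        rs_old = rs_new
    return x

# Example: solve a small SPD system.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(lambda v: A @ v, b)  # ~= [0.0909, 0.6364]
```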
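The quoted evaluation protocol (five-fold cross-validation, reporting mean accuracy with standard deviation) can be sketched as follows, assuming scikit-learn's `KFold`; the data and placeholder predictions are hypothetical, not the paper's:

```python
import numpy as np
from sklearn.model_selection import KFold

# Hypothetical data: X (n_samples, n_features), y (n_samples,) true labels.
rng = np.random.default_rng(0)
X = rng.standard_normal((100, 10))
y = rng.integers(0, 3, 100)

kf = KFold(n_splits=5, shuffle=True, random_state=0)
accuracies = []
for train_idx, test_idx in kf.split(X):
    X_tr, y_tr = X[train_idx], y[train_idx]
    X_te, y_te = X[test_idx], y[test_idx]
    # model = fit(X_tr, y_tr); preds = model.predict(X_te)
    preds = rng.integers(0, 3, len(test_idx))  # placeholder predictions
    accuracies.append((preds == y_te).mean())

# Mean accuracy with standard deviation across the five folds.
print(f"{np.mean(accuracies):.3f} ± {np.std(accuracies):.3f}")
```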
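A minimal PyTorch sketch of the quoted PLL-IF+NN setup. The input dimension and number of classes are placeholders, and folding the softmax into `nn.CrossEntropyLoss` (which applies log-softmax internally) is an implementation assumption:

```python
import torch
import torch.nn as nn

class PLLNet(nn.Module):
    """3-layer network per the quoted setup: two LeakyReLU hidden
    layers with 512 and 256 units, softmax handled by the loss."""
    def __init__(self, in_dim, num_classes):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.LeakyReLU(),
            nn.Linear(512, 256), nn.LeakyReLU(),
            nn.Linear(256, num_classes),  # logits; no explicit softmax
        )

    def forward(self, x):
        return self.net(x)

model = PLLNet(in_dim=300, num_classes=10)  # dimensions are placeholders
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()  # applies log-softmax internally
# Train for 500 epochs with mini-batches of 32, updating the
# ground-truth label estimate once per epoch, per the quoted setup.
```

Keeping the final layer linear and letting the loss apply log-softmax is the numerically stable idiom in PyTorch; an explicit softmax would only be needed at inference time to read off probabilities.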