Unreliable Partial Label Learning with Recursive Separation
Authors: Yu Shi, Ning Xu, Hua Yuan, Xin Geng
IJCAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our method demonstrates state-of-the-art performance as evidenced by experimental results, particularly in situations of high unreliability. Code and supplementary materials are available at https://github.com/dhiyu/UPLLRS. Experiments show that our method achieves state-of-the-art results on the UPLL datasets. |
| Researcher Affiliation | Academia | School of Computer Science and Engineering, Southeast University, Nanjing 211189, China |
| Pseudocode | Yes | Algorithm 1 Self-adaptive RS Algorithm. Input: separation network f(·; θ) with trainable parameters θ; unreliable partial label training set D = {(x_i, s_i)}_{i=1}^n and validation set V = {(x_i, y_i)}_{i=1}^k; small epochs β for each separation step; separation rate γ; RS patience ϕ and max separation step λ. Output: reliable subset D^λ_R = {(x_i, s_i)}_{i=1}^m and unreliable subset D^λ_U = {x_i}_{i=1}^{n−m}. 1: Let ϕ_curr ← 0 and Acc_V ← 0; 2: for i ← 1 to λ do 3: Randomly initialize θ^i_0; 4: for j ← 1 to β do 5: Train f(·; θ^i_{j−1}) using dataset D^i_R; 6: Calculate loss l according to Eq. 3; 7: Update parameters from θ^i_{j−1} to θ^i_j; 8: if j = β then 9: Sort l by value in descending order; 10: Exclude the top-γ instances from D^i_R and add the excluded instances to D^i_U without labels; 11: end if 12: end for 13: Evaluate f(·; θ^i_j) on dataset V and calculate accuracy Acc_curr; 14: if Acc_curr < Acc_V then 15: ϕ_curr ← ϕ_curr + 1; 16: if ϕ_curr ≥ ϕ then 17: break; 18: end if 19: else 20: Acc_V ← Acc_curr, ϕ_curr ← 0; 21: end if 22: end for 23: return reliable subset D^λ_R and unreliable subset D^λ_U. (A Python sketch of this procedure is given after the table.) |
| Open Source Code | Yes | Code and supplementary materials are available at https://github.com/dhiyu/UPLLRS. |
| Open Datasets | Yes | We utilize two commonly employed image datasets, CIFAR-10 and CIFAR-100 [Krizhevsky et al., 2009], as the basis for synthesizing our UPLL dataset. In addition, we utilize two more datasets, Dermatology and 20Newsgroups, from the UCI Machine Learning Repository [Dua and Graff, 2017] to further validate the effectiveness of our proposed method. |
| Dataset Splits | Yes | In our experiments, the datasets are partitioned into training, validation, and test sets in a 4:1:1 ratio. (A split sketch is given after the table.) |
| Hardware Specification | Yes | All experiments are conducted on NVIDIA RTX 3090. |
| Software Dependencies | No | The paper states that the implementation is "based on the PyTorch [Paszke et al., 2019] framework" but does not provide specific version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | For the first stage (i.e., self-adaptive RS), a 5-layer multilayer perceptron (MLP) is utilized to separate samples with the CCE loss [Lv et al., 2023]. The learning rate is 0.1, 0.18, 0.1; small epochs β = 5, 6, 5; and separation rate γ = 0.03, 0.005, 0.03 on the CIFAR-10, CIFAR-100, and UCI datasets, respectively. The max separation step is λ = ⌈log_{1−γ} 0.3⌉. The learning rate is 5e-2 and the weight decay is 1e-3; ξ is set to 2 on CIFAR-10 and 0.3 on CIFAR-100. The optimizer is Stochastic Gradient Descent (SGD) [Robbins and Monro, 1951] with momentum 0.9, and the learning rate scheduler is a cosine learning rate decay [Loshchilov and Hutter, 2016]. In addition, each model is trained for a maximum of T = 500 epochs with an early stopping strategy (patience 25). (A PyTorch sketch of this optimizer and scheduler setup is given after the table.) |
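The first-stage procedure quoted in the Pseudocode row (Algorithm 1) is easier to follow as code. Below is a minimal Python sketch of the recursive-separation loop, assuming hypothetical caller-supplied helpers `build_model`, `train_one_epoch`, `per_sample_loss` (the per-instance Eq. 3 loss), and `evaluate`; it reproduces only the control flow of the pseudocode, not the authors' implementation.

```python
import math

def recursive_separation(reliable, unreliable, val_set,
                         build_model, train_one_epoch, per_sample_loss, evaluate,
                         beta=5, gamma=0.03, patience=2, max_steps=None):
    """Sketch of Algorithm 1: recursively move high-loss samples to the unreliable set.

    `reliable` is a list of (x, candidate_label_set) pairs, `unreliable` a list of x.
    The four callables are hypothetical stand-ins for the separation network,
    its training step, the per-sample Eq. 3 loss, and validation accuracy.
    """
    if max_steps is None:
        # Assumed reading of the paper's max separation step: ceil(log_{1-gamma} 0.3).
        max_steps = math.ceil(math.log(0.3, 1.0 - gamma))

    best_val_acc, stalled = 0.0, 0
    for _ in range(max_steps):
        model = build_model()                      # randomly re-initialized each separation step
        for epoch in range(beta):
            train_one_epoch(model, reliable)
            if epoch == beta - 1:
                # Rank samples by loss and move the top-gamma fraction out, dropping their labels.
                losses = per_sample_loss(model, reliable)
                order = sorted(range(len(reliable)), key=lambda i: losses[i], reverse=True)
                dropped = set(order[:int(gamma * len(reliable))])
                unreliable += [reliable[i][0] for i in dropped]
                reliable = [ex for i, ex in enumerate(reliable) if i not in dropped]

        val_acc = evaluate(model, val_set)         # self-adaptive stopping on validation accuracy
        if val_acc < best_val_acc:
            stalled += 1
            if stalled >= patience:
                break
        else:
            best_val_acc, stalled = val_acc, 0

    return reliable, unreliable
```

The defaults for `beta` and `gamma` mirror the CIFAR-10 values quoted in the Experiment Setup row; the `patience` default is a placeholder, since the paper leaves the RS patience ϕ as an input to Algorithm 1.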
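The 4:1:1 partition reported in the Dataset Splits row can be reproduced with two calls to scikit-learn's `train_test_split`; the function name `split_4_1_1` and the fixed seed are illustrative and not from the paper.

```python
from sklearn.model_selection import train_test_split

def split_4_1_1(X, y, seed=0):
    """Partition (X, y) into training/validation/test sets in a 4:1:1 ratio."""
    # Hold out 2/6 of the data, then split that remainder evenly into validation and test.
    X_train, X_rest, y_train, y_rest = train_test_split(X, y, test_size=2 / 6, random_state=seed)
    X_val, X_test, y_val, y_test = train_test_split(X_rest, y_rest, test_size=0.5, random_state=seed)
    return (X_train, y_train), (X_val, y_val), (X_test, y_test)
```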
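The optimization settings in the Experiment Setup row translate to a short PyTorch training loop. The sketch below is a hedged illustration, not the authors' code: `model`, `train_loader`, `val_loader`, `criterion`, and `evaluate` are placeholders, `CosineAnnealingLR` is assumed as the cosine-decay scheduler, and only the reported hyperparameters (SGD with momentum 0.9, learning rate 5e-2, weight decay 1e-3, at most 500 epochs, early stopping with patience 25) come from the paper.

```python
import torch

def train(model, train_loader, val_loader, criterion, evaluate,
          max_epochs=500, patience=25, device="cuda"):
    """Second-stage training loop using the reported optimizer and scheduler settings."""
    optimizer = torch.optim.SGD(model.parameters(), lr=5e-2,
                                momentum=0.9, weight_decay=1e-3)
    scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=max_epochs)

    best_acc, stalled = 0.0, 0
    for epoch in range(max_epochs):
        model.train()
        for x, target in train_loader:
            x, target = x.to(device), target.to(device)
            optimizer.zero_grad()
            loss = criterion(model(x), target)
            loss.backward()
            optimizer.step()
        scheduler.step()                           # cosine learning-rate decay per epoch

        acc = evaluate(model, val_loader)          # validation accuracy for early stopping
        if acc > best_acc:
            best_acc, stalled = acc, 0
        else:
            stalled += 1
            if stalled >= patience:                # stop after 25 epochs without improvement
                break
    return model
```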