Long-Tailed Partial Label Learning via Dynamic Rebalancing
Authors: Feng Hong, Jiangchao Yao, Zhihan Zhou, Ya Zhang, Yanfeng Wang
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on three benchmark datasets demonstrate the significant gain of RECORDS compared with a range of baselines. The code is publicly available. |
| Researcher Affiliation | Collaboration | Feng Hong¹, Jiangchao Yao¹,², Zhihan Zhou¹, Ya Zhang¹,², Yanfeng Wang¹,²; ¹Cooperative Medianet Innovation Center, Shanghai Jiao Tong University; ²Shanghai AI Laboratory |
| Pseudocode | Yes | Appendix C: PSEUDO-CODE OF RECORDS We summarize the complete procedure of our RECORDS in Algorithm 1. |
| Open Source Code | Yes | To ensure the reproducibility of experimental results, our code is available at https://github.com/MediaBrain-SJTU/RECORDS-LTPLL. |
| Open Datasets | Yes | We evaluate RECORDS on three datasets: CIFAR-10-LT (Liu et al., 2019), CIFAR-100-LT (Liu et al., 2019) and PASCAL VOC. ... PASCAL VOC is a real-world LT-PLL dataset constructed from PASCAL VOC 2007 (Everingham et al.). |
| Dataset Splits | Yes | Dataset partitioning. To demonstrate the performance of the algorithm on categories with different frequencies, we partition the dataset according to the sample size. Following Kang et al. (2020), we split the dataset into three partitions: Many-shot (classes with more than 100 images), Medium-shot (classes with 20-100 images), and Few-shot (classes with less than 20 images). (A partition sketch is given below the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running experiments. |
| Software Dependencies | No | The paper mentions software components such as 'ResNet' and 'SGD' but does not provide version numbers for software dependencies or libraries. |
| Experiment Setup | Yes | The mini-batch size is set to 256 and all the methods are trained using SGD with momentum of 0.9 and weight decay of 0.001 as the optimizer. The hyper-parameter m in Equation 5 is set to 0.9 constantly. The initial learning rate is set to 0.01. We train the model for 800 epochs with the cosine learning rate scheduling. (A training-loop sketch is given below the table.) |
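
The Dataset Splits row quotes the Many/Medium/Few-shot partitioning protocol of Kang et al. (2020) used for evaluation. Below is a minimal sketch of that grouping, assuming labels are given as a flat list of class indices for the training set; the function name and boundary handling at exactly 20 and 100 images are illustrative, not taken from the released code.

```python
from collections import Counter

def partition_classes(train_labels):
    """Group class indices into Many/Medium/Few-shot splits by training-set frequency.

    Thresholds follow the paper's description (Kang et al., 2020):
    Many-shot > 100 images, Medium-shot 20-100 images, Few-shot < 20 images.
    """
    counts = Counter(train_labels)  # class index -> number of training images
    many = [c for c, n in counts.items() if n > 100]
    medium = [c for c, n in counts.items() if 20 <= n <= 100]
    few = [c for c, n in counts.items() if n < 20]
    return many, medium, few
```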
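
The Experiment Setup row lists the reported optimizer and schedule hyper-parameters: SGD with momentum 0.9 and weight decay 0.001, batch size 256, initial learning rate 0.01, and 800 epochs with cosine learning-rate scheduling. The sketch below wires these values into a standard PyTorch loop. The tiny linear model, random candidate-label data, and the average-candidate loss are stand-ins for the paper's ResNet backbone, LT-PLL datasets, and RECORDS objective, included only so the snippet runs end to end.

```python
import torch
from torch import nn
from torch.optim import SGD
from torch.optim.lr_scheduler import CosineAnnealingLR
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-ins so the sketch is self-contained; not the paper's model or data.
num_classes = 10
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, num_classes))
images = torch.randn(512, 3, 32, 32)
true_labels = torch.randint(0, num_classes, (512, 1))
candidates = (torch.rand(512, num_classes) > 0.7).float()   # random candidate label sets
candidates.scatter_(1, true_labels, 1.0)                     # keep the true label in each set
loader = DataLoader(TensorDataset(images, candidates), batch_size=256, shuffle=True)

epochs = 800  # as reported in the paper; reduce for a quick smoke test
optimizer = SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=1e-3)
scheduler = CosineAnnealingLR(optimizer, T_max=epochs)

def partial_label_loss(logits, candidate_mask):
    # Simple average-candidate cross-entropy; a placeholder for the RECORDS objective.
    log_probs = torch.log_softmax(logits, dim=1)
    weights = candidate_mask / candidate_mask.sum(dim=1, keepdim=True).clamp(min=1)
    return -(weights * log_probs).sum(dim=1).mean()

for epoch in range(epochs):
    for x, cand in loader:
        loss = partial_label_loss(model(x), cand)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()  # cosine decay of the learning rate over the 800 epochs
```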