Pareto Deep Long-Tailed Recognition: A Conflict-Averse Solution
Authors: Zhipeng Zhou, Liu Liu, Peilin Zhao, Wei Gong
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive evaluations demonstrate that our method not only generally improves mainstream pipelines, but also achieves an augmented version to realize state-of-the-art performance across multiple benchmarks. |
| Researcher Affiliation | Collaboration | Zhipeng Zhou1, Liu Liu2, Peilin Zhao2, Wei Gong1; 1University of Science and Technology of China, 2Tencent AI Lab |
| Pseudocode | Yes | The overall pseudo algorithm is summarized in the Section 6.2 of the Appendix. |
| Open Source Code | Yes | Code is available at https://github.com/zzpustc/PLOT. |
| Open Datasets | Yes | We conduct experiments on popular DLTR benchmarks: CIFAR10-/CIFAR100-LT, Places-LT (Liu et al., 2019), ImageNet-LT (Liu et al., 2019) and iNaturalist2018 (Van Horn et al., 2018). |
| Dataset Splits | No | The paper mentions a 'training set' and a 'balanced test dataset' in the problem setup for the CIFAR-LT datasets and other benchmarks, but it does not provide explicit percentages or counts for training/validation/test splits, nor does it cite predefined splits, relying instead on the 'general DLTR setting'. |
| Hardware Specification | Yes | All experiments are carried out on Tesla V100 GPUs. |
| Software Dependencies | Yes | We implement our code with Python 3.8 and PyTorch 1.4.0. |
| Experiment Setup | Yes | We train each model with a batch size of 64 (CIFAR10-LT and CIFAR100-LT) / 128 (Places-LT) / 256 (ImageNet-LT and iNaturalist), using an SGD optimizer with momentum of 0.9. Our early-stop hyper-parameter E is selected from {10, 30, 50, 80}, while the worst-case-anticipating optimization hyper-parameter ρ is searched over {1.0e-3, 1.0e-4, 1.0e-5}. A hedged sketch of this configuration follows the table. |
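
The reported setup maps naturally onto a small configuration module. The sketch below is a minimal illustration assuming a standard PyTorch training loop: only the batch sizes, the SGD momentum of 0.9, and the search grids for E and ρ come from the report; the backbone, learning rate, and `make_optimizer` / `hyperparameter_candidates` helpers are hypothetical placeholders, not the authors' code.

```python
import itertools

import torch
from torch.optim import SGD

# Per-benchmark batch sizes, as reported in the Experiment Setup row.
BATCH_SIZE = {
    "CIFAR10-LT": 64,
    "CIFAR100-LT": 64,
    "Places-LT": 128,
    "ImageNet-LT": 256,
    "iNaturalist2018": 256,
}

# Reported search grids: early-stop epoch E and worst-case perturbation radius rho.
EARLY_STOP_GRID = (10, 30, 50, 80)
RHO_GRID = (1.0e-3, 1.0e-4, 1.0e-5)


def make_optimizer(model: torch.nn.Module, lr: float = 0.1) -> SGD:
    # Momentum 0.9 is reported; the learning rate here is a placeholder.
    return SGD(model.parameters(), lr=lr, momentum=0.9)


def hyperparameter_candidates():
    # Enumerate the (E, rho) combinations the paper reports searching over.
    return list(itertools.product(EARLY_STOP_GRID, RHO_GRID))


if __name__ == "__main__":
    model = torch.nn.Linear(32, 10)  # stand-in for the actual backbone
    optimizer = make_optimizer(model)
    print(BATCH_SIZE["CIFAR100-LT"], len(hyperparameter_candidates()))
```

Under these assumptions, the grid over (E, ρ) yields 4 × 3 = 12 candidate configurations per benchmark; the paper's own training procedure is given in its pseudocode (Appendix Section 6.2) and released code.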