Feature Directions Matter: Long-Tailed Learning via Rotated Balanced Representation
Authors: Gao Peifeng, Qianqian Xu, Peisong Wen, Zhiyong Yang, Huiyang Shao, Qingming Huang
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, our method is extremely simple to implement but shows great superiority on several benchmark datasets. We conduct a series of experiments. |
| Researcher Affiliation | Academia | 1School of Computer Science and Technology, UCAS, Beijing, China. 2Key Laboratory of Intelligent Information Processing, Inst. of Comput. Tech., CAS, Beijing, China. 3State Key Laboratory of Info. Security (SKLOIS), Inst. of Info. Engin., CAS, Beijing, China. 4School of Cyber Security, UCAS, Beijing, China. 5BDKM, UCAS, Beijing, China. 6Peng Cheng Laboratory, Shenzhen, China. |
| Pseudocode | No | The paper provides actual PyTorch code implementations in Appendix E (Code 1 and Code 2) rather than high-level pseudocode or a formally labeled algorithm block. |
| Open Source Code | Yes | To show all details of our method, we release the source code of our framework implemented in PyTorch. Our approach is very simple to implement, requiring no more than 20 lines of code. Here, we provide two versions of the implementation, as shown in Code 1 and Code 2. |
| Open Datasets | Yes | We use several benchmark datasets in our experiments, including CIFAR10/CIFAR100 (Krizhevsky et al., 2009), long-tailed ImageNet (Liu et al., 2019a) and long-tailed Places (Liu et al., 2019b). |
| Dataset Splits | Yes | CIFAR10/CIFAR100 both contain 60000 images of size 32×32, where 10000 of them are for testing and 50000 for training... the valid set and test set of ImageNet-LT have 20 and 50 images for each category respectively. For Places-LT, the valid and test sets are balanced and contain 20 and 100 images per class respectively. (A minimal loading sketch for the CIFAR splits follows the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments. It only mentions general training settings and models used. |
| Software Dependencies | No | The paper mentions 'Pytorch' in Appendix E for implementation but does not specify a version number or list other software dependencies with version numbers. |
| Experiment Setup | Yes | We utilize SGD optimization for all experiments. For CIFAR10/100-LT, the model is trained for 600 epochs with batch size 256. Besides, the learning rate warms up linearly from 0.05 to 0.1 within the first 8 epochs, and then decays to zero following a cosine schedule. For ImageNet-LT, we train the model for 200 epochs with batch size 64. The learning rate is set to 0.25 and decays to zero by cosine decay during training. For Places-LT, we train the model with learning rate 3.5e-3 and batch size 64 for 30 epochs. For all datasets, the weight decay and momentum are set as 0.0005 and 0.9. (A sketch of the CIFAR warmup-plus-cosine schedule follows the table.) |
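
The CIFAR splits quoted in the Dataset Splits row match the standard torchvision splits, so a minimal sketch of loading them is shown below. This is an illustrative assumption, not the authors' released code: the paper's long-tailed subsampling of the training set is not reproduced, and the transform is a placeholder.

```python
# Minimal sketch (assumption: standard torchvision splits) of the CIFAR10
# train/test split quoted above -- 50000 training and 10000 test images of
# size 32x32. The paper's long-tailed subsampling is NOT reproduced here.
import torchvision
import torchvision.transforms as T

transform = T.ToTensor()  # placeholder; the paper's augmentations are not shown

train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=transform)
test_set = torchvision.datasets.CIFAR10(
    root="./data", train=False, download=True, transform=transform)

print(len(train_set), len(test_set))  # -> 50000 10000
```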
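The Experiment Setup row pins down the CIFAR10/100-LT optimizer precisely: SGD with momentum 0.9, weight decay 0.0005, batch size 256, 600 epochs, a linear warmup from 0.05 to 0.1 over the first 8 epochs, then cosine decay to zero. The sketch below wires that schedule up with standard PyTorch schedulers; the `model` placeholder and per-epoch stepping are assumptions, since only the authors' released code is the actual implementation.

```python
# Hedged sketch of the CIFAR10/100-LT training configuration quoted above.
# `model` is a stand-in; the paper's backbone and loss are not reproduced.
import torch
from torch.optim.lr_scheduler import LinearLR, CosineAnnealingLR, SequentialLR

model = torch.nn.Linear(512, 100)  # placeholder for the actual network

optimizer = torch.optim.SGD(
    model.parameters(), lr=0.1, momentum=0.9, weight_decay=5e-4)

epochs, warmup_epochs = 600, 8
# start_factor=0.5 on a base lr of 0.1 gives the quoted 0.05 -> 0.1 warmup.
warmup = LinearLR(optimizer, start_factor=0.5, end_factor=1.0,
                  total_iters=warmup_epochs)
# Cosine decay from 0.1 to zero over the remaining epochs.
cosine = CosineAnnealingLR(optimizer, T_max=epochs - warmup_epochs, eta_min=0.0)
scheduler = SequentialLR(optimizer, schedulers=[warmup, cosine],
                         milestones=[warmup_epochs])

for epoch in range(epochs):
    # ... one training epoch over the long-tailed loader goes here ...
    scheduler.step()  # stepped once per epoch in this sketch
```

Stepping the scheduler per epoch (rather than per batch) is one reasonable reading of "within the first 8 epochs"; the released code may step per iteration instead.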