Mitigating Memorization of Noisy Labels via Regularization between Representations
Authors: Hao Cheng, Zhaowei Zhu, Xing Sun, Yang Liu
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments and theoretical analyses support our claims. |
| Researcher Affiliation | Collaboration | University of California, Santa Cruz; Tencent YouTu Lab |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/UCSC-REAL/SelfSup_NoisyLabel. |
| Open Datasets | Yes | We use DogCat, CIFAR10, CIFAR100, CIFAR10N, CIFAR100N, and Clothing1M for experiments. |
| Dataset Splits | No | For CIFAR10 and CIFAR100, we follow the standard setting that uses 50,000 images for training and 10,000 images for testing. The paper specifies training and test sets but does not explicitly mention or detail a validation split. |
| Hardware Specification | No | The paper mentions neural network architectures (ResNet34, ResNet50) but does not provide specific details about the hardware (e.g., GPU models, CPU types) used for experiments. |
| Software Dependencies | No | The paper mentions optimizers like 'Adam' and 'SGD' but does not provide specific version numbers for software dependencies or libraries. |
| Experiment Setup | Yes | learning rate (0.1 for the first 50 epochs and 0.01 for the last 50 epochs), batch size (256), optimizer (SGD). Each model is pre-trained for 1000 epochs with the Adam optimizer (lr = 1e-3) and a batch size of 512. During fine-tuning, we fine-tune the classifier on the noisy dataset with Adam (lr = 1e-3) for 100 epochs with a batch size of 256. |
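
The Experiment Setup row above fully describes a supervised training schedule, so a minimal sketch of it is given below: SGD, batch size 256, learning rate 0.1 for the first 50 epochs and 0.01 for the last 50. The model and dataset choices (ResNet34 on CIFAR-10 via torchvision) and the momentum value are illustrative assumptions, not the authors' exact pipeline.

```python
# Sketch of the reported supervised schedule: SGD, batch size 256,
# lr 0.1 for epochs 1-50 and 0.01 for epochs 51-100.
# ResNet34 / CIFAR-10 / momentum=0.9 are assumptions for illustration.
import torch
import torch.nn as nn
import torchvision
from torchvision import transforms

device = "cuda" if torch.cuda.is_available() else "cpu"

transform = transforms.Compose([transforms.ToTensor()])
train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=transform)
train_loader = torch.utils.data.DataLoader(
    train_set, batch_size=256, shuffle=True, num_workers=2)

model = torchvision.models.resnet34(num_classes=10).to(device)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
# Drop the learning rate from 0.1 to 0.01 after epoch 50.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.1)

for epoch in range(100):
    model.train()
    for images, labels in train_loader:
        images, labels = images.to(device), labels.to(device)
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```

The pre-training and fine-tuning stages described in the same row would swap SGD for Adam (lr = 1e-3) with batch sizes of 512 and 256, respectively; only the supervised schedule is sketched here.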