Label-Retrieval-Augmented Diffusion Models for Learning from Noisy Labels
Authors: Jian Chen, Ruiyi Zhang, Tong Yu, Rohan Sharma, Zhiqiang Xu, Tong Sun, Changyou Chen
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments are conducted for evaluation. Our model achieves new state-of-the-art (SOTA) results on all standard real-world benchmark datasets. |
| Researcher Affiliation | Collaboration | Jian Chen (University at Buffalo), Ruiyi Zhang (Adobe Research), Tong Yu (Adobe Research), Rohan Sharma (University at Buffalo), Zhiqiang Xu (MBZUAI), Tong Sun (Adobe Research), Changyou Chen (University at Buffalo) |
| Pseudocode | Yes | Algorithm 1 (Training). Input: training set {X, Y}; image encoders f_p, f_q. (A hedged sketch of such a training step appears below the table.) |
| Open Source Code | Yes | Code is available at https://github.com/puar-playground/LRA-diffusion |
| Open Datasets | Yes | We conduct simulation experiments on the CIFAR-10 and CIFAR-100 datasets [51] to evaluate our method's performance under various noise types. |
| Dataset Splits | Yes | The dataset includes a clean training set, validation set, and test set with manually refined labels, consisting of approximately 47.6k, 14.3k, and 10k pictures, respectively. |
| Hardware Specification | Yes | All experiments were done on four NVIDIA Titan V GPUs. |
| Software Dependencies | No | No specific software versions (e.g., library names with version numbers) are mentioned in the paper. |
| Experiment Setup | Yes | We train LRA-diffusion models for 200 epochs with the Adam optimizer. The batch size is 256. We used a learning rate schedule consisting of a warmup phase followed by half-cycle cosine decay. The initial learning rate is set to 0.001. (A sketch of this schedule appears below.) |
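The pseudocode row above (Algorithm 1) describes training a conditional diffusion model on labels retrieved via pre-trained image encoders. The PyTorch sketch below illustrates what one such training step could look like. All names (`encoder`, `feature_bank`, `label_bank`, `denoiser`, `alpha_bar`) are illustrative, and the retrieval and conditioning details are assumptions based on the table's summary, not the authors' actual implementation; see the linked repository for the real code.

```python
import torch
import torch.nn.functional as F

# Hedged sketch of one label-retrieval-augmented diffusion training step.
# Assumptions (not from the paper's code):
#   encoder      - frozen pre-trained image encoder returning feature vectors
#   feature_bank - (N, D) pre-computed features for the training set
#   label_bank   - (N, C) noisy one-hot labels for the training set
#   denoiser     - conditional network predicting the noise eps(y_t, t, cond)
#   alpha_bar    - (T,) cumulative DDPM noise schedule
def training_step(x, denoiser, encoder, feature_bank, label_bank,
                  alpha_bar, k=10):
    with torch.no_grad():
        f = F.normalize(encoder(x), dim=-1)          # image features (B, D)
        sims = f @ feature_bank.T                    # cosine similarities
        idx = sims.topk(k, dim=-1).indices           # k nearest neighbors
        # sample one retrieved neighbor label per image as the target y_0
        pick = torch.randint(0, k, (x.size(0), 1), device=x.device)
        y0 = label_bank[idx.gather(1, pick).squeeze(1)]

    T = alpha_bar.size(0)
    t = torch.randint(0, T, (x.size(0),), device=x.device)
    eps = torch.randn_like(y0)
    a = alpha_bar[t].unsqueeze(-1)
    y_t = a.sqrt() * y0 + (1 - a).sqrt() * eps       # forward diffusion on labels

    eps_hat = denoiser(y_t, t, f)                    # condition on image features
    return F.mse_loss(eps_hat, eps)                  # standard DDPM noise loss
```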
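The experiment setup row reports Adam with an initial learning rate of 0.001 and a warmup phase followed by half-cycle cosine decay over 200 epochs. A minimal sketch of such a schedule follows; the warmup length is an assumption, as the paper does not state it.

```python
import math
import torch

# Hedged sketch of the reported optimizer setup: Adam at 1e-3 with linear
# warmup then half-cycle cosine decay over 200 epochs.
# `warmup_epochs` is an assumed value, not taken from the paper.
def make_optimizer_and_scheduler(model, epochs=200, base_lr=1e-3,
                                 warmup_epochs=10):
    optimizer = torch.optim.Adam(model.parameters(), lr=base_lr)

    def lr_lambda(epoch):
        if epoch < warmup_epochs:
            return (epoch + 1) / warmup_epochs       # linear warmup
        progress = (epoch - warmup_epochs) / max(1, epochs - warmup_epochs)
        return 0.5 * (1.0 + math.cos(math.pi * progress))  # cosine decay to 0

    scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)
    return optimizer, scheduler
```

Calling `scheduler.step()` once per epoch scales the base learning rate by `lr_lambda(epoch)`, reproducing the warmup-then-cosine shape described in the setup.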