Label-Noise Robust Diffusion Models
Authors: Byeonghu Na, Yeongmin Kim, HeeSun Bae, Jung Hyun Lee, Se Jung Kwon, Wanmo Kang, Il-chul Moon
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 4: "Experiments" |
| Researcher Affiliation | Collaboration | Byeonghu Na¹, Yeongmin Kim¹, HeeSun Bae¹, Jung Hyun Lee², Se Jung Kwon², Wanmo Kang¹ & Il-chul Moon¹·³ (¹KAIST, ²NAVER Cloud, ³summary.ai) |
| Pseudocode | Yes | Algorithm 1: Training algorithm with TDSM |
| Open Source Code | Yes | Our code is available at: https://github.com/byeonghu-na/tdsm. |
| Open Datasets | Yes | We evaluate our method on three benchmark datasets commonly used for both image generation and label noise learning: MNIST (LeCun et al., 2010), CIFAR-10, and CIFAR-100 (Krizhevsky, 2009). |
| Dataset Splits | No | The paper mentions 'training dataset' and 'test dataset' but does not explicitly provide percentages or absolute sample counts for training, validation, and test splits within the main text. |
| Hardware Specification | Yes | We utilized 8 NVIDIA Tesla P40 GPUs and employed CUDA 11.4 and PyTorch 1.12 versions in our experiments. |
| Software Dependencies | Yes | We utilized 8 NVIDIA Tesla P40 GPUs and employed CUDA 11.4 and PyTorch 1.12 versions in our experiments. |
| Experiment Setup | Yes | The score network was trained with a batch size of 512, and the training iterations were set to 400,000 for MNIST and CIFAR-10 and 200,000 for CIFAR-100. For Clothing-1M, the score network was trained with a batch size of 256 for 200,000 training iterations. |
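
The Open Datasets row names MNIST, CIFAR-10, and CIFAR-100. As a minimal sketch, the three benchmarks can be pulled with standard torchvision loaders; the transform and root path here are illustrative assumptions, not the authors' preprocessing.

```python
# Minimal sketch of loading the three benchmark datasets via torchvision;
# the transform and root path are assumptions, not the authors' setup.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

mnist = datasets.MNIST("data", train=True, download=True, transform=to_tensor)
cifar10 = datasets.CIFAR10("data", train=True, download=True, transform=to_tensor)
cifar100 = datasets.CIFAR100("data", train=True, download=True, transform=to_tensor)

# The paper studies label noise, so in practice the training labels would be
# synthetically corrupted (e.g., symmetric flips) before use.
```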
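
The Experiment Setup row quotes per-dataset batch sizes and iteration counts. The sketch below wires those numbers into a generic denoising score-matching loop; `ScoreNet`, the learning rate, and the noise schedule are placeholder assumptions, and the transition-aware TDSM objective itself (Algorithm 1) lives in the authors' repository rather than being reproduced here.

```python
# A minimal sketch wiring the quoted hyperparameters into a generic denoising
# score-matching loop. `ScoreNet`, the learning rate, and the noise schedule
# are illustrative assumptions; the actual TDSM objective (Algorithm 1)
# additionally reweights score targets by label-transition probabilities.
import torch
import torch.nn as nn

CONFIG = {  # batch size / training iterations quoted in the row above
    "mnist":      {"batch_size": 512, "iterations": 400_000},
    "cifar10":    {"batch_size": 512, "iterations": 400_000},
    "cifar100":   {"batch_size": 512, "iterations": 200_000},
    "clothing1m": {"batch_size": 256, "iterations": 200_000},
}

class ScoreNet(nn.Module):
    """Stand-in class-conditional score network (the paper uses a U-Net-style model)."""
    def __init__(self, dim: int, num_classes: int):
        super().__init__()
        self.embed = nn.Embedding(num_classes, dim)
        self.net = nn.Sequential(nn.Linear(dim + 1, 256), nn.SiLU(), nn.Linear(256, dim))

    def forward(self, x_t, y, t):
        return self.net(torch.cat([x_t + self.embed(y), t[:, None]], dim=1))

def train(dataset: str, dim: int = 784, num_classes: int = 10):
    cfg = CONFIG[dataset]
    model = ScoreNet(dim, num_classes)
    opt = torch.optim.Adam(model.parameters(), lr=2e-4)  # lr is an assumption
    for _ in range(cfg["iterations"]):
        # Dummy batch; in practice (x_0, noisy label) pairs come from the dataset.
        x0 = torch.randn(cfg["batch_size"], dim)
        y = torch.randint(num_classes, (cfg["batch_size"],))
        t = torch.rand(cfg["batch_size"]) * 0.99 + 0.01  # avoid t ~ 0
        sigma = t.sqrt()[:, None]
        eps = torch.randn_like(x0)
        x_t = x0 + sigma * eps  # simple variance-exploding-style perturbation
        # Plain DSM target: the score of p(x_t | x_0) is -eps / sigma. TDSM
        # replaces this with a transition-aware weighted objective.
        loss = ((model(x_t, y, t) + eps / sigma) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()
```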