Combinatorial Inference against Label Noise
Authors: Paul Hongsuck Seo, Geeho Kim, Bohyung Han
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our extensive experiments demonstrate outstanding performance in terms of accuracy and efficiency compared to the state-of-the-art methods under various synthetic noise configurations and in a real-world noisy dataset. |
| Researcher Affiliation | Academia | Paul Hongsuck Seo, Computer Vision Lab., POSTECH (hsseo@postech.ac.kr); Geeho Kim, Bohyung Han, Computer Vision Lab. & ASRI, Seoul National University ({snow1234, bhhan}@snu.ac.kr) |
| Pseudocode | No | The paper describes algorithms and methods in detail using prose and mathematical equations, but it does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks or figures. |
| Open Source Code | No | The paper does not provide any statements about open-sourcing the code for the described methodology or a link to a code repository. |
| Open Datasets | Yes | We conduct a set of experiments on Caltech-UCSD Birds-200-2011 (CUB-200) dataset [39] with various noise settings... We also conduct experiments on a real-world noisy benchmark, WebVision [4]. |
| Dataset Splits | Yes | CUB-200 is a fine-grained classification benchmark with 200 bird species and contains 30 images per class in the training and validation sets. |
| Hardware Specification | No | The paper does not specify the exact hardware used for experiments, such as GPU models, CPU types, or cloud computing instances. |
| Software Dependencies | No | The paper mentions using 'ResNet-50 as the backbone network' and 'deep neural network', but it does not provide specific software names with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | The entire network is fine-tuned for 40 epochs by a mini-batch stochastic gradient descent method with batch size of 32, momentum of 0.9 and weight decaying factor of 5×10⁻⁴. The initial learning rate is 0.01 and decayed by a factor of 0.1 at epoch 20 and 30. |
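
The quoted setup above is specific enough to reconstruct the optimization schedule. The sketch below is a minimal, hedged reproduction of that training loop only (not the paper's combinatorial classification method): the paper does not name its software stack, so PyTorch/torchvision and the ImageNet-pretrained ResNet-50 weights are assumptions, as is the `train_loader`; the optimizer and scheduler values are taken directly from the quoted text.

```python
# Hedged sketch of the quoted training configuration.
# Assumptions (not stated in the paper): PyTorch/torchvision, ImageNet-pretrained
# weights for fine-tuning, and a hypothetical `train_loader` over CUB-200.
import torch
from torch import nn, optim
from torchvision import models

num_classes = 200  # CUB-200 has 200 bird species

# Paper states ResNet-50 as the backbone network; pretrained weights are an assumption.
model = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, num_classes)

# Quoted: mini-batch SGD, batch size 32, momentum 0.9, weight decay 5e-4, initial LR 0.01.
optimizer = optim.SGD(model.parameters(), lr=0.01, momentum=0.9, weight_decay=5e-4)

# Quoted: LR decayed by a factor of 0.1 at epochs 20 and 30, 40 epochs total.
scheduler = optim.lr_scheduler.MultiStepLR(optimizer, milestones=[20, 30], gamma=0.1)

criterion = nn.CrossEntropyLoss()

for epoch in range(40):
    model.train()
    for images, labels in train_loader:  # hypothetical DataLoader with batch size 32
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```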