Learning with Bounded Instance- and Label-dependent Label Noise
Authors: Jiacheng Cheng, Tongliang Liu, Kotagiri Ramamohanarao, Dacheng Tao
ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | At last, empirical evaluations on both synthetic and real-world datasets show effectiveness of our algorithm in learning with BILN. |
| Researcher Affiliation | Collaboration | ¹UBTECH Sydney AI Centre, School of Computer Science, Faculty of Engineering, University of Sydney, NSW, Australia. ²University of Science and Technology of China, Hefei, China. ³School of Computing and Information Systems, University of Melbourne, VIC, Australia. |
| Pseudocode | Yes | Algorithm 1 Learning with BILN |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository for the described methodology. |
| Open Datasets | Yes | Second, we conduct evaluations on two public real-world datasets: the image dataset from the UCI repository provided by Gunnar Rätsch¹ and the USPS handwritten digits dataset² (Hull, 1994). Footnotes provide URLs: "¹http://theoval.cmp.uea.ac.uk/matlab ²http://www.cs.nyu.edu/~roweis/data.html" |
| Dataset Splits | No | The paper states: "In each trial... examples are randomly split, 75% for training and 25% for testing." It does not explicitly mention a separate validation split. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments. |
| Software Dependencies | No | The paper mentions using "logistic regression" and "Gaussian kernel" but does not specify any software dependencies with version numbers (e.g., "Python 3.x", "PyTorch 1.x", "Scikit-learn 0.x"). |
| Experiment Setup | Yes | In our experiments, logistic regression is used for both training f̂ and estimating η. For KMM, we always use the Gaussian kernel k(x_i, x_j) = exp(−σ‖x_i − x_j‖²), with σ = 1 for evaluations on synthetic datasets and σ = 0.01 for evaluations on real-world datasets. The setup of parameters ϵ and B is the same as that of Huang et al. (2007), i.e., ϵ = (√m − 1)/√m and B = 1000. In this section, each entry in the tables is the result averaged over 1000 trials. (A code sketch of this setup appears below the table.) |
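
To make the quoted setup concrete, here is a minimal sketch of the KMM importance-weighting step with the Gaussian kernel and the ϵ, B choices reported above, followed by a weighted logistic regression. This is an illustration only: the helper names (`gaussian_kernel`, `kmm_weights`) and the use of NumPy, cvxpy, and scikit-learn are assumptions, since the paper does not report its software dependencies.

```python
# Sketch of the reported setup: KMM sample weights with the Gaussian kernel
# k(x_i, x_j) = exp(-sigma * ||x_i - x_j||^2), eps = (sqrt(m) - 1)/sqrt(m), B = 1000,
# then a weighted logistic regression. Library and function choices are assumptions.
import numpy as np
import cvxpy as cp
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split


def gaussian_kernel(A, B, sigma):
    """Pairwise Gaussian kernel exp(-sigma * ||a - b||^2)."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sigma * sq_dists)


def kmm_weights(X_tr, X_te, sigma=1.0, B=1000.0):
    """Kernel mean matching (Huang et al., 2007): solve a QP for weights beta."""
    m, n = len(X_tr), len(X_te)
    eps = (np.sqrt(m) - 1.0) / np.sqrt(m)                        # epsilon as quoted above
    K = gaussian_kernel(X_tr, X_tr, sigma) + 1e-8 * np.eye(m)    # small jitter keeps K PSD
    kappa = (m / n) * gaussian_kernel(X_tr, X_te, sigma).sum(axis=1)

    beta = cp.Variable(m)
    objective = cp.Minimize(0.5 * cp.quad_form(beta, K) - kappa @ beta)
    constraints = [beta >= 0, beta <= B,
                   cp.abs(cp.sum(beta) - m) <= m * eps]
    cp.Problem(objective, constraints).solve()
    return np.asarray(beta.value).ravel()


# Example wiring, mirroring the 75%/25% random split mentioned in the report (toy data).
X, y = np.random.randn(400, 2), np.random.randint(0, 2, 400)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25)
w = kmm_weights(X_tr, X_te, sigma=1.0)            # sigma = 0.01 for the real-world datasets
clf = LogisticRegression().fit(X_tr, y_tr, sample_weight=w)
print("test accuracy:", clf.score(X_te, y_te))
```

Note that the paper pairs KMM with its proposed learning procedure for BILN (Algorithm 1); the sketch above only covers the kernel and parameter settings quoted in the setup row, not the full algorithm.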