Self-Paced Robust Learning for Leveraging Clean Labels in Noisy Data
Authors: Xuchao Zhang, Xian Wu, Fanglan Chen, Liang Zhao, Chang-Tien Lu (pp. 6853-6860)
AAAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on synthetic and real-world datasets demonstrate that our proposed approach can achieve a considerable improvement in effectiveness and robustness to existing methods. |
| Researcher Affiliation | Academia | 1Discovery Analytics Center, Virginia Tech, Falls Church, VA, 2University of Notre Dame, Notre Dame, IN, 3George Mason University, Fairfax, VA |
| Pseudocode | Yes | Algorithm 1: SPRL ALGORITHM |
| Open Source Code | Yes | Details of both the source code and datasets used in the experiments can be found in supplementary document. |
| Open Datasets | Yes | For the real-world datasets, we chose the Blog Feedback dataset (Buza 2014) for the robust regression task... For the classification task, we chose the Large Movie Review dataset (Maas et al. 2011) collected from the IMDb website |
| Dataset Splits | No | The paper specifies training and testing sets with exact sample counts ('The first 12,000 data samples were used as training set... The other 10,000 data samples were chosen as the testing set.') but does not explicitly mention a separate validation set or split for hyperparameter tuning. |
| Hardware Specification | Yes | All the experiments were conducted on a 64-bit machine with an Intel(R) Core(TM) quad-core processor (i7CPU@3.6GHz) and 32.0GB of memory. |
| Software Dependencies | No | The paper mentions software components and frameworks in general (e.g., 'off-the-shelf optimizer'), but it does not specify any software names with version numbers (e.g., 'Python 3.8', 'PyTorch 1.9'). |
| Experiment Setup | Yes | For our proposed method, SPRL, we choose the parameter λ as 1 and 3.5 for regression and classification tasks, respectively. The traditional self-paced learning algorithm (SPL) ... was also compared in our experiments with the parameter λ = 0.1 and the step size μ = 1.1. |
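The SPL baseline referenced in the experiment-setup row follows the standard self-paced learning scheme: samples whose loss falls below an age parameter λ receive weight 1, the rest weight 0, and λ grows by a step size μ each round so harder samples are admitted gradually. A minimal sketch of that hard-weighting rule, using the baseline settings quoted above (λ = 0.1, μ = 1.1); the function names are illustrative, not from the paper:

```python
import numpy as np

def spl_weights(losses, lam):
    # Classic hard self-paced weighting (Kumar et al. style):
    # keep a sample (weight 1) only if its loss is below lambda.
    return (losses < lam).astype(float)

def spl_round(losses, lam, mu):
    # One outer iteration: compute weights, then grow lambda by
    # the step size mu so more (harder) samples enter next round.
    v = spl_weights(losses, lam)
    return v, lam * mu

# Example with the SPL baseline parameters from the paper
losses = np.array([0.05, 0.2, 0.08])
v, lam = spl_round(losses, lam=0.1, mu=1.1)
# v selects the two easy samples; lam grows to 0.11
```

This is a sketch of the generic SPL baseline only; SPRL itself additionally leverages a small set of clean labels, which this snippet does not model.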