Learning to Select Pivotal Samples for Meta Re-weighting
Authors: Yinjun Wu, Adam Stein, Jacob Gardner, Mayur Naik
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical studies demonstrate the performance advantage of our methods over various baseline methods. We further explore whether our methods select reasonable meta samples for re-weighting noisily labeled data and class-imbalanced data by conducting experiments on reweighting MNIST, CIFAR, and Imagenet-10 datasets in the presence of noisy labels or imbalanced class distribution. |
| Researcher Affiliation | Academia | University of Pennsylvania wuyinjun@seas.upenn.edu, steinad@seas.upenn.edu, jacobrg@seas.upenn.edu, mhnaik@cis.upenn.edu |
| Pseudocode | Yes | In the end, we visually present both RBC and GBC equipped with this sampling technique in Figure 2 and include their pseudo-code in Algorithm 3 in Appendix "Details of the adapted K-means algorithm". |
| Open Source Code | Yes | All the code is publicly available [footnote 5: https://github.com/thuwuyinjun/meta_sample_selections]. |
| Open Datasets | Yes | We demonstrate the effectiveness of our methods for training deep neural nets on image classification datasets, MNIST (Deng 2012), CIFAR-10 (Krizhevsky, Hinton et al. 2009) and CIFAR-100 (Krizhevsky, Hinton et al. 2009), and Imagenet-10 (Russakovsky et al. 2015)4. |
| Dataset Splits | Yes | This toy dataset is then divided into 600 training, 240 testing, and 160 validation samples using a random partition. We report the validation accuracy for Imagenet-10 since the ground-truth labels of test samples are invisible. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for the experiments. |
| Software Dependencies | No | The paper mentions specific model architectures (LeNet, ResNet-34) but does not provide details on software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | All the hyper-parameters are reported in Appendix "Supplemental experiments". |