Leveraging Importance Weights in Subset Selection

Authors: Gui Citovsky, Giulia DeSalvo, Sanjiv Kumar, Srikumar Ramalingam, Afshin Rostamizadeh, Yunjuan Wang

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | IWeS admits significant performance improvement compared to other subset selection algorithms for seven publicly available datasets. Additionally, it is competitive in an active learning setting, where the label information is not available at selection time. We also provide an initial theoretical analysis to support our importance weighting approach, proving generalization and sampling rate bounds.
Researcher Affiliation | Collaboration | 1 Google Research, New York, NY, 10011; 2 Department of Computer Science, Johns Hopkins University, Baltimore, MD, 21218; {gcitovsky,giuliad,sanjivk,rostami,rsrikumar}@google.com; ywang509@jhu.edu
Pseudocode | Yes | Algorithm 1: Importance Weighted Subset Selection (IWeS). (A hedged sketch of the sampling scheme appears after the table.)
Open Source Code | No | The paper does not provide an explicit statement or link indicating that the source code for the methodology is openly available.
Open Datasets | Yes | Specifically, we consider six multi-class datasets (CIFAR10, CIFAR100 (Krizhevsky & Hinton, 2009), SVHN (Netzer et al., 2011), EUROSAT (Helber et al., 2019), CIFAR10 Corrupted (Hendrycks & Dietterich, 2019), Fashion MNIST (Xiao et al., 2017)) and one large-scale multi-label Open Images dataset (Krasin et al., 2017). Further details of each dataset can be found in Table 1 and Table 2 in the appendix.
Dataset Splits | Yes | For each dataset, we tune the learning rate by choosing the rate from the set {0.001, 0.002, 0.005, 0.01, 0.1} that achieves the best model performance on the seed set. (A sketch of this grid search follows the table.)
Hardware Specification | Yes | We train a ResNet101 model implemented using tf-slim on 64 Cloud two-core TPU v4 accelerators.
Software Dependencies | No | The paper mentions 'tf-slim' but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | We use batch SGD with the selected learning rate and fix SGD's batch size to 100. At each sampling round r, the model is trained to convergence on all past selected examples for at least 20 epochs. For IWeS, we set the weight capping parameter to 2 for all datasets except for CIFAR10, which we decreased to 1.5 in order to reduce training instability. ... We add a global pooling layer with a fully connected layer of 128 dimensions as the final layers of the networks, which is needed by BADGE and Coreset. The model is initialized with weights that were pre-trained on the validation split using 150K SGD steps, and at each sampling round, the model is trained on all past selected examples with an additional 15K SGD steps. ... For IWeS, the weight capping parameter is set to 10. (The weight capping and the pooling/embedding head are sketched after the table.)
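
For context on the Pseudocode row: Algorithm 1 (IWeS) keeps each candidate example independently with a model-derived probability and trains on the kept examples with importance weights equal to the capped inverse of that probability. The sketch below is a minimal illustration, not the paper's exact procedure: the entropy-based `sampling_probability` rule, its normalization, and the function names are our assumptions; only the weight capping parameter corresponds to the values quoted in the Experiment Setup row.

```python
import numpy as np

def sampling_probability(probs: np.ndarray) -> np.ndarray:
    # ASSUMPTION: an entropy-based selection probability, normalized into
    # [0, 1] by the maximum entropy log(num_classes). The paper studies
    # several variants; this is one plausible instantiation.
    eps = 1e-12
    entropy = -np.sum(probs * np.log(probs + eps), axis=1)
    return entropy / np.log(probs.shape[1])

def iwes_select(probs: np.ndarray, cap: float, rng: np.random.Generator):
    # Keep example i independently with probability p_i; a kept example
    # receives importance weight min(1 / p_i, cap). Capping the weight
    # (2, 1.5, or 10 in the quoted experiments) limits training
    # instability caused by very small sampling probabilities.
    p = sampling_probability(probs)
    keep = rng.random(len(p)) < p
    weights = np.minimum(1.0 / np.maximum(p[keep], 1e-12), cap)
    return np.flatnonzero(keep), weights

# Example: select from a pool of 10k examples with 10-class predictions.
rng = np.random.default_rng(0)
logits = rng.normal(size=(10_000, 10))                  # stand-in model outputs
probs = np.exp(logits - logits.max(axis=1, keepdims=True))
probs /= probs.sum(axis=1, keepdims=True)               # softmax
indices, weights = iwes_select(probs, cap=2.0, rng=rng)
```

The returned indices and capped weights would then serve as per-example loss weights in the SGD training loop described in the Experiment Setup row.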
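The Dataset Splits row quotes the learning-rate protocol. A minimal sketch of that grid search follows; `train_and_evaluate` is a hypothetical stand-in for the paper's unspecified train-then-score routine on the seed set.

```python
# Candidate rates from the quoted protocol.
LEARNING_RATES = [0.001, 0.002, 0.005, 0.01, 0.1]

def tune_learning_rate(train_and_evaluate, seed_set):
    """Return the rate whose trained model scores best on the seed set.

    `train_and_evaluate(lr, seed_set)` is a hypothetical callable; the
    paper does not describe this routine beyond the quoted sentence.
    """
    scores = {lr: train_and_evaluate(lr, seed_set) for lr in LEARNING_RATES}
    return max(scores, key=scores.get)
```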
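The Experiment Setup row also describes the network head: a global pooling layer followed by a 128-dimensional fully connected layer, giving BADGE and Coreset a fixed-size embedding. Below is a minimal Keras sketch of such a head; the paper used tf-slim, so this construction (including the `add_embedding_head` name and the choice of linear activation) is our assumption.

```python
import tensorflow as tf

def add_embedding_head(backbone: tf.keras.Model, num_classes: int) -> tf.keras.Model:
    """Append global pooling + a 128-d fully connected layer to a backbone.

    The 128-d layer provides the fixed-size embedding that BADGE and
    Coreset operate on; a final dense layer produces class logits.
    """
    x = tf.keras.layers.GlobalAveragePooling2D()(backbone.output)
    embedding = tf.keras.layers.Dense(128, name="embedding")(x)
    logits = tf.keras.layers.Dense(num_classes, name="logits")(embedding)
    return tf.keras.Model(inputs=backbone.input, outputs=logits)

# Example with a Keras ResNet101 backbone (the paper's was tf-slim).
model = add_embedding_head(
    tf.keras.applications.ResNet101(include_top=False, weights=None),
    num_classes=10,
)
```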