Optimizing Black-box Metrics with Iterative Example Weighting
Authors: Gaurush Hiranandani, Jatin Mathur, Harikrishna Narasimhan, Mahdi Milani Fard, Sanmi Koyejo
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on various label noise, domain shift, and fair classification setups confirm that our proposal compares favorably to the state-of-the-art baselines for each application. |
| Researcher Affiliation | Collaboration | ¹University of Illinois at Urbana-Champaign, Illinois, USA; ²Google Research, USA; ³Google Research, Accra. |
| Pseudocode | Yes | Algorithm 1: Elicit Weights for Diagonal Linear Metrics; Algorithm 2: Plug-in with Elicited Weights (PI-EW) for Diagonal Linear Metrics; Algorithm 3: Frank-Wolfe with Elicited Gradients (FW-EG) for General Diagonal Metrics (also depicted in Fig. 1) |
| Open Source Code | Yes | The source code (along with random seeds) is available at https://github.com/koyejolab/fweg/. |
| Open Datasets | Yes | We train a 10-class image classifier for the CIFAR-10 dataset (Krizhevsky et al., 2009); Our next experiment borrows the proxy label setup from Jiang et al. (2020) on the Adult dataset (Dua & Graff, 2017); The task is to learn a gender recognizer for the Adience face image dataset (Eidinger et al., 2014). |
| Dataset Splits | Yes | We take 2% of original training data as validation data and flip labels in the remaining training set...; We sample 1% validation data from the original training data...; For the validation set, we sample 20% of the 6–8 age bucket images. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU/CPU models or memory. |
| Software Dependencies | No | The paper mentions general software like SGD and ResNet, but does not specify exact version numbers for programming languages, libraries, or frameworks. |
| Experiment Setup | Yes | The learning rate for Fine-tuning is chosen from {1e-6, ..., 1e-4}. For PI-EW and FW-EG, we tune the parameter ϵ from {1, 0.4, 1e-4, 1e-3, 1e-2, 1e-1}. The line search for Plug-in is performed with a spacing of 1e-4. |
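The hyperparameter grids quoted in the Experiment Setup row can be written out explicitly. Below is a minimal sketch of those grids; the variable names are illustrative (the paper's actual tuning code lives in the repository linked above), and the selection loop is a placeholder, not the authors' implementation.

```python
# Fine-tuning learning rates: {1e-6, ..., 1e-4}, i.e. powers of ten.
finetune_lrs = [10 ** -k for k in (4, 5, 6)]

# ε candidates for PI-EW and FW-EG: {1, 0.4, 1e-4, 1e-3, 1e-2, 1e-1}.
eps_grid = [1.0, 0.4] + [10 ** -k for k in (4, 3, 2, 1)]

# Line-search grid for the Plug-in baseline: thresholds over [0, 1]
# with a spacing of 1e-4.
thresholds = [i * 1e-4 for i in range(10001)]

# Placeholder selection loop (illustrative only): each candidate would be
# scored on the validation set and the best-scoring one kept.
best_eps, best_score = None, float("-inf")
for eps in eps_grid:
    score = 0.0  # stand-in for a validation-metric evaluation
    if score > best_score:
        best_eps, best_score = eps, score
```

This makes the quoted shorthand concrete: the ε grid has six values, and the line search sweeps 10,001 threshold points.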