SCoRe: Submodular Combinatorial Representation Learning

Authors: Anay Majee, Suraj Nandkishor Kothawade, Krishnateja Killamsetty, Rishabh K Iyer

Venue: ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We perform experiments on several long-tail vision benchmarks to show the effectiveness of objectives in SCoRe."
Researcher Affiliation | Collaboration | "Anay Majee¹, Suraj Kothawade², Krishnateja Killamsetty³, Rishabh Iyer¹ ... ¹Department of Computer Science, The University of Texas at Dallas, Richardson, TX, USA; ²Google Research, Sunnyvale, CA, USA; ³IBM Research, San Jose, CA, USA. Correspondence to: Anay Majee <anay.majee@utdallas.edu>, Rishabh Iyer <rishabh.iyer@utdallas.edu>."
Pseudocode | No | No pseudocode or clearly labeled algorithm blocks were found in the paper.
Open Source Code | Yes | "We train all our models on 2 NVIDIA A6000 GPUs with code released at https://github.com/amajee11us/SCoRe.git."
Open Datasets | Yes | "We perform experiments on several long-tail vision benchmarks... CIFAR-10-LT... CIFAR-100-LT... ImageNet-LT introduced in (Liu et al., 2019)... MedMNIST (Yang et al., 2023)... IDD (Varma et al., 2019)... LVIS (Gupta et al., 2019)..."
Dataset Splits | Yes | "The training dataset comprises a total of 100,000 images, encompassing 1.3 million instances, and the validation set contains 20,000 images."
Hardware Specification | Yes | "We train all our models on 2 NVIDIA A6000 GPUs with code released at https://github.com/amajee11us/SCoRe.git."
Software Dependencies | No | The paper mentions the Detectron2 framework but does not provide version numbers for any software dependencies used in the experiments.
Experiment Setup | Yes | "For stage 1 we train a ResNet-50 backbone with a batch size of 512 (1024 after augmentations) with an initial learning rate of 0.4, trained for 1000 epochs with a cosine annealing scheduler and a temperature for the combinatorial objectives to be 0.7."
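
The quoted Stage-1 recipe maps onto a standard PyTorch training loop. The sketch below wires the reported hyperparameters (ResNet-50, batch size 512, initial learning rate 0.4, 1000 epochs, cosine annealing, temperature 0.7) into such a loop, purely as an illustration: the `contrastive_stand_in` loss is a generic supervised-contrastive placeholder rather than SCoRe's submodular combinatorial objective (see the authors' repository for that), and the random-tensor dataset merely stands in for the long-tail benchmarks.

```python
# Minimal Stage-1 sketch using only the hyperparameters quoted above.
# Assumptions: the loss, dataset, and 128-d output head are placeholders;
# momentum/weight decay are unspecified in the quote and left at defaults.
import torch
import torch.nn.functional as F
import torchvision

def contrastive_stand_in(features, labels, temperature=0.7):
    """Generic supervised contrastive loss, a stand-in for the paper's
    combinatorial objectives; uses the quoted temperature of 0.7."""
    z = F.normalize(features, dim=1)
    sim = z @ z.T / temperature
    n = z.size(0)
    eye = torch.eye(n, device=z.device)
    logits = sim - sim.max(dim=1, keepdim=True).values.detach()
    exp_sum = (torch.exp(logits) * (1 - eye)).sum(dim=1, keepdim=True)
    log_prob = logits - exp_sum.log()
    pos = (labels[:, None] == labels[None, :]).float() * (1 - eye)
    return -((pos * log_prob).sum(1) / pos.sum(1).clamp(min=1)).mean()

# ResNet-50 backbone; 128-d output acts as a stand-in projection head.
backbone = torchvision.models.resnet50(weights=None, num_classes=128)

# Random stand-in data; replace with CIFAR-10-LT / CIFAR-100-LT loaders.
dataset = torch.utils.data.TensorDataset(
    torch.randn(2048, 3, 32, 32), torch.randint(0, 10, (2048,))
)
train_loader = torch.utils.data.DataLoader(dataset, batch_size=512, shuffle=True)

optimizer = torch.optim.SGD(backbone.parameters(), lr=0.4)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=1000)

for epoch in range(1000):
    for images, labels in train_loader:
        loss = contrastive_stand_in(backbone(images), labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
    scheduler.step()  # cosine-anneal the learning rate once per epoch
```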