Inverse Problems Leveraging Pre-trained Contrastive Representations

Authors: Sriram Ravula, Georgios Smyrnis, Matt Jordan, Alexandros G. Dimakis

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate on a subset of ImageNet and observe that our method is robust to varying levels of distortion. Our method outperforms end-to-end baselines even with a fraction of the labeled data in a wide range of forward operators. (Section 4, Experiments)
Researcher Affiliation | Academia | Sriram Ravula, The University of Texas at Austin, Electrical and Computer Engineering, sriram.ravula@utexas.edu; Georgios Smyrnis, The University of Texas at Austin, Electrical and Computer Engineering, gsmyrnis@utexas.edu; Matt Jordan, The University of Texas at Austin, Computer Science, mjordan@cs.utexas.edu; Alexandros G. Dimakis, The University of Texas at Austin, Electrical and Computer Engineering, dimakis@austin.utexas.edu
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code available at https://github.com/Sriram-Ravula/Contrastive-Inversion.
Open Datasets | Yes | For all experiments, we perform contrastive training for the robust encoder using a 100-class subset of ImageNet, which we refer to as ImageNet-100 [36, 39], to reduce computational resources.
Dataset Splits | Yes | We evaluate the quality of the learned robust representations for classifying images from the validation set of ImageNet-100, using the same distortions during training and inference.
Hardware Specification | No | The paper mentions 'computing resources from TACC' but does not provide specific details on hardware components such as GPU or CPU models or memory specifications used for experiments.
Software Dependencies | No | The paper mentions using 'Pytorch' but does not specify its version number or any other software dependencies with their specific versions.
Experiment Setup | Yes | The baseline is trained for 25 epochs with a batch size of 64. Our robust encoder is trained for 25 epochs with a batch size of 256, and the linear probe on top of it is trained for 10 epochs.
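The quoted setup fixes epochs and batch sizes but not the resulting step counts, which matter when comparing training budgets. The sketch below derives gradient steps per phase from those numbers. The training-set size (~126,000 images for a 100-class ImageNet subset) and the probe's batch size of 256 are assumptions for illustration, not stated in the quote.

```python
import math

# Assumed size of the ImageNet-100 training split (~1,260 images/class
# for 100 classes); the report does not state this figure.
N_IMAGES = 126_000

def total_steps(epochs: int, batch_size: int, n_images: int = N_IMAGES) -> int:
    """Gradient steps for one training phase: epochs * batches per epoch."""
    return epochs * math.ceil(n_images / batch_size)

# Phases and hyperparameters as quoted in the Experiment Setup row;
# the linear probe's batch size is an assumption.
schedules = {
    "baseline":       total_steps(epochs=25, batch_size=64),
    "robust_encoder": total_steps(epochs=25, batch_size=256),
    "linear_probe":   total_steps(epochs=10, batch_size=256),
}

for name, steps in schedules.items():
    print(f"{name}: {steps} gradient steps")
```

Under these assumptions the end-to-end baseline takes roughly four times as many gradient steps as the robust encoder, since both run 25 epochs but the baseline uses a batch size one quarter as large.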