Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Views Can Be Deceiving: Improved SSL Through Feature Space Augmentation
Authors: Kimia Hamidieh, Haoran Zhang, Swami Sankaranarayanan, Marzyeh Ghassemi
ICLR 2024 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We first empirically show that commonly used augmentations in SSL can cause undesired invariances in the image space, and illustrate this with a simple example. We further show that classical approaches in combating spurious correlations, such as dataset re-sampling during SSL, do not consistently lead to invariant representations. Motivated by these findings, we propose LATETVG to remove spurious information from these representations during pretraining, by regularizing later layers of the encoder via pruning. We find that our method produces representations which outperform the baselines on several benchmarks, without the need for group or label information during SSL. |
| Researcher Affiliation | Collaboration | MIT; Sony AI |
| Pseudocode | Yes | An algorithmic representation of the method can be found in Appendix B. |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code for the described methodology. |
| Open Datasets | Yes | We evaluate methods on five commonly used benchmarks in spurious correlations: CelebA (Liu et al., 2015), CMNIST (Arjovsky et al., 2019), MetaShift (Liang & Zou, 2022), Spurious CIFAR-10 (Nagarajan et al., 2020), and Waterbirds (Wah et al., 2011) (See Appendix D.1 for dataset descriptions). |
| Dataset Splits | Yes | The training splits used during the pre-training stage are unbalanced and contain spuriously correlated data. The group/label counts for each dataset and split are shown in Appendix D.1. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | Yes | The details of the choice of augmentations and training for this step can be found in Appendix E. |
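The paper's description of LATETVG (regularizing later encoder layers via pruning to remove spurious information) is summarized above; the exact procedure is given in the paper's Appendix B. As a rough, hedged illustration of the general idea only, here is a minimal magnitude-pruning sketch over the later layers of an encoder's weight list. The function name, the `start_idx`/`sparsity` parameters, and the thresholding rule are assumptions for illustration, not the paper's actual algorithm.

```python
import numpy as np

def prune_later_layers(layers, start_idx, sparsity=0.9):
    """Zero out the smallest-magnitude weights in layers[start_idx:].

    layers: list of 2-D weight arrays, ordered from earlier to later layers.
    start_idx: first layer index treated as a 'later' layer.
    sparsity: fraction of weights to zero in each later layer.
    """
    pruned = [w.copy() for w in layers]
    for i in range(start_idx, len(layers)):
        w = pruned[i]
        k = int(sparsity * w.size)
        if k == 0:
            continue
        # Threshold at the k-th smallest absolute weight value,
        # then zero every weight at or below that magnitude.
        thresh = np.partition(np.abs(w).ravel(), k - 1)[k - 1]
        w[np.abs(w) <= thresh] = 0.0
    return pruned
```

Earlier layers are left untouched, matching the summary's emphasis on regularizing only the later part of the encoder; in the actual method this would be applied to the SSL encoder during pretraining rather than to standalone matrices.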