Identifying Interpretable Subspaces in Image Representations
Authors: Neha Kalibhat, Shweta Bhardwaj, C. Bayan Bruss, Hamed Firooz, Maziar Sanjabi, Soheil Feizi
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate FALCON through human evaluation on Amazon Mechanical Turk (AMT). We show participants images and their FALCON concepts to collect ground truths (relevant or not relevant) for each concept of each annotated feature. The results from our study show a precision of 0.86 and recall of 0.84 for the top-5 concepts, indicating that FALCON concepts are agreeably explanatory (See Section 4). |
| Researcher Affiliation | Collaboration | 1University of Maryland, College Park 2Center for Machine Learning, Capital One 3Meta AI. |
| Pseudocode | Yes | Algorithm 1: PyTorch-like pseudocode for discovering interpretable feature groups in a given representation space (see the sketch after this table). |
| Open Source Code | No | The paper does not provide an explicit statement about, or link to, open-source code for the described methodology. |
| Open Datasets | Yes | In our experiments, we use ImageNet-1K (Russakovsky et al., 2015) validation set for D and LAION400m (Schuhmann et al., 2021) for S, however, the framework of FALCON is general and can be used with other datasets as well. |
| Dataset Splits | Yes | In our experiments, we use ImageNet-1K (Russakovsky et al., 2015) validation set for D and LAION400m (Schuhmann et al., 2021) for S, however, the framework of FALCON is general and can be used with other datasets as well. |
| Hardware Specification | No | The paper does not explicitly describe the hardware used to run its experiments. |
| Software Dependencies | No | The paper mentions 'solo-learn package (da Costa et al., 2022) and the official implementation of CLIP (Radford et al., 2021)' but does not provide specific version numbers for these software dependencies. |
| Experiment Setup | Yes | We solve the optimization by training a linear head for only 10 epochs with a learning rate of 1, using an SGD optimizer (see the training sketch after this table). |
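
The Pseudocode row refers to the paper's Algorithm 1 for discovering interpretable feature groups. The report does not reproduce that algorithm, so the following is only a minimal hedged sketch of what such a discovery step could look like: it assumes a matrix of precomputed representations, and the function name `discover_feature_group` along with the `top_m` and `tau` hyperparameters are illustrative assumptions, not values or identifiers taken from the paper.

```python
import torch

def discover_feature_group(reps, target_feature, top_m=20, tau=0.65):
    """Hypothetical sketch of feature-group discovery in a representation space.

    reps: (N, d) tensor of representations for a probing dataset D (e.g., the
    ImageNet-1K validation set). The exact selection rule in the paper's
    Algorithm 1 may differ; top_m and tau are assumed hyperparameters.
    """
    # Normalize each feature's activations to [0, 1] across the dataset
    # so activations of different features are comparable.
    mins = reps.min(dim=0).values
    maxs = reps.max(dim=0).values
    norm = (reps - mins) / (maxs - mins + 1e-8)

    # Images that most strongly activate the target feature.
    top_imgs = norm[:, target_feature].topk(top_m).indices

    # Features that co-activate with the target on those images form the group.
    mean_act = norm[top_imgs].mean(dim=0)                 # shape: (d,)
    group = (mean_act >= tau).nonzero(as_tuple=True)[0]
    return group, top_imgs
```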
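The Experiment Setup row pins down only a few details: a linear head trained for 10 epochs with SGD at a learning rate of 1. The sketch below wires those stated values into a minimal linear-probe loop; the frozen backbone, cross-entropy loss, and data loader are assumptions added for completeness, not details from the paper.

```python
import torch
from torch import nn

def train_linear_head(backbone, loader, feat_dim, num_classes, device="cuda"):
    """Minimal linear-probe loop matching the stated setup:
    a linear head, 10 epochs, SGD, learning rate 1."""
    backbone.eval()  # assumption: the representation is kept frozen
    head = nn.Linear(feat_dim, num_classes).to(device)
    optimizer = torch.optim.SGD(head.parameters(), lr=1.0)
    criterion = nn.CrossEntropyLoss()  # assumption: standard classification loss

    for epoch in range(10):  # "only 10 epochs"
        for images, labels in loader:
            images, labels = images.to(device), labels.to(device)
            with torch.no_grad():  # no gradients through the backbone
                feats = backbone(images)
            loss = criterion(head(feats), labels)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return head
```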