Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Alignment of CNN and Human Judgments of Geometric and Topological Concepts
Authors: Neha Upadhyay, Vijay Marupudi, Kamala Varma, Sashank Varma
AAAI 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Specifically, we measure the sensitivity of five convolutional neural network (CNN) models to 43 GT concepts that aggregate into seven classes. We find evidence that the CNNs are sensitive to some classes (e.g., Euclidean Geometry) but not others (e.g., Geometric Transformations). The models' sensitivity is generally lower at lower layers and maximal at the final fully-connected layer. Experiments with models from the ResNet family show that increasing model depth does not necessarily increase sensitivity to GT concepts. |
| Researcher Affiliation | Academia | Neha Upadhyay¹, Vijay Marupudi¹, Kamala Varma², Sashank Varma¹; ¹Georgia Institute of Technology, ²University of Maryland, College Park |
| Pseudocode | No | The paper describes the methodology for deriving model predictions and evaluating performance, but does not include any pseudocode or algorithm blocks. |
| Open Source Code | No | The text does not include an unambiguous statement or link indicating that the authors' implementation code for the described methodology is open-source. |
| Open Datasets | Yes | Throughout this work, we focus on CNNs that have been pre-trained on ImageNet (Deng et al. 2009), a large-scale dataset commonly used as an object classification benchmark. The Dehaene et al. (2006) odd-one-out task includes one stimulus for each of the 43 concepts. |
| Dataset Splits | No | The paper uses CNNs pre-trained on ImageNet and evaluates them on stimuli from the Dehaene et al. (2006) odd-one-out task, but it does not specify any training/test/validation dataset splits for the experiments conducted in this paper. |
| Hardware Specification | No | The paper describes various CNN models used in the experiments but does not provide specific details about the hardware (e.g., GPU/CPU models, memory) on which these experiments were run. |
| Software Dependencies | No | We accessed versions of these models that have been pre-trained on ImageNet (Deng et al. 2009) through the Keras API (Chollet et al. 2015). This mentions only the Keras API and its publication year, with no specific version numbers for it or for other software dependencies. |
| Experiment Setup | Yes | Each image was re-scaled and cropped to a size of 224 × 224. We presented each image to a given model and recorded the vector of activations obtained from every layer. For each image, we computed the cosine similarity between its vector representation and the vector representations of each of the other 5 images. |
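The similarity-based odd-one-out scoring quoted in the Experiment Setup row can be sketched as follows. This is a minimal illustration, not the authors' code: it assumes the per-layer activation vectors have already been extracted (e.g., via a Keras model), and the function names `cosine_similarity` and `predict_odd_one_out` are hypothetical. The predicted odd one out is the image whose representation is least similar, on average, to the other images in the six-image panel.

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two flattened activation vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def predict_odd_one_out(vectors: list[np.ndarray]) -> int:
    """Return the index of the vector with the lowest mean cosine
    similarity to the other vectors in the panel (the model's
    predicted odd one out)."""
    n = len(vectors)
    mean_sims = []
    for i in range(n):
        sims = [cosine_similarity(vectors[i], vectors[j])
                for j in range(n) if j != i]
        mean_sims.append(sum(sims) / (n - 1))
    return int(np.argmin(mean_sims))

# Illustrative panel: five similar activation vectors and one outlier.
panel = [np.array([1.0, 0.1]), np.array([1.0, 0.0]),
         np.array([0.9, 0.1]), np.array([1.0, 0.2]),
         np.array([0.95, 0.05]), np.array([0.0, 1.0])]
print(predict_odd_one_out(panel))  # index of the dissimilar vector
```

In the actual pipeline, `vectors` would hold the activations from one layer of a pre-trained CNN for the six stimuli of a given GT concept, and the procedure would be repeated per layer and per model.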