Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Does Representation Similarity Capture Function Similarity?

Authors: Lucas Hayne, Heejung Jung, R. McKell Carter

TMLR 2024

Reproducibility Variable Result LLM Response
Research Type Experimental In this paper, we systematically test representation similarity metrics to evaluate their sensitivity to functional changes induced by ablation. We use network performance changes after ablation as a way to measure the influence of representation on function. These measures of function allow us to test how well similarity metrics capture changes in network performance versus changes to linear decodability.
Researcher Affiliation Academia Lucas Hayne (EMAIL), University of Colorado Boulder; Heejung Jung (EMAIL), University of Colorado Boulder; R. McKell Carter (EMAIL), University of Colorado Boulder
Pseudocode No The paper describes the methodology in prose and through a diagram (Figure 1), but does not present any structured pseudocode or algorithm blocks.
Open Source Code No The paper references third-party pre-trained model weights from specific GitHub/Keras/PyTorch hubs (e.g., "AlexNet pre-trained weights from http://github.com/heuritech/convnets-keras"), but does not provide any explicit statement or link for the source code implementing the methodology described in the paper itself.
Open Datasets Yes For our experiments, we investigated AlexNet (Krizhevsky et al. (2012)), MobileNetV2 (Sandler et al. (2018)), and ResNet50 (He et al. (2016)), all pre-trained on ImageNet (Deng et al. (2009)).
Dataset Splits Yes For MobileNetV2 and ResNet50, 10 classes were randomly chosen from ImageNet and all the images from the validation set were used for our analysis. For AlexNet, 50 random classes were chosen.
Hardware Specification No The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies No AlexNet and MobileNetV2 were built using Keras and included lambda masking layers after each parameterized layer to selectively ablate unit groups. ResNet50 was built using PyTorch with forward hooks applied to the output of each block and used to perform ablation. (Framework names are mentioned, but no versions or complete dependency list are provided.)
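The forward-hook ablation mechanism quoted above can be sketched in a few lines of PyTorch. The toy two-layer model, the unit indices, and all names below are illustrative assumptions, not the paper's actual implementation:

```python
import torch
import torch.nn as nn

# Toy stand-in for one block of a larger CNN (hypothetical, not the paper's model).
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU())

ablate_idx = torch.tensor([0, 3, 5])  # hypothetical unit group chosen for ablation

def ablation_hook(module, inputs, output):
    # Zero the activations of the selected units; a value returned by a
    # forward hook replaces the layer's original output.
    output = output.clone()
    output[:, ablate_idx] = 0.0
    return output

handle = model[1].register_forward_hook(ablation_hook)
x = torch.randn(2, 8)
y = model(x)      # activations with the chosen unit group silenced
handle.remove()   # restore normal behavior for subsequent passes
```

In the setup the paper describes, such a hook would sit on the output of each ResNet50 block, and network performance would be re-measured after each ablation.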
Experiment Setup Yes Specifically, for each of the 10 or 50 randomly chosen target classes and layer of the CNNs we tested, AlexNet, MobileNetV2, and ResNet50, we first projected every neuron onto two dimensions: class selectivity and activation magnitude. Then, we constructed a grid to overlay on the activation space so that each cell of the grid contained the same number of neurons. To supply the set A of representation matrices from Section 2.1, we ablated one cell of neurons at a time by setting the activation values for those neurons to zero.
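The equal-count grid over the (class selectivity, activation magnitude) plane can be sketched with NumPy. The 3x3 grid size, random coordinates, and helper names are assumptions for illustration only:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 100
# Hypothetical per-neuron coordinates: class selectivity and activation magnitude.
selectivity = rng.random(n_neurons)
magnitude = rng.random(n_neurons)

def equal_count_bins(values, n_bins):
    # Split neuron indices into bins of (near-)equal size, ordered by value.
    order = np.argsort(values)
    return np.array_split(order, n_bins)

# 3x3 grid: bin on selectivity first, then on magnitude within each column,
# so every cell holds roughly the same number of neurons.
cells = []
for col in equal_count_bins(selectivity, 3):
    sub = col[np.argsort(magnitude[col])]
    cells.extend(np.array_split(sub, 3))

# Ablate one cell at a time by zeroing those neurons' activations.
activations = rng.random((5, n_neurons))  # hypothetical batch of activations
ablated = activations.copy()
ablated[:, cells[0]] = 0.0
```

Each ablated activation matrix would then serve as one element of the set A of representation matrices whose similarity and downstream performance are compared.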