Explaining Self-Supervised Image Representations with Visual Probing

Authors: Dominika Basaj, Witold Oleszkiewicz, Igor Sieradzki, Michał Górszczak, Barbara Rychalska, Tomasz Trzciński, Bartosz Zieliński

IJCAI 2021

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Tab. 1 summarizes the results obtained in our experiments. |
| Researcher Affiliation | Collaboration | Dominika Basaj (1,2), Witold Oleszkiewicz (1), Igor Sieradzki (3), Barbara Rychalska (1,4), Michał Górszczak (3), Tomasz Trzciński (1,2,3), Bartosz Zieliński (3,5). Affiliations: 1 Warsaw University of Technology; 2 Tooploox; 3 Faculty of Mathematics and Computer Science, Jagiellonian University; 4 Synerise; 5 Ardigen |
| Pseudocode | No | The paper does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is at: github.com/BioNN-InfoTech/visual-probes |
| Open Datasets | Yes | We conduct all of our experiments on the ImageNet dataset [Deng et al., 2009] |
| Dataset Splits | Yes | We conduct all of our experiments on the ImageNet dataset [Deng et al., 2009], keeping its standard train/validation split. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU models, CPU types, or memory. |
| Software Dependencies | No | The paper mentions using a 'logistic regression classifier' and 'LBFGS solver' but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | We use a logistic regression classifier with a maximum of 1000 iterations and the LBFGS solver to train all diagnostic classifiers. ... We train 100 classifiers corresponding to 100 visual words. ... we group the possible output into 5 equally-wide bins ... A similar procedure is applied to the character bin probing task, except that we use 6 bins in this case. ... we apply the random over-sampling if needed to deal with the imbalanced classes. (A code sketch of this setup follows the table.) |
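
The Experiment Setup row quotes the paper's diagnostic-classifier configuration. Below is a minimal sketch of that setup, assuming scikit-learn and imbalanced-learn as the implementing libraries; the array names (`features`, `labels`, `word_labels`) and the `train_probe` and `to_equal_width_bins` helpers are hypothetical placeholders, not names from the paper or its repository.

```python
# Minimal sketch of the quoted probing setup. Assumptions: scikit-learn
# and imbalanced-learn are used; all data arrays are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from imblearn.over_sampling import RandomOverSampler

def train_probe(features, labels):
    """One diagnostic classifier: logistic regression, LBFGS solver,
    at most 1000 iterations, as described in the paper."""
    # Random over-sampling to deal with imbalanced classes.
    X, y = RandomOverSampler(random_state=0).fit_resample(features, labels)
    return LogisticRegression(solver="lbfgs", max_iter=1000).fit(X, y)

def to_equal_width_bins(values, n_bins):
    """Group a scalar target into n equally wide bins (5 bins for the
    word-length task, 6 for the character-bin task, per the quote)."""
    edges = np.linspace(values.min(), values.max(), n_bins + 1)
    return np.digitize(values, edges[1:-1])  # bin index in [0, n_bins)

# 100 probes, one per visual word (binary presence labels assumed):
# probes = [train_probe(features, word_labels[:, w]) for w in range(100)]
```

Over-sampling is applied before fitting so each probe sees balanced classes; only the interior bin edges are passed to `np.digitize`, so every value maps to one of the `n_bins` equally wide bins.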