Deciphering the Projection Head: Representation Evaluation Self-supervised Learning

Authors: Jiajun Ma, Tianyang Hu, Wenjia Wang

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our extensive experiments with different architectures (including SimCLR, MoCo-V2, and SimSiam) on various datasets demonstrate that the RED-SSL methods consistently outperform their baseline counterparts in downstream tasks.
Researcher Affiliation | Collaboration | 1 Hong Kong University of Science and Technology; 2 Hong Kong University of Science and Technology (Guangzhou); 3 Huawei Noah's Ark Lab
Pseudocode | No | The paper provides mathematical formulas for its proposed loss functions (L_RED-Contrastive and L_RED-Non-Contrastive) and conceptual diagrams (Figure 3), but it does not include a pseudocode block or a clearly labeled algorithm (a generic contrastive-loss sketch is given after this table).
Open Source Code | No | The paper does not contain any explicit statement about releasing code, nor does it provide a link to a code repository for the methodology described.
Open Datasets | Yes | Through comprehensive comparison experiments between the baseline SSL methods (SimCLR, MoCo-V2, SimSiam) and the RED versions (RED-SimCLR, RED-MoCo-V2, and RED-SimSiam) on various datasets (CIFAR-10, CIFAR-100 [Krizhevsky, 2009], ImageNet [Deng et al., 2009])
Dataset Splits | No | The paper mentions training, testing, and downstream classification tasks, but it does not explicitly specify a separate validation split with percentages, counts, or a standard reference for it.
Hardware Specification | No | The paper does not provide any specific details about the hardware used for the experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper mentions models such as ResNet, various SSL frameworks (SimCLR, MoCo-V2, SimSiam), and a k-NN classifier, but it does not specify any software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x, TensorFlow 2.x); a k-NN evaluation sketch follows this table.
Experiment Setup | Yes | All are trained for 200 epochs on CIFAR-10, and the encoder is ResNet-18 (a hedged training-setup sketch follows this table).
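As the Pseudocode row notes, the paper states its method through the L_RED-Contrastive and L_RED-Non-Contrastive formulas rather than an algorithm block, and those RED losses are not reproduced here. For orientation only, the following is a minimal PyTorch sketch of the standard SimCLR-style NT-Xent contrastive loss that the contrastive baselines build on; the temperature value is an illustrative assumption.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Standard SimCLR-style NT-Xent loss over two augmented views.

    z1, z2: (N, d) projection-head outputs for the two views of a batch.
    This is the baseline contrastive loss, not the paper's RED variant.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, d), unit-norm rows
    sim = torch.mm(z, z.t()) / temperature                # (2N, 2N) scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                     # never treat a view as its own positive
    # the positive of row i is row i+N (first half) or row i-N (second half)
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)
```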
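The Experiment Setup row fixes only three values: a ResNet-18 encoder, CIFAR-10, and 200 training epochs. The sketch below wraps those stated values in a typical SimCLR-style training loop; the augmentations, batch size, optimizer settings, projector dimensions, and device placement are assumptions, and nt_xent_loss refers to the sketch above.

```python
import torch
import torch.nn as nn
import torchvision
from torchvision import transforms

# Only ResNet-18, CIFAR-10, and 200 epochs are stated in the paper;
# everything else below is an illustrative assumption.
augment = transforms.Compose([
    transforms.RandomResizedCrop(32),
    transforms.RandomHorizontalFlip(),
    transforms.ColorJitter(0.4, 0.4, 0.4, 0.1),
    transforms.RandomGrayscale(p=0.2),
    transforms.ToTensor(),
])

class TwoViews:
    """Return two independent augmentations of the same image."""
    def __init__(self, transform):
        self.transform = transform
    def __call__(self, x):
        return self.transform(x), self.transform(x)

train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=TwoViews(augment))
train_loader = torch.utils.data.DataLoader(train_set, batch_size=256, shuffle=True)

encoder = torchvision.models.resnet18()
encoder.fc = nn.Identity()                     # expose the 512-d backbone features
projector = nn.Sequential(nn.Linear(512, 512), nn.ReLU(), nn.Linear(512, 128))
optimizer = torch.optim.SGD(
    list(encoder.parameters()) + list(projector.parameters()),
    lr=0.06, momentum=0.9, weight_decay=5e-4)

for epoch in range(200):                       # epoch count stated in the paper
    for (x1, x2), _ in train_loader:
        z1, z2 = projector(encoder(x1)), projector(encoder(x2))
        loss = nt_xent_loss(z1, z2)            # NT-Xent sketch above, not the RED loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
```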
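The Software Dependencies row mentions a k-NN classifier used to evaluate the learned representations but no concrete tooling. A minimal sketch of that common evaluation protocol, assuming PyTorch for feature extraction and scikit-learn for the classifier, is shown below; the value of k and the cosine metric are assumptions rather than settings reported in the paper.

```python
import torch
import torch.nn.functional as F
from sklearn.neighbors import KNeighborsClassifier

@torch.no_grad()
def knn_eval(encoder, train_loader, test_loader, k=5, device="cpu"):
    """Fit a k-NN classifier on frozen encoder features and return test accuracy."""
    encoder.eval()

    def extract(loader):
        feats, labels = [], []
        for x, y in loader:
            f = encoder(x.to(device))
            feats.append(F.normalize(f, dim=1).cpu())   # unit-norm features for cosine k-NN
            labels.append(y)
        return torch.cat(feats).numpy(), torch.cat(labels).numpy()

    x_train, y_train = extract(train_loader)
    x_test, y_test = extract(test_loader)
    knn = KNeighborsClassifier(n_neighbors=k, metric="cosine")
    knn.fit(x_train, y_train)
    return knn.score(x_test, y_test)
```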