An Empirical Study on Disentanglement of Negative-free Contrastive Learning
Authors: Jinkun Cao, Ruiqian Nai, Qing Yang, Jialei Huang, Yang Gao
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this paper, we examine negative-free contrastive learning methods to study the disentanglement property empirically. ... With this proposed metric, we benchmark the disentanglement property of negative-free contrastive learning on both popular synthetic datasets and a real-world dataset, CelebA. |
| Researcher Affiliation | Academia | Jinkun Cao (Carnegie Mellon University), Ruiqian Nai (Tsinghua University), Qing Yang (Shanghai Jiao Tong University), Jialei Huang (Tsinghua University), Yang Gao (Tsinghua University, Shanghai Qi-Zhi Institute) |
| Pseudocode | No | The paper describes its methods and calculations (e.g., MED in Section 3.2) in prose but does not present any structured pseudocode or algorithm blocks. (An illustrative sketch of an MI-matrix-based metric in that spirit appears after this table.) |
| Open Source Code | Yes | The source code of this paper is available at https://github.com/noahcao/disentanglement_lib_med. |
| Open Datasets | Yes | Datasets. Representation disentanglement is usually evaluated on synthetic datasets, such as dSprites (36), Cars3D (39), Shapes3D (5), and SmallNORB (29). Besides those datasets, we also include a real-world dataset, CelebA (32). (A hedged loading sketch for dSprites appears after this table.) |
| Dataset Splits | No | The paper mentions 'train' and 'validation' in general ML terms but does not provide specifics on how the datasets were split (e.g., percentages, counts, or an explicit splitting methodology) for its experiments. For example, it does not state 'X% for training, Y% for validation, Z% for testing'. |
| Hardware Specification | No | The paper mentions comparing evaluation times 'on the same machine' but does not specify the hardware used for its experiments, such as CPU/GPU models or memory capacity. |
| Software Dependencies | No | The paper mentions using DisLib (34) for its evaluation protocol, but it does not pin concrete software dependencies (e.g., programming language, library, or framework versions) for its implementation or experiments. |
| Experiment Setup | No | The paper states, 'The hyperparameters for training are chosen to be close to the original papers, and details will be shared in the source code' (Appendix A.1). While some details such as latent dimension sizes and random seeds are given, explicit hyperparameter values and full training configurations are deferred to the external source code rather than provided in the main text or appendices. |
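The Pseudocode row notes that MED (Section 3.2) is defined only in prose. For orientation, the sketch below shows the general recipe shared by MI-matrix-based disentanglement metrics: discretize the latents, compute the mutual-information matrix between latent dimensions and ground-truth factors, and score how concentrated each latent's MI is on a single factor. The entropy-based scoring and all function names here are illustrative assumptions, not the paper's exact MED definition; the released repository above is the authoritative implementation.

```python
import numpy as np
from sklearn.metrics import mutual_info_score

def discretize(z, bins=20):
    """Bin each continuous latent dimension into `bins` discrete levels."""
    out = np.zeros(z.shape, dtype=int)
    for i in range(z.shape[1]):
        edges = np.histogram(z[:, i], bins=bins)[1]
        out[:, i] = np.digitize(z[:, i], edges[:-1])
    return out

def mi_matrix(latents, factors, bins=20):
    """MI between every latent dimension and every (integer-coded) factor."""
    z = discretize(latents, bins)
    m = np.zeros((z.shape[1], factors.shape[1]))
    for i in range(z.shape[1]):
        for j in range(factors.shape[1]):
            m[i, j] = mutual_info_score(z[:, i], factors[:, j])
    return m

def entropy_disentanglement(latents, factors, eps=1e-12):
    """Toy score: 1 minus the normalized entropy of each latent's MI
    distribution over factors, averaged over latent dimensions. A latent
    whose MI concentrates on one factor scores ~1; a real metric would
    also mask uninformative (near-zero-MI) dimensions."""
    m = mi_matrix(latents, factors)
    p = m / (m.sum(axis=1, keepdims=True) + eps)  # per-latent distribution
    h = -(p * np.log(p + eps)).sum(axis=1)        # entropy of each row
    return float(np.mean(1.0 - h / np.log(m.shape[1])))
```

Given an encoder's latents and a synthetic dataset's integer factor classes, `entropy_disentanglement(latents, factor_classes)` returns a scalar in roughly [0, 1], higher meaning more disentangled.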
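All of the listed datasets are public; dSprites, for instance, ships as a single .npz file from the DeepMind repository (https://github.com/deepmind/dsprites-dataset). The minimal loading sketch below assumes that release; the filename and array keys match the public file at the time of writing, but verify them against the repository.

```python
import numpy as np

# Filename as distributed in github.com/deepmind/dsprites-dataset;
# verify against the repository before use.
PATH = "dsprites_ndarray_co1sh3sc6or40x32y32_64x64.npz"

# allow_pickle/encoding are needed for the Python-2-pickled metadata entry.
data = np.load(PATH, allow_pickle=True, encoding="latin1")
images = data["imgs"]              # (737280, 64, 64) binary images, uint8
values = data["latents_values"]    # real-valued ground-truth factors
classes = data["latents_classes"]  # integer-coded factors, one column each

print(images.shape, values.shape, classes.shape)
```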