Towards a Unified Framework of Contrastive Learning for Disentangled Representations
Authors: Stefan Matthes, Zhiwei Han, Hao Shen
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The theoretical findings are validated on several benchmark datasets. |
| Researcher Affiliation | Industry | Stefan Matthes, Zhiwei Han, Hao Shen, fortiss GmbH, Munich, Germany {matthes,han,shen}@fortiss.org |
| Pseudocode | No | The paper provides mathematical formulations of the contrastive losses (Eq. 3-6) but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code or provide links to a code repository for the described methodology. |
| Open Datasets | Yes | The KITTI Masks dataset [29] consists of segmentation masks of pedestrians extracted from the autonomous driving benchmark KITTI-MOTS [12]. |
| Dataset Splits | No | The paper mentions training data and benchmark datasets but does not specify the training/validation/test splits (e.g., percentages, sample counts, or references to predefined splits) needed for reproduction. |
| Hardware Specification | No | The paper states 'To fit our hardware setup, we use a smaller batch size of 5120.' but does not provide any specific hardware details such as GPU models, CPU types, or memory specifications used for the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies or library version numbers (e.g., Python, PyTorch, TensorFlow versions) needed to replicate the experiments. |
| Experiment Setup | Yes | During training, we additionally optimize for α and α in Eq. (2), each of which is parameterized by a three-layer neural network. |