Factorized Contrastive Learning: Going Beyond Multi-view Redundancy

Authors: Paul Pu Liang, Zihao Deng, Martin Q. Ma, James Y. Zou, Louis-Philippe Morency, Ruslan Salakhutdinov

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We run comprehensive experiments on a suite of synthetic and large-scale real-world datasets with varying requirements of shared and unique task-relevant information, comparing our FACTORCL method to key baselines." (Section 4, Experiments)
Researcher Affiliation | Academia | Carnegie Mellon University, University of Pennsylvania, Stanford University
Pseudocode | Yes | "Algorithm 1: Standard multimodal CL." (an illustrative sketch of this objective appears after the table)
Open Source Code | Yes | "We release our code and models at https://github.com/pliang279/FactorCL."
Open Datasets | Yes | "We use a large collection of real-world datasets provided in MultiBench [45], where we expect varying ratios of shared and unique information important for the task, to compare FACTORCL with other CL baselines: 1. MIMIC [38]: ... 2. MOSEI [93]: ... 3. MOSI [91]: ... 4. UR-FUNNY [27]: ... 5. MUSTARD [12]: ... 6. IRFL [88]: ..."
Dataset Splits | No | The paper trains models on various datasets (e.g., MIMIC, MOSEI, MOSI, UR-FUNNY, MUSTARD, IRFL) and discusses pre-training and fine-tuning, but it does not explicitly specify the train/validation/test splits used in these experiments.
Hardware Specification | Yes | "All experiments in this paper are run on a single NVIDIA A100 GPU."
Software Dependencies | No | The paper mentions an optimizer (Adam) and a specific pretrained model (CLIP ViT-B/32), but it does not provide version numbers for software dependencies such as the programming language or libraries (e.g., Python, PyTorch, TensorFlow).
Experiment Setup | Yes | "We train the model for 100 epochs using the Adam optimizer with a 1e-4 learning rate." (a minimal training-loop sketch appears after the table)
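
For readers unfamiliar with the objective named in the Pseudocode row, the following is a minimal sketch of standard multimodal contrastive learning: a symmetric InfoNCE loss over paired modality embeddings. It is an illustration under common conventions, not the authors' Algorithm 1; the function name and temperature value are assumptions.

```python
import torch
import torch.nn.functional as F

def multimodal_infonce(z1, z2, temperature=0.07):
    """Symmetric InfoNCE loss between two batches of modality embeddings.

    z1, z2: (batch, dim) embeddings from modality-specific encoders,
    where paired samples share the same row index. The temperature
    value is illustrative, not taken from the paper.
    """
    z1 = F.normalize(z1, dim=-1)
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature  # (batch, batch) similarity matrix
    labels = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    # Average the two directions (modality 1 -> 2 and modality 2 -> 1).
    return 0.5 * (F.cross_entropy(logits, labels) +
                  F.cross_entropy(logits.t(), labels))
```

With a batch of N paired samples, the i-th embedding of one modality is pulled toward its paired embedding in the other modality and pushed away from the remaining N-1 in-batch negatives.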
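The Experiment Setup row reports only the optimizer, learning rate, and epoch count. A minimal training loop consistent with just those reported hyperparameters might look like the sketch below; the encoders, data loader, and loss function are placeholders, not details from the paper.

```python
import torch

def pretrain(encoders, loader, loss_fn, epochs=100, lr=1e-4, device="cuda"):
    """Epoch count, optimizer, and learning rate follow the paper's stated
    setup; `encoders`, `loader`, and `loss_fn` are placeholders."""
    enc1, enc2 = (e.to(device) for e in encoders)
    params = list(enc1.parameters()) + list(enc2.parameters())
    optimizer = torch.optim.Adam(params, lr=lr)
    for epoch in range(epochs):
        for x1, x2 in loader:  # paired batches from two modalities
            loss = loss_fn(enc1(x1.to(device)), enc2(x2.to(device)))
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```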