Your contrastive learning problem is secretly a distribution alignment problem

Authors: Zihao Chen, Chi-Heng Lin, Ran Liu, Jingyun Xiao, Eva Dyer

NeurIPS 2024

Reproducibility assessment (variable, result, supporting LLM response):

- Research Type: Experimental. "We validate our method through extensive experiments on both image classification and noisy-data tasks, demonstrating that GCA's unbalanced OT (UOT) formulations improve classification performance by relaxing the constraints on alignment."
- Researcher Affiliation: Academia. "Zihao Chen, Chi-Heng Lin, Ran Liu, Jingyun Xiao, Eva L. Dyer. School of Electrical & Computer Engineering, Georgia Tech, Atlanta, GA."
- Pseudocode: Yes. "Algorithm 1: Proximal-Point Algorithm for Generalized Contrastive Alignment (GCA)."
- Open Source Code: Yes. "The implementation of our methods is at https://github.com/nerdslab/gca."
- Open Datasets: Yes. "For experiments with SVHN [36] and ImageNet-100 [15] we use the ResNet-50 encoder as the backbone, and use a ResNet-18 encoder as the backbone for CIFAR-10, CIFAR-100 [29] and a corrupted version of CIFAR called CIFAR-10C [25]."
- Dataset Splits: Yes. "We use the standard train/validation/test split for each dataset unless specified otherwise."
- Hardware Specification: Yes. "All training was conducted on NVIDIA RTX 3090 GPUs for 1000 epochs with a batch size of 256 for CIFAR-10/100 and 512 for SVHN/ImageNet-100."
- Software Dependencies: No. The paper does not explicitly state software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions) in the main text or appendices.
- Experiment Setup: Yes. "Learning rates and other training details for CIFAR-10, CIFAR-100, SVHN, and ImageNet-100 are provided in Appendix D.1, while specific training details for CIFAR-10C are included in Appendix D.2. All training was conducted on NVIDIA RTX 3090 GPUs for 1000 epochs with a batch size of 256 for CIFAR-10/100 and 512 for SVHN/ImageNet-100."
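For context on the unbalanced-OT alignment the assessment quotes: the sketch below shows generic entropic unbalanced Sinkhorn scaling iterations with KL-relaxed marginals, the standard building block behind UOT formulations. It is not the paper's Algorithm 1 (the proximal-point GCA procedure); the function name, parameters, and the cosine-cost usage are illustrative assumptions only.

```python
import numpy as np

def unbalanced_sinkhorn(C, a, b, eps=0.5, rho=50.0, n_iter=500):
    """Generic entropic unbalanced-OT scaling iterations (illustrative sketch,
    not the paper's Algorithm 1).

    C: (n, m) cost matrix between two batches of embeddings.
    a, b: source/target marginal weights.
    eps: entropic regularization strength.
    rho: KL penalty on marginal deviation; larger rho -> closer to balanced OT.
    Returns the (n, m) transport plan.
    """
    K = np.exp(-C / eps)          # Gibbs kernel
    fi = rho / (rho + eps)        # exponent induced by the KL marginal penalty
    u = np.ones_like(a)
    v = np.ones_like(b)
    for _ in range(n_iter):
        u = (a / (K @ v)) ** fi   # scale rows toward marginal a (softly)
        v = (b / (K.T @ u)) ** fi # scale columns toward marginal b (softly)
    return u[:, None] * K * v[None, :]

# Illustrative usage: align two batches of (hypothetical) embeddings
# via a cosine-distance cost; uniform marginals over each batch.
rng = np.random.default_rng(0)
X = rng.normal(size=(4, 3))
Y = rng.normal(size=(5, 3))
Xn = X / np.linalg.norm(X, axis=1, keepdims=True)
Yn = Y / np.linalg.norm(Y, axis=1, keepdims=True)
C = 1.0 - Xn @ Yn.T
P = unbalanced_sinkhorn(C, np.ones(4) / 4, np.ones(5) / 5)
```

With large `rho` the plan's row/column sums approach the prescribed marginals (balanced OT); smaller `rho` lets mass be created or destroyed, which is the relaxation the quoted "relaxing the constraints on alignment" refers to.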