Probabilistic Contrastive Learning for Domain Adaptation
Authors: Junjie Li, Yixin Zhang, Zilei Wang, Saihui Hou, Keyu Tu, Man Zhang
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments to validate the effectiveness of PCL and observe consistent performance gains on five tasks, i.e., Unsupervised/Semi-Supervised Domain Adaptation (UDA/SSDA), Semi-Supervised Learning (SSL), UDA Detection and Semantic Segmentation. |
| Researcher Affiliation | Academia | ¹Beijing University of Posts and Telecommunications, ²University of Science and Technology of China, ³Beijing Normal University |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/ljjcoder/Probabilistic-Contrastive-Learning. |
| Open Datasets | Yes | We evaluate our method on two standard UDA semantic segmentation tasks: GTA5 [Richter et al., 2016] → Cityscapes [Cordts et al., 2016] and SYNTHIA [Ros et al., 2016] → Cityscapes. |
| Dataset Splits | No | The paper refers to standard benchmarks (e.g., DomainNet, Office-Home, CIFAR-100) and settings such as "3-shot" or "1-shot" for semi-supervised tasks, but it does not provide explicit numerical or percentage-based train/validation/test splits, nor does it cite predefined split methodologies. |
| Hardware Specification | Yes | Notably, the training cost of our method is much lower than that of CPSL-D (PCL: 1×RTX 3090, 5 days vs. CPSL-D: 4×V100, 11 days). |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | For the hyperparameter in PCL, we set s = 20 in all experiments. [...] For the hyperparameter in PCL, we set s = 7 in all experiments. |
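
The Experiment Setup row quotes only the scale hyperparameter s (20 in some experiments, 7 in others) without showing the loss it enters. As a rough orientation before consulting the released code, here is a minimal sketch of the substitution PCL is built around: an InfoNCE-style contrastive loss computed on softmax probabilities instead of ℓ2-normalized features, with s scaling the similarities. The function name `pcl_loss`, the two-view setup, and the shapes are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def pcl_loss(logits_q, logits_k, s=20.0):
    """Sketch of an InfoNCE-style loss over class probabilities.

    Contrasts the softmax outputs of two views of a batch instead of
    l2-normalized features; `s` is the scale hyperparameter the paper
    quotes (s = 20 or s = 7, depending on the task).
    """
    p_q = F.softmax(logits_q, dim=1)      # (N, C) probabilities, view 1
    p_k = F.softmax(logits_k, dim=1)      # (N, C) probabilities, view 2
    sim = s * (p_q @ p_k.t())             # (N, N) scaled similarity matrix
    targets = torch.arange(sim.size(0), device=sim.device)
    return F.cross_entropy(sim, targets)  # the i-th row's positive is index i

# Toy usage: batch of 8, 65 classes (Office-Home has 65 classes).
logits_q = torch.randn(8, 65, requires_grad=True)
logits_k = torch.randn(8, 65)
pcl_loss(logits_q, logits_k).backward()
```

Here s acts like an inverse temperature: dot products between probability vectors are small (at most 1, and usually far less), so a larger scale sharpens the softmax over similarities enough for the loss to separate positives from negatives.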