Contrast and Mix: Temporal Contrastive Video Domain Adaptation with Background Mixing

Authors: Aadarsh Sahoo, Rutav Shah, Rameswar Panda, Kate Saenko, Abir Das

NeurIPS 2021

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experiments on several benchmark datasets demonstrate the superiority of our proposed approach over state-of-the-art methods." |
| Researcher Affiliation | Collaboration | Aadarsh Sahoo¹, Rutav Shah¹, Rameswar Panda², Kate Saenko²,³, Abir Das¹ (¹IIT Kharagpur, ²MIT-IBM Watson AI Lab, ³Boston University) |
| Pseudocode | No | No pseudocode or algorithm blocks are present in the paper. |
| Open Source Code | Yes | Project page: https://cvir.github.io/projects/comix |
| Open Datasets | Yes | "We evaluate the performance of our approach using several publicly available benchmark datasets for video domain adaptation, namely UCF-HMDB [7], Jester [53], and Epic-Kitchens [50]." |
| Dataset Splits | Yes | "We use the standard training and testing splits provided by the authors in [7, 53, 50] to conduct our experiments on each dataset." |
| Hardware Specification | Yes | "We use 6 NVIDIA Tesla V100 GPUs for training all our models." |
| Software Dependencies | No | The paper mentions specific models and optimizers (e.g., I3D, SGD) but does not provide version numbers for software libraries or frameworks such as Python, PyTorch, or TensorFlow. |
| Experiment Setup | Yes | "We use an initial learning rate of 0.001 for the I3D and 0.01 for the GCN in all our experiments. We use a batch size of 40 equally split over the two domains... The temperature parameter is set to τ = 0.5. ... We use a pseudo-label threshold of 0.7 in all our experiments and smooth the cross-entropy loss with ϵ = 0.1..." (these values are illustrated in the sketch below the table) |
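
To make the quoted hyperparameters concrete, here is a minimal PyTorch-style sketch that encodes only the values reported in the Experiment Setup row (learning rates, batch size, τ, pseudo-label threshold, label-smoothing ϵ). The two loss functions are generic stand-ins, not the authors' released implementation: the paper's contrastive objective additionally contrasts background-mixed clips, which is omitted here, and all function names below are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F

# Hyperparameters quoted in the "Experiment Setup" row above.
CONFIG = {
    "lr_i3d": 1e-3,             # initial learning rate for the I3D backbone
    "lr_gcn": 1e-2,             # initial learning rate for the GCN
    "batch_size": 40,           # split equally across source and target domains
    "temperature": 0.5,         # tau in the contrastive loss
    "pseudo_label_threshold": 0.7,
    "label_smoothing": 0.1,     # epsilon for the smoothed cross-entropy
}


def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 tau: float = CONFIG["temperature"]) -> torch.Tensor:
    """Generic temperature-scaled contrastive (NT-Xent) loss.

    z1, z2: (N, D) embeddings of the same N clips under two views.
    The paper's objective also contrasts background-mixed clips;
    that extra term is omitted in this sketch.
    """
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)            # (2N, D)
    sim = z @ z.t() / tau                     # scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))         # exclude self-similarity
    n = z1.size(0)
    # Positive pairs sit at offset n: row i matches row i + n and vice versa.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)


def pseudo_label_loss(target_logits: torch.Tensor,
                      threshold: float = CONFIG["pseudo_label_threshold"],
                      eps: float = CONFIG["label_smoothing"]) -> torch.Tensor:
    """Label-smoothed cross-entropy on confident target pseudo-labels only."""
    probs = target_logits.softmax(dim=1)
    confidence, pseudo = probs.max(dim=1)
    keep = confidence >= threshold            # drop low-confidence clips
    if not keep.any():
        return target_logits.new_zeros(())
    return F.cross_entropy(target_logits[keep], pseudo[keep],
                           label_smoothing=eps)
```

Since the paper names SGD as the optimizer, two parameter groups along the lines of `torch.optim.SGD([{"params": i3d.parameters(), "lr": CONFIG["lr_i3d"]}, {"params": gcn.parameters(), "lr": CONFIG["lr_gcn"]}])` would reproduce the two quoted starting learning rates; the decay schedule is not quoted in this report.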