Sample Selection for Universal Domain Adaptation

Authors: Omri Lifshitz, Lior Wolf (pp. 8592-8600)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experiments show that our scoring scheme based on the aforementioned signals together with the three loss terms improves the state of the art accuracy in the UniDA scenario." and "Following You et al. (2019), we use four datasets."
Researcher Affiliation | Academia | Omri Lifshitz, Lior Wolf, Tel-Aviv University; omri.lifshtz@gmail.com, liorwolf@gmail.com
Pseudocode | No | The paper describes its methods in text and mathematical formulas but does not include any explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper does not state that the source code for its methodology is open-source or publicly available.
Open Datasets | Yes | Office-Home (Venkateswara et al. 2017) is a dataset made up of 65 classes from four domains: Artistic (Ar), Clipart (Cl), Product (Pr), and Real-world images (Rw). Office-31 (Saenko et al. 2010) consists of three domains, each with 31 classes. VisDA-2017 (Peng et al. 2018) has a single source and target domain, testing transfer learning from synthetic images to natural images. ImageNet-Caltech employs ImageNet-1K (Deng et al. 2009), with 1000 classes, and Caltech-256 (Griffin, Holub, and Perona 2006), with 256 classes.
Dataset Splits | No | The paper describes source and target domains for training and a separate evaluation protocol for testing, but does not specify a validation split (percentages, counts, or partitioning methodology).
Hardware Specification | No | The paper mentions software and model architecture (PyTorch, ResNet-50) but provides no details about the hardware (e.g., CPU or GPU models, memory) used to run the experiments.
Software Dependencies | No | "The method is implemented in PyTorch using a ResNet-50 model (He et al. 2016)..."
Experiment Setup | Yes | "Early on during the development process, we fixed the following hyperparameters across all datasets: γ = 0.6 and s0 = 1.0. We provide parameter sensitivity experiments to demonstrate the robustness of the method to its parameters."