Cross-domain Open-world Discovery

Authors: Shuo Wen, Maria Brbić

ICML 2024

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Extensive experimental results on image classification benchmark datasets demonstrate that CROW outperforms alternative baselines, achieving an 8% average performance improvement across 75 experimental settings. |
| Researcher Affiliation | Academia | Shuo Wen¹, Maria Brbić¹; ¹EPFL, Switzerland. Correspondence to: Maria Brbić <mbrbic@epfl.ch>. |
| Pseudocode | No | The paper describes the method in detailed steps but does not include a clearly labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | "Our code is publicly available," with a footnote linking to https://github.com/mlbio-epfl/crow. |
| Open Datasets | Yes | The Office (Saenko et al., 2010) dataset has 31 classes and three domains... The Office-Home (Venkateswara et al., 2017) dataset comprises 65 classes... VisDA (Peng et al., 2017) is a synthetic-to-real (S2R) dataset... DomainNet (Peng et al., 2019) is the largest dataset... (See the data-loading sketch below the table.) |
| Dataset Splits | No | Since there is no validation set in our setting, we report the results of the last iteration. |
| Hardware Specification | No | The paper mentions using 'CLIP ViT-L/14-336px as the backbone' but does not specify any particular GPU models, CPU types, memory amounts, or other explicit hardware configurations used for running the experiments. |
| Software Dependencies | No | The paper mentions using 'PyTorch (Paszke et al., 2019)' but does not provide specific version numbers for PyTorch itself (e.g., 'PyTorch 1.9') or for other key software dependencies such as Python or CUDA. |
| Experiment Setup | Yes | For optimizing, we use the SGD optimizer for all experiments, and the learning rate is set to 0.001 for the classifier and 0.0001 for the feature extractor (CLIP ViT-L/14-336px). We set the batch size to 32 and train all the methods for 1K iterations. (See the optimizer sketch below the table.) |
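
For concreteness, here is a minimal torchvision sketch of loading one domain of a multi-domain benchmark such as Office (31 classes, three domains: Amazon, DSLR, Webcam). The directory layout and paths are assumptions for illustration, not taken from the paper; the released code at https://github.com/mlbio-epfl/crow documents the actual data preparation.

```python
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

preprocess = transforms.Compose([
    transforms.Resize((336, 336)),  # matches the CLIP ViT-L/14-336px input size
    transforms.ToTensor(),
])

# Hypothetical layout: data/office/<domain>/<class>/image.jpg
source = datasets.ImageFolder("data/office/amazon", transform=preprocess)
target = datasets.ImageFolder("data/office/webcam", transform=preprocess)

source_loader = DataLoader(source, batch_size=32, shuffle=True)  # batch size 32, as in the paper
target_loader = DataLoader(target, batch_size=32, shuffle=True)
```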
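And a minimal PyTorch sketch of the quoted optimizer configuration: SGD with per-module learning rates (0.001 for the classifier, 0.0001 for the feature extractor), batch size 32, 1K iterations, keeping the last-iteration model since there is no validation set. The backbone below is a lightweight stand-in for CLIP ViT-L/14-336px (768-dim image features), and the training loop uses dummy data purely to show the two parameter groups; it is not the paper's implementation.

```python
import torch
import torch.nn as nn

# Stand-in backbone; the paper uses CLIP ViT-L/14-336px, whose image
# embedding is 768-dimensional.
feature_extractor = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, stride=4),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 768),
)
classifier = nn.Linear(768, 31)  # e.g., 31 classes for Office

# Two parameter groups with the reported learning rates:
# 1e-3 for the classifier, 1e-4 for the feature extractor.
optimizer = torch.optim.SGD([
    {"params": feature_extractor.parameters(), "lr": 1e-4},
    {"params": classifier.parameters(), "lr": 1e-3},
])

criterion = nn.CrossEntropyLoss()
for step in range(1000):  # 1K iterations; last-iteration model is reported
    images = torch.randn(32, 3, 336, 336)  # batch size 32 (dummy inputs)
    labels = torch.randint(0, 31, (32,))   # dummy labels
    loss = criterion(classifier(feature_extractor(images)), labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```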