Exploiting Domain-Specific Features to Enhance Domain Generalization

Authors: Manh-Ha Bui, Toan Tran, Anh Tran, Dinh Phung

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To demonstrate the merit of the proposed mDSDI framework, we extensively evaluate mDSDI on several state-of-the-art DG benchmark datasets, including Colored-MNIST, Rotated-MNIST, VLCS, PACS, Office-Home, Terra Incognita, and DomainNet, in addition to our newly created Background-Colored-MNIST for the ablation study to examine the behavior of our mDSDI.
Researcher Affiliation | Collaboration | 1 VinAI Research, Vietnam; 2 Monash University, Australia. {v.habm1, v.toantm3, v.anhtt152, v.dinhpq2}@vinai.io
Pseudocode | Yes | Algorithm 1: Training and Inference processes of mDSDI
Open Source Code | Yes | All source code to reproduce results is available at https://github.com/VinAIResearch/mDSDI.
Open Datasets | Yes | Dataset. To evaluate the effectiveness of the proposed method, we utilize 7 commonly used datasets including: Colored-MNIST (18): includes 70000 samples... Rotated-MNIST (19): contains 70000 samples... VLCS (20): includes 10729 samples... PACS (2): contains 9991 images... Office-Home (21): has 15500 daily images... Terra Incognita (22): includes 24778 wild photographs... and DomainNet (23): contains 586575 images...
Dataset Splits | Yes | We use the training-domain validation set technique as proposed in DomainBed (24) for model selection. In particular, for all datasets, we first merge the raw training and validation sets, then run the test three times with three different seeds. For each random seed, we randomly split training and validation, choose the model maximizing accuracy on the validation set, then compute performance on the given test sets.
Hardware Specification | No | The paper does not specify any particular GPU or CPU models, memory sizes, or types of computing resources used for the experiments. It only mentions using 'backbones MNIST-ConvNet' and 'ResNet-50', which refer to model architectures, not hardware.
Software Dependencies | No | The paper mentions the 'Adam optimizer' and implicitly references PyTorch through a GitHub link, but it does not provide specific version numbers for any software dependencies, such as the PyTorch, Python, or CUDA versions.
Experiment Setup | Yes | Data-processing techniques, model architectures, hyper-parameters, and changes of objective functions during training are presented in detail in Appendices C.3–C.5.
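The training-domain validation protocol quoted above (merge the training and validation pools, re-split per seed, keep the model with the highest validation accuracy) can be sketched as follows. This is a minimal illustration of the selection loop, not the authors' code: `train_fn` and `eval_fn` are hypothetical callables standing in for model training and accuracy evaluation, and the split fraction is an assumed parameter.

```python
import random

def training_domain_validation_select(samples, train_fn, eval_fn,
                                      seeds=(0, 1, 2), val_frac=0.2):
    """Select a model via training-domain validation (DomainBed-style).

    For each seed: shuffle the merged train+val pool, hold out a
    validation split, train on the rest, and record validation accuracy.
    Returns the model with the highest validation accuracy overall.
    """
    results = []
    for seed in seeds:
        rng = random.Random(seed)
        pool = samples[:]            # merged training + validation pool
        rng.shuffle(pool)
        n_val = int(len(pool) * val_frac)
        val, train = pool[:n_val], pool[n_val:]
        model = train_fn(train, seed)
        results.append((eval_fn(model, val), model, seed))
    best_acc, best_model, best_seed = max(results, key=lambda r: r[0])
    return best_model, best_acc, best_seed

# Toy usage: the "model" is just the mean of the training split, and
# "accuracy" rewards means close to the full-pool mean (49.5).
data = list(range(100))
train_fn = lambda train, seed: sum(train) / len(train)
eval_fn = lambda model, val: 1.0 - abs(model - 49.5) / 100.0
model, acc, seed = training_domain_validation_select(data, train_fn, eval_fn)
```

Only after this selection step is the chosen model evaluated on the held-out test domains, which keeps test-domain data out of the model-selection loop.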