MetaReg: Towards Domain Generalization using Meta-Regularization

Authors: Yogesh Balaji, Swami Sankaranarayanan, Rama Chellappa

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental validations on computer vision and natural language datasets indicate that our method can learn regularizers that achieve good cross-domain generalization. In this section, we describe the experimental validation of our proposed approach. We perform experiments on two benchmark domain generalization datasets: multi-domain image recognition using the PACS dataset [18] and sentiment classification using the Amazon Reviews dataset [2].
Researcher Affiliation | Collaboration | Yogesh Balaji, Department of Computer Science, University of Maryland, College Park, MD (yogesh@cs.umd.edu); Swami Sankaranarayanan, Butterfly Network Inc., New York, NY (swamiviv@butterflynetinc.com); Rama Chellappa, Department of Electrical and Computer Engineering, University of Maryland, College Park, MD (rama@umiacs.umd.edu)
Pseudocode | Yes | The entire algorithm is given in Algorithm 1. (A sketch of this episodic procedure appears after the table.)
Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository.
Open Datasets | Yes | We perform experiments on two benchmark domain generalization datasets: multi-domain image recognition using the PACS dataset [18] and sentiment classification using the Amazon Reviews dataset [2].
Dataset Splits | No | The paper mentions a 'meta-train set' and a 'meta-test set' used for training the regularizer, but these are not traditional validation splits for hyperparameter tuning of the main model, and no specific percentages or counts for a validation set are given.
Hardware Specification | No | The paper does not specify any particular hardware used for running the experiments, such as GPU or CPU models.
Software Dependencies | No | The paper mentions models like AlexNet and ResNet but does not specify any software versions for libraries, frameworks, or programming languages (e.g., PyTorch version, Python version).
Experiment Setup | Yes | All our models are trained using the SGD optimizer with learning rate 5e-4 and a batch size of 64. All models were trained using SGD optimizer with a learning rate of 0.001 and momentum 0.9. The hyper-parameters α1 and α2 are both set as 0.001. All models were trained using an SGD optimizer with learning rate 0.01 and momentum 0.9 for 5000 iterations.
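
To make the Pseudocode and Experiment Setup rows concrete, the following is a minimal sketch of the episodic regularizer update that Algorithm 1 describes: take an inner SGD step on one source domain's loss plus the learned weighted-L1 regularizer (the meta-train domain), then update the regularizer weights from the unregularized loss on a held-out source domain (the meta-test domain). This is a sketch under assumptions, not the authors' released code: it assumes PyTorch 2.x (for torch.func.functional_call), a single inner step, and a task network operating on pre-extracted features; names such as TaskNet, weighted_l1, meta_step, phi, and inner_lr are illustrative, and the 0.001 learning rates simply echo the α1 = α2 = 0.001 values quoted above.

```python
# Hypothetical sketch of one MetaReg-style regularizer update (not the authors' code).
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.func import functional_call  # requires PyTorch 2.x

class TaskNet(nn.Module):
    """Stand-in task network operating on pre-extracted features (illustrative)."""
    def __init__(self, feat_dim=256, n_classes=7):
        super().__init__()
        self.fc = nn.Linear(feat_dim, n_classes)

    def forward(self, x):
        return self.fc(x)

def weighted_l1(params, phi):
    """Learned regularizer R_phi(theta) = sum_i phi_i * |theta_i| (weighted L1)."""
    flat = torch.cat([p.reshape(-1) for p in params.values()])
    return (phi * flat.abs()).sum()

task_net = TaskNet()
n_params = sum(p.numel() for p in task_net.parameters())
phi = torch.zeros(n_params, requires_grad=True)    # regularizer weights to be meta-learned
phi_opt = torch.optim.SGD([phi], lr=1e-3)          # echoes alpha_2 = 0.001

def meta_step(meta_train_batch, meta_test_batch, inner_lr=1e-3):
    """One episodic update of phi (single inner step for brevity; inner_lr echoes alpha_1)."""
    xa, ya = meta_train_batch   # batch from the meta-train domain
    xb, yb = meta_test_batch    # batch from the meta-test domain

    # beta^1: a differentiable copy of the current task-network parameters.
    beta = {k: v.detach().clone().requires_grad_(True)
            for k, v in task_net.named_parameters()}

    # Inner SGD step on the regularized meta-train loss.
    inner_loss = F.cross_entropy(functional_call(task_net, beta, (xa,)), ya) \
                 + weighted_l1(beta, phi)
    grads = torch.autograd.grad(inner_loss, list(beta.values()), create_graph=True)
    beta = {k: v - inner_lr * g for (k, v), g in zip(beta.items(), grads)}

    # Unregularized meta-test loss; its gradient w.r.t. phi drives the update.
    meta_loss = F.cross_entropy(functional_call(task_net, beta, (xb,)), yb)
    phi_opt.zero_grad()
    meta_loss.backward()
    phi_opt.step()
    return meta_loss.item()

# Usage with dummy tensors standing in for two source domains:
batch_a = (torch.randn(64, 256), torch.randint(0, 7, (64,)))
batch_b = (torch.randn(64, 256), torch.randint(0, 7, (64,)))
print(meta_step(batch_a, batch_b))
```

In the paper, the base networks (a shared feature extractor plus per-domain task networks) are trained in parallel with ordinary supervised losses, and the learned regularizer is applied only to the task-network layers; this sketch omits the base updates and folds the feature extractor into the fixed input features for brevity.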