A Dictionary Approach to Domain-Invariant Learning in Deep Networks

Authors: Ze Wang, Xiuyuan Cheng, Guillermo Sapiro, Qiang Qiu

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct extensive real-world face recognition (with domain shifts and simultaneous multi-domain inputs), image classification, and segmentation experiments, and observe that, with the proposed method, invariant representations and performance across domains are consistently achieved without compromising the performance of individual domains.
Researcher Affiliation | Academia | Purdue University; Duke University
Pseudocode | No | The paper describes the method using text and diagrams but does not provide pseudocode or algorithm blocks (an illustrative sketch of the dictionary idea is given after the table).
Open Source Code | No | The paper does not provide concrete access to source code for the described methodology.
Open Datasets | Yes | We adopt a challenging setting by using MNIST as the source domain, and SVHN as the target domain. [...] We adopt the NIR-VIS 2.0 [16], which consists of 17,580 NIR (near infrared) and VIS (visible light) face images of 725 subjects, and perform cross-domain face recognition. [...] We perform experiments on three public digits datasets: MNIST, USPS, and Street View House Numbers (SVHN). [...] Office-31 [27] is one of the most widely used datasets for visual domain adaptation. [...] We perform unsupervised adaptation from the GTA dataset [24] (images generated from video games) to the Cityscapes dataset [4] (real-world images).
Dataset Splits | Yes | We start the comparisons at 10% of the target domain labeled samples, and end at 0.5% where only 366 labeled samples are available for the target domain. (A consistency check on this count follows the table.)
Hardware Specification | No | The paper mentions VGG-16 as a base network structure and discusses its parameters and FLOPs, but does not specify the hardware (e.g., GPU models, CPU types) used for running the experiments.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies used in the experiments.
Experiment Setup | No | The paper mentions that 'networks with DAFD are trained end-to-end with a summed loss for domains' and describes how atoms/coefficients are updated, but it does not provide specific hyperparameters such as learning rates, batch sizes, optimizers, or the number of epochs (a hedged summed-loss training sketch follows the table).
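Since the paper provides no pseudocode (see the Pseudocode entry above), the sketch below illustrates one plausible reading of the dictionary idea named in the title: convolutional filters composed as linear combinations of a small set of dictionary atoms, with one factor adapted per domain and the other shared across domains. The class name, shapes, initialization, and the choice of which factor is shared are assumptions made for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DictionaryConv2d(nn.Module):
    """Illustrative sketch only (not the authors' code): a conv layer whose
    filters are linear combinations of a small set of dictionary atoms.
    Here each domain keeps its own atoms while the expansion coefficients
    are shared across domains; treat this sharing scheme as an assumption."""

    def __init__(self, in_ch, out_ch, kernel_size, num_atoms, num_domains):
        super().__init__()
        # Per-domain dictionary atoms: num_atoms spatial filters of size k x k.
        self.atoms = nn.Parameter(
            0.1 * torch.randn(num_domains, num_atoms, kernel_size, kernel_size))
        # Expansion coefficients shared by all domains.
        self.coeffs = nn.Parameter(0.1 * torch.randn(out_ch, in_ch, num_atoms))

    def forward(self, x, domain):
        # Compose this domain's filters: (out_ch, in_ch, k, k).
        weight = torch.einsum('oia,akl->oikl', self.coeffs, self.atoms[domain])
        return F.conv2d(x, weight, padding=self.atoms.shape[-1] // 2)
```

For example, DictionaryConv2d(3, 64, kernel_size=3, num_atoms=6, num_domains=2) behaves like a single 3x3 convolution with 64 output channels, but keeps two small domain-specific atom sets instead of two full filter banks.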
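A quick consistency check on the quoted Dataset Splits figure: if the 0.5% fraction is taken over SVHN's standard training split of 73,257 images (an assumption; the excerpt does not name the split), then 0.005 × 73,257 ≈ 366, which matches the stated count of labeled target-domain samples.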
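For the Experiment Setup entry, the quoted phrase 'trained end-to-end with a summed loss for domains' can be illustrated with the minimal training step below. The optimizer, criterion, and batch handling are assumptions made for the sketch; the paper does not report these hyperparameters.

```python
import torch

def train_step(model, batches_per_domain, optimizer, criterion):
    """One end-to-end update on a loss summed over domains (sketch only).

    batches_per_domain: list of (inputs, labels) tuples, one batch per domain.
    model: assumed to take (inputs, domain_index), as in the sketch above.
    """
    optimizer.zero_grad()
    total_loss = torch.zeros(())
    for domain, (x, y) in enumerate(batches_per_domain):
        logits = model(x, domain)              # domain-conditioned forward pass
        total_loss = total_loss + criterion(logits, y)
    total_loss.backward()                      # single backward pass on the summed loss
    optimizer.step()
    return total_loss.item()
```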