Cross-Domain Collaborative Normalization via Structural Knowledge

Authors: Haifeng Xia, Zhengming Ding (pp. 2777-2785)

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The empirical study verifies that replacing BN with CoN in popular network backbones effectively improves classification accuracy in most learning tasks across three cross-domain visual benchmarks.
Researcher Affiliation | Academia | Department of Computer Science, Tulane University (hxia@tulane.edu, zding1@tulane.edu)
Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper.
Open Source Code | No | The paper provides no statement or link indicating that source code for the described methodology is publicly available.
Open Datasets | Yes | Datasets: 1) Image-CLEF collects visual signals from three subsets: Caltech-256 (C), ImageNet ILSVRC 2012 (I), and Pascal VOC 2012 (P), with the same number of samples. [...] 2) Office-31 (Saenko et al. 2010), a benchmark dataset for domain adaptation. [...] 3) Office-Home (Venkateswara et al. 2017) consists of four subsets...
Dataset Splits | No | The paper defines source and target domains (e.g., "Denote a well-labeled source domain D_s = {(X_i^s, l_i^s)}_{i=1}^{n_s} and a target domain D_t = {X_i^t}_{i=1}^{n_t} without any annotation") and gives total image counts per dataset, but does not explicitly provide percentages or counts for training, validation, or test splits. It states "We follow the standard protocols operated with CDAN and DANN," implying predefined splits, but these are not detailed within the paper itself.
Hardware Specification | No | The paper provides no specific details on the hardware (e.g., GPU models, CPU types, memory) used to run the experiments.
Software Dependencies | No | The paper provides no version numbers for the software dependencies, libraries, or frameworks used in the experiments.
Experiment Setup | No | Although the paper mentions following the standard protocols of CDAN and DANN, it does not give concrete hyperparameter values (e.g., learning rate, batch size, epochs) or other training configurations.
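The core intervention the report summarizes, swapping BN layers inside an existing backbone for CoN, can be sketched generically. Since the paper's code is not public (see "Open Source Code: No"), `CoNPlaceholder` below is a hypothetical stand-in for the actual CoN layer; only the recursive module-replacement mechanics are illustrated, using plain Python objects rather than any specific deep-learning framework.

```python
# Minimal sketch: recursively replace every BatchNorm attribute in a model
# tree with another normalization module. CoNPlaceholder is hypothetical;
# the real CoN internals are not described in this report.

class BatchNorm:
    def __init__(self, num_features):
        self.num_features = num_features

class CoNPlaceholder:
    def __init__(self, num_features):
        self.num_features = num_features

class Block:
    """Toy stand-in for a backbone block holding a norm layer and a child."""
    def __init__(self):
        self.norm = BatchNorm(64)
        self.child = None

def replace_bn(module, new_cls):
    """Walk the object tree; swap BatchNorm instances for new_cls,
    preserving each layer's num_features."""
    for name, value in list(vars(module).items()):
        if isinstance(value, BatchNorm):
            setattr(module, name, new_cls(value.num_features))
        elif value is not None and hasattr(value, "__dict__"):
            replace_bn(value, new_cls)
    return module

backbone = Block()
backbone.child = Block()
replace_bn(backbone, CoNPlaceholder)
print(type(backbone.norm).__name__)        # CoNPlaceholder
print(type(backbone.child.norm).__name__)  # CoNPlaceholder
```

In a real framework the same pattern would iterate over named submodules instead of `vars()`, but the idea is identical: the backbone architecture is untouched and only the normalization layers change, which matches the report's claim that CoN is a drop-in replacement for BN.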