Cross-Domain Visual Representations via Unsupervised Graph Alignment

Authors: Baoyao Yang, Pong C. Yuen (pp. 5613-5620)

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results show that the graph-aligned visual representations achieve good performance on both cross-dataset recognition and cross-modal re-identification.
Researcher Affiliation | Academia | Baoyao Yang, Pong C. Yuen, Department of Computer Science, Hong Kong Baptist University, Hong Kong; byyang@comp.hkbu.edu.hk, pcyuen@comp.hkbu.edu.hk
Pseudocode | No | The paper describes the proposed network and optimization steps but does not include any formal pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any concrete access (e.g., a specific repository link or an explicit statement of code release) to the source code for the methodology described.
Open Datasets | Yes | Cross-dataset digit recognition experiments are conducted on the full training sets of three benchmarks: the MNIST (LeCun et al. 1998), USPS, and SVHN (Netzer et al. 2011) datasets. Cross-dataset object recognition experiments are conducted across the Office-10 and Caltech-10 (Gong et al. 2012) datasets. The graph-aligned representations are validated across modalities on the RegDB (Nguyen et al. 2017) dataset.
Dataset Splits | No | The paper mentions training and testing phases and splits for some datasets (e.g., "randomly split the Visible and Thermal datasets into two halves for training and testing"), but it does not specify a separate validation split for hyperparameter tuning or model selection.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU/CPU models, memory, or cloud computing instance types.
Software Dependencies | No | The paper mentions software components and architectures such as LeNet and AlexNet, but it does not specify any software dependencies with version numbers (e.g., Python, TensorFlow, PyTorch, or specific library versions).
Experiment Setup | Yes | Following the network settings in (Tzeng et al. 2017), the source and target CNNs are implemented with LeNet (LeCun et al. 1998), and the domain discriminator is implemented with three fully connected layers: two layers with 500 hidden units followed by the final discriminator output. AlexNet is initialized with ImageNet pre-trained weights, and only the last three layers are updated for adaptation.
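The discriminator quoted in the setup row (two fully connected layers of 500 hidden units followed by a single output) can be sketched as a plain NumPy forward pass. The hidden width (500) and layer count follow the quoted setup; the input feature dimension, ReLU activations, sigmoid output, and weight initialization are assumptions not stated in this report.

```python
import numpy as np


def relu(x):
    return np.maximum(x, 0.0)


def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))


def make_discriminator(in_dim, hidden=500, seed=0):
    """Build a forward function for a three-layer FC domain discriminator.

    Layer sizes mirror the quoted ADDA-style setup:
    in_dim -> 500 -> 500 -> 1 (domain probability).
    in_dim and the activations are illustrative assumptions.
    """
    rng = np.random.default_rng(seed)
    shapes = [(in_dim, hidden), (hidden, hidden), (hidden, 1)]
    # Small random weights, zero biases (assumed initialization).
    params = [(rng.standard_normal(s) * 0.01, np.zeros(s[1])) for s in shapes]

    def forward(x):
        h = relu(x @ params[0][0] + params[0][1])
        h = relu(h @ params[1][0] + params[1][1])
        # Probability that a feature vector comes from the source domain.
        return sigmoid(h @ params[2][0] + params[2][1])

    return forward


# Usage: a batch of 4 feature vectors with an assumed dimension of 500.
disc = make_discriminator(in_dim=500)
probs = disc(np.zeros((4, 500)))
print(probs.shape)  # (4, 1)
```

With zero inputs and zero biases, every pre-activation is zero, so each output is sigmoid(0) = 0.5; real inputs would of course produce varying probabilities.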