Coupled Marginalized Auto-Encoders for Cross-Domain Multi-View Learning

Authors: Shuyang Wang, Zhengming Ding, Yun Fu

IJCAI 2016

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on two tasks have demonstrated the superiority of our method over the state-of-the-art methods." and, from Section 4 (Experiments), "We evaluate our approach on two applications, e.g., person re-identification and kinship verification."
Researcher Affiliation | Academia | Department of Electrical & Computer Engineering, Northeastern University, Boston, MA, USA
Pseudocode | No | The paper describes its method using mathematical equations and text, but does not include structured pseudocode or an algorithm block. (A hedged sketch of the underlying marginalized denoising auto-encoder building block follows this table.)
Open Source Code | No | The paper points to publicly available code for the compared methods and to external feature extractors, but it does not state that source code for the proposed Coupled Marginalized Denoising Auto-encoders framework is available.
Open Datasets | Yes | "VIPeR Dataset [Gray et al., 2007]" and "Currently, UB KinFace [Shao et al., 2011] is the only dataset collected with children, young parents and old parents. The dataset consists of 600 images..." Dataset page: http://www1.ece.neu.edu/~yunfu/research/Kinface/Kinface.htm
Dataset Splits | Yes | "half of the dataset, i.e., 316 image pairs, are randomly split for training, and the remaining half for testing."; "There are 3 parameters in our model ... which are tuned through 5-fold cross validation."; "the 200 groups are randomly split into five folds with 40 pairs each fold, then the two protocols are both performed with five-fold cross validation." (A sketch of this protocol follows the table.)
Hardware Specification | Yes | "Our experiments run on a computer with an Intel I7 quad-core 3.4GHZ CPU and 8GB memory."
Software Dependencies | No | The paper references external descriptors and code used for comparison, but it does not specify the software environment or dependencies, with version numbers, for its own implementation.
Experiment Setup | Yes | "There are 3 parameters in our model including [·], λ1 and λ2, which are tuned through 5-fold cross validation. Specifically, we set them as [·] = 1, λ1 = 1.4, λ2 = 0.4 for VIPeR, and [·] = 10, λ1 = 10, λ2 = 0.1 for UB KinFace dataset." (The first parameter's symbol is missing from the extracted text; see the tuning sketch below.)
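
For context on the Pseudocode and Open Source Code rows: the proposed framework couples two marginalized denoising auto-encoders, whose single-layer building block has a known closed-form solution (Chen et al., 2012). The sketch below is a minimal NumPy rendering of that building block only; it is not the authors' released code, and the cross-domain coupling terms weighted by λ1 and λ2 are omitted. The ridge term `reg` is an assumption added for numerical stability.

```python
import numpy as np

def mda_layer(X, p=0.5, reg=1e-5):
    """One marginalized denoising auto-encoder layer (Chen et al., 2012).

    X   : (d, n) data matrix, features x samples
    p   : feature corruption probability, marginalized out in closed form
    reg : small ridge term for numerical stability (an assumption here)
    """
    d, n = X.shape
    Xb = np.vstack([X, np.ones((1, n))])          # append a constant bias row
    q = np.full((d + 1, 1), 1.0 - p)
    q[-1] = 1.0                                   # the bias feature is never corrupted
    S = Xb @ Xb.T                                 # scatter matrix
    Q = S * (q @ q.T)                             # E[x_tilde x_tilde^T], off-diagonal
    np.fill_diagonal(Q, q.ravel() * np.diag(S))   # diagonal uses q_i, not q_i^2
    P = S[:d, :] * q.T                            # E[x x_tilde^T]: reconstruct the d inputs
    W = np.linalg.solve(Q + reg * np.eye(d + 1), P.T).T  # closed form W = P Q^{-1}
    return np.tanh(W @ Xb)                        # nonlinear hidden representation

# usage: h = mda_layer(X, p=0.5) maps (d, n) features to a denoised representation
```

And for the Dataset Splits and Experiment Setup rows, the reported protocol (a random half split on VIPeR, 5-fold cross-validation to tune the trade-off weights) can be sketched as follows. The `evaluate` callable, the grid values, and the pair indexing are hypothetical placeholders, not the paper's implementation.

```python
import numpy as np
from itertools import product

def tune_lambdas(pairs, evaluate, lam1_grid, lam2_grid, n_folds=5, seed=0):
    # 5-fold cross-validation over (lambda1, lambda2); `evaluate` is a
    # hypothetical callable that trains on one fold split and returns a score.
    rng = np.random.default_rng(seed)
    folds = np.array_split(rng.permutation(len(pairs)), n_folds)
    best, best_score = None, -np.inf
    for lam1, lam2 in product(lam1_grid, lam2_grid):
        fold_scores = []
        for k in range(n_folds):
            val = folds[k]
            train = np.concatenate([folds[j] for j in range(n_folds) if j != k])
            fold_scores.append(evaluate(pairs[train], pairs[val], lam1, lam2))
        score = float(np.mean(fold_scores))
        if score > best_score:
            best, best_score = (lam1, lam2), score
    return best, best_score

# VIPeR protocol: 632 matched image pairs, a random half (316 pairs) for
# training and the remaining half for testing.
perm = np.random.default_rng(0).permutation(632)
train_idx, test_idx = perm[:316], perm[316:]
```

On UB KinFace, the same routine would run over the 200 groups split into five folds of 40 pairs each, as quoted in the Dataset Splits row.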
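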