Deep Embedded Complementary and Interactive Information for Multi-View Classification

Authors: Jinglin Xu, Wenbin Li, Xinwang Liu, Dingwen Zhang, Ji Liu, Junwei Han

AAAI 2020, pp. 6494-6501

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on several public datasets demonstrate the rationality and effectiveness of our method."
Researcher Affiliation | Collaboration | ¹Northwestern Polytechnical University, China; ²Nanjing University, China; ³National University of Defense Technology, China; ⁴Xidian University, China; ⁵Kwai Inc.
Pseudocode | No | The paper does not contain a pseudocode or algorithm block.
Open Source Code | No | The paper does not provide any statement or link indicating that open-source code for the methodology is available.
Open Datasets | Yes | "Caltech101/20. The dataset (Fei-Fei, Fergus, and Perona 2007) ... AWA. This dataset (Lampert, Nickisch, and Harmeling 2009) ... NUSOBJ. This is a subset of NUS-WIDE (Chua et al. 2009) ... Reuters. It (Amini, Usunier, and Goutte 2009) ... Hand. This dataset (Dheeru and Karra Taniskidou 2017)"
Dataset Splits | Yes | "Referring to (Andrew et al. 2013; Wang et al. 2015), we split each dataset into three parts: 70% samples for training, two-thirds of the rest samples for validation, and one-third of that for testing." (A sketch of this split protocol appears after the table.)
Hardware Specification | No | The paper does not specify any particular hardware (e.g., GPU models, CPU types) used to run the experiments; it only mentions "trained by Adam" and a "large batchsize" in general terms.
Software Dependencies | No | The paper mentions PyTorch and Adam but does not provide specific version numbers for any software components; it only states that the networks are "trained by Adam with batch normalization".
Experiment Setup | Yes | "All the networks in this paper are trained by Adam with batch normalization, where the learning rate is 10⁻³, β1 = 0.5, β2 = 0.9. In addition, we study the impact of batch size on the classification performance of our MvNNcor by setting batch size like 32, 64, 128, and 256 respectively. ... Each of fv is a fully-connected network which consists of dv input units and two hidden layers with 400 and 200 units equipped with ReLU activation function. ψ consists of 200² input units and 200 hidden units with ReLU activation function. φ consists of 200M input units and 300 hidden units with ReLU activation function, followed by a linear output layer with C units." (A PyTorch sketch of this setup appears after the table.)
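
For concreteness, the quoted 70/20/10 protocol (70% for training, two-thirds of the remaining 30% for validation, the final third for testing) can be sketched as below. This is a minimal sketch: the paper does not state how samples are ordered, so the shuffling and seed here are assumptions.

```python
import numpy as np

def split_indices(n_samples, seed=0):
    """70% train, ~20% validation, ~10% test, per the quoted protocol.
    The random permutation and seed are assumptions, not from the paper."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    n_train = int(0.7 * n_samples)
    n_val = (n_samples - n_train) * 2 // 3  # two-thirds of the remaining 30%
    train = idx[:n_train]
    val = idx[n_train:n_train + n_val]
    test = idx[n_train + n_val:]            # the final third of the remainder
    return train, val, test
```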
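
Assuming the quoted layer sizes compose as stated, a minimal PyTorch sketch of the described components and optimizer might look like the following. The view dimensions, view count M, and class count C are illustrative placeholders, the exact placement of batch normalization is not given in the quote (so it is omitted), and how ψ's interaction features feed into φ is not specified here, so only the individual modules are shown. This is not the authors' released code.

```python
import torch
import torch.nn as nn

M, C = 3, 20               # number of views and classes: illustrative values
view_dims = [48, 40, 254]  # per-view input dimensions dv: illustrative values

# fv: one embedding network per view, dv -> 400 -> 200 with ReLU (as quoted)
f = nn.ModuleList(
    nn.Sequential(nn.Linear(d, 400), nn.ReLU(),
                  nn.Linear(400, 200), nn.ReLU())
    for d in view_dims
)

# ψ: maps a flattened 200x200 pairwise interaction (200² inputs) to 200 units
psi = nn.Sequential(nn.Linear(200 * 200, 200), nn.ReLU())

# φ: concatenated 200*M view features -> 300 hidden units -> linear C-unit output
phi = nn.Sequential(nn.Linear(200 * M, 300), nn.ReLU(), nn.Linear(300, C))

# Adam configuration as quoted: learning rate 10⁻³, β1 = 0.5, β2 = 0.9
params = list(f.parameters()) + list(psi.parameters()) + list(phi.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3, betas=(0.5, 0.9))
```

Training would then iterate over mini-batches of one of the studied sizes (32, 64, 128, or 256) with this optimizer.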