Deep Multi-View Concept Learning

Authors: Cai Xu, Ziyu Guan, Wei Zhao, Yunfei Niu, Quan Wang, Zhiheng Wang

IJCAI 2018

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Experiments conducted on image and document datasets show that DMCL performs well and outperforms baseline methods. |
| Researcher Affiliation | Academia | State Key Lab of ISN, School of Computer Science and Technology, Xidian University; School of Computer Science and Technology, Xidian University; College of Computer Science and Technology, Henan Polytechnic University. {cxu 3@stu., zyguan@, ywzhao@mail., yfniu@stu., qwang@}xidian.edu.cn, wzhenry@eyou.com |
| Pseudocode | Yes | Algorithm 1: Optimization of DMCL; Algorithm 2: Composite Gradient Mapping |
| Open Source Code | No | The paper does not provide concrete access to source code, such as a repository link or an explicit statement of code release in supplementary materials. |
| Open Datasets | Yes | "Reuters [Amini et al., 2009]. It consists of 111,740 documents..."; "ImageNet [Deng et al., 2009]. It is a well-known real-world image database..." |
| Dataset Splits | Yes | "We use the holdout method [Han et al., 2011] for evaluation and tune model parameters by cross-validation on the training set. For each dataset, we randomly split the data items for each category and use 50% for training while the remaining 50% are reserved for test." |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cloud instances) used for running the experiments. |
| Software Dependencies | No | The paper does not list ancillary software with version numbers (e.g., library names with versions). |
| Experiment Setup | Yes | "Based on the results, we set α = 100, β = 0.015 and γ = 0.005 in other experiments. ... The layer sizes are set to [300 200 125]." |
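The reported evaluation protocol (a random per-category 50/50 holdout) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name, `seed`, and `train_frac` parameters are assumptions.

```python
import random
from collections import defaultdict

def holdout_split(labels, train_frac=0.5, seed=0):
    """Randomly split item indices per category (sketch of the 50/50 holdout).

    `labels` is a list mapping each item index to its category.
    Returns (train_indices, test_indices).
    """
    rng = random.Random(seed)
    by_category = defaultdict(list)
    for idx, label in enumerate(labels):
        by_category[label].append(idx)

    train, test = [], []
    for idxs in by_category.values():
        rng.shuffle(idxs)  # random split within each category
        cut = int(len(idxs) * train_frac)
        train.extend(idxs[:cut])
        test.extend(idxs[cut:])
    return train, test
```

Splitting within each category keeps the class proportions identical in both halves, which a single global shuffle would not guarantee.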
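The paper's Algorithm 2 is named "Composite Gradient Mapping". Its details are not reproduced in this report, but the standard technique that name refers to is a proximal-gradient update for objectives of the form f(x) + g(x) with smooth f and simple nonsmooth g. A generic sketch with g(x) = λ‖x‖₁ (the λ here is illustrative, not the paper's γ or β):

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||x||_1 (elementwise soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def composite_gradient_step(x, grad_f, lam, step):
    """One composite gradient mapping step for min_x f(x) + lam * ||x||_1.

    Generic sketch: take a gradient step on the smooth part f, then apply
    the proximal operator of the nonsmooth part. Not the paper's exact
    Algorithm 2, whose specifics are not given in this report.
    """
    return soft_threshold(x - step * grad_f(x), step * lam)
```

For example, minimizing 0.5·(x − 3)² + 1·|x| by iterating this step converges to the closed-form solution x* = 2.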