Trusted Multi-View Classification

Authors: Zongbo Han, Changqing Zhang, Huazhu Fu, Joey Tianyi Zhou

ICLR 2021

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Extensive experimental results validate the effectiveness of the proposed model in accuracy, reliability and robustness." |
| Researcher Affiliation | Academia | Zongbo Han, Changqing Zhang (College of Intelligence and Computing, Tianjin University, Tianjin, China; {zongbo,zhangchangqing}@tju.edu.cn); Huazhu Fu (Inception Institute of Artificial Intelligence, Abu Dhabi, UAE; hzfu@ieee.org); Joey Tianyi Zhou (Institute of High Performance Computing, A*STAR, Singapore; joey.tianyi.zhou@gmail.com) |
| Pseudocode | Yes | "The optimization process for the proposed model is summarized in Algorithm 1 (in the Appendix)." |
| Open Source Code | No | No statement about releasing source code for the described methodology, and no link to a code repository, was found. |
| Open Datasets | Yes | "In this section, we conduct experiments on six real-world datasets: Handwritten, CUB (Wah et al., 2011), Caltech101 (Fei-Fei et al., 2004), PIE, Scene15 (Fei-Fei & Perona, 2005) and HMDB (Kuehne et al., 2011)." |
| Dataset Splits | Yes | "The training, validation and test sets are split 8:1:1 for all datasets." (See the split sketch after this table.) |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used to run the experiments were found in the paper. |
| Software Dependencies | No | No software dependencies with specific version numbers were found. |
| Experiment Setup | Yes | "For all datasets, we train the network for 200 epochs using the Adam (Kingma & Ba, 2014) optimizer with an initial learning rate of 0.001. The learning rate is decayed by a factor of 0.1 every 50 epochs. The weight decay for all methods is set to 5 × 10⁻⁴. The balance factor λt in Eq. 10 starts at 0.001 and is gradually increased by 10/200 each epoch, up to a maximum of 1.0." (See the configuration sketch after this table.) |
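The paper states the 8:1:1 ratio but not how the split was drawn. The sketch below is a minimal Python illustration of such a split; the helper name `split_811`, the use of NumPy, and the fixed seed are assumptions for illustration, not details from the paper.

```python
import numpy as np

def split_811(n_samples: int, seed: int = 0):
    """Hypothetical helper: shuffle indices and cut them 8:1:1 into
    train/validation/test, matching the ratio reported in the paper.
    The RNG and seed handling are assumptions; the paper does not
    specify how the split was drawn."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(0.8 * n_samples)
    n_val = int(0.1 * n_samples)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# Example: 2000 samples -> 1600 train, 200 validation, 200 test indices.
train_idx, val_idx, test_idx = split_811(2000)
```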
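The quoted experiment setup maps onto a standard PyTorch configuration. The sketch below is a hedged reconstruction, not the authors' code: the placeholder model, the `StepLR` scheduler, and the additive reading of the λt schedule (start at 0.001, add 10/200 per epoch, cap at 1.0) are assumptions consistent with the quoted text.

```python
import torch

# Placeholder model standing in for the multi-view network (not specified here).
model = torch.nn.Linear(64, 10)

# Adam with lr = 0.001 and weight decay 5e-4, as quoted from the paper.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=5e-4)

# Decay the learning rate by a factor of 0.1 every 50 epochs.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=50, gamma=0.1)

def annealing_coef(epoch: int, start: float = 1e-3,
                   step: float = 10 / 200, cap: float = 1.0) -> float:
    """Assumed additive reading of the lambda_t schedule in Eq. 10:
    start at 0.001, grow by 10/200 per epoch, saturate at 1.0."""
    return min(cap, start + step * epoch)

for epoch in range(200):
    lambda_t = annealing_coef(epoch)
    # Per-batch training elided; the overall loss would weight the KL
    # regularizer by lambda_t, i.e. loss = task_loss + lambda_t * kl_term.
    optimizer.step()   # stands in for the per-batch parameter updates
    scheduler.step()
```

Under this reading, λt reaches its cap of 1.0 around epoch 20 and stays there for the remaining 180 epochs.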