Safe Multi-View Deep Classification
Authors: Wei Liu, Yufei Chen, Xiaodong Yue, Changqing Zhang, Shaorong Xie
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experiments on various kinds of multi-view datasets validate that the proposed SMDC method can achieve precise and safe classification results. In this section, we extensively evaluate the proposed method on real-world multi-view datasets and compare it with existing multi-view classification methods. |
| Researcher Affiliation | Collaboration | 1 College of Electronics and Information Engineering, Tongji University, Shanghai, China 2 School of Computer Engineering and Science, Shanghai University, Shanghai, China 3 Artificial Intelligence Institute of Shanghai University, Shanghai, China 4 College of Intelligence and Computing, Tianjin University, Tianjin, China 5 VLN Lab, NAVI Med Tech Co., Ltd. Shanghai, China |
| Pseudocode | Yes | Algorithm 1: Algorithm for Safe Multi-View Deep Classification (SMDC) |
| Open Source Code | No | The paper does not provide any explicit statements about the release of source code or links to a code repository for the described methodology. |
| Open Datasets | Yes | We conduct experiments on six real-world multi-view datasets as follows: Handwritten (Van Breukelen et al. 1998), Scene15 (Fei-Fei and Perona 2005), Animal (Lampert, Nickisch, and Harmeling 2013), Caltech101 (Fei-Fei, Fergus, and Perona 2004), CUB (Wah et al. 2011) and HMDB (Kuehne et al. 2011). |
| Dataset Splits | Yes | We then use 5-fold cross-validation to select the learning rate from {1e-4, 3e-4, 1e-3, 3e-3}. For all datasets, 20% of samples are used as test sets. |
| Hardware Specification | Yes | The model is implemented by PyTorch on one NVIDIA A100 GPU with 40GB memory. |
| Software Dependencies | No | The paper states 'The model is implemented by PyTorch', but it does not specify a version number for PyTorch or any other software dependency, which is required for reproducibility. |
| Experiment Setup | Yes | The Adam optimizer (Kingma and Ba 2014) is used to train the network, where l2-norm regularization is set to 1e-5. We then use 5-fold cross-validation to select the learning rate from {1e-4, 3e-4, 1e-3, 3e-3}. |
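The tuning protocol reported above (hold out 20% of samples as a test set, then pick the learning rate from {1e-4, 3e-4, 1e-3, 3e-3} by 5-fold cross-validation, with l2 regularization 1e-5) can be sketched as follows. This is a minimal illustration, not the authors' code: the synthetic dataset and the plain logistic-regression model are stand-ins for the SMDC network, and only the split/grid-search structure mirrors the paper.

```python
import numpy as np

# Illustrative synthetic data standing in for a multi-view dataset.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 5))
y = (X @ rng.normal(size=5) > 0).astype(float)

# 20% of samples held out as the test set, as in the paper.
n_test = int(0.2 * len(X))
X_test, y_test = X[:n_test], y[:n_test]
X_tr, y_tr = X[n_test:], y[n_test:]

def train(Xt, yt, lr, l2=1e-5, epochs=200):
    """Gradient-descent logistic regression with l2 penalty (paper uses 1e-5)."""
    w = np.zeros(Xt.shape[1])
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-Xt @ w))
        grad = Xt.T @ (p - yt) / len(yt) + l2 * w
        w -= lr * grad
    return w

def accuracy(w, Xv, yv):
    return float(np.mean((1.0 / (1.0 + np.exp(-Xv @ w)) > 0.5) == yv))

# 5-fold cross-validation over the paper's learning-rate grid.
folds = np.array_split(np.arange(len(X_tr)), 5)
best_lr, best_acc = None, -1.0
for lr in [1e-4, 3e-4, 1e-3, 3e-3]:
    fold_accs = []
    for k in range(5):
        val_idx = folds[k]
        tr_idx = np.concatenate([folds[j] for j in range(5) if j != k])
        w = train(X_tr[tr_idx], y_tr[tr_idx], lr)
        fold_accs.append(accuracy(w, X_tr[val_idx], y_tr[val_idx]))
    mean_acc = float(np.mean(fold_accs))
    if mean_acc > best_acc:
        best_lr, best_acc = lr, mean_acc

# Retrain on the full training split with the selected rate, evaluate once on test.
w_final = train(X_tr, y_tr, best_lr)
test_acc = accuracy(w_final, X_test, y_test)
print(best_lr, test_acc)
```

The optimizer here is plain gradient descent rather than Adam, purely to keep the sketch dependency-free; in a PyTorch reproduction the inner `train` call would be replaced by the network trained with `torch.optim.Adam(..., weight_decay=1e-5)`.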