Trusted Multi-view Learning with Label Noise
Authors: Cai Xu, Yilin Zhang, Ziyu Guan, Wei Zhao
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically compare TMNR with state-of-the-art trusted multi-view learning and label-noise learning baselines on 5 publicly available datasets. Experimental results show that TMNR outperforms the baseline methods in accuracy, reliability, and robustness. |
| Researcher Affiliation | Academia | School of Computer Science and Technology, Xidian University |
| Pseudocode | Yes | Algorithm 1 TMNR algorithm |
| Open Source Code | Yes | The code and appendix are released at https://github.com/YilinZhang107/TMNR. |
| Open Datasets | Yes | UCI contains features for handwritten numerals (0-9). The averages of pixels in 240 windows, 47 Zernike moments, and 6 morphological features are used as 3 views. PIE consists of 680 face images of 68 subjects. We extracted 3 views from it: intensity, LBP, and Gabor. BBC includes 685 documents from BBC News that can be categorised into 5 categories and are depicted by 4 views. Caltech101 contains 8677 images from 101 categories; features extracted with 6 different methods (Gabor, Wavelet Moments, CENTRIST, HOG, GIST, and LBP) serve as the views, and we chose the first 20 categories. Leaves100 consists of 1600 leaf samples from 100 plant species. We extracted shape descriptors, fine-scale edges, and texture histograms as 3 views. |
| Dataset Splits | No | No validation split is described; the paper states only that 'In all datasets, 20% of the instances are split as the test set.' A hedged sketch of such a split appears after this table. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running experiments were mentioned in the paper. |
| Software Dependencies | Yes | We implement all methods on the PyTorch 1.13 framework. |
| Experiment Setup | Yes | We utilize the Adam optimizer with a learning rate of 1e-3 and l2-norm regularization set to 1e-5. A PyTorch sketch of these settings appears after this table. |
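The paper reports only an 80/20 train/test split with no validation set. As a rough illustration of such a protocol for multi-view data, the sketch below applies one shared index permutation to every view so the views stay aligned; the seed, the absence of stratification, and the helper name `split_multiview` are illustrative assumptions, not details from the paper.

```python
# Hypothetical sketch of the reported 80/20 split. The paper states only that
# 20% of instances form the test set; the seed and the shared-index strategy
# for keeping views aligned are assumptions made for illustration.
import numpy as np
from sklearn.model_selection import train_test_split

def split_multiview(views, labels, test_size=0.2, seed=0):
    """Split a list of per-view feature matrices with one shared permutation."""
    idx = np.arange(len(labels))
    train_idx, test_idx = train_test_split(
        idx, test_size=test_size, random_state=seed
    )
    train_views = [v[train_idx] for v in views]
    test_views = [v[test_idx] for v in views]
    return train_views, labels[train_idx], test_views, labels[test_idx]
```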
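The optimizer settings, by contrast, are stated explicitly (Adam, learning rate 1e-3, l2 regularization 1e-5) and map directly onto PyTorch's `weight_decay` argument. In the sketch below, the one-layer model is a placeholder standing in for the TMNR network, which is available in the authors' repository.

```python
# Reported optimizer configuration: Adam with lr=1e-3 and l2-norm
# regularization 1e-5 (expressed in PyTorch as weight_decay).
# The Linear module is a placeholder, not the TMNR architecture.
import torch

model = torch.nn.Linear(240, 10)  # placeholder network for a single view
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1e-3,            # learning rate stated in the paper
    weight_decay=1e-5,  # l2 regularization stated in the paper
)
```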