Reliable Conflictive Multi-View Learning

Authors: Cai Xu, Jiajun Si, Ziyu Guan, Wei Zhao, Yue Wu, Xiyue Gao

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments performed on 6 datasets verify the effectiveness of ECML.
Researcher Affiliation | Academia | School of Computer Science and Technology, Xidian University, China
Pseudocode | Yes | The model optimization is elaborated in Algorithm 1 (Technical Appendix).
Open Source Code | Yes | The code is released at https://github.com/jiajunsi/RCML.
Open Datasets | Yes | Hand Written comprises 2000 instances of handwritten numerals ranging from 0 to 9, with 200 patterns per class, represented using six feature sets. CUB consists of 11788 instances associated with text descriptions of 200 different categories of birds. HMDB is a large-scale human action recognition dataset containing 6718 instances from 51 action categories. Scene15 includes 4485 images from 15 indoor and outdoor scene categories. Caltech101 comprises 8677 images from 101 classes. PIE contains 680 instances belonging to 68 classes.
Dataset Splits | No | The paper defines a training set and a test set: "The training tuples $\{\{x_n^v\}_{v=1}^V, y_n\}_{n=1}^{N_{train}}$ contain $N_{train}$ normal instances. The other $N - N_{train}$ normal instances and $N$ conflictive instances form the test set." However, it does not explicitly mention a separate validation set or cross-validation for hyperparameter tuning or model selection. (A sketch of this split protocol appears after the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory, or cloud computing instance types) used for running the experiments.
Software Dependencies | No | The paper does not list specific software dependencies with their version numbers (e.g., the Python version, or library versions for PyTorch/TensorFlow).
Experiment Setup | Yes | The overall loss function for a specific instance $\{x_n^v\}_{v=1}^V$ can be calculated as $\mathcal{L} = \mathcal{L}_{acc}(\boldsymbol{\alpha}_n) + \beta \sum_{v=1}^{V} \mathcal{L}_{acc}(\boldsymbol{\alpha}_n^v) + \gamma \mathcal{L}_{con}$, where $\lambda_t = \min(1.0, t/T) \in [0, 1]$ is the annealing coefficient, $t$ is the index of the current training epoch, and $T$ is the annealing step. (See the loss sketch after the table.)
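For concreteness, here is a minimal sketch of the train/test split protocol quoted under Dataset Splits. The function name, index arrays, and seeding are illustrative assumptions, not the authors' released code:

```python
import numpy as np

def split_instances(normal_idx, conflictive_idx, n_train, seed=0):
    """Illustrative split following the quoted protocol: n_train shuffled
    normal instances form the training set; the remaining normal instances
    plus all conflictive instances form the test set. No validation split
    is made, mirroring what the paper describes."""
    rng = np.random.default_rng(seed)
    normal_idx = rng.permutation(np.asarray(normal_idx))
    train_idx = normal_idx[:n_train]
    test_idx = np.concatenate([normal_idx[n_train:], np.asarray(conflictive_idx)])
    return train_idx, test_idx
```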
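And a minimal sketch of how the quoted overall loss composes with the annealing coefficient. Here `l_acc` and `l_con` are hypothetical stand-ins for the accuracy and consistency terms the paper defines (see Algorithm 1 in its Technical Appendix); their signatures, and the choice to pass `lambda_t` into the accuracy term, are assumptions for illustration:

```python
def annealing_coefficient(t: int, T: int) -> float:
    # lambda_t = min(1.0, t / T): ramps linearly from 0 to 1 over the
    # first T training epochs, then stays at 1.0.
    return min(1.0, t / T)

def overall_loss(alpha_fused, alpha_views, y, l_acc, l_con, beta, gamma, lambda_t):
    # Accuracy term on the fused opinion alpha_n ...
    loss = l_acc(alpha_fused, y, lambda_t)
    # ... plus beta-weighted accuracy terms on each view-specific opinion ...
    loss = loss + beta * sum(l_acc(a, y, lambda_t) for a in alpha_views)
    # ... plus the gamma-weighted consistency term across views.
    loss = loss + gamma * l_con(alpha_views)
    return loss
```

A caller would recompute `lambda_t = annealing_coefficient(t, T)` at the start of each epoch `t` and pass it through, so any KL-style regularizer inside the accuracy term is phased in gradually.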