Multi-Level Confidence Learning for Trustworthy Multimodal Classification

Authors: Xiao Zheng, Chang Tang, Zhiguo Wan, Chengyu Hu, Wei Zhang

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on four multimodal medical datasets are conducted to validate the superior performance of MLCLNet when compared to other state-of-the-art methods." "In this section, we compare the proposed MLCLNet with some other state-of-the-art classification methods on four real-world multimodal datasets. Extensive experimental results validate the superiority of our proposed network when compared with other counterparts. In addition, ablation studies are also conducted to demonstrate the effectiveness of different modules."
Researcher Affiliation | Academia | Xiao Zheng1, Chang Tang2, Zhiguo Wan3, Chengyu Hu2, Wei Zhang4. 1School of Computer, National University of Defense Technology, Changsha 410073, China; 2School of Computer Science, China University of Geosciences, Wuhan 430074, China; 3Zhejiang Lab, Hangzhou 311121, China; 4Shandong Computer Science Center (National Supercomputing Center in Jinan), Jinan 250000, China
Pseudocode | No | The paper describes its methods with mathematical formulations and descriptive text, but does not include a structured pseudocode or algorithm block.
Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository.
Open Datasets | Yes | "ROSMAP is used for Alzheimer's Disease diagnosis, which contains 351 samples of 2 classes (A Bennett et al. 2012; De Jager et al. 2018). ... BRCA, LGG, and KIPAN can be obtained from The Cancer Genome Atlas program (TCGA)": https://www.cancer.gov/about-nci/organization/ccg/research/structural-genomics/tcga
Dataset Splits | No | "Since each dataset needs to be partitioned into training and testing parts, similar to previous work (Wang et al. 2021; Han et al. 2022a), we run experiments 20 times and report the average results and standard deviations to avoid bias from the data partition."
Hardware Specification | Yes | "We gratefully acknowledge the support of NVIDIA Corporation with the donation of the Titan V GPU used for computation acceleration for this research."
Software Dependencies | No | "The Adam optimizer with learning rate decay is used for network training."
Experiment Setup | Yes | "The Adam optimizer with learning rate decay is used for network training. For each run on each dataset, we stop the training process at 1200 epochs and output the testing results."
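The reported setup states only "Adam with learning rate decay" and a fixed stop at 1200 epochs; the decay schedule itself is unspecified. A minimal sketch of one plausible schedule, assuming a simple per-epoch exponential decay (the `base_lr` and `decay_rate` values here are illustrative assumptions, not figures from the paper):

```python
def decayed_lr(epoch, base_lr=1e-3, decay_rate=0.99):
    """Learning rate at a given epoch under exponential decay.

    base_lr and decay_rate are placeholder assumptions; the paper
    only says 'Adam with learning rate decay' is used for training.
    """
    return base_lr * decay_rate ** epoch

# Training is stopped at 1200 epochs, so the schedule has 1200 entries.
schedule = [decayed_lr(e) for e in range(1200)]
```

Any monotonically decreasing schedule (step, cosine, exponential) would match the paper's description equally well; this choice is arbitrary.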