Deep Incomplete Multi-View Learning Network with Insufficient Label Information

Authors: Zhangqi Jiang, Tingjin Luo, Xinyan Liang

Venue: AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experimental results demonstrate the effectiveness of our DIMvLN, attaining noteworthy performance improvements compared to state-of-the-art competitors on several public benchmark datasets."
Researcher Affiliation | Academia | College of Science, National University of Defense Technology, Changsha 410073, Hunan, China; Institute of Big Data Science and Industry, Shanxi University, Taiyuan 030006, Shanxi, China
Pseudocode | Yes | "Algorithm 1: Training Strategy of DIMvLN"
Open Source Code | No | "Code will be available at GitHub."
Open Datasets | Yes | "We conduct experiments on six public multi-view datasets as follows. Caltech101-20: It contains 2,386 images of 20 objects. Following (Lin et al. 2022), we select HOG and GIST features as two views. CUB: It includes 11,788 samples belonging to 200 bird species. Following (Zhang et al. 2019), we select the top 10 bird species with two views. Wikipedia: It contains image and text features from 2,866 documents on 29 topics. Following (Wang, Yang, and Li 2016), the top 10 most popular topics are selected for our experiment. ALOI: It collects 110,250 images for 1,000 small objects. Following (Huang, Wang, and Lai 2023), we use a subset that contains 10,800 images of 100 objects with four views. Out-Scene: It contains 4,485 images of 15 scene categories. Following (Huang, Wang, and Lai 2023), we select 8 outdoor categories with a total of 2,688 images with four views. Animal: It contains 30,475 images of 50 animals, and we use the subset of 11,673 images from the first 20 animals with four views."
Dataset Splits | Yes | "Each dataset can be split into training, validation, and test sets in the ratio of 7:1:2." (A split sketch follows the table.)
Hardware Specification | Yes | "Our model is implemented by PyTorch on one NVIDIA GeForce A100 GPU with 40GB memory."
Software Dependencies | No | The paper states, "Our model is implemented by PyTorch", but does not specify the version of PyTorch or of any other software dependency.
Experiment Setup | Yes | "The Adam optimizer with an initial learning rate of 0.0001 is used for optimization on all datasets. The k-NN graphs are constructed with the Euclidean distance metric, where the neighbor number k is fixed to 10 for all datasets. In our experiments, we simply fix α to 9. [...] these two parameters [λ1 and λ2] are fixed to 1." (A setup sketch follows the table.)
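
The 7:1:2 split reported under Dataset Splits is simple to reproduce. Below is a minimal sketch using PyTorch's random_split; the splitting utility, random seed, and placeholder tensors are assumptions, since the paper only states the ratio.

```python
import torch
from torch.utils.data import TensorDataset, random_split

# Placeholder data sized like Caltech101-20 (2,386 samples, 20 classes);
# the paper's actual loaders are not released.
features = torch.randn(2386, 512)
labels = torch.randint(0, 20, (2386,))
dataset = TensorDataset(features, labels)

# 7:1:2 train/validation/test split, as stated in the paper.
n = len(dataset)
n_train, n_val = int(0.7 * n), int(0.1 * n)
n_test = n - n_train - n_val  # remainder absorbs rounding error

generator = torch.Generator().manual_seed(0)  # seed is an assumption; none is reported
train_set, val_set, test_set = random_split(
    dataset, [n_train, n_val, n_test], generator=generator
)
```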
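The optimizer and graph settings under Experiment Setup can likewise be mirrored directly. The sketch below pairs torch.optim.Adam (learning rate 1e-4, as reported) with a k-NN graph built by scikit-learn's kneighbors_graph (k = 10, Euclidean metric, as reported); the placeholder network and feature matrix are assumptions, and the DIMvLN objective with λ1 = λ2 = 1 is not reproduced here.

```python
import numpy as np
import torch
from sklearn.neighbors import kneighbors_graph

# Adam with the reported initial learning rate of 1e-4; the network is a stand-in,
# since the DIMvLN encoder architecture is not published with the paper.
model = torch.nn.Linear(512, 64)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# k-NN graph for one view: k = 10 neighbors under the Euclidean metric, as reported.
view_features = np.random.randn(2386, 512)  # stand-in for a single view's features
adjacency = kneighbors_graph(
    view_features, n_neighbors=10, metric="euclidean", mode="connectivity"
)
adjacency = adjacency.maximum(adjacency.T)  # symmetrize for an undirected graph
```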