Progressive Deep Multi-View Comprehensive Representation Learning

Authors: Cai Xu, Wei Zhao, Jinglong Zhao, Ziyu Guan, Yaming Yang, Long Chen, Xiangyu Song

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments conducted on a synthetic toy dataset and 4 real-world datasets show that PDMF outperforms state-of-the-art baseline methods.
Researcher Affiliation | Academia | (1) School of Computer Science and Technology, Xidian University, China; (2) Xi'an University of Posts and Telecommunications, China; (3) Swinburne University of Technology, Melbourne, Australia
Pseudocode | No | The paper describes the method in detail but does not include any pseudocode or algorithm blocks.
Open Source Code | Yes | The code is released at https://github.com/winterant/PDMF.
Open Datasets | Yes | The Handwritten Dataset consists of features of handwritten numbers. It contains 10 categories (handwritten digits 0-9) with 200 images in each category and 6 types of image features, which are used as 6 views in the experiments. The CUB (Caltech-UCSD Birds) Dataset contains 11788 bird images associated with text descriptions of 200 categories. The Scene15 Dataset (Fei-Fei and Perona 2005) contains 4485 images from 15 indoor and outdoor scene categories; three kinds of features, i.e., 1536-D GIST descriptors, 3780-D HOG histograms and 4096-D LBP features, are extracted as three views. The UCIA (UCI Activity) Dataset is a sequential multi-sensor dataset. It consists of sensor data for 19 different activities such as standing, sitting, etc. It contains 9120 instances with 5 views; each instance contains 9 (dimensions) × 125 (timestamps) features per view.
Dataset Splits | Yes | Each dataset is randomly divided into a training set (80%), a validation set (10%) and a test set (10%).
Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments, such as GPU or CPU models.
Software Dependencies | No | The paper states 'We use stochastic gradient descent and apply Adam for training.' but does not specify versions for any software or libraries.
Experiment Setup | Yes | The learning rate is set to 1e-5. All the hyperparameters of PDMF and the baselines are selected based on the validation set. The averaged performance is reported by running each test case five times.
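
The split, optimizer, and repetition protocol reported in the rows above can be summarized in a short sketch. The following is a minimal, hypothetical PyTorch illustration, not the authors' released code (the official implementation is at https://github.com/winterant/PDMF): the toy two-view data, the linear stand-in model, the batch size, and the epoch count are assumptions for illustration only; only the random 80/10/10 split, the Adam optimizer with learning rate 1e-5, and the five-run averaging come from the paper.

```python
import torch
from torch import nn
from torch.utils.data import TensorDataset, random_split, DataLoader

# Placeholder two-view data (e.g., two feature types extracted from the same samples);
# the real datasets above provide 3-6 views with their own dimensionalities.
view1 = torch.randn(2000, 1536)
view2 = torch.randn(2000, 3780)
labels = torch.randint(0, 15, (2000,))

dataset = TensorDataset(view1, view2, labels)
n = len(dataset)
n_train, n_val = int(0.8 * n), int(0.1 * n)
train_set, val_set, test_set = random_split(            # random 80/10/10 split
    dataset, [n_train, n_val, n - n_train - n_val])
# val_set would be used for hyperparameter selection, as reported in the paper.

accuracies = []
for run in range(5):                                     # results averaged over five runs
    torch.manual_seed(run)
    model = nn.Linear(1536 + 3780, 15)                   # stand-in for the PDMF network
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-5)  # reported learning rate

    train_loader = DataLoader(train_set, batch_size=128, shuffle=True)
    for epoch in range(10):                              # epoch count is a placeholder
        for x1, x2, y in train_loader:
            optimizer.zero_grad()
            loss = nn.functional.cross_entropy(model(torch.cat([x1, x2], dim=1)), y)
            loss.backward()
            optimizer.step()

    x1, x2, y = next(iter(DataLoader(test_set, batch_size=len(test_set))))
    with torch.no_grad():
        preds = model(torch.cat([x1, x2], dim=1)).argmax(dim=1)
    accuracies.append((preds == y).float().mean().item())

print("mean test accuracy over 5 runs:", sum(accuracies) / len(accuracies))
```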