Jointly Imputing Multi-View Data with Optimal Transport
Authors: Yangyang Wu, Xiaoye Miao, Xinyu Huang, Jianwei Yin
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on several real-world multi-view data sets demonstrate that Git yields over 35% accuracy gain compared to the state-of-the-art approaches. |
| Researcher Affiliation | Academia | Yangyang Wu1, Xiaoye Miao1*, Xinyu Huang3, Jianwei Yin1,2 1 Center for Data Science, Zhejiang University, Hangzhou, China 2 College of Computer Science, Zhejiang University, Hangzhou, China 3 Data Science Institute, Columbia University, New York, USA {zjuwuyy, miaoxy}@zju.edu.cn, xh2511@columbia.edu, zjuyjw@cs.zju.edu.cn |
| Pseudocode | Yes | Algorithm 1 gives the pseudo-code of Git. |
| Open Source Code | No | The paper does not provide any explicit statement or link regarding the open-sourcing of the code for the described methodology. |
| Open Datasets | Yes | In the experiments, we use five public real-world multi-view datasets. In particular, the mixed national institute of standards and technology dataset (MNIST) is a widely known benchmark hand-written digit dataset... The Caltech-UCSD Birds-200-2011 dataset (CUB)... The busy city street multi-view video dataset (City Street)... The karolinska directed emotional faces dataset (KDEF)... The Database of Faces dataset (ORL) (Yan et al. 2021). |
| Dataset Splits | Yes | For each dataset, we randomly choose 10% samples for the test, 10% samples for validation, and the rest for training. |
| Hardware Specification | Yes | The experiments were conducted on an Intel Core 2.80GHz server with a TITAN Xp 12GiB GPU and 192GB RAM, running Ubuntu 18.04. |
| Software Dependencies | No | The paper states 'All algorithms were implemented in Python.' and mentions 'The ADAM algorithm is utilized to train networks.' However, it does not provide specific version numbers for Python or any libraries/frameworks used. |
| Experiment Setup | Yes | For all multi-view imputation methods, the learning rate is 0.001, the dropout rate is 0.1, and the batch size is 16. The ADAM algorithm is utilized to train networks. The training epoch is 50, 30, 30, and 500, over MNIST, CUB, City Street, and KDEF, respectively. In Git, the hyperparameter α is 0.7, β is 100, and the MED module's iteration count k is 2. |
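
The reported data split (10% test, 10% validation, rest for training) and the stated hyperparameters can be sketched as follows. This is a minimal illustration, not the authors' code: the function name `split_indices` and the `CONFIG` dictionary are assumptions introduced here for clarity.

```python
import random

def split_indices(n_samples, test_frac=0.1, val_frac=0.1, seed=0):
    """Randomly partition sample indices into test, validation, and train sets,
    following the paper's 10%/10%/80% split (seed is an assumption; the paper
    does not report one)."""
    rng = random.Random(seed)
    idx = list(range(n_samples))
    rng.shuffle(idx)
    n_test = int(n_samples * test_frac)
    n_val = int(n_samples * val_frac)
    return idx[:n_test], idx[n_test:n_test + n_val], idx[n_test + n_val:]

# Hyperparameters as reported; the dictionary layout itself is hypothetical.
CONFIG = {
    "learning_rate": 1e-3,
    "dropout": 0.1,
    "batch_size": 16,
    "optimizer": "Adam",
    "epochs": {"MNIST": 50, "CUB": 30, "City Street": 30, "KDEF": 500},
    "alpha": 0.7,          # Git hyperparameter α
    "beta": 100,           # Git hyperparameter β
    "med_iterations_k": 2, # MED module iteration count k
}

test, val, train = split_indices(1000)
print(len(test), len(val), len(train))  # 100 100 800
```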