CPM-Nets: Cross Partial Multi-View Networks
Authors: Changqing Zhang, Zongbo Han, Yajie Cui, Huazhu Fu, Joey Tianyi Zhou, Qinghua Hu
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 3 experiments. Quoted evidence: "We conduct experiments on the following datasets: ORL, PIE, Yale B, CUB, Handwritten, Animal"; "We compared the proposed CPM-Nets with the following methods:"; "From the results in Fig. 2, we have the following observations:" |
| Researcher Affiliation | Collaboration | (1) College of Intelligence and Computing, Tianjin University, Tianjin, China; (2) Tianjin Key Lab of Machine Learning, Tianjin, China; (3) Inception Institute of Artificial Intelligence, Abu Dhabi, UAE; (4) Institute of High Performance Computing, A*STAR, Singapore |
| Pseudocode | Yes | Algorithm 1: Algorithm for CPM-Nets |
| Open Source Code | No | The paper does not provide concrete access to source code, nor does it state that the code is publicly available. |
| Open Datasets | Yes | ORL: https://www.cl.cam.ac.uk/research/dtg/attarchive/facedatabase.html; PIE: http://www.cs.cmu.edu/afs/cs/project/PIE/MultiPie/Multi-Pie/Home.html; Handwritten: https://archive.ics.uci.edu/ml/datasets/Multiple+Features; CUB [38]: contains different categories of birds, where the first 10 categories are used, with deep visual features from GoogLeNet and text features from doc2vec [39] as the two views. |
| Dataset Splits | Yes | For all methods, we tune the parameters with 5-fold cross validation. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions models like GoogLeNet, doc2vec, DECAF, and VGG19, but does not provide specific software dependencies with version numbers for the implementation or experiments. |
| Experiment Setup | Yes | For our CPM-Nets, we set the dimensionality (K) of the latent representation from {64, 128, 256} and tune the parameter λ from the set {0.1, 1, 10} for all datasets. We run 10 times for each method to report the mean values and standard deviations. Please refer to the supplementary material for the details of network architectures and parameter settings. |
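The dataset-split and experiment-setup rows describe the same protocol: tune K ∈ {64, 128, 256} and λ ∈ {0.1, 1, 10} via 5-fold cross validation, then run each method 10 times and report mean and standard deviation. The sketch below illustrates that protocol only; `train_and_score` is a hypothetical placeholder (it is not the authors' CPM-Nets code), the dummy data and the use of scikit-learn's `KFold` are our assumptions, and the reported numbers are meaningless stand-ins.

```python
# Hedged sketch of the tuning/evaluation protocol summarized above:
# 5-fold cross validation over K in {64, 128, 256} and lambda in {0.1, 1, 10},
# then 10 repeated runs reporting mean +/- standard deviation.
import itertools
import numpy as np
from sklearn.model_selection import KFold


def train_and_score(X_tr, y_tr, X_va, y_va, K, lam):
    """Hypothetical stand-in for training CPM-Nets and returning validation accuracy."""
    rng = np.random.default_rng(K + int(10 * lam))
    return rng.uniform(0.5, 1.0)  # dummy accuracy, for illustration only


def tune_hyperparameters(X, y, n_splits=5):
    """Pick (K, lambda) by 5-fold cross validation, as stated in the paper."""
    best_params, best_acc = None, -np.inf
    for K, lam in itertools.product([64, 128, 256], [0.1, 1.0, 10.0]):
        fold_accs = []
        kf = KFold(n_splits=n_splits, shuffle=True, random_state=0)
        for tr_idx, va_idx in kf.split(X):
            fold_accs.append(
                train_and_score(X[tr_idx], y[tr_idx], X[va_idx], y[va_idx], K, lam)
            )
        mean_acc = float(np.mean(fold_accs))
        if mean_acc > best_acc:
            best_params, best_acc = (K, lam), mean_acc
    return best_params


if __name__ == "__main__":
    X = np.random.randn(200, 32)
    y = np.random.randint(0, 10, size=200)
    K, lam = tune_hyperparameters(X, y)
    # "We run 10 times for each method to report the mean values and standard deviations."
    scores = [train_and_score(X, y, X, y, K, lam) for _ in range(10)]
    print(f"K={K}, lambda={lam}: {np.mean(scores):.3f} +/- {np.std(scores):.3f}")
```

Swapping the placeholder for an actual CPM-Nets training routine (and real dataset splits) would turn this skeleton into a reproduction harness; the grid values and fold count are the only details taken directly from the paper.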