General Partial Label Learning via Dual Bipartite Graph Autoencoder
Authors: Brian Chen, Bo Wu, Alireza Zareian, Hanwang Zhang, Shih-Fu Chang
AAAI 2020, pp. 10502-10509
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on two real-world datasets demonstrate that DB-GAE significantly outperforms the best baseline by an absolute 0.159 F1-score and 24.8% accuracy. We compare the proposed DB-GAE to other baselines on both GPLL and PLL benchmarks. Our method outperforms the best baseline by an absolute 0.158 F1-score and 24.7% accuracy. Experiments demonstrate that DB-GAE significantly outperforms strong baselines. |
| Researcher Affiliation | Academia | Brian Chen,1 Bo Wu,1 Alireza Zareian,1 Hanwang Zhang,2 Shih-Fu Chang1 1Columbia University, 2Nanyang Technological University {bc2754, bo.wu, az2407, sc250}@columbia.edu; hanwangzhang@ntu.edu.sg |
| Pseudocode | No | The paper describes the methods using text and mathematical equations, but it does not include any explicitly labeled pseudocode blocks or algorithm figures. |
| Open Source Code | No | The paper states: 'More details of the model architecture, parameter list, and analysis of the time complexity of the model will be included in the arXiv version.' This promises future availability of supplementary information but does not provide immediate access or a specific link to the source code. |
| Open Datasets | Yes | We evaluate the performance of our model for the automatic face naming problem on two real-world datasets: MPII-MD (Rohrbach et al. 2017) and M-VAD (Pini et al. 2019). |
| Dataset Splits | Yes | Since the M-VAD dataset has sufficient training data, we perform 10-fold cross-validation on the partial label learning method with a 9:1 train/test split. For our method, we only use the same testing data (10% of the data) because our method does not require additional training data. (See the split sketch after this table.) |
| Hardware Specification | No | The paper mentions: 'We run 1000 epochs on both datasets with the runtime of 10min on MPII-MD dataset and 5hr 10min on the M-VAD dataset on CPU.' It specifies 'on CPU' but does not provide any specific CPU model, number of cores, or other detailed hardware specifications. |
| Software Dependencies | No | The paper mentions software components and tools like 'Adam optimizer (Kingma and Ba 2014)' and 'FaceNet (Schroff, Kalenichenko, and Philbin 2015)', but it does not provide specific version numbers for any of the software dependencies or libraries used for the experiments. |
| Experiment Setup | Yes | In DB-GAE, we use the Adam optimizer (Kingma and Ba 2014) with a learning rate of 0.001. The graph convolution layer (with ReLU) has a size of 1000, and the dense layer has a size of 100. We run 1000 epochs on both datasets. (A configuration sketch follows the table.) |
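For readers trying to reproduce the evaluation protocol, the following is a minimal sketch of the 10-fold cross-validation with a 9:1 train/test split described in the Dataset Splits row. The `samples` array is a hypothetical stand-in for the M-VAD instances; the real features and partial label sets are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import KFold

# Hypothetical placeholder for the M-VAD instances; the paper's actual
# features and candidate label sets are not reproduced here.
samples = np.arange(1000)

kf = KFold(n_splits=10, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kf.split(samples)):
    # Baseline PLL methods train on the 90% split. Per the paper, DB-GAE
    # itself is evaluated on the 10% test split only, since it does not
    # require additional training data.
    train, test = samples[train_idx], samples[test_idx]
    print(f"fold {fold}: {len(train)} train / {len(test)} test")
```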
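The Experiment Setup row pins down the reported hyperparameters: Adam with a learning rate of 0.001, a graph convolution layer of width 1000 with ReLU, a dense layer of width 100, and 1000 training epochs. The sketch below wires those numbers into a toy training loop. The layer composition, the placeholder adjacency matrix, and the loss are illustrative assumptions, not the authors' DB-GAE architecture or objective.

```python
import torch
import torch.nn as nn

class SimpleGraphEncoder(nn.Module):
    """Toy encoder using the reported layer widths: GCN 1000 -> dense 100."""

    def __init__(self, in_dim, gcn_dim=1000, dense_dim=100):
        super().__init__()
        self.gcn_weight = nn.Linear(in_dim, gcn_dim, bias=False)
        self.dense = nn.Linear(gcn_dim, dense_dim)
        self.relu = nn.ReLU()

    def forward(self, x, a_hat):
        # Plain first-order graph convolution: A_hat @ X @ W, then ReLU.
        h = self.relu(a_hat @ self.gcn_weight(x))
        return self.dense(h)

# Toy inputs standing in for node features and a normalized adjacency matrix.
n_nodes, in_dim = 32, 128
x = torch.randn(n_nodes, in_dim)
a_hat = torch.eye(n_nodes)  # placeholder; DB-GAE uses real graph structure

model = SimpleGraphEncoder(in_dim)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # lr from the paper

for epoch in range(1000):  # 1000 epochs, as reported
    optimizer.zero_grad()
    z = model(x, a_hat)
    loss = z.pow(2).mean()  # placeholder loss; the paper's objective differs
    loss.backward()
    optimizer.step()
```

Note that the paper reports CPU-only runtimes (10 min on MPII-MD, 5 hr 10 min on M-VAD), so a sketch like this should be feasible without GPU hardware, though exact timings will depend on the unspecified CPU.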