Learning Assistance from an Adversarial Critic for Multi-Outputs Prediction

Authors: Yue Deng, Yilin Shen, Hongxia Jin

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We show the performance and generalization ability of ACA on diverse learning tasks including multi-label classification, attributes prediction and sequence-to-sequence generation."
Researcher Affiliation | Industry | "Yue Deng, Yilin Shen and Hongxia Jin, AI Research Center, Samsung Research America. {y1.deng, yilin.shen, hongxia.jin}@samsung.com"
Pseudocode | Yes | "Algorithm 1: ACA optimization"
Open Source Code | No | The paper does not contain any statement about making its source code publicly available or providing a link to a code repository.
Open Datasets | Yes | "We evaluate the performances of ACA on multiple-label classification (MLC) for documents modeling on bibtex and bookmark datasets [Loza Mencía and Fürnkranz, 2008]. The bibtex dataset contains 7,395 samples from 159 classes; and bookmark dataset contains 87,856 samples within 208 classes. We also include the delicious dataset [Tsoumakas et al., 2008] in our experiment that contains 16,105 samples in 983 classes."
Dataset Splits | Yes | "We randomly sample 20% of the whole data for testing and the other 80% data are for training and validation."
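The reported split (a random 20% held out for testing, the remaining 80% for training and validation) can be sketched as follows. The paper does not say how the split was implemented, so this is a minimal stdlib-only illustration; the function name and seed are assumptions.

```python
import random

def split_dataset(samples, test_frac=0.2, seed=0):
    """Randomly hold out test_frac of the data for testing;
    the remaining samples are for training and validation."""
    rng = random.Random(seed)
    idx = list(range(len(samples)))
    rng.shuffle(idx)
    n_test = int(len(samples) * test_frac)
    test = [samples[i] for i in idx[:n_test]]
    train_val = [samples[i] for i in idx[n_test:]]
    return train_val, test

# e.g. for the 7,395-sample bibtex dataset:
train_val, test = split_dataset(list(range(7395)))
print(len(train_val), len(test))  # 5916 1479
```

Fixing the seed makes the split reproducible across runs, which is the property this checklist item is probing for.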
Hardware Specification | No | "All reported time is calculated by running our algorithm with TensorFlow on 8 GPUs." (The GPU model and other hardware details are not specified.)
Software Dependencies | No | "We implement all three network structures with TensorFlow and adopt the ADAM optimizer [Kingma and Ba, 2014] for optimization." (No library version numbers are given.)
Experiment Setup | Yes | "In ACA, the dimensions for help vector v_i and comparability fusion layer h(x_i, y_i) are both fixed as 64. The recurrent steps T is fixed as 5."
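The hyperparameters the paper does report can be collected into a single config fragment; a sketch only, with the key names chosen here for illustration (the paper gives the values but not a config format):

```python
# Settings reported in the paper; key names are assumptions.
ACA_CONFIG = {
    "help_vector_dim": 64,    # dimension of help vector v_i
    "fusion_layer_dim": 64,   # dimension of comparability fusion layer h(x_i, y_i)
    "recurrent_steps": 5,     # T, number of recurrent steps
    "optimizer": "adam",      # ADAM [Kingma and Ba, 2014]
}
```

Any other setting needed to rerun the experiments (learning rate, batch size, number of epochs) is not stated and would have to be tuned independently.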