HexaGAN: Generative Adversarial Nets for Real World Classification

Authors: Uiwon Hwang, Dahuin Jung, Sungroh Yoon

ICML 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate the classification performance (F1-score) of the proposed method with 20% missingness and confirm up to a 5% improvement in comparison with the performance of combinations of state-of-the-art methods." ... "4. Experiments: Here, we present the performance of the proposed method. We used datasets from the UCI machine learning repository (Dheeru & Karra Taniskidou, 2017), including real world datasets (breast, credit, wine) and a synthetic dataset (madelon)."
Researcher Affiliation | Academia | Uiwon Hwang (1), Dahuin Jung (1), Sungroh Yoon (1,2). (1) Electrical and Computer Engineering, Seoul National University, Seoul, Korea; (2) ASRI, INMC, Institute of Engineering Research, Seoul National University, Seoul, Korea. Correspondence to: Sungroh Yoon <sryoon@snu.ac.kr>.
Pseudocode | Yes | Algorithm 1: Missing data imputation. Input: x, data with missing values sampled from D_l and D_u; m, vector indicating whether elements are missing; z, noise vector sampled from U(0, 1). Output: x̂, imputed data. (A minimal imputation sketch follows the table.)
Open Source Code | No | The paper does not provide an explicit statement about open-source code availability or a link to a code repository for the described methodology.
Open Datasets | Yes | "We used datasets from the UCI machine learning repository (Dheeru & Karra Taniskidou, 2017), including real world datasets (breast, credit, wine) and a synthetic dataset (madelon)." (A data-loading sketch follows the table.)
Dataset Splits | Yes | "We repeated each experiment 10 times and used 5-fold cross validation." (A cross-validation sketch follows the table.)
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, or memory) used to run the experiments.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | "We used 10 for both hyperparameters λ1 and α1 in our experiments. ... we set λ2 to 10, α2 to 1, and α3 to 0.01 in our experiments. ... where we used 0.1 for α4 in our experiments. We basically assume 20% missingness (MCAR) in the elements and labels of the UCI dataset and 50% in the elements of the MNIST dataset to cause missing data and missing label problems. Every element was scaled to a range of [0,1]." (A preprocessing sketch follows the table.)
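
Code sketches referenced in the table rows above follow. First, the Pseudocode row quotes only the header of Algorithm 1; below is a minimal Python/NumPy sketch of the imputation step it describes. The generator interface g_mi(x, m, z) and the mask convention (1 = observed, 0 = missing) are illustrative assumptions, not the authors' released code.

import numpy as np

def impute(x, m, g_mi, rng=None):
    # x    : (n, d) array; missing entries may hold arbitrary placeholder values
    # m    : (n, d) binary mask, 1 = observed, 0 = missing (assumed convention)
    # g_mi : callable (x, m, z) -> (n, d); stand-in for the paper's imputation generator
    if rng is None:
        rng = np.random.default_rng(0)
    z = rng.uniform(0.0, 1.0, size=x.shape)   # noise z sampled from U(0, 1), as in the algorithm input
    x_tilde = g_mi(x, m, z)                   # generator proposes a value for every element
    return m * x + (1.0 - m) * x_tilde        # keep observed entries, fill in the missing ones

# Toy usage with a placeholder generator that simply returns the noise.
x = np.array([[0.2, 0.0], [0.9, 0.4]])
m = np.array([[1.0, 0.0], [1.0, 1.0]])        # the (0, 1) entry is treated as missing
print(impute(x, m, lambda x, m, z: z))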
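
For the Open Datasets row, a minimal sketch of loading one of the listed UCI datasets; it uses the copy of the breast cancer data bundled with scikit-learn as a stand-in, so the exact files and preprocessing may differ from what the authors used.

from sklearn.datasets import load_breast_cancer

# Load the UCI breast cancer dataset from scikit-learn's bundled copy.
data = load_breast_cancer()
X, y = data.data, data.target
print(X.shape, y.shape)  # (569, 30) feature matrix and binary labels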
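
For the Dataset Splits row, a minimal sketch of the quoted protocol (10 independent repetitions of 5-fold cross validation). The stratified, shuffled splitter and the fit_and_score callback are assumptions, since the paper does not specify the protocol at this level of detail.

import numpy as np
from sklearn.model_selection import StratifiedKFold

def repeated_cv(X, y, fit_and_score, n_repeats=10, n_splits=5):
    # Run n_repeats independent rounds of n_splits-fold cross validation
    # and return the mean and standard deviation of the collected scores.
    scores = []
    for repeat in range(n_repeats):
        skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=repeat)
        for train_idx, test_idx in skf.split(X, y):
            scores.append(fit_and_score(X[train_idx], y[train_idx],
                                        X[test_idx], y[test_idx]))
    return float(np.mean(scores)), float(np.std(scores))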
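
For the Experiment Setup row, a minimal sketch of the quoted data preparation: per-feature scaling to [0, 1] followed by MCAR corruption at a chosen missing rate (0.2 for the UCI datasets, 0.5 for MNIST elements). Using NaN for missing entries and the 1 = observed mask convention are assumptions; the λ/α values are loss weights recorded only for reference.

import numpy as np

# Loss-weight hyperparameters quoted from the paper (not used in this sketch).
LAMBDA_1, ALPHA_1 = 10, 10
LAMBDA_2, ALPHA_2, ALPHA_3, ALPHA_4 = 10, 1, 0.01, 0.1

def scale_and_corrupt(X, missing_rate=0.2, seed=0):
    # Min-max scale each feature to [0, 1], guarding against constant columns.
    X_min, X_max = X.min(axis=0), X.max(axis=0)
    X_scaled = (X - X_min) / np.maximum(X_max - X_min, 1e-8)
    # MCAR mask: each element is dropped independently of the data values.
    rng = np.random.default_rng(seed)
    m = (rng.uniform(size=X_scaled.shape) >= missing_rate).astype(float)  # 1 = observed
    X_missing = np.where(m == 1, X_scaled, np.nan)
    return X_missing, m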