Robust Conditional GAN from Uncertainty-Aware Pairwise Comparisons

Authors: Ligong Han, Ruijiang Gao, Mun Kim, Xin Tao, Bo Liu, Dimitris Metaxas (pp. 10909–10916)

AAAI 2020

Reproducibility Variable Result LLM Response
Research Type Experimental Through extensive experiments, we show both qualitatively and quantitatively that PC-GAN performs comparably with fully-supervised methods and outperforms unsupervised baselines. Code and Supplementary can be found on the project website.
Researcher Affiliation Collaboration Ligong Han,1 Ruijiang Gao,2 Mun Kim,1 Xin Tao,3 Bo Liu,4 Dimitris Metaxas1 — 1Department of Computer Science, Rutgers University; 2McCombs School of Business, The University of Texas at Austin; 3Tencent YouTu Lab; 4JD Finance America Corporation
Pseudocode No The paper includes diagrams illustrating the model architecture and processes (e.g., Figure 1, 2, 3) but does not provide a separate, structured pseudocode or algorithm block.
Open Source Code Yes Code and Supplementary can be found on the project website. https://github.com/phymhan/pc-gan
Open Datasets Yes Annotated MNIST (Kim 2017) provides annotations of stroke thickness for the MNIST (LeCun et al. 1998) dataset. CACD (Chen, Chen, and Hsu 2014) is a large dataset collected for cross-age face recognition... UTKFace (Zhang and Qi 2017) is also a large-scale face dataset... SCUT-FBP (Xie et al. 2015) is specifically designed for facial beauty perception... CelebA (Liu et al. 2015) is a standard large-scale dataset for facial attribute editing.
Dataset Splits No The paper mentions using 400 images for training on SCUT-FBP and refers to 'train' and 'val' accuracies in Table 1, but it does not provide explicit or detailed train/validation/test splits for reproduction.
Hardware Specification No The paper does not provide specific hardware details such as GPU or CPU models, or cloud computing instance types, used for running the experiments.
Software Dependencies No The paper does not provide specific software dependencies or version numbers (e.g., Python, PyTorch, or CUDA versions) required for replication.
Experiment Setup Yes Finally, the full objective can be written as: L(G, D) = L_cGAN + λ_rec · L^y_rec + λ_cyc · L_cyc, where the λs control the relative importance of the corresponding losses. ... The tie margins within which two candidates are considered equal are 10, 10, and 0.4 for CACD, UTK, and SCUT-FBP, respectively.
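The quoted objective is a weighted sum of three loss terms. A minimal Python sketch of this composition is shown below; the function name and the placeholder loss/weight values are illustrative assumptions, not the authors' implementation:

```python
def full_objective(l_cgan, l_rec, l_cyc, lambda_rec=1.0, lambda_cyc=1.0):
    """Weighted-sum objective: L(G, D) = L_cGAN + λ_rec * L^y_rec + λ_cyc * L_cyc.

    The λ weights control the relative importance of the reconstruction
    and cycle-consistency terms against the conditional-GAN loss.
    """
    return l_cgan + lambda_rec * l_rec + lambda_cyc * l_cyc


# Usage with placeholder scalar loss values (hypothetical numbers):
total = full_objective(l_cgan=0.7, l_rec=0.2, l_cyc=0.1,
                       lambda_rec=10.0, lambda_cyc=5.0)
print(total)  # 0.7 + 10*0.2 + 5*0.1 = 3.2
```

In a training loop each argument would be a differentiable loss tensor rather than a float, but the weighting scheme is identical.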