Reciprocal Adversarial Learning via Characteristic Functions
Authors: Shengxi Li, Zeyang Yu, Min Xiang, Danilo Mandic
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 4 Experimental Results: In this section, our RCF-GAN is evaluated in terms of image generation, reconstruction and interpolation, with our code available at https://github.com/ShengxiLi/rcf_gan. We also show in the supplementary material advanced results including phase and amplitude analysis, ablation study and superior performances under the ResNet structure. Datasets: Three widely applied benchmark datasets were employed in the evaluation: CelebA (faces of celebrities) [44], CIFAR-10 [45] and LSUN Bedroom (LSUN_B) [46]. The images of CelebA and LSUN_B were cropped to the size 64 × 64, whilst the image size of CIFAR-10 was 32 × 32. (A hedged data-preprocessing sketch is given after this table.) |
| Researcher Affiliation | Academia | Shengxi Li, Zeyang Yu, Min Xiang, Danilo Mandic; Imperial College London; {shengxi.li17, z.yu17, m.xiang13, d.mandic}@imperial.ac.uk |
| Pseudocode | Yes | Algorithm 1: RCF-GAN. In all the experiments in this paper, the generator and the critic are trained once at each iteration. The optional t-net with parameter θt is designated by hθt(·). (A hedged sketch of the characteristic-function discrepancy underlying Algorithm 1 follows the table.) |
| Open Source Code | Yes | In this section, our RCF-GAN is evaluated in terms of image generation, reconstruction and interpolation, with our code available at https://github.com/ShengxiLi/rcf_gan. |
| Open Datasets | Yes | Datasets: Three widely applied benchmark datasets were employed in the evaluation: CelebA (faces of celebrities) [44], CIFAR-10 [45] and LSUN Bedroom (LSUN_B) [46]. |
| Dataset Splits | No | The paper mentions using 'test sets' for evaluation, but does not provide specific details on training, validation, and test dataset splits (e.g., exact percentages, sample counts, or citations to predefined validation splits). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions using the 'Adam' optimizer, but does not provide specific version numbers for any software dependencies, libraries, or programming languages (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | For a fair comparison, all the reported results were compared under batch sizes of 64 (i.e., b_d = b_g = b_t = b_σ = 64). Moreover, all variances of Gaussian noise were set to 1, except for the input noise of the generator, which was 0.3, because the reciprocal loss had to be minimised given that the output of the critic is restricted to [-1, 1]. Furthermore, we do not require the Lipschitz constraint, which allows for a relatively larger learning rate (lr = 0.0002 for both nets). Moreover, for the CIFAR-10 and LSUN_B datasets, the dimension of the embedded domain was set to 128, and for the CelebA dataset the dimension was 64. Our default RCF-GAN used the t-net and layer normalisation, and was trained with the vanilla CF loss (i.e., α = 0.5 in (6)). (These values are restated as a hedged configuration sketch after the table.) |
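As referenced in the Research Type row, the following is a minimal data-preprocessing sketch consistent with the quoted image sizes (CelebA and LSUN Bedroom cropped/resized to 64 × 64, CIFAR-10 kept at 32 × 32). It assumes standard torchvision datasets and transforms; the pre-resize crop sizes and the `make_loader` helper are illustrative assumptions, and the authors' repository may organise this differently.

```python
# Hypothetical preprocessing sketch; see https://github.com/ShengxiLi/rcf_gan for the authors' pipeline.
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

def make_loader(name, root, batch_size=64):
    """Build a DataLoader matching the quoted image sizes (helper name and crop sizes are assumptions)."""
    if name == "cifar10":
        # CIFAR-10 images are already 32 x 32 and are used at that size.
        tf = transforms.Compose([
            transforms.ToTensor(),
            transforms.Normalize((0.5,) * 3, (0.5,) * 3),  # scale pixels to [-1, 1]
        ])
        ds = datasets.CIFAR10(root, train=True, download=True, transform=tf)
    else:
        # CelebA / LSUN Bedroom: centre-crop (assumed crop sizes), then resize to 64 x 64.
        crop = 140 if name == "celeba" else 256
        tf = transforms.Compose([
            transforms.CenterCrop(crop),
            transforms.Resize(64),
            transforms.ToTensor(),
            transforms.Normalize((0.5,) * 3, (0.5,) * 3),
        ])
        ds = (datasets.CelebA(root, split="train", download=True, transform=tf)
              if name == "celeba"
              else datasets.LSUN(root, classes=["bedroom_train"], transform=tf))
    return DataLoader(ds, batch_size=batch_size, shuffle=True, drop_last=True)
```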
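As referenced in the Pseudocode row, the table only quotes the caption of Algorithm 1. As a reading aid, here is a hedged sketch of the empirical characteristic-function (CF) discrepancy that the RCF-GAN loss is built on: the plain squared difference of empirical CFs of the embedded real and generated samples, averaged over sampled frequencies t. The paper's weighted loss in (6) splits this into amplitude and phase terms with a coefficient α (α = 0.5 being the vanilla CF loss in the default setup), and the full method additionally uses a reciprocal (auto-encoding) loss in the embedded space, which this sketch omits. The function names and the choice of sampling t from a Gaussian are assumptions.

```python
# Hedged sketch of an empirical characteristic-function (CF) discrepancy in PyTorch.
# Not the authors' Algorithm 1; a plain squared CF difference for illustration only.
import torch

def empirical_cf(x, t):
    """Empirical CF of embedded samples x (batch, dim) at frequencies t (num_t, dim)."""
    proj = x @ t.T                            # (batch, num_t) inner products t^T x
    return torch.exp(1j * proj).mean(dim=0)   # E[exp(i t^T x)] estimated over the batch

def cf_discrepancy(x, y, num_t=64, sigma=1.0):
    """Mean squared CF difference, with t ~ N(0, sigma^2 I) (sampling choice is an assumption)."""
    t = sigma * torch.randn(num_t, x.shape[1], device=x.device)
    diff = empirical_cf(x, t) - empirical_cf(y, t)
    return (diff.abs() ** 2).mean()
```

In the RCF-GAN setting, x and y would be the critic's embeddings of real and generated images respectively, and the quoted noise variance of 1 would correspond to sigma above; the optional t-net hθt(·) reparameterises the sampled frequencies rather than drawing them directly from a fixed Gaussian.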
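As referenced in the Experiment Setup row, the quoted hyper-parameters can be collected into a single configuration block. This is a hedged restatement of the values reported above, not the authors' actual configuration file; the dictionary name and key names are assumptions, and anything not quoted (e.g., Adam betas, software versions) is left unspecified.

```python
# Hedged restatement of the reported experiment setup as a plain config dict.
# Only values quoted in the table are filled in; key names are illustrative.
RCF_GAN_CONFIG = {
    "batch_size": 64,              # b_d = b_g = b_t = b_sigma = 64
    "noise_variance": 1.0,         # all Gaussian noise variances set to 1 ...
    "generator_input_noise": 0.3,  # ... except the generator's input noise
    "learning_rate": 2e-4,         # no Lipschitz constraint, hence a relatively larger lr
    "optimizer": "Adam",           # version and betas not reported in the quoted text
    "embedded_dim": {"cifar10": 128, "lsun_bedroom": 128, "celeba": 64},
    "alpha": 0.5,                  # vanilla CF loss weighting in (6)
    "use_t_net": True,             # default RCF-GAN uses the t-net
    "normalisation": "layer",      # layer normalisation in the default setup
}
```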