Adversarial Learning with Local Coordinate Coding
Authors: Jiezhang Cao, Yong Guo, Qingyao Wu, Chunhua Shen, Junzhou Huang, Mingkui Tan
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on various real-world datasets demonstrate the effectiveness of the proposed method. |
| Researcher Affiliation | Collaboration | 1School of Software Engineering, South China University of Technology, China; 2School of Computer Science, The University of Adelaide, Australia; 3Tencent AI Lab, China; University of Texas at Arlington, USA. |
| Pseudocode | Yes | Algorithm 1 LCC-GANs Training Method. |
| Open Source Code | No | The paper mentions 'PyTorch is from http://pytorch.org/', which refers to a third-party library; it does not provide a link to, or an explicit statement about, open-source code for the methodology described in the paper. |
| Open Datasets | Yes | To thoroughly evaluate the proposed method, we conduct experiments on a wide variety of benchmark datasets, including MNIST (LeCun et al., 1998), Oxford-102 (Nilsback & Zisserman, 2008), LSUN (Yu et al., 2015) and CelebA (Liu et al., 2015). |
| Dataset Splits | No | The paper mentions training data and minibatch sizes, but it does not explicitly provide specific training, validation, or test dataset splits (e.g., percentages or sample counts for each split) for its experiments. |
| Hardware Specification | Yes | All experiments are conducted on a single Nvidia Titan X GPU. |
| Software Dependencies | No | The paper states 'We implement LCC-GANs based on PyTorch.' but does not specify a version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | Specifically, for the optimization, we use Adam (Kingma & Ba, 2015) with a mini-batch size of 64 and a learning rate of 0.0002 to train the generator and the discriminator. We initialize the parameters of both the generator and the discriminator following the strategy in (He et al., 2015). |
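The setup row above pins down the reported hyperparameters: Adam with a minibatch size of 64, a learning rate of 0.0002, and He et al. (2015) initialization. A minimal stdlib-only sketch of what that initialization entails (this is an illustration, not the authors' code; the layer sizes and the Adam beta values are assumptions — the betas are simply Adam's common defaults, which the paper does not state):

```python
import math
import random

def he_init(fan_in, fan_out, seed=0):
    """He et al. (2015) initialization: weights ~ N(0, sqrt(2 / fan_in)).

    Returns a fan_in x fan_out weight matrix as nested lists.
    """
    rng = random.Random(seed)
    std = math.sqrt(2.0 / fan_in)
    return [[rng.gauss(0.0, std) for _ in range(fan_out)]
            for _ in range(fan_in)]

# Hyperparameters reported in the paper's setup. The betas are NOT given
# in the paper; (0.9, 0.999) are assumed here as Adam's usual defaults.
ADAM_CONFIG = {"lr": 2e-4, "batch_size": 64, "betas": (0.9, 0.999)}

# Hypothetical layer shape, purely for illustration.
weights = he_init(fan_in=128, fan_out=64)
```

In a PyTorch implementation these would correspond to `torch.nn.init.kaiming_normal_` applied to each layer and `torch.optim.Adam(params, lr=2e-4)`, but the paper does not publish the exact code.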