Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Learning Implicit Generative Models with the Method of Learned Moments
Authors: Suman Ravuri, Shakir Mohamed, Mihaela Rosca, Oriol Vinyals
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our method on four datasets: Color MNIST (Metz et al., 2016), CelebA (Liu et al., 2015), CIFAR-10 (Krizhevsky, 2009), and the daisy portion of ImageNet (Russakovsky et al., 2015). We complement the visual inspection of samples with numerical measures to compare this method to existing work. For CelebA, we use Multi-Scale Structural Similarity (MS-SSIM) (Wang et al., 2003) to show sample similarity within a single class. For CIFAR-10, we include the standard Inception Score (IS) (Salimans et al., 2016) and Fréchet Inception Distance (FID) (Heusel et al., 2017). |
| Researcher Affiliation | Industry | DeepMind, London, N1C 4AG, UK. Correspondence to: Suman Ravuri <EMAIL>. |
| Pseudocode | Yes | Algorithm 1 Method of Learned Moments |
| Open Source Code | No | The paper does not provide an unambiguous statement or link for open-source code for the methodology described. |
| Open Datasets | Yes | We evaluate our method on four datasets: Color MNIST (Metz et al., 2016), CelebA (Liu et al., 2015), CIFAR-10 (Krizhevsky, 2009), and the daisy portion of ImageNet (Russakovsky et al., 2015). |
| Dataset Splits | No | The paper mentions a 'test set' and 'training' for various datasets but does not give the splits as percentages or example counts for training, validation, and testing, nor does it cite standard predefined splits in enough detail to reconstruct them. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments. It does not mention any specific hardware setup such as NVIDIA A100, Tesla V100, or Intel Xeon processors. |
| Software Dependencies | No | The paper does not provide specific version numbers for software components or libraries used in the experiments. It mentions 'Adam hyperparameters' but not the software framework (e.g., TensorFlow, PyTorch) or its version. |
| Experiment Setup | Yes | We use convolutional architectures for both our generator and moment networks. ... we use a DCGAN generator. ... We tried four types of Adam hyperparameters, and two architectures. ... For details of the specific architectures and hyperparameters used in all our experiments, please see Appendix A in the supplementary material. |
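The quoted evaluation relies on the Fréchet Inception Distance (Heusel et al., 2017), which compares Gaussian fits to real and generated feature statistics via d² = ||μ₁ − μ₂||² + Tr(Σ₁ + Σ₂ − 2(Σ₁Σ₂)^½). The sketch below is an illustrative NumPy/SciPy implementation of that formula, not the paper's code (which is not released); in practice the statistics come from Inception-v3 pool3 activations, whereas here they are computed from random toy features.

```python
import numpy as np
from scipy import linalg

def frechet_distance(mu1, sigma1, mu2, sigma2):
    """Fréchet distance between N(mu1, sigma1) and N(mu2, sigma2):
    ||mu1 - mu2||^2 + Tr(sigma1 + sigma2 - 2 (sigma1 @ sigma2)^(1/2))."""
    diff = mu1 - mu2
    covmean = linalg.sqrtm(sigma1 @ sigma2)
    if np.iscomplexobj(covmean):
        # sqrtm can return tiny imaginary components from numerical error
        covmean = covmean.real
    return float(diff @ diff + np.trace(sigma1 + sigma2 - 2.0 * covmean))

# Toy usage with random "activations" standing in for Inception features.
rng = np.random.default_rng(0)
real = rng.normal(size=(1000, 8))
fake = rng.normal(loc=0.5, size=(1000, 8))
mu_r, sig_r = real.mean(axis=0), np.cov(real, rowvar=False)
mu_f, sig_f = fake.mean(axis=0), np.cov(fake, rowvar=False)
print(frechet_distance(mu_r, sig_r, mu_f, sig_f))
```

Identical distributions score 0, and larger values indicate greater mismatch; the mean-shifted toy "fake" set above yields a clearly positive distance.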