Pluralistic Image Completion with Gaussian Mixture Models
Authors: Xiaobo Xia, Wenhao Yang, Jie Ren, Yewen Li, Yibing Zhan, Bo Han, Tongliang Liu
NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We formally establish the effectiveness of our method and demonstrate it with comprehensive experiments. The implementation is available at https://github.com/tmllab/PICMM. [Section 4, Experiments] In this section, we conduct a series of experiments to justify our claims. We first introduce the implementation of our method (Section 4.1). The comprehensive experimental results and comparison with advanced methods are then provided and discussed (Section 4.2). Finally, we conduct an analysis study to present and discuss our method in more detail (Section 4.3). |
| Researcher Affiliation | Collaboration | Xiaobo Xia (TML Lab, University of Sydney), Wenhao Yang (Nanjing University), Jie Ren (University of Edinburgh), Yewen Li (Nanyang Technological University), Yibing Zhan (JD Explore Academy), Bo Han (Hong Kong Baptist University), Tongliang Liu (TML Lab, University of Sydney) |
| Pseudocode | Yes | Algorithm 1 (Training procedure) Input: images Io, Im, and Ic, the number of primitives of GMM k, the initialized encoder f, and decoder g. ... Algorithm 2 (Test procedure) Input: the image Im, the number of primitives of GMM k, the trained encoder f, and decoder g. (A hedged Python sketch of the training step appears below the table.) |
| Open Source Code | Yes | The implementation is available at https://github.com/tmllab/PICMM. |
| Open Datasets | Yes | Datasets. We evaluated our proposed model on five popularly used datasets, i.e., CelebA-HQ [18, 28], FFHQ [19], Paris Street View [7], Places2 [62], and ImageNet [41]. |
| Dataset Splits | No | The paper lists the datasets used but does not explicitly provide training/validation/test splits (e.g., percentages or counts), nor does it refer to standard splits for all datasets. |
| Hardware Specification | Yes | The methods are implemented by PyTorch and evaluated on NVIDIA Tesla A100 GPUs. All inference runs on one NVIDIA Tesla A100 GPU for fairness. |
| Software Dependencies | No | The paper mentions 'PyTorch' but does not specify a version number. |
| Experiment Setup | Yes | During optimization, we use the Adam optimizer [20]. The learning rate is fixed to 10^-4 during the training procedure. ... L_C = L_R + λ_A·L_A (Eq. 8), where the weight λ_A is set to 0.05 in all experiments. (A minimal optimization sketch appears below the table.) |
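
The pseudocode row above reports Algorithms 1 and 2 only at the level of their inputs. As a rough illustration of the training step they describe, the following is a minimal PyTorch sketch assuming a toy encoder `f` that predicts the means and log-variances of `k` Gaussian primitives and a toy decoder `g`. The module shapes, the primitive-selection rule, and the reconstruction loss are hypothetical placeholders, not the paper's architecture; the released implementation at https://github.com/tmllab/PICMM is authoritative.

```python
import math
import torch
import torch.nn as nn

# Minimal sketch of a training step in the spirit of Algorithm 1.
# LATENT_DIM and K (the number of GMM primitives, k in the paper)
# are illustrative values only.
LATENT_DIM, K = 64, 5

class Encoder(nn.Module):
    """Stand-in for the encoder f: predicts means/log-variances of K primitives."""
    def __init__(self, in_dim=3 * 32 * 32):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(in_dim, 256), nn.ReLU())
        self.mu = nn.Linear(256, K * LATENT_DIM)
        self.log_var = nn.Linear(256, K * LATENT_DIM)

    def forward(self, x):
        h = self.backbone(x)
        return (self.mu(h).view(-1, K, LATENT_DIM),
                self.log_var(h).view(-1, K, LATENT_DIM))

class Decoder(nn.Module):
    """Stand-in for the decoder g: maps a latent code back to an image."""
    def __init__(self, out_shape=(3, 32, 32)):
        super().__init__()
        self.out_shape = out_shape
        self.net = nn.Sequential(nn.Linear(LATENT_DIM, 256), nn.ReLU(),
                                 nn.Linear(256, math.prod(out_shape)))

    def forward(self, z):
        return self.net(z).view(-1, *self.out_shape)

f, g = Encoder(), Decoder()
opt = torch.optim.Adam(list(f.parameters()) + list(g.parameters()), lr=1e-4)

I_o = torch.rand(8, 3, 32, 32)                    # original images I_o
mask = (torch.rand(8, 1, 32, 32) > 0.5).float()   # random binary mask
I_m = I_o * mask                                  # masked images I_m

mu, log_var = f(I_m)
# Draw one latent per image from a randomly chosen primitive via the
# reparameterization trick; sampling different primitives is what makes
# the completions pluralistic (diverse).
b = torch.arange(I_o.size(0))
idx = torch.randint(K, (I_o.size(0),))
z = mu[b, idx] + (0.5 * log_var[b, idx]).exp() * torch.randn(I_o.size(0), LATENT_DIM)

loss = nn.functional.l1_loss(g(z), I_o)           # placeholder reconstruction loss
opt.zero_grad(); loss.backward(); opt.step()
```

At test time, Algorithm 2 would presumably reuse the trained `f` and `g` in the same way, sampling from different primitives of the GMM to produce diverse completions of the masked image.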
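
The experiment-setup row pins down three concrete hyperparameters: Adam, a fixed learning rate of 10^-4, and the combined loss L_C = L_R + λ_A·L_A with λ_A = 0.05 (Eq. 8). Below is a minimal sketch of that recipe; L_R and L_A are placeholder tensors, since their exact definitions (the reconstruction and auxiliary terms) are given in the paper, not here.

```python
import torch

# Sketch of the stated optimization setup: Adam at a fixed lr of 1e-4,
# combined objective L_C = L_R + lambda_A * L_A with lambda_A = 0.05.
lambda_A = 0.05

theta = torch.nn.Parameter(torch.randn(10))       # placeholder model parameters
optimizer = torch.optim.Adam([theta], lr=1e-4)    # fixed learning rate 10^-4

L_R = theta.pow(2).mean()                         # placeholder for L_R
L_A = theta.abs().mean()                          # placeholder for L_A
L_C = L_R + lambda_A * L_A                        # combined loss, Eq. (8)

optimizer.zero_grad()
L_C.backward()
optimizer.step()
```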