Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Image Inpainting via Generative Multi-column Convolutional Neural Networks
Authors: Yi Wang, Xin Tao, Xiaojuan Qi, Xiaoyong Shen, Jiaya Jia
NeurIPS 2018
| Reproducibility Variable | Result | LLM Justification |
|---|---|---|
| Research Type | Experimental | 'Extensive experiments on challenging street view, face, natural objects and scenes manifest that our method produces visual compelling results even without previously common post-processing.' (abstract); see also Section 4, 'Experiments'. |
| Researcher Affiliation | Collaboration | ¹The Chinese University of Hong Kong; ²YouTu Lab, Tencent |
| Pseudocode | No | The paper describes its method in Section 3 and uses network diagrams (Figure 2), but does not include any structured pseudocode or algorithm blocks (an illustrative multi-column sketch follows the table). |
| Open Source Code | No | The paper states 'More inpainting results are in our project website.' but does not explicitly state that the source code for the described methodology is publicly available, nor does it provide a direct link to a code repository. |
| Open Datasets | Yes | We evaluate our method on five datasets of Paris street view [18], Places2 [28], ImageNet [19], CelebA [15], and CelebA-HQ [12]. |
| Dataset Splits | Yes | We train our models on the training set and evaluate our model on the testing set (for Paris street view) or validation set (for Places2, ImageNet, CelebA, and CelebA-HQ). |
| Hardware Specification | Yes | The hardware is with an Intel CPU E5 (2.60GHz) and TITAN X GPU. |
| Software Dependencies | Yes | Our implementation is with Tensorflow v1.4.1, CUDNN v6.0, and CUDA v8.0. |
| Experiment Setup | Yes | After our model G converges, we set λ_mrf = 0.05 and λ_adv = 0.001 for fine-tuning until convergence. The training procedure is optimized using the Adam solver [13] with learning rate 1e-4. We set β1 = 0.5 and β2 = 0.9. The batch size is 16. (A minimal configuration sketch follows the table.) |
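Although the paper provides no pseudocode, the "multi-column" idea named in its title can be illustrated. The sketch below, written against the TensorFlow 1.x API the paper reports using, runs parallel convolutional branches with different kernel sizes and concatenates their feature maps. The branch count, kernel sizes, and filter widths here are illustrative assumptions, not the paper's exact architecture (which is given in Section 3 and Figure 2).

```python
# Illustrative sketch of a multi-column feature extractor (TensorFlow 1.x).
# Branch count, kernel sizes, and filter widths are assumptions for
# demonstration; they are NOT the paper's exact configuration.
import tensorflow as tf

def multi_column_features(x):
    """Run parallel conv branches with different receptive fields, then concat."""
    branches = []
    for k in (3, 5, 7):  # assumed kernel sizes, one per column
        h = tf.layers.conv2d(x, filters=32, kernel_size=k,
                             padding='same', activation=tf.nn.elu)
        branches.append(h)
    # Merge the columns along the channel axis.
    return tf.concat(branches, axis=-1)

# Example graph: a batch of 256x256 RGB images.
images = tf.placeholder(tf.float32, [None, 256, 256, 3])
features = multi_column_features(images)  # shape: [None, 256, 256, 96]
```

Running several receptive fields in parallel and merging them is what lets a multi-column generator reason about image structure at multiple scales at once.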
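The hyperparameters quoted in the "Experiment Setup" row translate directly into an optimizer configuration. The sketch below wires them together; the three loss tensors are stand-in placeholders, since the paper's actual reconstruction, MRF-based, and adversarial loss terms are not reproduced here.

```python
# Minimal sketch of the reported fine-tuning configuration (TensorFlow 1.x,
# matching the paper's stated TensorFlow v1.4.1 environment). The losses
# below are hypothetical placeholders, not the paper's actual terms.
import tensorflow as tf

LAMBDA_MRF = 0.05    # λ_mrf (from the paper)
LAMBDA_ADV = 0.001   # λ_adv (from the paper)
BATCH_SIZE = 16      # batch size (reported; unused in this scalar sketch)

# Placeholder scalar losses; in the real model these depend on the
# generator's output and the discriminator.
w = tf.Variable(1.0)
rec_loss = tf.square(w - 0.5)   # stand-in reconstruction term
mrf_loss = tf.square(w - 0.2)   # stand-in MRF-based term
adv_loss = tf.square(w + 0.1)   # stand-in adversarial term

total_loss = rec_loss + LAMBDA_MRF * mrf_loss + LAMBDA_ADV * adv_loss

# Adam solver with the reported hyperparameters.
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4, beta1=0.5, beta2=0.9)
train_op = optimizer.minimize(total_loss)

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    sess.run(train_op)  # one optimization step
```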