Generate, Segment, and Refine: Towards Generic Manipulation Segmentation

Authors: Peng Zhou, Bor-Chun Chen, Xintong Han, Mahyar Najibi, Abhinav Shrivastava, Ser-Nam Lim, Larry Davis

AAAI 2020, pp. 13058-13065

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Strong experimental results validate our proposal. We evaluate GSR-Net on four public benchmarks and show that it performs better than state-of-the-art methods. Experiments with two different post-processing attacks further demonstrate the robustness of GSR-Net.
Researcher Affiliation | Collaboration | Peng Zhou (1), Bor-Chun Chen (1), Xintong Han (2), Mahyar Najibi (1), Abhinav Shrivastava (1), Ser Nam Lim (3), Larry S. Davis (1); affiliations: (1) University of Maryland, College Park, (2) Huya Inc, (3) Facebook
Pseudocode | No | The paper describes the methods textually and with mathematical equations, but does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not include an explicit statement or link indicating the release of the source code for the described methodology.
Open Datasets | Yes | We evaluate our performance on four datasets: In-The-Wild (Huh et al. 2018), COVER (Wen et al. 2016), CASIA 1.0 (Dong, Wang, and Tan 2010), and Carvalho (De Carvalho et al. 2013).
Dataset Splits | No | The paper describes training and testing on datasets but does not explicitly provide details on how data was split for validation, such as specific percentages or sample counts for a validation set.
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions using a VGG-16-based DeepLab model and various frameworks but does not specify version numbers for any software dependencies or libraries.
Experiment Setup | No | While the paper describes various components of the model and general training procedures, it does not provide specific numerical values for hyperparameters such as learning rates, batch sizes, number of epochs, optimizer settings, or the values of λgrad, λedge, and λadv mentioned in the loss function.
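The loss weights noted in the last row (λgrad, λedge, λadv) suggest a segmentation loss combined with gradient, edge, and adversarial terms. The PyTorch sketch below is a minimal illustration of how such weights typically enter a combined objective; the weight values, term definitions, and function names are assumptions for illustration, not the authors' (unreleased) implementation.

```python
# Hypothetical sketch of a weighted multi-term loss; the paper does not report
# the actual values of lambda_grad, lambda_edge, or lambda_adv.
import torch.nn.functional as F

LAMBDA_GRAD = 0.1   # assumed placeholder
LAMBDA_EDGE = 0.1   # assumed placeholder
LAMBDA_ADV = 0.01   # assumed placeholder

def total_loss(seg_logits, seg_target, edge_logits, edge_target, grad_loss, adv_loss):
    """Weighted sum of the loss terms implied by the lambda symbols.

    seg_logits, edge_logits: raw per-pixel predictions, shape (N, 1, H, W)
    seg_target, edge_target: binary ground-truth masks, shape (N, 1, H, W)
    grad_loss, adv_loss:     scalar tensors computed by other (assumed) modules
    """
    l_seg = F.binary_cross_entropy_with_logits(seg_logits, seg_target)
    l_edge = F.binary_cross_entropy_with_logits(edge_logits, edge_target)
    return l_seg + LAMBDA_GRAD * grad_loss + LAMBDA_EDGE * l_edge + LAMBDA_ADV * adv_loss
```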