Generating Triples With Adversarial Networks for Scene Graph Construction

Authors: Matthew Klawonn, Eric Heim

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We show that our model improves upon prior work in scene graph generation on state-of-the-art data sets and accepted metrics. Further, we demonstrate that our model is capable of handling a larger vocabulary size than prior work has attempted." From the Empirical Evaluation section: "One goal of our evaluation is to compare our method to the current state-of-the-art in scene graph generation. As (Xu et al. 2017) sets the current state-of-the-art, we compare to their method, using metrics their work established, and on the dataset they evaluated on."
Researcher Affiliation | Collaboration | Matthew Klawonn (Rensselaer Polytechnic Institute, Dept. of Computer Science, Troy, NY 12180, klawom@rpi.edu); Eric Heim (Air Force Research Laboratory, Information Directorate, Rome, NY 13441, eric.heim.1@us.af.mil)
Pseudocode | No | The paper does not contain any sections or figures explicitly labeled as "Pseudocode" or "Algorithm".
Open Source Code | No | The paper does not provide any explicit statements about releasing source code, nor does it include links to a code repository.
Open Datasets | Yes | "All data comes from the Visual Genome (VG) dataset (Krishna et al. 2016), since this is the largest and highest quality dataset containing image-scene graph pairs available today and the same data that (Xu et al. 2017) use for evaluation."
Dataset Splits | No | The paper states: "The first split exactly matches that of (Xu et al. 2017), which is a 70-30 train-test split of the dataset..." and mentions a validation set used for tuning, but it does not specify the percentage or size of the validation split.
Hardware Specification | No | The paper does not specify any particular hardware (e.g., GPU models, CPU types, or server configurations) used to run the experiments.
Software Dependencies | No | The paper mentions using the "Adam stochastic gradient algorithm" and "layer normalization", but it does not provide version numbers for any software libraries, frameworks (such as PyTorch or TensorFlow), or programming languages.
Experiment Setup | Yes | "Following the example of (Gulrajani et al. 2017), we use the Adam stochastic gradient algorithm (Kingma and Ba 2015) with learning rate 1e-4, β1 = 0.5, β2 = 0.9 to train both the discriminator and generator. Our gradient penalty coefficient λ (Gulrajani et al. 2017) is set to 10. For our graph construction phase, entities that have more than an 80% match using the generalized IoU metric are considered to be duplicate entities."
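The duplicate-entity check quoted in the Experiment Setup row can be sketched in plain Python. The paper states only the 80% generalized-IoU threshold; the box representation, the exact definition of "generalized IoU" (here, the common enclosing-box formulation), and the function names below are assumptions for illustration, not the authors' implementation.

```python
def generalized_iou(box_a, box_b):
    """Generalized IoU for two axis-aligned boxes given as
    (x1, y1, x2, y2). Returns a value in [-1, 1]; 1.0 means
    identical boxes, values <= 0 mean no overlap.

    Assumes the enclosing-box formulation of generalized IoU;
    the paper does not spell out its exact definition."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b

    # Intersection area (zero if the boxes do not overlap).
    ix1, iy1 = max(ax1, bx1), max(ay1, by1)
    ix2, iy2 = min(ax2, bx2), min(ay2, by2)
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)

    area_a = (ax2 - ax1) * (ay2 - ay1)
    area_b = (bx2 - bx1) * (by2 - by1)
    union = area_a + area_b - inter

    # Smallest axis-aligned box C enclosing both inputs.
    cx1, cy1 = min(ax1, bx1), min(ay1, by1)
    cx2, cy2 = max(ax2, bx2), max(ay2, by2)
    area_c = (cx2 - cx1) * (cy2 - cy1)

    # GIoU = IoU minus the fraction of C not covered by the union.
    return inter / union - (area_c - union) / area_c


def is_duplicate(box_a, box_b, threshold=0.8):
    """Hypothetical duplicate test: entities whose boxes match
    above the paper's 80% threshold are treated as one entity."""
    return generalized_iou(box_a, box_b) > threshold
```

For example, two boxes `(0, 0, 10, 10)` and `(0, 0, 10, 9)` overlap on 90% of their union with no slack in the enclosing box, giving a GIoU of 0.9 and thus a duplicate under the 80% threshold, while disjoint boxes yield a negative score.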