Resistance Training Using Prior Bias: Toward Unbiased Scene Graph Generation

Authors: Chao Chen, Yibing Zhan, Baosheng Yu, Liu Liu, Yong Luo, Bo Du

AAAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We perform extensive experiments on a very popular benchmark, VG150, to demonstrate the effectiveness of our method for the unbiased scene graph generation.
Researcher Affiliation | Collaboration | 1 School of Computer Science, Wuhan University; 2 JD Explore Academy; 3 The University of Sydney
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Code is available at https://github.com/ChCh1999/RTPB
Open Datasets | Yes | We perform extensive experiments on Visual Genome (VG) (Krishna et al. 2016) dataset.
Dataset Splits | Yes | The original split only has a training set (70%) and test set (30%). We follow (Tang et al. 2020) to sample a 5k validation set for parameter tuning.
Hardware Specification | Yes | We perform our experiments using a single NVIDIA V100 GPU.
Software Dependencies | No | The paper mentions 'PyTorch (Paszke et al. 2019)' but does not provide specific version numbers for PyTorch or any other software dependencies.
Experiment Setup | Yes | For the DTrans, the number of object encoder layers is n_o = 4 and the number of relationship encoder layers is n_r = 2. For the proposed resistance bias, we use a = 1 and ϵ = 0.001 if not otherwise stated. ... We train the DTrans model for 18000 iterations with batch size 16. (Hedged sketches of the validation-split sampling and this training configuration follow the table.)
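The Dataset Splits row reports the VG150 protocol from the paper: the original 70% train / 30% test split, with a 5k validation set sampled from the training data following Tang et al. (2020). A minimal sketch of that sampling step is shown below; the image-id file name and the fixed seed are assumptions for illustration, not details taken from the paper or the RTPB repository.

```python
# Hedged sketch: carve a 5k validation subset out of the VG150 training split.
# The image-id list file and the seed are illustrative assumptions; the actual
# RTPB code base may organize its splits differently.
import json
import random

def sample_validation_split(train_ids_path: str, val_size: int = 5000, seed: int = 0):
    """Hold out `val_size` training images for parameter tuning."""
    with open(train_ids_path) as f:
        train_ids = json.load(f)       # assumed: a JSON list of VG150 training image ids

    rng = random.Random(seed)          # fixed seed so the sampled split is reproducible
    val_ids = set(rng.sample(train_ids, val_size))
    new_train_ids = [i for i in train_ids if i not in val_ids]
    return new_train_ids, sorted(val_ids)

if __name__ == "__main__":
    train_ids, val_ids = sample_validation_split("vg150_train_image_ids.json")
    print(f"train: {len(train_ids)} images, val: {len(val_ids)} images")
```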
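The Experiment Setup row quotes the DTrans hyperparameters. The sketch below collects those values into a single configuration object; only the numeric values come from the paper, while the field names, the dataclass layout, and the device string are assumptions rather than the authors' actual configuration schema.

```python
# Hedged sketch of the training configuration quoted in the Experiment Setup row.
from dataclasses import dataclass

@dataclass
class DTransConfig:
    num_object_encoder_layers: int = 4     # n_o = 4 (from the paper)
    num_relation_encoder_layers: int = 2   # n_r = 2 (from the paper)
    resistance_bias_a: float = 1.0         # a = 1 for the resistance bias
    resistance_bias_eps: float = 1e-3      # ϵ = 0.001 for the resistance bias
    max_iterations: int = 18000            # training length reported in the paper
    batch_size: int = 16                   # batch size reported in the paper
    device: str = "cuda:0"                 # a single NVIDIA V100 GPU was used

config = DTransConfig()
print(config)
```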