CogTree: Cognition Tree Loss for Unbiased Scene Graph Generation
Authors: Jing Yu, Yuan Chai, Yujing Wang, Yue Hu, Qi Wu
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our models on the widely compared Visual Genome (VG) split [Xu et al., 2017], with the 150 most frequent object classes and 50 most frequent relationship classes in VG [Krishna et al., 2017]. The VG split only contains training set and test set and we follow [Zellers et al., 2018] to sample a 5K validation set from the training set. |
| Researcher Affiliation | Academia | 1Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China 2Intelligent Computing and Machine Learning Lab, School of ASEE, Beihang University, Beijing, China 3Key Laboratory of Machine Perception, MOE, School of EECS, Peking University, Beijing, China 4University of Adelaide, Australia |
| Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | Yes | The code is available at: https://github.com/CYVincent/Scene-Graph-Transformer-CogTree. |
| Open Datasets | Yes | We evaluate our models on the widely compared Visual Genome (VG) split [Xu et al., 2017], with the 150 most frequent object classes and 50 most frequent relationship classes in VG [Krishna et al., 2017]. |
| Dataset Splits | Yes | The VG split only contains training set and test set and we follow [Zellers et al., 2018] to sample a 5K validation set from the training set. (See the split-sampling sketch below the table.) |
| Hardware Specification | Yes | Experiments are implemented with PyTorch and conducted with NVIDIA Tesla V100 GPUs. |
| Software Dependencies | No | The paper mentions 'PyTorch' but does not specify its version number or any other software dependencies with version numbers. |
| Experiment Setup | Yes | λ is set to 1 and β is set to 0.999. SG-Transformer contains 3 O2O blocks and 2 R2O blocks with 12 attention heads. Models are trained by SGD optimizer with 5 epochs. The mini-batch size is 12 and the learning rate is 1.2 × 10⁻³. (See the configuration sketch below the table.) |
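
The Dataset Splits row reports that a 5K validation set is held out from the VG training set, following the convention of Zellers et al. [2018]. Below is a minimal sketch of such a hold-out split; the function name, the image-id representation, and the fixed seed are illustrative assumptions, not the authors' released code.

```python
import random

def sample_val_split(train_image_ids, val_size=5000, seed=0):
    """Hold out `val_size` images from the VG training set as a validation
    set, mirroring the Zellers et al. (2018) convention described in the
    paper. The seed and identifiers here are illustrative assumptions."""
    rng = random.Random(seed)
    ids = list(train_image_ids)
    rng.shuffle(ids)
    val_ids = set(ids[:val_size])                    # 5K held-out images
    train_ids = [i for i in ids if i not in val_ids]  # remainder stays in train
    return train_ids, sorted(val_ids)
```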
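
The Experiment Setup row lists the hyperparameters quoted from the paper. The sketch below collects them into a single configuration object for reference; the field names are assumptions chosen for readability and may not match how the released repository organizes its configuration.

```python
from dataclasses import dataclass

@dataclass
class CogTreeConfig:
    """Hyperparameters quoted in the Experiment Setup row.
    Field names are illustrative, not taken from the authors' code."""
    lambda_weight: float = 1.0   # λ, loss-balancing coefficient
    beta: float = 0.999          # β, class-balanced weighting factor
    num_o2o_blocks: int = 3      # object-to-object blocks in SG-Transformer
    num_r2o_blocks: int = 2      # relation-to-object blocks in SG-Transformer
    num_attention_heads: int = 12
    optimizer: str = "SGD"
    num_epochs: int = 5
    batch_size: int = 12
    learning_rate: float = 1.2e-3
```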