Social Relation Reasoning Based on Triangular Constraints
Authors: Yunfei Guo, Fei Yin, Wei Feng, Xudong Yan, Tao Xue, Shuqi Mei, Cheng-Lin Liu
AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that our method outperforms existing approaches significantly, with higher accuracy and better consistency in generating social relation graphs. |
| Researcher Affiliation | Collaboration | National Laboratory of Pattern Recognition (NLPR), Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China; T Lab, Tencent Map, Tencent Technology (Beijing) Co., Ltd., Beijing 100193, China |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described in this paper. |
| Open Datasets | Yes | To evaluate the performance of the proposed method, we conduct extensive experiments on the two popular benchmark datasets: PISC (Li et al. 2017) and PIPA (Sun, Schiele, and Fritz 2017). |
| Dataset Splits | Yes | The coarse setting has a train/val/test split of 13142, 4000 and 4000 images while the fine is 16828, 500 and 1250 images. [...] For fair comparisons, we adopt the standard train/val/test split introduced by (Sun, Schiele, and Fritz 2017), which has a train/val/test split of 5857, 261, and 2452 images. |
| Hardware Specification | Yes | The implementation is on a workstation with a 2.40GHz 56-core CPU, 256G RAM, GTX Titan RTX, and 64-bit CentOS. |
| Software Dependencies | No | Our implementation is based on PyTorch (Paszke et al. 2019) and MMDetection (Chen et al. 2019) framework. While frameworks are mentioned and cited, specific version numbers for these software dependencies are not provided in the text. |
| Experiment Setup | Yes | To balance the performance and speed, we set the layer number of our TRGAT to 2. Features representing persons and relations are set to 2048-D vectors. In training, we scale the long edge of input images to 600, 720, or 960 randomly while keeping the aspect ratio and 720 in testing. We train our model with the stochastic gradient descent method (SGD). All the experiments are conducted with a batch size of 16 on 2 GPUs. |
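
The paper reports its implementation on PyTorch and MMDetection but does not release code, so the sketch below is only a hedged restatement of the quoted setup: the split sizes from the Dataset Splits row and the multi-scale resizing, SGD optimizer, and batch size from the Experiment Setup row. All names (`SPLITS`, `resize_long_edge`, `build_optimizer`) are hypothetical, and the SGD learning rate and momentum are placeholders not given in the quoted text.

```python
# Hedged reconstruction of the quoted experiment setup; not the authors' code.
import random
import torch
import torchvision.transforms.functional as TF

# Split sizes (number of images) quoted in the "Dataset Splits" row.
SPLITS = {
    "PISC-coarse": {"train": 13142, "val": 4000, "test": 4000},
    "PISC-fine":   {"train": 16828, "val": 500,  "test": 1250},
    "PIPA":        {"train": 5857,  "val": 261,  "test": 2452},
}

TRGAT_LAYERS = 2          # layer number of TRGAT, as stated in the paper
FEATURE_DIM = 2048        # person/relation feature dimensionality
TRAIN_LONG_EDGES = (600, 720, 960)  # long edge picked at random during training
TEST_LONG_EDGE = 720                # fixed long edge at test time
BATCH_SIZE = 16                     # total batch size over 2 GPUs

def resize_long_edge(img: torch.Tensor, long_edge: int) -> torch.Tensor:
    """Scale a CHW image tensor so its longer side equals `long_edge`,
    keeping the aspect ratio (as described in the Experiment Setup row)."""
    _, h, w = img.shape
    scale = long_edge / max(h, w)
    new_h, new_w = round(h * scale), round(w * scale)
    return TF.resize(img, [new_h, new_w], antialias=True)

def train_transform(img: torch.Tensor) -> torch.Tensor:
    return resize_long_edge(img, random.choice(TRAIN_LONG_EDGES))

def test_transform(img: torch.Tensor) -> torch.Tensor:
    return resize_long_edge(img, TEST_LONG_EDGE)

def build_optimizer(model: torch.nn.Module) -> torch.optim.Optimizer:
    # SGD as stated; lr/momentum are placeholder values, not reported above.
    return torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)
```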