Multimodal Event Causality Reasoning with Scene Graph Enhanced Interaction Network

Authors: Jintao Liu, Kaiwen Wei, Chenglong Liu

AAAI 2024

Reproducibility variables, results, and supporting LLM responses:

Research Type: Experimental
LLM Response: "Experimental results indicate that the proposed SEIN outperforms state-of-the-art methods on the Vis-Causal dataset." "Experiments. Experimental Settings. Dataset. We conduct experiments to evaluate our model on the Vis-Causal dataset (Zhang et al. 2021)..."

Researcher Affiliation: Academia
LLM Response: "Jintao Liu, Kaiwen Wei*, Chenglong Liu. University of Chinese Academy of Sciences. {liujintao201, weikaiwen19, liuchenglong20}@mails.ucas.ac.cn"

Pseudocode: Yes
LLM Response: "Algorithm 1: The Training Process of SEIN"
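The paper's Algorithm 1 is not reproduced in the excerpt above. As an illustration only, the following minimal PyTorch sketch shows one plausible shape of such a training loop, combining a main loss with three auxiliary losses weighted by λ1, λ2, λ3 and using the Adam optimizer, learning rate, epoch count, and early stopping reported in the experiment setup below. The loss decomposition, the model interface, and the patience value are assumptions, not details taken from the paper.

import torch

def train_sein(model, train_loader, valid_loader, epochs=25, lr=5e-5,
               lambdas=(0.5, 0.3, 0.1), patience=3):
    """Hypothetical training loop; only the hyper-parameter values mirror the paper."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    best_valid, bad_epochs = float("inf"), 0
    for epoch in range(epochs):
        model.train()
        for batch in train_loader:
            # Assumed interface: the model returns a main loss and three auxiliary
            # losses, which are combined with the lambda weights from the paper.
            main_loss, aux1, aux2, aux3 = model(batch)
            loss = (main_loss + lambdas[0] * aux1
                    + lambdas[1] * aux2 + lambdas[2] * aux3)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        # Validation pass for the early stop strategy mentioned in the paper.
        model.eval()
        valid_loss = 0.0
        with torch.no_grad():
            for batch in valid_loader:
                main_loss, aux1, aux2, aux3 = model(batch)
                valid_loss += float(main_loss + lambdas[0] * aux1
                                    + lambdas[1] * aux2 + lambdas[2] * aux3)
        if valid_loss < best_valid:
            best_valid, bad_epochs = valid_loss, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:  # assumed patience; not stated in the paper
                break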
Open Source Code: No
LLM Response: The paper does not provide an explicit statement or link to open-source code for the described methodology.

Open Datasets: Yes
LLM Response: "We conduct experiments to evaluate our model on the Vis-Causal dataset (Zhang et al. 2021), which is widely used for multimodal daily event causality reasoning."

Dataset Splits: Yes
LLM Response: "The statistics of the dataset are listed in Table 1." "Table 1: Statistics of the Vis-Causal dataset. Train: 800, 1609, 82731; Valid: 100, 208, 10608; Test: 100, 191, 9053."

Hardware Specification: Yes
LLM Response: "All experiments are conducted on NVIDIA Tesla V100 GPU with Pytorch framework."

Software Dependencies: No
LLM Response: "We adopt pre-trained BERT-BASE-UNCASED architecture from Hugging Face's Transformers library as textual encoder. We use Faster R-CNN (Ren et al. 2015) pre-trained on Visual Genome to detect objects and leverage the public Scene Graph Diagnosis toolkit (Tang et al. 2020) to identify relations between each pair of objects. All experiments are conducted on NVIDIA Tesla V100 GPU with Pytorch framework."
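Exact library versions are not pinned in the paper, which is why this variable is marked No. As a hedged sketch of the named components, the snippet below loads the bert-base-uncased textual encoder with Hugging Face Transformers and runs an object detector over a placeholder frame. Note the detector shown is torchvision's COCO-pretrained Faster R-CNN, a stand-in for the Visual-Genome-pretrained Faster R-CNN and the scene-graph toolkit (Tang et al. 2020) used in the paper; that substitution and the example inputs are assumptions.

import torch
import torchvision
from transformers import BertModel, BertTokenizer

# Textual encoder named in the paper: bert-base-uncased from Hugging Face Transformers.
tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
text_encoder = BertModel.from_pretrained("bert-base-uncased")  # hidden size 768

inputs = tokenizer("A man drops a glass and it shatters.", return_tensors="pt")
with torch.no_grad():
    text_feats = text_encoder(**inputs).last_hidden_state  # (1, seq_len, 768)

# Stand-in object detector (COCO weights, not the Visual Genome weights used in the paper).
detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
detector.eval()
frame = torch.rand(3, 480, 640)  # placeholder video frame tensor with values in [0, 1]
with torch.no_grad():
    detections = detector([frame])[0]  # dict with 'boxes', 'labels', 'scores'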
Experiment Setup: Yes
LLM Response: "The hyper-parameters λ1, λ2, and λ3 are set to 0.5, 0.3, and 0.1, respectively. The number of paired objects K is set to 10. The number of GCN layers L is set to 2. The model is trained for 25 epochs with a learning rate of 5e-5 and a batch size of 16. The dimension of the hidden representations d is set to 768. We utilize an early stop strategy and Adam optimizer to update model parameters."
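For illustration, the sketch below wires the reported hyper-parameters into a generic two-layer GCN with hidden size 768 and an Adam optimizer. The graph convolution is a generic H' = ReLU(Â H W) formulation, since the excerpt does not specify how SEIN's scene-graph relations enter the adjacency matrix; the SimpleGCN class, the config names, and the toy adjacency are hypothetical.

import torch
import torch.nn as nn

class SimpleGCN(nn.Module):
    """Generic GCN sketch matching the reported L = 2 layers and d = 768."""
    def __init__(self, dim=768, num_layers=2):
        super().__init__()
        self.layers = nn.ModuleList(nn.Linear(dim, dim) for _ in range(num_layers))

    def forward(self, node_feats, adj):
        # node_feats: (num_nodes, dim); adj: (num_nodes, num_nodes) with self-loops.
        deg = adj.sum(dim=-1, keepdim=True).clamp(min=1.0)
        a_hat = adj / deg  # row-normalised adjacency
        h = node_feats
        for layer in self.layers:
            h = torch.relu(a_hat @ layer(h))
        return h

# Hyper-parameter values reported in the paper's experiment setup.
config = dict(lambda1=0.5, lambda2=0.3, lambda3=0.1, K=10, gcn_layers=2,
              epochs=25, lr=5e-5, batch_size=16, hidden_dim=768)

gcn = SimpleGCN(dim=config["hidden_dim"], num_layers=config["gcn_layers"])
optimizer = torch.optim.Adam(gcn.parameters(), lr=config["lr"])

node_feats = torch.rand(5, config["hidden_dim"])  # e.g. features of 5 detected objects
adj = torch.eye(5)                                # placeholder adjacency with self-loops
out = gcn(node_feats, adj)                        # (5, 768) updated node representations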