Video as Conditional Graph Hierarchy for Multi-Granular Question Answering
Authors: Junbin Xiao, Angela Yao, Zhiyuan Liu, Yicong Li, Wei Ji, Tat-Seng Chua
AAAI 2022, pp. 2804-2812 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Despite the simplicity, our extensive experiments demonstrate the superiority of such conditional hierarchical graph architecture, with clear performance improvements over prior methods and also better generalization across different types of questions. |
| Researcher Affiliation | Academia | Department of Computer Science, National University of Singapore |
| Pseudocode | No | The paper describes the model architecture and operations, but it does not present them in a structured pseudocode or algorithm block. |
| Open Source Code | No | The paper does not explicitly state that source code for the methodology is openly available or provide a link to a code repository. |
| Open Datasets | Yes | We experiment on four Video QA datasets that challenge the various aspects of video understanding: TGIF-QA (Jang et al. 2019), MSRVTT-QA and MSVD-QA, and NExT-QA (Xiao et al. 2021). |
| Dataset Splits | No | The paper mentions using "validation sets" (e.g., "We analyze our model on the validation sets of NExT-QA and MSRVTT-QA") but does not specify the exact split percentages, sample counts, or detailed methodology for creating these splits in the main text. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory, or detailed computer specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions various software components and models like 'BERT model', '3D version ResNeXt-101', 'ResNet-101', 'Bi-GRU', and 'Adam optimizer', but it does not specify their exact version numbers. |
| Experiment Setup | Yes | For training, we adopt a two-stage scheme by firstly training the model with learning rate lr = 10^-4 and then fine-tuning the best model obtained in the 1st stage with a smaller lr, e.g., 5×10^-5. For both stages, we train the models by using the Adam optimizer with a batch size of 64 and a maximum of 25 epochs. The dimension of the model's hidden states is d = 512 and the default number of graph layers in QGA is H = 2. (A configuration sketch based on these values follows the table.) |
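The paper does not state its deep-learning framework or library versions, so the following is a minimal sketch, assuming PyTorch, of how the reported two-stage training scheme could be wired up. Only the hyperparameter values (lr = 10^-4 then 5×10^-5, Adam, batch size 64, at most 25 epochs, d = 512, H = 2) come from the paper; the function `train_stage`, the `evaluate_fn` hook, and the placeholder model/data-loader objects are illustrative, not the authors' code.

```python
from torch.optim import Adam

# Hyperparameters as reported in the paper's experiment setup.
HIDDEN_DIM = 512      # dimension d of the model's hidden states
NUM_GRAPH_LAYERS = 2  # default number of graph layers H in QGA
BATCH_SIZE = 64       # would be passed to the DataLoader building train_loader
MAX_EPOCHS = 25
LR_STAGE1 = 1e-4      # learning rate for the first training stage
LR_STAGE2 = 5e-5      # smaller learning rate for fine-tuning the best stage-1 model


def train_stage(model, train_loader, evaluate_fn, lr, max_epochs=MAX_EPOCHS):
    """Run one training stage with Adam and keep the best checkpoint (generic sketch)."""
    optimizer = Adam(model.parameters(), lr=lr)
    best_state, best_score = None, float("-inf")
    for _ in range(max_epochs):
        model.train()
        for batch in train_loader:
            optimizer.zero_grad()
            loss = model(batch)  # assumes the forward pass returns the training loss
            loss.backward()
            optimizer.step()
        score = evaluate_fn(model)  # validation metric used for model selection
        if score > best_score:
            best_score = score
            best_state = {k: v.clone() for k, v in model.state_dict().items()}
    return best_state


# Two-stage scheme described in the paper: train with lr = 1e-4, then fine-tune the
# best stage-1 checkpoint with lr = 5e-5. Model, data loader, and evaluation function
# are placeholders to be supplied by the reader:
# best = train_stage(model, train_loader, evaluate_fn, lr=LR_STAGE1)
# model.load_state_dict(best)
# best = train_stage(model, train_loader, evaluate_fn, lr=LR_STAGE2)
```

Keeping the best stage-1 checkpoint before fine-tuning mirrors the paper's description of fine-tuning "the best model obtained in the 1st stage" with a smaller learning rate.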