Hadamard Product for Low-rank Bilinear Pooling

Authors: Jin-Hwa Kim, Kyoung-Woon On, Woosang Lim, Jeonghee Kim, Jung-Woo Ha, Byoung-Tak Zhang

Venue: ICLR 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we conduct six experiments to select the proposed model, Multimodal Low-rank Bilinear Attention Networks (MLB). Each experiment controls all other factors except one to assess its effect on accuracy. (A code sketch of the low-rank bilinear fusion appears after this table.)
Researcher Affiliation | Collaboration | Jin-Hwa Kim: Interdisciplinary Program in Cognitive Science, Seoul National University... Jeonghee Kim & Jung-Woo Ha: NAVER LABS Corp. & NAVER Corp... Byoung-Tak Zhang: School of Computer Science and Engineering & Interdisciplinary Program in Cognitive Science, Seoul National University & Surromind Robotics
Pseudocode | No | The paper does not contain a clearly labeled 'Pseudocode' or 'Algorithm' block. Figure 1 is a schematic diagram, not pseudocode.
Open Source Code | Yes | The source code for the experiments is available in a Github repository: https://github.com/jnhwkim/MulLowBiVQA
Open Datasets | Yes | The VQA dataset (Antol et al., 2015) is used as a primary dataset, and, for data augmentation, question-answering annotations of Visual Genome (Krishna et al., 2016) are used.
Dataset Splits | Yes | Validation is performed on the VQA test-dev split, and model comparison is based on the results of the VQA test-standard split.
Hardware Specification | No | The paper does not explicitly describe the specific hardware used (e.g., GPU models, CPU models, or cloud instance types) for running experiments.
Software Dependencies | No | The paper mentions software components like GRU, Skip-thought Vector, Bayesian Dropout, and RMSProp, but does not provide specific version numbers for these or other software dependencies.
Experiment Setup | Yes | The batch size is 100, and the number of iterations is fixed to 250K. For data-augmented models, a simplified early stopping is used, evaluating checkpoints every 25K iterations from 250K to 350K... RMSProp (Tieleman & Hinton, 2012) is used for optimization. See also Table 4: 'Hyperparameters used in MLB (single model in Table 2).' (A sketch of this schedule appears after this table.)
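
The paper's core operation, low-rank bilinear pooling, replaces a full bilinear weight tensor with two low-rank projections combined by a Hadamard (element-wise) product: f = P^T (tanh(U^T x) ∘ tanh(V^T y)) + b. Below is a minimal PyTorch sketch of that fusion, independent of the authors' released implementation; the class and argument names are illustrative assumptions, and the dimensions merely mirror the setting reported in the paper (2048-d visual feature, 2400-d question vector, 1200-d joint space, ~2000 answer classes).

```python
import torch
import torch.nn as nn

class LowRankBilinearPooling(nn.Module):
    """Sketch of low-rank bilinear pooling via Hadamard product:
    f = P^T (tanh(U^T x) * tanh(V^T y)) + b.
    Names and dimensions are illustrative, not from the paper's code."""

    def __init__(self, x_dim, y_dim, joint_dim, out_dim):
        super().__init__()
        self.U = nn.Linear(x_dim, joint_dim, bias=False)  # projects visual feature x
        self.V = nn.Linear(y_dim, joint_dim, bias=False)  # projects question feature y
        self.P = nn.Linear(joint_dim, out_dim)            # output projection, carries bias b

    def forward(self, x, y):
        # The element-wise product of the two projected, tanh-activated vectors
        # stands in for the full bilinear interaction.
        return self.P(torch.tanh(self.U(x)) * torch.tanh(self.V(y)))

pool = LowRankBilinearPooling(2048, 2400, joint_dim=1200, out_dim=2000)
f = pool(torch.randn(100, 2048), torch.randn(100, 2400))  # batch of 100
print(f.shape)  # torch.Size([100, 2000])
```

Because the interaction happens element-wise in the d-dimensional joint space, the parameter count grows linearly in the input dimensions rather than multiplicatively, which is the point of the low-rank factorization.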
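
The experiment-setup row pins down the optimization schedule: batch size 100, 250K iterations, RMSProp, and checkpoint evaluation every 25K iterations between 250K and 350K for data-augmented models. The following sketch wires those numbers into a generic PyTorch training loop; the stand-in model, loss, data, and learning rate are assumptions, not values from the paper.

```python
import torch
import torch.nn as nn

# Schedule values from the paper: batch size 100, a 250K-iteration budget,
# and, for data-augmented models, checkpoints every 25K iterations from
# 250K to 350K. Everything else below is a placeholder.
BATCH_SIZE = 100
MAX_ITERS = 350_000
CHECKPOINTS = set(range(250_000, MAX_ITERS + 1, 25_000))

model = nn.Linear(1200, 2000)            # stand-in for the full MLB model
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.RMSprop(model.parameters(), lr=3e-4)  # lr is an assumption

for it in range(1, MAX_ITERS + 1):
    x = torch.randn(BATCH_SIZE, 1200)                # placeholder fused features
    target = torch.randint(0, 2000, (BATCH_SIZE,))   # placeholder answer labels
    optimizer.zero_grad()
    criterion(model(x), target).backward()
    optimizer.step()
    if it in CHECKPOINTS:
        # Simplified early stopping: save each 250K..350K checkpoint, then
        # keep the one that validates best (validation code omitted here).
        torch.save(model.state_dict(), f"ckpt_{it}.pt")
```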