i-Code: An Integrative and Composable Multimodal Learning Framework

Authors: Ziyi Yang, Yuwei Fang, Chenguang Zhu, Reid Pryzant, DongDong Chen, Yu Shi, Yichong Xu, Yao Qian, Mei Gao, Yi-Ling Chen, Liyang Lu, Yujia Xie, Robert Gmyr, Noel Codella, Naoyuki Kanda, Bin Xiao, Lu Yuan, Takuya Yoshioka, Michael Zeng, Xuedong Huang

Venue: AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results demonstrate how i-Code can outperform state-of-the-art techniques on five multimodal understanding tasks and single-modality benchmarks, improving by as much as 11% and demonstrating the power of integrative multimodal pretraining.
Researcher Affiliation | Industry | Microsoft Azure Cognitive Services Research, {ziyiyang, yuwfan, chezhu}@microsoft.com
Pseudocode | No | The paper describes the model architecture and training objectives in text and diagrams (Figure 1), but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not explicitly state that the source code for i-Code is available, nor does it provide a link to a repository for its implementation. A footnote points to a GitHub page about BERT's multilingual data, cited for a specific weighting method rather than for the paper's own code.
Open Datasets | Yes | We choose the recently published video dataset YTTemporal-180M (Zellers et al. 2021)... We use 72.8 million image-caption pairs from the pretraining data of the Florence computer vision foundation model (Yuan et al. 2021)... We use internal 75k-hour transcribed English speech data... For visual and speech pair datasets, we leverage Spoken Moments in Time (SMiT), a video-narration dataset (Monfort et al. 2021).
Dataset Splits | Yes | 54k clips are held out as the validation set. ... 25k pairs are held out as the validation set. ... Again, 25k pairs are kept as the validation set. ... The validation split contains 5k examples.
Hardware Specification | Yes | On either paired datasets or videos, we pretrain for 3 epochs on 72 A100 GPUs, with effective batch size 1728.
Software Dependencies | No | The paper mentions software components and models like 'DeBERTa V3 base', 'CoSwin transformer', 'WavLM-large model', 'PeCo', 'wav2vec 2.0', 'HuBERT', and 'AdamW', but does not provide specific version numbers for any of them.
Experiment Setup | Yes | On either paired datasets or videos, we pretrain for 3 epochs on 72 A100 GPUs, with effective batch size 1728. The learning rate is 2×10⁻⁵ for the fusion module and 10⁻⁵ for modality encoders with 20000 warm-up steps, and the optimizer is AdamW.
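
To make the reported optimizer setup concrete, below is a minimal PyTorch-style sketch of AdamW with two parameter groups (2×10⁻⁵ for the fusion module, 10⁻⁵ for the modality encoders) and a 20000-step linear warm-up. The module names, layer sizes, and the post-warm-up schedule are illustrative assumptions; the paper does not release code, so this is not the authors' implementation.

```python
# Minimal sketch of the reported optimizer configuration (assumptions noted below).
# `fusion_module` and `modality_encoders` are illustrative placeholders,
# not identifiers from the (unreleased) i-Code implementation.
import torch
from torch.optim import AdamW
from torch.optim.lr_scheduler import LambdaLR

fusion_module = torch.nn.Linear(768, 768)       # stand-in for the fusion network
modality_encoders = torch.nn.Linear(768, 768)   # stand-in for the modality encoders

# Two parameter groups with the learning rates quoted above.
optimizer = AdamW([
    {"params": fusion_module.parameters(), "lr": 2e-5},
    {"params": modality_encoders.parameters(), "lr": 1e-5},
])

WARMUP_STEPS = 20_000

def warmup(step: int) -> float:
    # Linear warm-up to the base learning rate; the schedule after warm-up
    # is not specified in the paper, so it is held constant here (assumption).
    return min(1.0, (step + 1) / WARMUP_STEPS)

scheduler = LambdaLR(optimizer, lr_lambda=warmup)

# Inside the training loop, after loss.backward():
#     optimizer.step()
#     optimizer.zero_grad()
#     scheduler.step()
```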