SimMMDG: A Simple and Effective Framework for Multi-modal Domain Generalization
Authors: Hao Dong, Ismail Nejjar, Han Sun, Eleni Chatzi, Olga Fink
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate that our framework is theoretically well-supported and achieves strong performance in multi-modal DG on the EPIC-Kitchens dataset and the novel Human-Animal-Cartoon (HAC) dataset introduced in this paper. |
| Researcher Affiliation | Academia | ETH Zürich; EPFL |
| Pseudocode | No | The paper describes its methodology in natural language and diagrams, but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our source code and HAC dataset are available at https://github.com/donghao51/SimMMDG. |
| Open Datasets | Yes | We use the EPIC-Kitchens dataset [16] and introduce a novel HAC dataset in this paper, which will be made publicly accessible for further research. ... Our source code and HAC dataset are available at https://github.com/donghao51/SimMMDG. |
| Dataset Splits | Yes | We train the network for 15 epochs on an RTX 2080 Ti GPU which takes about 20 hours and select the model with the best performance on the validation dataset. |
| Hardware Specification | Yes | Finally, we train the network for 15 epochs on an RTX 2080 Ti GPU which takes about 20 hours and select the model with the best performance on the validation dataset. |
| Software Dependencies | No | The paper mentions using 'MMAction2 toolkit' and 'Adam optimizer' but does not provide specific version numbers for software dependencies like PyTorch, TensorFlow, CUDA, or the MMAction2 toolkit itself. |
| Experiment Setup | Yes | We use the Adam optimizer [39] with a learning rate of 0.0001 and a batch size of 16. The scalar temperature parameter τ is set to 0.1. Additionally, we set αcon = 3.0, αdis = 0.7, and αtrans = 0.1. ... Finally, we train the network for 15 epochs... |
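The hyperparameters quoted above can be collected into a minimal configuration sketch. The values (Adam, lr 0.0001, batch size 16, 15 epochs, τ = 0.1, α_con = 3.0, α_dis = 0.7, α_trans = 0.1) come from the paper's reported setup; the additive combination of loss terms shown in `total_loss` is an illustrative assumption, not the paper's verbatim objective.

```python
# Hyperparameters reported in the paper's experiment setup.
CONFIG = {
    "optimizer": "Adam",
    "learning_rate": 1e-4,
    "batch_size": 16,
    "epochs": 15,
    "tau": 0.1,         # temperature of the contrastive loss
    "alpha_con": 3.0,   # weight of the contrastive term
    "alpha_dis": 0.7,   # weight of the distance/disentanglement term
    "alpha_trans": 0.1, # weight of the cross-modal translation term
}

def total_loss(l_cls, l_con, l_dis, l_trans, cfg=CONFIG):
    """Weighted sum of the loss terms.

    The weighted-sum form is an assumption for illustration; only the
    alpha weights themselves are taken from the paper.
    """
    return (l_cls
            + cfg["alpha_con"] * l_con
            + cfg["alpha_dis"] * l_dis
            + cfg["alpha_trans"] * l_trans)

# Example: combine dummy per-term loss values.
print(total_loss(1.0, 0.5, 0.2, 0.1))  # ≈ 2.65
```

Keeping the weights in a single config dictionary makes it straightforward to verify a reimplementation against the reported values.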