Cooperative Learning of Audio and Video Models from Self-Supervised Synchronization

Authors: Bruno Korbar, Du Tran, Lorenzo Torresani

NeurIPS 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In our experiments we study several such applications, including pretraining for action recognition in video, feature extraction for audio classification, as well as multisensory (visual and audio) video categorization. Specifically, we demonstrate that, without further finetuning, the features computed from the last convolutional layer of the audio stream yield performance on par with or better than the state-of-the-art on established audio classification benchmarks (DCASE2014 and ESC-50).
Researcher Affiliation | Collaboration | Bruno Korbar, Dartmouth College, bruno.18@dartmouth.edu; Du Tran, Facebook Research, trandu@fb.com; Lorenzo Torresani, Dartmouth College, LT@dartmouth.edu
Pseudocode | No | The paper describes the architecture with figures but does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository.
Open Datasets | Yes | We experimented with training our model on several datasets: Kinetics [12], SoundNet [20], and AudioSet [28]. For this purpose, after AVTS training with contrastive loss on Kinetics, we fine-tune our video subnetwork on two medium-size action recognition benchmarks: UCF101 [25] and HMDB51 [24].
Dataset Splits | No | The paper mentions '3 train/test splits' for UCF101 and HMDB51 but does not provide explicit details about a separate validation split (e.g., specific percentages or counts).
Hardware Specification | No | The paper mentions 'a four-GPU machine' but does not specify the exact GPU models, CPU, or other detailed hardware specifications.
Software Dependencies | No | The paper describes the use of various architectural elements and processing steps (e.g., 'MCx network', 'I3D-RGB', 'FFT filterbank'), but does not provide specific version numbers for any software dependencies or libraries.
Experiment Setup | Yes | Hyper-parameter η in Eq. 1 is set to 0.99. We train the complete AVTS network end-to-end using stochastic gradient descent with initial learning rate determined via grid search. Training is done on a four-GPU machine with a mini-batch of 16 examples per GPU. The learning rate is scaled by 0.1 each time the loss value fails to decrease for more than 5 epochs. FFT filterbank parameters are set as follows: window length to 0.02, window step to 0.01, FFT size to 1024, and number of filters to 40.
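
The experiment-setup row maps onto standard training and audio-preprocessing components. The sketch below is a minimal, hedged illustration of that configuration, not the authors' code: it assumes PyTorch for the optimizer and learning-rate schedule, the python_speech_features package for the log filterbank (the excerpt names no library), 16 kHz audio, and a placeholder initial learning rate standing in for the grid-searched value. The `avts_model` referenced in the usage note is hypothetical.

```python
# Hedged reconstruction of the reported training setup (assumptions noted inline).
import numpy as np
import torch
from python_speech_features import logfbank

ETA = 0.99           # hyper-parameter eta of Eq. 1, as reported (role defined in the paper)
GPUS = 4             # "a four-GPU machine"
BATCH_PER_GPU = 16   # effective batch size = 4 * 16 = 64

def audio_features(waveform: np.ndarray, sample_rate: int = 16000) -> np.ndarray:
    """Log filterbank energies with the quoted FFT parameters."""
    return logfbank(
        waveform,
        samplerate=sample_rate,  # 16 kHz is an assumption, not stated in the excerpt
        winlen=0.02,             # window length (seconds, assumed unit)
        winstep=0.01,            # window step (seconds, assumed unit)
        nfft=1024,               # FFT size
        nfilt=40,                # number of filters
    )

def make_optimizer(model: torch.nn.Module, initial_lr: float):
    # The paper determines the initial learning rate via grid search; momentum and
    # weight decay are not reported, so plain SGD is used here.
    optimizer = torch.optim.SGD(model.parameters(), lr=initial_lr)
    # "Scaled by 0.1 each time the loss value fails to decrease for more than 5 epochs"
    # corresponds to a plateau-based schedule with factor=0.1 and patience=5.
    scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
        optimizer, mode="min", factor=0.1, patience=5
    )
    return optimizer, scheduler

# Usage (hypothetical model): after each training epoch, call
#   optimizer, scheduler = make_optimizer(avts_model, initial_lr=0.01)
#   ...train one epoch, compute epoch_loss...
#   scheduler.step(epoch_loss)
```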