COBE: Contextualized Object Embeddings from Narrated Instructional Video
Authors: Gedas Bertasius, Lorenzo Torresani
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper reports quantitative experiments in Section 4, "Experimental Results". |
| Researcher Affiliation | Collaboration | Gedas Bertasius (Facebook AI), Lorenzo Torresani (Facebook AI, Dartmouth College) |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not state that source code for its method will be released, and it provides no link to a code repository. |
| Open Datasets | Yes | We train COBE on the instructional videos of the HowTo100M dataset [14], and then test it on the evaluation sets of HowTo100M, EPIC-Kitchens [16], and YouCook2 [17] datasets. |
| Dataset Splits | No | The paper states that it trains on 'HowTo100M_BB' and evaluates on 'HowTo100M_BB_test', 'EPIC-Kitchens', and 'YouCook2_BB'. However, it does not give the percentages or procedure used to create train/validation/test splits from a single dataset, nor does it clearly define a separate validation set for hyperparameter tuning. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions using models and frameworks like BERT, Faster R-CNN, ResNeXt-101, and FPN, but does not specify any software dependencies with version numbers (e.g., Python 3.x, PyTorch x.x, CUDA x.x). |
| Experiment Setup | No | The paper describes the model architecture and training components, including the loss function, but does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs), optimizer settings, or explicit configuration steps. |
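As the "Experiment Setup" row notes, the paper describes its training objective (matching detector object embeddings to contextualized word embeddings from a language model such as BERT) but omits concrete hyperparameters. For readers attempting a reproduction, the following is a minimal sketch of such an objective, assuming an NCE-style contrastive loss; the function name `cobe_nce_loss`, the `temperature` value, the tensor shapes, and the negative-sampling scheme are illustrative assumptions, not values taken from the paper or an official release.

```python
# Hypothetical sketch (not the authors' code): an NCE-style loss that pulls each
# detected object's embedding toward the contextualized embedding of its narrated
# object word, and pushes it away from word embeddings sampled from other narrations.
import torch
import torch.nn.functional as F


def cobe_nce_loss(visual_emb, text_emb, negative_text_emb, temperature=0.07):
    """Contrastive objective over contextualized object embeddings.

    visual_emb:        (B, D) embeddings predicted by the detector head
    text_emb:          (B, D) matching contextual word embeddings (e.g., from BERT)
    negative_text_emb: (B, K, D) K negative word embeddings per sample
    temperature:       illustrative scaling constant, not from the paper
    """
    visual_emb = F.normalize(visual_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    negative_text_emb = F.normalize(negative_text_emb, dim=-1)

    # Similarity to the positive (narrated) word: (B, 1)
    pos = (visual_emb * text_emb).sum(dim=-1, keepdim=True)
    # Similarities to the K negatives: (B, K)
    neg = torch.einsum("bd,bkd->bk", visual_emb, negative_text_emb)

    # Cross-entropy with the positive always in slot 0 of the logits.
    logits = torch.cat([pos, neg], dim=1) / temperature
    targets = torch.zeros(logits.size(0), dtype=torch.long, device=logits.device)
    return F.cross_entropy(logits, targets)
```

Because the paper does not report learning rate, batch size, number of epochs, or optimizer settings, any such values in a reproduction would have to be chosen independently and tuned on a held-out set.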