Learning to Detect Concepts from Webly-Labeled Video Data
Authors: Junwei Liang, Lu Jiang, Deyu Meng, Alexander Hauptmann
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The efficacy and the scalability of WELL have been extensively demonstrated on two public benchmarks, including the largest multimedia dataset and the largest manually-labeled video set. Experimental results show that WELL significantly outperforms the state-of-the-art methods. |
| Researcher Affiliation | Academia | Junwei Liang, Lu Jiang, and Alexander Hauptmann: School of Computer Science, Carnegie Mellon University, PA, USA. Deyu Meng: School of Mathematics and Statistics, Xi'an Jiaotong University, P. R. China. |
| Pseudocode | Yes | Algorithm 1: Webly-labeled Learning (WELL). |
| Open Source Code | No | The paper does not include any statement about releasing source code for the described methodology, nor does it provide a link to a code repository. |
| Open Datasets | Yes | The efficacy and the scalability of WELL have been extensively demonstrated on two public benchmarks, including by far the largest manually-labeled video set called FCVID [Jiang et al., 2015d] and the largest multimedia dataset called YFCC100M [Thomee et al., 2015]. |
| Dataset Splits | Yes | The exact stopping iteration for each detector is automatically tuned in terms of its performance on a small validation set. On FCVID, the set is a small training subset with manual labels, whereas on YFCC100M it is a proportion of the noisy training set. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments (e.g., GPU models, CPU types, memory). |
| Software Dependencies | No | The paper mentions using Convolutional Neural Network (CNN) features and Lucene for indexing, but it does not specify any software names with version numbers required to replicate the experiments. |
| Experiment Setup | Yes | The concept detectors are trained based on a hinge loss cost function. Algorithm 1 is used to train the concept models iteratively and the λ stops increasing after 100 iterations. ... The hyper-parameters of all methods including the baseline methods are tuned on the same validation set. |
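The Experiment Setup row describes an iterative scheme: detectors are trained with a hinge loss, and an age parameter λ grows over the iterations (fixed after 100) so that progressively more of the webly-labeled samples are admitted. The following is a minimal self-paced-style sketch of that loop. The gradient-step learner, the binary sample-weighting rule (loss below λ), and all hyper-parameter values are assumptions for illustration, not the paper's exact Algorithm 1 or configuration.

```python
import numpy as np

def train_self_paced(X, y, n_iters=100, lam=0.1, lam_growth=1.1, lr=0.01):
    """Illustrative self-paced training loop with a hinge loss.

    Each iteration keeps only the "easy" samples whose current hinge
    loss falls below the age parameter `lam`, updates a linear model on
    them, and then grows `lam` so harder samples enter the curriculum.
    All specifics here are assumed for the sketch.
    """
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iters):
        margins = y * (X @ w)
        losses = np.maximum(0.0, 1.0 - margins)   # per-sample hinge loss
        v = (losses < lam).astype(float)          # binary self-paced weights
        if v.sum() == 0:                          # bootstrap: nothing easy yet
            v = np.ones(n)
        # subgradient of the weighted hinge loss w.r.t. w
        active = (margins < 1.0).astype(float) * v
        grad = -(X * (active * y)[:, None]).sum(axis=0) / n
        w -= lr * grad
        lam *= lam_growth                         # λ grows; fixed after the loop
    return w
```

On linearly separable data this converges to a reasonable separator; in the paper's setting the same kind of loop is run per concept, with the stopping iteration chosen on the validation set described above.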