Semantic Concept Discovery for Large-Scale Zero-Shot Event Detection
Authors: Xiaojun Chang, Yi Yang, Alexander Hauptmann, Eric P. Xing, Yao-Liang Yu
IJCAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on recent TRECVID datasets verify the superiority of the proposed approach. |
| Researcher Affiliation | Academia | Xiaojun Chang (1,2), Yi Yang (1), Alexander G. Hauptmann (2), Eric P. Xing (3) and Yao-Liang Yu (3). (1) Centre for Quantum Computation and Intelligent Systems, University of Technology Sydney; (2) Language Technologies Institute, Carnegie Mellon University; (3) Machine Learning Department, Carnegie Mellon University. {cxj273, yee.i.yang}@gmail.com, {alex, epxing, yaoliang}@cs.cmu.edu |
| Pseudocode | Yes | Algorithm 1: The GCG algorithm for the rank aggregation problem (4) |
| Open Source Code | No | The paper does not provide explicit statements or links indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We pre-trained 1534 concept classifiers, using the TRECVID SIN dataset (346 classes), Google Sports (478 classes) [Karpathy et al., 2014], the UCF101 dataset (101 classes) [Soomro et al., 2012] and the YFCC dataset (609 classes) [YFC, 2014]. None of these datasets contains event label information. |
| Dataset Splits | No | The paper mentions: 'If validation data is available (such is the case for TRECVID datasets), we can evaluate the above concerns about the concepts by computing their average precision on the validation data.' However, it does not specify the exact validation splits or describe how they were applied consistently across experiments, which limits reproducibility. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper mentions the use of a 'skip-gram model,' 'Fisher vector representation,' and 'cascade SVM' but does not specify any software libraries or their version numbers (e.g., Python, TensorFlow, PyTorch, scikit-learn versions). |
| Experiment Setup | No | The paper describes high-level methodological steps, such as dimension reduction and Fisher vector generation ('reduce the dimension of each descriptor by a factor of 2 and then use 256 components to generate the Fisher vectors') and training a 'cascade SVM for each concept classifier,' but it does not provide the specific experimental settings needed for reproducibility, such as hyperparameter values (e.g., learning rates, batch sizes, number of epochs) or optimizer configurations. |