Hypotheses Tree Building for One-Shot Temporal Sentence Localization

Authors: Daizong Liu, Xiang Fang, Pan Zhou, Xing Di, Weining Lu, Yu Cheng

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on two challenging datasets demonstrate that MHST achieves competitive performance compared to existing methods.
Researcher Affiliation | Collaboration | (1) School of Cyber Science and Engineering, Huazhong University of Science and Technology; (2) Wangxuan Institute of Computer Technology, Peking University; (3) Nanyang Technological University; (4) Protago Labs Inc; (5) Tsinghua University; (6) Microsoft Research
Pseudocode | No | The paper does not contain any sections or figures explicitly labeled as 'Pseudocode' or 'Algorithm'.
Open Source Code | No | The paper does not contain an explicit statement or link indicating the availability of its source code.
Open Datasets | Yes | ActivityNet Captions. This dataset is built from the ActivityNet v1.3 dataset (Caba Heilbron et al. 2015)... Charades-STA. This dataset is built from the Charades (Sigurdsson et al. 2016) dataset and transformed into a temporal sentence localization task by (Gao et al. 2017).
Dataset Splits | Yes | We follow the public split of the dataset, which contains a training set and two validation sets, val 1 and val 2. Following common settings, we use val 1 as our validation set and val 2 as our testing set.
Hardware Specification | No | The paper does not provide specific hardware details, such as GPU or CPU models or memory specifications, used for running the experiments.
Software Dependencies | No | The paper mentions using 'PyTorch' and specific pre-trained models (C3D, GloVe) but does not provide version numbers for these or other software dependencies.
Experiment Setup | Yes | For the hyper-parameters, we set the percentage α to 60% and the pruning threshold τ to 0.7. The balanced weights λ1, λ2 are both set to 1.0. The step L in L-scan pruning is set to 3. During training, the learning rate is 0.00005 by default and decays by a factor of 10 every 35 epochs. The batch size is 1 and the maximum number of training epochs is 100.
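The reported experiment setup maps naturally onto a small training configuration. Below is a minimal sketch, assuming a PyTorch setup as mentioned in the report; the class name `MHSTConfig`, the helper `build_optimizer`, and the choice of the Adam optimizer are hypothetical illustrations, since the paper releases no code and does not name its optimizer. Only the numeric values come from the reported setup.

```python
# Hypothetical sketch of the reported hyper-parameters; not the authors' code.
from dataclasses import dataclass

import torch


@dataclass
class MHSTConfig:
    alpha: float = 0.60          # percentage alpha, reported as 60%
    tau: float = 0.7             # pruning threshold
    lambda1: float = 1.0         # balanced loss weight
    lambda2: float = 1.0         # balanced loss weight
    l_scan_step: int = 3         # step L in L-scan pruning
    lr: float = 5e-5             # default learning rate (0.00005)
    lr_decay_factor: float = 0.1 # decays by a factor of 10
    lr_decay_every: int = 35     # every 35 epochs
    batch_size: int = 1
    max_epochs: int = 100


def build_optimizer(model: torch.nn.Module, cfg: MHSTConfig):
    """Optimizer plus step LR schedule matching the reported decay.

    Adam is an assumption here; the paper only reports the learning rate
    and its decay schedule, not the optimizer itself.
    """
    optimizer = torch.optim.Adam(model.parameters(), lr=cfg.lr)
    scheduler = torch.optim.lr_scheduler.StepLR(
        optimizer, step_size=cfg.lr_decay_every, gamma=cfg.lr_decay_factor
    )
    return optimizer, scheduler


if __name__ == "__main__":
    cfg = MHSTConfig()
    dummy_model = torch.nn.Linear(8, 8)  # stand-in for the MHST model
    optimizer, scheduler = build_optimizer(dummy_model, cfg)
    for epoch in range(cfg.max_epochs):
        # ... one training epoch over batches of size cfg.batch_size ...
        scheduler.step()
```

A usage note: with `batch_size = 1`, the schedule above simply lowers the learning rate tenfold at epochs 35 and 70 within the 100-epoch budget, which is all the reported setup constrains.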