Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Reverse Distribution Based Video Moment Retrieval for Effective Bias Elimination

Authors: Lingdu Kong, Xiaochun Yang, Tieying Li, Bin Wang, Xiangmin Zhou

AAAI 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experiments on bias ratio demonstrate that our ReDis method achieves state-of-the-art performance in bias elimination, while the results on moment retrieval confirm the effectiveness of our DEA framework across three evaluation methods, two datasets, and three baselines.
Researcher Affiliation | Academia | 1 Northeastern University, China; 2 National Frontiers Science Center for Industrial Intelligence and Systems Optimization, China; 3 Key Laboratory of Data Analytics and Optimization for Smart Industry (Northeastern University), Ministry of Education, China; 4 RMIT University, Australia. EMAIL, EMAIL, EMAIL, EMAIL
Pseudocode | No | The paper describes methods using text and mathematical formulas but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks, nor does it present structured steps in a code-like format.
Open Source Code | Yes | Code: https://github.com/NoobKLD/ReDis-VMR
Open Datasets | Yes | Datasets: Charades-STA (Gao et al. 2017) comprises daily indoor activity videos sourced from the Charades dataset, totaling 6,672 videos with 16,128 annotations and 11,767 moments. ActivityNet Captions (Krishna et al. 2017) contains 20,000 videos spanning various fields, with an average of 3.65 temporally localized sentences per video.
Dataset Splits | No | The paper discusses various evaluation settings (Traditional VMR, Resplitting VMR, ReDis-VMR) and how datasets are re-partitioned (e.g., 'the resplitting method modifies the training and test data based on the distribution of the original training set and test set, transforming them into datasets with out-of-distribution (OOD) distributions'). However, it does not provide the specific train/validation/test splits (e.g., percentages, sample counts, or explicit standard splits) needed to reproduce the data partitioning.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running its experiments.
Software Dependencies | No | The paper mentions the use of various methods and models but does not specify any software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x, CUDA x.x) needed to replicate the experiments.
Experiment Setup | Yes | Effectiveness of Fair Loss: Figure 5 illustrates the variation of mIoU with respect to λ. When λ is between 0.001 and 1, the resulting mIoU is better than when λ is 0, indicating that the fair loss effectively reduces the impact of the concentrated distribution and yields good evaluation metrics in the ReDis-VMR setting. The best metrics are obtained when λ is 1.
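The experiment-setup row above describes λ as a weight on a fair loss, with λ = 0 disabling it. A minimal sketch of that kind of composite objective, assuming the fair loss is simply added to the base retrieval loss as L = L_task + λ·L_fair (the function and loss values below are illustrative placeholders, not the authors' implementation):

```python
# Hypothetical sketch of a lambda-weighted composite loss, assuming the form
# L = L_task + lam * L_fair. Names and numbers are illustrative only.

def combined_loss(task_loss: float, fair_loss: float, lam: float) -> float:
    """Return the total loss; lam = 0 removes the fairness term entirely."""
    return task_loss + lam * fair_loss

if __name__ == "__main__":
    # Sweep lam over the range the report mentions (0 vs. 0.001..1).
    for lam in (0.0, 0.001, 0.01, 0.1, 1.0):
        print(f"lam={lam}: total={combined_loss(0.8, 0.3, lam):.4f}")
```

This makes the λ = 0 baseline in the description concrete: the fairness term contributes nothing, so any mIoU gain for λ in [0.001, 1] is attributable to the fair loss.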