Budgeted Sequence Submodular Maximization

Authors: Xuefeng Chen, Liang Feng, Xin Cao, Yifeng Zeng, Yaqing Hou

IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results on both synthetic and real-world datasets demonstrate the performance of our new algorithms.
Researcher Affiliation | Academia | (1) College of Computer Science, Chongqing University, China; (2) School of Computer Science and Engineering, University of New South Wales, Australia; (3) Department of Computer and Information Sciences, Northumbria University, UK; (4) College of Computer Science and Technology, Dalian University of Technology, China
Pseudocode | Yes | As shown in Alg. 1, GBM starts by initializing a candidate edge set E_ca and an edge set E_se for storing the selected edges (line 1). ... The procedure of POBM is presented in Alg. 2. (An illustrative greedy-selection sketch based on this description appears below the table.)
Open Source Code | No | The paper does not explicitly state that source code for the methodology is openly available or provide a link to a code repository.
Open Datasets | Yes | We use two real-world datasets, one is the MovieLens 1M (MOV) dataset [Harper and Konstan, 2015] and the other one is the XuetangX (XTX) dataset [Feng et al., 2019].
Dataset Splits | No | The paper does not explicitly provide specific training/test/validation dataset splits or describe a cross-validation setup.
Hardware Specification | Yes | We implement all the algorithms in C++ on Windows 10, and run on a desktop with an Intel(R) i7-10700 2.9 GHz CPU and 32 GB memory.
Software Dependencies | No | The paper mentions implementation in C++ but does not specify any particular software libraries, frameworks, or solvers with version numbers.
Experiment Setup | Yes | We set n = 50, B = 10 by default. ... We set the number T of iterations of POBM as 2c_min·e·n^2, as suggested by Theorem 3. ... According to this result, we set T = 10n^2 for POBM by default. ... we set a time limit Tim_Lim = 30s for them to compare their solution quality with that of GBM and OMEGA. (A Pareto-optimization skeleton wired to these default settings appears in the second sketch below the table.)
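The Pseudocode row only quotes the prose around Alg. 1, so the following is a minimal C++ sketch of a budget-constrained greedy edge-selection loop in the spirit of that description. The Edge type, the utility() placeholder, the gain-per-cost selection rule, and all function names are assumptions made for illustration; they are not taken from the paper, and the real objective would be the paper's sequence submodular function.

```cpp
// Hypothetical sketch of a budgeted greedy edge-selection loop suggested by the
// quoted description of Alg. 1 (GBM).  Edge, utility(), and the gain-per-cost
// rule are illustrative assumptions, not the paper's actual definitions.
#include <cstddef>
#include <iostream>
#include <limits>
#include <vector>

struct Edge {
    int u, v;       // endpoints of the edge in the sequence graph
    double cost;    // cost charged against the budget B
};

// Placeholder monotone utility over the selected edges; stands in for the
// paper's sequence submodular function so the sketch compiles and runs.
double utility(const std::vector<Edge>& selected) {
    double val = 0.0;
    for (const Edge& e : selected) val += 1.0 / (1.0 + e.cost);
    return val;
}

// Greedy loop: repeatedly move the feasible edge with the best marginal gain
// per unit cost from the candidate set E_ca to the selected set E_se.
std::vector<Edge> greedyBudgetedSelection(std::vector<Edge> E_ca, double B) {
    std::vector<Edge> E_se;   // mirrors line 1 of Alg. 1: initialize E_ca and E_se
    double spent = 0.0;
    while (!E_ca.empty()) {
        std::size_t best = E_ca.size();
        double bestRatio = -std::numeric_limits<double>::infinity();
        const double baseVal = utility(E_se);
        for (std::size_t i = 0; i < E_ca.size(); ++i) {
            if (spent + E_ca[i].cost > B) continue;   // would exceed the budget
            std::vector<Edge> trial = E_se;
            trial.push_back(E_ca[i]);
            double ratio = (utility(trial) - baseVal) / E_ca[i].cost;
            if (ratio > bestRatio) { bestRatio = ratio; best = i; }
        }
        if (best == E_ca.size()) break;               // no feasible edge left
        spent += E_ca[best].cost;
        E_se.push_back(E_ca[best]);
        E_ca.erase(E_ca.begin() + static_cast<std::ptrdiff_t>(best));
    }
    return E_se;
}

int main() {
    std::vector<Edge> candidates = {{0, 1, 2.0}, {1, 2, 3.5}, {2, 3, 1.0}, {0, 3, 4.0}};
    std::vector<Edge> chosen = greedyBudgetedSelection(candidates, 10.0 /* B */);
    std::cout << "selected " << chosen.size() << " edges\n";
    return 0;
}
```

The cost-scaled greedy rule is used here only because it is a common choice for knapsack-type constraints; the paper's actual selection criterion and edge construction may differ.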
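The Experiment Setup row parameterises an iterative procedure (POBM) by an iteration budget T. The sketch below is a generic Pareto-optimization (POSS/POMC-style) skeleton wired to the quoted defaults n = 50, B = 10, and T = 10n^2 = 25,000; it is an assumption about the kind of loop those settings drive, not a reconstruction of the paper's Alg. 2. The indicator-vector encoding, bit-flip mutation, feasibility filter, synthetic costs, and placeholder objective are all illustrative choices.

```cpp
// Generic Pareto-optimization skeleton (POSS/POMC style) using the quoted
// default settings.  NOT the paper's Alg. 2: every modelling choice below
// (encoding, mutation, dominance rule, placeholder objective) is an assumption.
#include <cmath>
#include <cstdio>
#include <random>
#include <vector>

struct Solution {
    std::vector<int> bits;   // indicator vector over the n candidate edges
    double value = 0.0;      // placeholder utility of the selected edges
    double cost = 0.0;       // total cost of the selected edges
};

int main() {
    const int n = 50;              // default problem size quoted in the paper
    const double B = 10.0;         // default budget quoted in the paper
    const long T = 10L * n * n;    // default iteration budget: 10 n^2 = 25,000

    std::mt19937 rng(42);
    std::uniform_real_distribution<double> unif(0.0, 1.0);
    std::vector<double> costs(n);
    for (double& c : costs) c = 0.5 + unif(rng);   // synthetic edge costs

    // Placeholder objective: sqrt of the number of selected edges (monotone);
    // the real objective is the paper's sequence submodular function.
    auto evaluate = [&](Solution& s) {
        int k = 0; s.cost = 0.0;
        for (int i = 0; i < n; ++i) if (s.bits[i]) { ++k; s.cost += costs[i]; }
        s.value = std::sqrt(static_cast<double>(k));
    };

    std::vector<Solution> archive;   // Pareto archive over (value, cost)
    Solution empty; empty.bits.assign(n, 0);
    evaluate(empty);
    archive.push_back(empty);

    for (long t = 0; t < T; ++t) {
        // Pick an archived solution uniformly and flip each bit with prob. 1/n.
        Solution child = archive[rng() % archive.size()];
        for (int i = 0; i < n; ++i) if (unif(rng) < 1.0 / n) child.bits[i] ^= 1;
        evaluate(child);
        if (child.cost > B) continue;          // discard infeasible offspring

        // Keep the child unless some archived solution weakly dominates it,
        // then drop archived solutions the child weakly dominates.
        bool dominated = false;
        for (const Solution& s : archive)
            if (s.value >= child.value && s.cost <= child.cost) { dominated = true; break; }
        if (dominated) continue;
        std::vector<Solution> next;
        for (const Solution& s : archive)
            if (!(child.value >= s.value && child.cost <= s.cost)) next.push_back(s);
        next.push_back(child);
        archive.swap(next);
    }

    double best = 0.0;
    for (const Solution& s : archive) if (s.value > best) best = s.value;
    std::printf("T = %ld iterations, archive size = %zu, best value = %.3f\n",
                T, archive.size(), best);
    return 0;
}
```

The quoted theoretical setting 2c_min·e·n^2 is read here as involving the minimum edge cost c_min and Euler's number e, but that reading is an assumption about the extracted text, so the sketch simply uses the paper's stated default T = 10n^2.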