Learning to Optimize Combinatorial Functions

Authors: Nir Rosenfeld, Eric Balkanski, Amir Globerson, Yaron Singer

ICML 2018

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "In this section we evaluate the performance of our method on the task of optimally choosing trending items in social media platforms." and "Results: Figures 2(a) and 2(b) compare the value (number of adopters) for the chosen output of each method. As can be seen, DOPS clearly outperforms other methods by a margin." |
| Researcher Affiliation | Academia | Nir Rosenfeld¹, Eric Balkanski¹, Amir Globerson², Yaron Singer¹ (¹Harvard University, ²Tel Aviv University). |
| Pseudocode | Yes | Algorithm 1: DOPS(S = {(S_i, z_i)}_{i=1}^{M}, m, α) |
| Open Source Code | No | The paper provides no explicit statement of, or link to, open-source code for the described methodology. |
| Open Datasets | Yes | "We evaluate the performance of our method on a benchmark dataset of propagating Twitter hashtags (Weng et al., 2013)." |
| Dataset Splits | No | "All pairs (S_ω, z_ω) were randomly partitioned into a train set S and a global test set T" using a 90:10 split, but the paper does not mention a separate validation set or its split. |
| Hardware Specification | No | The paper does not report any hardware used for its experiments (e.g., GPU/CPU models or memory). |
| Software Dependencies | No | The paper mentions optimizing with "standard convex solvers" and "the cutting-plane method of Joachims et al. (2009)" but gives no version numbers for any software, libraries, or programming languages. |
| Experiment Setup | No | The paper states that "Hyper-parameters were tuned using cross validation for all relevant methods" but does not report specific hyperparameter values (e.g., learning rate, batch size, number of epochs) or system-level training settings. |
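The 90:10 random partition of (S_ω, z_ω) pairs reported under Dataset Splits can be sketched as follows; this is a minimal illustration, and the function name, random seed, and dummy data are assumptions for the example, not details from the paper:

```python
import random

def split_pairs(pairs, train_frac=0.9, seed=0):
    """Randomly partition (S, z) pairs into a train set and a held-out
    test set, mirroring the paper's reported 90:10 split.
    NOTE: the seed and function name are illustrative, not from the paper."""
    rng = random.Random(seed)
    shuffled = pairs[:]          # copy so the caller's list is untouched
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# Dummy data standing in for the (S_omega, z_omega) pairs.
pairs = [(f"S_{i}", i) for i in range(100)]
train, test = split_pairs(pairs)
print(len(train), len(test))  # 90 10
```

As the review notes, the paper describes only this train/test split; carving a validation set out of the train portion for hyper-parameter tuning would be an additional, unreported step.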