Stochastic Submodular Maximization with Performance-Dependent Item Costs

Authors: Takuro Fukunaga, Takuya Konishi, Sumio Fujita, Ken-ichi Kawarabayashi (pp. 1485-1494)

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The paper reports numerical experiments: "We evaluate the empirical performances of our algorithms through numerical experiments. The experimental results indicate that our algorithms perform better than baseline algorithms in many settings." Section 6 reports on the experiments.
Researcher Affiliation | Collaboration | Takuro Fukunaga (1): RIKEN Advanced Intelligence Project and JST, PRESTO, takuro.fukunaga@riken.jp; Takuya Konishi (2) and Ken-ichi Kawarabayashi (2): National Institute of Informatics, {takuya-ko, k keniti}@nii.ac.jp; Sumio Fujita (3): Yahoo Japan Corporation, sufujita@yahoo-corp.jp
Pseudocode | Yes | Algorithm 1 (pre-processing and adaptive policy) and Algorithm 2 (greedy algorithm) are given in pseudocode.
Open Source Code | No | The paper states "To implement the logistic regression, we used scikit-learn (http://scikit-learn.org)", which refers to a third-party library, not code released for their own methodology.
Open Datasets | Yes | The experiments use the WDBC dataset (569 instances, 32 features) from the UCI machine learning repository (http://archive.ics.uci.edu/ml).
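As a rough illustration (not the authors' code), the WDBC dataset cited above is also bundled with scikit-learn, the library the paper names; note that scikit-learn's copy exposes the 30 numeric features, since the UCI file's 32 columns include an ID and the diagnosis label:

```python
from sklearn.datasets import load_breast_cancer

# UCI WDBC (Wisconsin Diagnostic Breast Cancer): 569 instances.
data = load_breast_cancer()
print(data.data.shape)    # (569, 30) -- 30 numeric features
print(data.target.shape)  # (569,)   -- binary diagnosis labels
```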
Dataset Splits | No | The paper mentions using half of the dataset for the pooled data and the other half for the test data, with 5-fold cross-validation for parameter selection, but does not specify a train/validation split for the overall model evaluation.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU model, CPU type, memory) used for running the experiments.
Software Dependencies | No | The paper mentions using "scikit-learn (http://scikit-learn.org)" but does not specify its version number. No other software with version numbers is mentioned.
Experiment Setup | Yes | The step size δ was set to (2|I|)^{-1} in the implementations. The stopping time b was set to 1. C and γ were set to 100 and 0.01, respectively. For all logistic regression training, the regularization parameter ρ was selected from {0.1, 0.5, 1.0, 2.0, 10.0} by 5-fold cross-validation.
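A minimal sketch of the reported parameter-selection step, assuming the paper's ρ maps onto scikit-learn's `C` regularization knob (the exact correspondence is not stated in the paper): half the data plays the role of the pooled data, the other half the test data, and 5-fold cross-validation picks the regularization value from the grid {0.1, 0.5, 1.0, 2.0, 10.0}.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV, train_test_split

# WDBC data, as used in the paper's experiments.
X, y = load_breast_cancer(return_X_y=True)

# Half for pooled (training) data, half for test data, per the paper.
X_pool, X_test, y_pool, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0
)

# 5-fold CV over the stated grid; mapping rho -> C is an assumption here.
grid = GridSearchCV(
    LogisticRegression(max_iter=10000),
    param_grid={"C": [0.1, 0.5, 1.0, 2.0, 10.0]},
    cv=5,
)
grid.fit(X_pool, y_pool)
print("selected C:", grid.best_params_["C"])
print("test accuracy:", grid.score(X_test, y_test))
```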