Regret Ratio Minimization in Multi-Objective Submodular Function Maximization
Authors: Tasuku Soma, Yuichi Yoshida
AAAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Using real and synthetic data, we empirically demonstrate that our methods achieve a small regret ratio. |
| Researcher Affiliation | Collaboration | Tasuku Soma, The University of Tokyo, tasuku_soma@mist.i.u-tokyo.ac.jp; Yuichi Yoshida, National Institute of Informatics and Preferred Infrastructure, Inc., yyoshida@nii.ac.jp |
| Pseudocode | Yes | Algorithm 1 Coordinate-wise maximum method; Algorithm 2 Polytope method |
| Open Source Code | No | The paper does not provide concrete access to source code for the described methodology. |
| Open Datasets | Yes | In our experiment, we used the MovieLens 100K dataset, consisting of 100,000 ratings from 943 users on 1,682 movies (GroupLens 1998). |
| Dataset Splits | No | The paper describes using the MovieLens 100K dataset and a synthetic instance of the budget allocation problem, but it does not specify any training, validation, or test splits, nor how the data was partitioned for different experimental phases. |
| Hardware Specification | Yes | We conducted experiments on a Linux server with an Intel Xeon E5-2690 (2.90 GHz) processor and 256 GB of main memory. |
| Software Dependencies | Yes | All the algorithms were implemented in C# and run using Mono 4.2.3. |
| Experiment Setup | No | The paper reports isolated parameter and algorithmic choices (e.g., "set λ = 0.1", adopting the double greedy method), but it does not give a comprehensive experimental configuration of the kind usually expected, such as hyperparameters (learning rates, batch sizes, epochs) or detailed training settings. |
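The pseudocode row names Algorithm 1, the coordinate-wise maximum method: greedily maximize each objective separately and return the resulting solutions, whose regret ratio is then evaluated over weighted combinations of the objectives. The sketch below is an illustrative reconstruction under assumptions, not the authors' code; the function names (`greedy`, `coordinate_wise_maximum`, `regret_ratio`), the cardinality constraint, and the use of greedy as an approximate oracle for the weighted problem are all choices made here for illustration.

```python
import random

def greedy(f, ground, k):
    """Standard greedy for monotone submodular maximization under |S| <= k."""
    S = set()
    for _ in range(k):
        best, gain = None, 0.0
        for e in ground - S:
            g = f(S | {e}) - f(S)  # marginal gain of adding e
            if g > gain:
                best, gain = e, g
        if best is None:  # no element with positive gain remains
            break
        S.add(best)
    return S

def coordinate_wise_maximum(fs, ground, k):
    """Algorithm-1-style sketch: maximize each objective f_i separately."""
    return [greedy(f, ground, k) for f in fs]

def regret_ratio(sols, fs, ground, k, trials=200, seed=0):
    """Empirical regret ratio over random nonnegative weight vectors,
    using greedy as an approximate oracle for each weighted objective."""
    rng = random.Random(seed)
    worst = 0.0
    for _ in range(trials):
        w = [rng.random() for _ in fs]
        fw = lambda S: sum(wi * f(S) for wi, f in zip(w, fs))
        opt = fw(greedy(fw, ground, k))          # approximate optimum for f_w
        best = max(fw(S) for S in sols)          # best solution we already hold
        if opt > 0:
            worst = max(worst, (opt - best) / opt)
    return worst
```

A toy usage with two coverage functions (a standard example of monotone submodular objectives): build the solution set with `coordinate_wise_maximum`, then estimate its regret ratio, which by construction lies in [0, 1].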