Optimal Algorithms for Continuous Non-monotone Submodular and DR-Submodular Maximization
Authors: Rad Niazadeh, Tim Roughgarden, Joshua Wang
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We further run experiments to verify the performance of our proposed algorithms in related machine learning applications. |
| Researcher Affiliation | Collaboration | Rad Niazadeh Department of Computer Science Stanford University, Stanford, CA 94305; Tim Roughgarden Department of Computer Science Stanford University, Stanford, CA 94305; Joshua R. Wang Google, Mountain View, CA 94043 |
| Pseudocode | Yes | Algorithm 1: (Vanilla) Continuous Randomized Bi-Greedy; Algorithm 2: Binary-Search Continuous Bi-greedy |
| Open Source Code | No | The paper mentions that Algorithms 3 and 4 appear in the supplement and describes their implementation, but it provides no direct link to open-source code for its methodology and never states that the code is open source. |
| Open Datasets | No | The paper discusses applications like Non-concave Quadratic Programming (NQP) and Softmax Extension for MAP inference of determinantal point processes, but does not name or provide access information for any specific public datasets used in the experiments. |
| Dataset Splits | No | The paper does not provide specific details on how the data was split into training, validation, or test sets, nor does it specify percentages or sample counts for these splits. |
| Hardware Specification | No | The paper states that experiments were implemented in Python but does not provide any specific details about the hardware (e.g., CPU, GPU models, memory, or cloud infrastructure) used to run these experiments. |
| Software Dependencies | No | The paper mentions that experiments were 'implemented in python' but does not specify the version of Python or any other software libraries or dependencies with their version numbers. |
| Experiment Setup | No | The paper states that 'Each experiment consists of twenty repeated trials' and 'n = 100 dimensional functions', but defers 'detailed specifics of each experiment' to the supplementary materials, thus not providing concrete hyperparameter values or comprehensive system-level training settings in the main text. |
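For context on the NQP application and the bi-greedy pseudocode referenced in the rows above: the NQP benchmark uses objectives of the form f(x) = ½xᵀHx + hᵀx, which are continuous DR-submodular on a box exactly when every entry of the Hessian H is non-positive. The sketch below is a minimal, hypothetical Python illustration of such an instance together with a simplified coordinate-wise double-greedy pass; it is an illustrative stand-in for the paper's randomized bi-greedy (Algorithm 1), not the authors' implementation. The names `make_nqp` and `coordinate_bi_greedy`, the grid search, and the choice of the linear term h are all assumptions made for the example.

```python
import numpy as np

def make_nqp(n, seed=0):
    """Random non-concave quadratic f(x) = 0.5 x^T H x + h^T x.

    A twice-differentiable f is DR-submodular iff its Hessian is
    entrywise non-positive, so we draw H <= 0 entrywise.
    (The scaling of h is a heuristic to keep the maximizer inside
    the box [0, 1]^n; it is not taken from the paper.)
    """
    rng = np.random.default_rng(seed)
    H = -rng.uniform(0.0, 1.0, size=(n, n))
    H = (H + H.T) / 2.0                      # symmetrize, still <= 0 entrywise
    h = -0.1 * H.sum(axis=1)                 # positive linear term (heuristic)
    return lambda x: 0.5 * x @ H @ x + h @ x

def coordinate_bi_greedy(f, n, grid=21, seed=1):
    """Toy coordinate-wise double-greedy pass over [0, 1]^n.

    Simplified stand-in for the paper's randomized bi-greedy: maintain
    a lower solution x and an upper solution y; for each coordinate,
    grid-search the best new value for each, then commit randomly with
    probability proportional to the positive gains.
    """
    rng = np.random.default_rng(seed)
    x, y = np.zeros(n), np.ones(n)
    vals = np.linspace(0.0, 1.0, grid)
    for i in range(n):
        # Best value for coordinate i in the lower and upper solutions.
        fx, vx = max((f(np.where(np.arange(n) == i, v, x)), v) for v in vals)
        fy, vy = max((f(np.where(np.arange(n) == i, v, y)), v) for v in vals)
        a = max(fx - f(x), 0.0)
        b = max(fy - f(y), 0.0)
        p = 0.5 if a + b == 0.0 else a / (a + b)
        # Double-greedy rule: both solutions agree on the decided coordinate.
        x[i] = y[i] = vx if rng.random() < p else vy
    return x, f(x)

f = make_nqp(n=100)
x, val = coordinate_bi_greedy(f, n=100)
print(f"bi-greedy value: {val:.4f}")
```

The randomization step mirrors the classic double-greedy rule: move toward whichever endpoint change gains more, with probability proportional to the positive parts of the two gains, collapsing the interval one coordinate at a time until x and y coincide.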