Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Non-monotone DR-submodular Maximization over General Convex Sets
Authors: Christoph Dürr, Nguyen Kim Thang, Abhinav Srivastav, Léo Tible
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally we benchmark our algorithm on problems arising in machine learning domain with the real-world datasets. |
| Researcher Affiliation | Academia | LIP6, Sorbonne University, France; IBISC, Univ Évry, University Paris-Saclay, France |
| Pseudocode | Yes | Algorithm 1 Frank-Wolfe Algorithm (see the illustrative sketch after this table) |
| Open Source Code | Yes | The source code is available at https://sites.google.com/site/abhinavsriva/ijcai-20-code and https://www.ibisc.univ-evry.fr/~thang |
| Open Datasets | No | The paper mentions using the 'Advogato network with 6.5K users (vertices) and 61K connections (edges)' and 'synthetic quadratic objectives', but provides no concrete access information (link, DOI, or citation for public access) for either. The synthetic-data generation procedure is described, but the data itself is not published as an externally accessible dataset. |
| Dataset Splits | No | The paper does not provide specific dataset split information for training, validation, or testing. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, or memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions 'MATLAB using CPLEX optimization tool', but does not provide a specific version number for CPLEX or MATLAB. |
| Experiment Setup | Yes | All experiments are performed in MATLAB using CPLEX optimization tool on Mac OS version 10.14.2. ... We run all the algorithms for 100 iterations. All the results are the average of 20 repeated experiments. ... we set p = 0.0001. (See the harness sketch below.) |
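The Pseudocode row points to "Algorithm 1 Frank-Wolfe Algorithm". For orientation only, here is a minimal Python sketch of the generic Frank-Wolfe template that such algorithms instantiate; it is not the paper's non-monotone DR-submodular variant, and the names (`frank_wolfe_sketch`, `lmo_box`) and the toy quadratic instance are assumptions made here for illustration.

```python
import numpy as np

def frank_wolfe_sketch(grad_f, lmo, x0, num_iters=100):
    """Generic Frank-Wolfe loop (illustrative, not the paper's Algorithm 1).

    grad_f: callable returning the objective gradient at a point x
    lmo:    linear maximization oracle, returns argmax_{v in K} <v, g>
    x0:     feasible starting point inside the convex set K
    """
    x = x0.astype(float).copy()
    for _ in range(num_iters):
        g = grad_f(x)
        v = lmo(g)                 # best feasible point for the linearized objective
        gamma = 1.0 / num_iters    # fixed step size; one common schedule
        x = x + gamma * (v - x)    # convex combination, so x stays in K
    return x

# Toy usage: maximize a concave quadratic f(x) = 0.5 x^T H x + h^T x over the box [0, 1]^n.
n = 5
rng = np.random.default_rng(0)
H = -np.eye(n)                                 # negative definite, for a well-behaved demo
h = rng.uniform(0.0, 1.0, size=n)
grad = lambda x: H @ x + h
lmo_box = lambda g: (g > 0).astype(float)      # box vertex maximizing <v, g>
x_hat = frank_wolfe_sketch(grad, lmo_box, np.zeros(n))
```

The convex-combination update is what lets Frank-Wolfe methods handle general convex constraint sets without projections, which is why variants of this template appear throughout the DR-submodular literature.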
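The Experiment Setup row reports 100 iterations per run and results averaged over 20 repeated experiments (the parameter p = 0.0001 belongs to the paper's algorithm and is not modeled here). Continuing the sketch above, a hypothetical harness matching that protocol, under the assumption that a fresh synthetic quadratic instance is drawn per repeat, could look like:

```python
# Hypothetical repetition harness mirroring the reported protocol:
# 100 iterations per run, results averaged over 20 repeated experiments.
values = []
for _ in range(20):
    h_rep = rng.uniform(0.0, 1.0, size=n)        # fresh synthetic instance per repeat (assumed)
    grad_rep = lambda x, h=h_rep: H @ x + h
    x_rep = frank_wolfe_sketch(grad_rep, lmo_box, np.zeros(n), num_iters=100)
    values.append(0.5 * x_rep @ (H @ x_rep) + h_rep @ x_rep)   # quadratic objective value
mean_value = float(np.mean(values))
```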