Submodular Meta-Learning
Authors: Arman Adibi, Aryan Mokhtari, Hamed Hassani
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide two experimental setups to evaluate the performance of our proposed algorithms and compare with other baselines. Each setup involves a different set of tasks which are represented as submodular maximization problems subject to the k-cardinality constraint. |
| Researcher Affiliation | Academia | Arman Adibi ESE Department University of Pennsylvania Philadelphia, PA 19104 aadibi@seas.upenn.edu; Aryan Mokhtari ECE Department University of Texas at Austin Austin, TX 78712 mokhtari@austin.utexas.edu; Hamed Hassani ESE Department University of Pennsylvania Philadelphia, PA 19104 hassani@seas.upenn.edu |
| Pseudocode | Yes | Algorithm 1; Algorithm 2; Algorithm 3 Meta-Greedy; Algorithm 4 Randomized meta-Greedy |
| Open Source Code | No | The paper does not contain any explicit statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We will formalize and solve a facility location problem on the Uber dataset [49]. Our experiments were run on the portion of data corresponding to Uber pick-ups in Manhattan in the period of September 2014. This portion consists of 10^6 data points each represented as a triplet (latitude, longitude, Date Time)... We use the MovieLens dataset [50] which consists of 10^6 ratings (from 1 to 5) by 6041 users for 4000 movies. |
| Dataset Splits | No | The paper describes partitioning users into training and test phases, e.g., 'We partitioned the 200 users into 100 users for the training phase and 100 other users for the test phase.' However, it does not mention a separate validation set or give the split percentages/counts for all experiments that would be needed to fully reproduce the data partitioning. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments (e.g., CPU, GPU models, or memory specifications). |
| Software Dependencies | No | The paper does not list any specific software dependencies with version numbers. |
| Experiment Setup | No | The paper provides details on varying k and l values for the experiments (e.g., 'k = 20, and vary l from 5 to 18', 'k changes from 5 to 30, and l is 80% of k'), but it does not specify common experimental setup details like hyperparameter values (e.g., learning rate, batch size, optimizer settings) or model initialization. |
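Both experimental setups in the paper are instances of monotone submodular maximization under a k-cardinality constraint (e.g., facility location on the Uber data). As a point of reference for what such a task looks like, here is a minimal sketch of the classic greedy algorithm for this problem class; the facility-location objective and the toy similarity matrix are illustrative assumptions, not the authors' code or data.

```python
def facility_location(S, sim):
    """f(S) = sum over clients of the best similarity to any facility in S.

    `sim` is a clients x facilities matrix; f is monotone submodular.
    """
    if not S:
        return 0.0
    return sum(max(row[j] for j in S) for row in sim)


def greedy_max(sim, k):
    """Add the facility with the largest marginal gain, k times.

    For monotone submodular f this gives the standard (1 - 1/e)
    approximation to the best size-k set.
    """
    n = len(sim[0])
    S = []
    for _ in range(k):
        base = facility_location(S, sim)
        best_j = max((j for j in range(n) if j not in S),
                     key=lambda j: facility_location(S + [j], sim) - base)
        S.append(best_j)
    return S


# Toy instance: 3 clients, 4 candidate facilities, budget k = 2.
sim = [[0.9, 0.1, 0.0, 0.2],
       [0.1, 0.8, 0.0, 0.3],
       [0.0, 0.2, 0.7, 0.1]]
S = greedy_max(sim, k=2)
```

The paper's meta-learning algorithms (e.g., Meta-Greedy) build on this greedy template by learning a shared initial set of l elements across tasks and completing the remaining k - l elements per task; the sketch above is only the single-task baseline.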