Differentially Private Decomposable Submodular Maximization
Authors: Anamay Chaturvedi, Huy Lê Nguyễn, Lydia Zakynthinou | pp. 6984–6992
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We complement our theoretical bounds with experiments demonstrating improved empirical performance. |
| Researcher Affiliation | Academia | Khoury College of Computer Sciences, Northeastern University, Boston, Massachusetts 02115 |
| Pseudocode | Yes | Algorithm 1 Private Continuous Greedy |
| Open Source Code | Yes | The code and dataset used for our experiments are available at https://github.com/Anamay-Chaturvedi/Differentially-privatedecomposable-submodular-optimization |
| Open Datasets | Yes | We use the same dataset of coordinates of Uber pick-ups (https://www.kaggle.com/fivethirtyeight/uber-pickups-in-newyork-city). |
| Dataset Splits | No | The paper does not provide specific train/validation/test dataset splits. It describes a resampling strategy for evaluation. |
| Hardware Specification | No | The paper mentions 'on a personal computer' but does not provide any specific hardware details such as GPU/CPU models or memory specifications. |
| Software Dependencies | No | The paper does not list any specific software dependencies with version numbers. |
| Experiment Setup | Yes | In PCG, we set η = 0.33 and use the closed-form expression for the multilinear relaxation of f_D. We set ε = 0.1 and δ = 1/m^1.5, where m = \|D\| = 100, with which the privacy parameter used in the differentially private choices of increment is ε₀ ≈ 0.01006. |
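The parameter choices in the experiment-setup row can be checked with a short sketch. This is purely illustrative: the variable names are ours, and the per-step privacy parameter ε₀ ≈ 0.01006 is quoted from the paper rather than derived here (its derivation depends on the composition analysis of the Private Continuous Greedy algorithm, which the paper provides).

```python
# Illustrative reconstruction of the reported experiment parameters.
# Variable names are assumptions; epsilon_0 is quoted from the paper.
m = 100                  # number of decomposable components, m = |D|
epsilon = 0.1            # overall privacy budget
delta = 1 / m ** 1.5     # paper's choice delta = 1/m^1.5 = 1/1000 = 0.001
eta = 0.33               # step size for Private Continuous Greedy (PCG)
epsilon_0 = 0.01006      # per-increment privacy parameter, as reported

print(f"delta = {delta}")  # delta = 0.001
```

In particular, m = 100 gives m^1.5 = 1000, so the reported δ = 1/m^1.5 evaluates to 0.001.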