Efficient displacement convex optimization with particle gradient descent

Authors: Hadi Daneshmand, Jason D. Lee, Chi Jin

ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "7. Experiments: We experimentally validate established bounds on the approximation and optimization error. Specifically, we validate the results for the example of the energy distance, which obeys the required conditions for our theoretical results."
Researcher Affiliation | Academia | (1) Laboratory for Information and Decision Systems, MIT; (2) Foundations of Data Science Institute (FODSI); (3) Hariri Institute for Computing and Computational Science and Engineering, Boston University; (4) Department of Electrical and Computer Engineering, Princeton University.
Pseudocode | No | The paper describes its algorithms and methods mathematically (e.g., equation 8 for particle gradient descent) but does not include any clearly labeled pseudocode or algorithm blocks. A hedged sketch reconstructing the update is given after this table.
Open Source Code | Yes | The implementation is available in the GitHub repository https://github.com/hadidaneshmand/icml23_pgd.
Open Datasets | No | The paper uses synthetic data generated internally for its experiments (e.g., 'We draw v_1, ..., v_n at random from uniform[0, 1]'). It does not refer to or provide access to a standard, publicly available dataset.
Dataset Splits | No | The paper uses synthetic data and discusses theoretical convergence rates, but it does not specify explicit train/validation/test splits (e.g., percentages or sample counts), as would be typical for experiments on fixed datasets.
Hardware Specification | No | The paper does not explicitly describe the hardware used for its experiments, such as specific GPU or CPU models.
Software Dependencies | No | The paper points to the implementation in the GitHub repository but does not specify any software dependencies or their version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | "In particular, we use ξ_i^{(k)} i.i.d. from uniform[−0.05, 0.05]. For the step size, we use γ_k = 1/k, required for the convergence result in Theorem 5.1 (part b)."
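
For orientation, here is a minimal sketch of noisy particle gradient descent on the 1-D empirical energy distance, combining the details quoted above: target samples v_1, ..., v_n from uniform[0, 1], perturbations ξ_i^{(k)} i.i.d. from uniform[−0.05, 0.05], and step size γ_k = 1/k. The particle count, initialization, iteration budget, exact objective normalization, and additive placement of the noise are all assumptions for illustration, not the authors' implementation (see the linked repository for that).

```python
import numpy as np

rng = np.random.default_rng(0)

n = 50                           # number of particles/target samples (illustrative choice)
v = rng.uniform(0.0, 1.0, n)     # target samples v_1, ..., v_n ~ uniform[0, 1] (quoted setup)
x = rng.uniform(0.0, 1.0, n)     # particle initialization (assumed)

def energy_distance_grad(x, v):
    """Per-particle gradient of the empirical 1-D energy distance
    F(x) = (2/n^2) sum_{i,j} |x_i - v_j| - (1/n^2) sum_{i,j} |x_i - x_j| (assumed form)."""
    n = len(x)
    attract = np.sign(x[:, None] - v[None, :]).sum(axis=1)  # pulls particles toward target samples
    repel = np.sign(x[:, None] - x[None, :]).sum(axis=1)    # keeps particles spread apart
    return (2.0 / n**2) * (attract - repel)

for k in range(1, 2001):                 # iteration budget is arbitrary here
    gamma = 1.0 / k                      # step size gamma_k = 1/k, per Theorem 5.1 (part b)
    xi = rng.uniform(-0.05, 0.05, n)     # i.i.d. perturbations xi_i^(k) (assumed additive)
    x = x - gamma * energy_distance_grad(x, v) + xi
```

After enough iterations, the empirical distribution of the particles x should approximate that of the target samples v. Only the noise range and the step-size schedule are taken directly from the paper's quoted setup; treat everything else in the sketch as a placeholder.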