An Efficient High-dimensional Gradient Estimator for Stochastic Differential Equations

Authors: Shengbo Wang, Jose Blanchet, Peter W. Glynn

NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In addition to establishing the validity of our methodology for general SDEs with jumps, we also perform numerical experiments that test our estimator in linear-quadratic control problems parameterized by high-dimensional neural networks.
Researcher Affiliation | Academia | Shengbo Wang, MS&E, Stanford University, Stanford, CA 94305, shengbo.wang@stanford.edu; Jose Blanchet, MS&E, Stanford University, Stanford, CA 94305, jose.blanchet@stanford.edu; Peter Glynn, MS&E, Stanford University, Stanford, CA 94305, glynn@stanford.edu
Pseudocode | No | The paper describes mathematical derivations and processes but does not include a clearly labeled "Pseudocode" or "Algorithm" block.
Open Source Code | Yes | Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: [Yes] Justification: The code is submitted with the paper.
Open Datasets | No | Does the paper explicitly state that the dataset used in the experiments is publicly available or an open dataset? Answer: [No]
Dataset Splits | No | Does the paper explicitly provide training/test/validation dataset splits needed to reproduce the experiment? Answer: [No]
Hardware Specification | Yes | The computation time data was generated on a system equipped with a PCIe version of the Nvidia Tesla V100 GPU with 32 GB of VRAM; the system also includes a 2-core CPU and 16 GB of RAM.
Software Dependencies | No | Does the paper provide a reproducible description of the ancillary software, including specific version numbers for key software components? Answer: [No]
Experiment Setup | Yes | Does the paper explicitly provide details about the experimental setup, especially hyperparameters or system-level training settings? Answer: [Yes] Justification: The parameters of the control system in Section 4 are presented in the code. There is no hyperparameter that needs fine-tuning.
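
For context on the "Research Type" and "Experiment Setup" rows above, the following is a minimal, hypothetical sketch of the kind of setup they describe: a linear-quadratic control problem driven by a controlled linear SDE, with a small neural-network policy and a quadratic cost, simulated by Euler-Maruyama. All dimensions, parameter names (A, B, sigma, Q, R, T, dt), and the policy architecture are illustrative assumptions; this is not the authors' submitted code, and it implements only Monte Carlo evaluation of the cost, not the paper's gradient estimator.

    # Hypothetical sketch (assumptions only): controlled linear SDE
    #   dX_t = (A X_t + B u_theta(X_t)) dt + sigma dW_t
    # with a neural-network policy u_theta and quadratic running cost.
    import numpy as np

    rng = np.random.default_rng(0)

    d, m, hidden = 4, 2, 16            # state dim, control dim, hidden width (assumed)
    T, dt = 1.0, 0.01                  # horizon and Euler-Maruyama step size (assumed)
    A = -np.eye(d)                     # stable drift matrix (assumed)
    B = rng.normal(size=(d, m)) / np.sqrt(d)
    sigma = 0.1 * np.eye(d)            # diffusion coefficient (assumed)
    Q, R = np.eye(d), 0.1 * np.eye(m)  # quadratic state/control cost weights (assumed)

    # Two-layer tanh policy u_theta(x); theta collects (W1, b1, W2, b2).
    theta = {
        "W1": 0.1 * rng.normal(size=(hidden, d)),
        "b1": np.zeros(hidden),
        "W2": 0.1 * rng.normal(size=(m, hidden)),
        "b2": np.zeros(m),
    }

    def policy(theta, x):
        h = np.tanh(theta["W1"] @ x + theta["b1"])
        return theta["W2"] @ h + theta["b2"]

    def rollout_cost(theta, rng):
        """Simulate one Euler-Maruyama path and return its accumulated LQ cost."""
        x = np.zeros(d)
        cost = 0.0
        for _ in range(int(T / dt)):
            u = policy(theta, x)
            cost += (x @ Q @ x + u @ R @ u) * dt
            dw = rng.normal(size=d) * np.sqrt(dt)
            x = x + (A @ x + B @ u) * dt + sigma @ dw
        return cost

    # Monte Carlo estimate of the expected cost J(theta); a gradient estimator
    # such as the one studied in the paper would differentiate this expectation
    # with respect to the policy parameters theta.
    costs = [rollout_cost(theta, rng) for _ in range(100)]
    print("estimated J(theta):", np.mean(costs))

This sketch only illustrates the shape of such an experiment; reproducing the paper's results requires the code submitted with the paper, as noted in the "Open Source Code" row.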