Aggregated Gradient Langevin Dynamics
Authors: Chao Zhang, Jiahao Xie, Zebang Shen, Peilin Zhao, Tengfei Zhou, Hui Qian
AAAI 2020, pp. 6746-6753
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Simulated and real-world experiments are conducted to validate our analysis. Empirical results demonstrate the advantages of proposed variants over the state-of-the-art. We follow the experiment settings in the literature (...) and conduct empirical studies on two simulated experiments (...) and two real-world applications (...). |
| Researcher Affiliation | Collaboration | Chao Zhang (1,2), Jiahao Xie (1), Zebang Shen (3), Peilin Zhao (2), Tengfei Zhou (1), Hui Qian (1). 1: College of Computer Science and Technology, Zhejiang University; 2: Tencent AI Lab; 3: University of Pennsylvania |
| Pseudocode | Yes | Algorithm 1 Aggregated Gradient Langevin Dynamics. Strategy 2 PTU(...). Strategy 3 PPU(...). Strategy 4 TMU(...). |
| Open Source Code | No | The paper does not provide any specific links or statements indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | Two publicly available benchmark datasets are used for evaluation: Year Prediction MSD and Slice Location. Two large-scale datasets, criteo (27.32 GB) and kdd12 (26.76 GB), are also used (footnote: https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets). |
| Dataset Splits | Yes | By randomly partitioning the dataset into training (4/5) and testing (1/5) sets, we report the test Mean Square Error (MSE) of the compared methods on Year Prediction MSD in Fig. 2. |
| Hardware Specification | No | The paper mentions 'physical memory' and 'CPU time' but does not specify any particular hardware details such as GPU models, CPU models, or specific cloud computing instances used for running the experiments. |
| Software Dependencies | No | The paper does not provide any specific software dependencies with version numbers (e.g., library names with their corresponding versions) that would be needed to replicate the experiments. |
| Experiment Setup | Yes | We set the sample size N = 500 and dimension d = 10, and randomly generate parameters a_i ~ N(μ, Σ) with μ = (2, ..., 2)^T and Σ = I_{d×d}. In this experiment, we fix the Data-Accessing strategy to RA in AGLD (...). We run all algorithms for 2 × 10^4 data passes (...). In this task, we fix the Data-Accessing strategy to RA (...). We manually restrict the available physical memory to 16 GB and 8 GB for simulation. |
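
The "Experiment Setup" and "Dataset Splits" rows are concrete enough to sketch in code. Below is a minimal NumPy reconstruction, assuming a multivariate normal draw for the simulated parameters and a uniform random 4/5 training / 1/5 testing split; the variable names and the random seed are illustrative and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)  # seed is arbitrary; the paper does not report one

# Simulated setting quoted above: N = 500 samples of dimension d = 10,
# a_i ~ N(mu, Sigma) with mu = (2, ..., 2)^T and Sigma = I_{d x d}.
N, d = 500, 10
mu = np.full(d, 2.0)
a = rng.multivariate_normal(mean=mu, cov=np.eye(d), size=N)  # shape (N, d)

# Random 4/5 training / 1/5 testing partition, as described for
# Year Prediction MSD; the split ratio is the only detail given.
perm = rng.permutation(N)
n_train = int(0.8 * N)
train_idx, test_idx = perm[:n_train], perm[n_train:]
a_train, a_test = a[train_idx], a[test_idx]
```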
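
The "Pseudocode" row refers to Algorithm 1 (Aggregated Gradient Langevin Dynamics) and its data-accessing/update strategies (RA, PTU, PPU, TMU), none of which are reproduced in this report. As a rough illustration only, the sketch below combines a SAGA-style aggregated gradient estimator with a Langevin noise step; the estimator form, the step size `eta`, and uniform random data access are assumptions, not the paper's specification.

```python
import numpy as np

def aggregated_langevin_sketch(grad_i, x0, N, eta, n_steps, rng):
    """Illustrative aggregated-gradient Langevin loop (not the paper's exact
    Algorithm 1): a SAGA-style per-sample gradient table combined with a
    Gaussian-noise Langevin step. grad_i(x, i) should return the gradient of
    the i-th component function at x."""
    x = x0.copy()
    d = x.shape[0]
    table = np.stack([grad_i(x, i) for i in range(N)])  # per-sample gradient memory
    table_sum = table.sum(axis=0)
    for _ in range(n_steps):
        i = rng.integers(N)                    # random data access (one component)
        g_new = grad_i(x, i)
        # Unbiased aggregated estimate of the full-sum gradient.
        g_est = N * (g_new - table[i]) + table_sum
        # Refresh the stored gradient for component i.
        table_sum += g_new - table[i]
        table[i] = g_new
        # Langevin step: drift along the estimated gradient plus injected noise.
        x = x - eta * g_est + np.sqrt(2.0 * eta) * rng.standard_normal(d)
    return x

# Example usage on the simulated data from the sketch above
# (the quadratic component loss is a hypothetical choice):
grad = lambda x, i: x - a_train[i]
sample = aggregated_langevin_sketch(grad, np.zeros(d), N=n_train,
                                    eta=1e-3, n_steps=20_000, rng=rng)
```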