Functional Gradient Flows for Constrained Sampling
Authors: Shiyue Zhang, Longlin Yu, Ziheng Cheng, Cheng Zhang
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive numerical experiments across different constrained machine learning problems are conducted to demonstrate the effectiveness and efficiency of our method. |
| Researcher Affiliation | Academia | Shiyue Zhang, School of Mathematical Sciences, Peking University (zhangshiyue@stu.pku.edu.cn); Longlin Yu, School of Mathematical Sciences, Peking University (llyu@pku.edu.cn); Ziheng Cheng, Department of Industrial Engineering and Operations Research, University of California, Berkeley (ziheng_cheng@berkeley.edu); Cheng Zhang, School of Mathematical Sciences and Center for Statistical Science, Peking University (chengzhang@math.pku.edu.cn) |
| Pseudocode | Yes | Algorithm 1 CFG: Constrained Functional Gradient |
| Open Source Code | Yes | The code is available at https://github.com/ShiyueZhang66/Constrained-Functional-Gradient-Flow. |
| Open Datasets | Yes | Following Lan et al. (2014), we also evaluate our method using the diabetes dataset discussed in Park & Casella (2008). ... We use the COMPAS dataset following the setting in Liu et al. (2021). ... we additionally experiment on the larger, 276-dimensional BlogFeedback dataset (Liu et al., 2020). |
| Dataset Splits | No | The paper describes dataset usage (e.g., "1000 particles", "1000x20 dataset") but does not provide specific train/validation split percentages, sample counts, or clear cross-validation setups. |
| Hardware Specification | Yes | The experiments are implemented on an Intel 2.30GHz CPU with 16384MB RAM and an NVIDIA GeForce RTX 3060 Laptop GPU with 14066MB total memory. |
| Software Dependencies | No | The paper mentions that "All the experiments were implemented with Pytorch" but does not specify the version of PyTorch or any other software dependencies. |
| Experiment Setup | Yes | For CFG, fnet and znet are three-layer neural networks with LeakyReLU activation (negative slope 0.1). The number of hidden units is 128, except for RING, where 256 is used. Both networks are trained with the Adam optimizer at learning rate 0.002, except for RING, where 0.005 is used. The number of inner-loop gradient updates for fnet and znet is 10, except for RING, where 3 is used. The total number of iterations is 2000 and the particle step size is 0.005, except for RING, where 0.01 is used. The bandwidth is set to 0.05, except for BLOCK, where 0.001 is used. λ in the piecewise construction of the velocity field is chosen to be 1. A hedged PyTorch sketch of this setup is given below the table. |
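
For concreteness, the reported hyperparameters map onto roughly the following PyTorch scaffold. This is a minimal sketch under stated assumptions, not the authors' released implementation: `make_mlp`, the network input/output dimensions, and the particle-update line are illustrative guesses, and the actual training objective and piecewise velocity field are specified by Algorithm 1 and the repository linked above.

```python
import torch
import torch.nn as nn

def make_mlp(dim_in: int, dim_out: int, hidden: int = 128) -> nn.Sequential:
    # Three-layer network with LeakyReLU(negative_slope=0.1), as reported.
    return nn.Sequential(
        nn.Linear(dim_in, hidden),
        nn.LeakyReLU(negative_slope=0.1),
        nn.Linear(hidden, hidden),
        nn.LeakyReLU(negative_slope=0.1),
        nn.Linear(hidden, dim_out),
    )

# Assumed dimensions: the shapes of fnet/znet depend on the target
# distribution; a 2-D synthetic target is assumed here for illustration.
dim = 2
fnet = make_mlp(dim, dim)   # velocity-field network
znet = make_mlp(dim, dim)   # auxiliary network (output shape assumed)

opt_f = torch.optim.Adam(fnet.parameters(), lr=2e-3)
opt_z = torch.optim.Adam(znet.parameters(), lr=2e-3)

n_iters, inner_steps, step_size = 2000, 10, 5e-3   # reported defaults
particles = torch.randn(1000, dim)                  # "1000 particles"

for _ in range(n_iters):
    for _ in range(inner_steps):
        # Inner loop: update fnet/znet against the paper's functional-gradient
        # objective (Algorithm 1); the objective is not reproduced here.
        pass
    with torch.no_grad():
        # Assumed update form: move particles along the learned velocity
        # field. The paper additionally uses a piecewise construction near
        # the constraint boundary (lambda = 1, bandwidth 0.05), omitted here.
        particles = particles + step_size * fnet(particles)
```

Per the reported setup, swapping in `hidden=256`, `lr=5e-3`, `inner_steps=3`, and `step_size=0.01` would correspond to the RING configuration, and a bandwidth of 0.001 to BLOCK.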