GO Hessian for Expectation-Based Objectives
Authors: Yulai Cong, Miaoyun Zhao, Jianqiao Li, Junya Chen, Lawrence Carin
AAAI 2021, pp. 12060-12068 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Leveraging the GO Hessian, we develop a new second-order method for E_{qγ(y)}[f(y)]. The proposed techniques are verified with challenging experiments where non-rep gamma RVs are of interest. |
| Researcher Affiliation | Academia | Yulai Cong, Miaoyun Zhao*, Jianqiao Li, Junya Chen, Lawrence Carin. Department of Electrical and Computer Engineering, Duke University |
| Pseudocode | Yes | Algorithm 1 SCR-GO for min_γ E_{q(x)qγ(y|x)}[f(x, y)] |
| Open Source Code | Yes | Code will be available at github.com/YulaiCong/GOHessian. |
| Open Datasets | Yes | Variational Encoders for PFA and PGBN... Figures 4(b)-4(c) show the training objectives versus the number of oracle calls and processed observations... PFA on MNIST. |
| Dataset Splits | No | The paper describes using training datasets and showing training curves but does not explicitly provide details on train/validation/test splits (e.g., percentages, sample counts, or specific predefined splits). |
| Hardware Specification | Yes | The Titan Xp GPU used was donated by the NVIDIA Corporation. |
| Software Dependencies | No | The paper mentions software like PyTorch and TensorFlow but does not specify their version numbers or any other software dependencies with specific versions. |
| Experiment Setup | Yes | For both SGD and Adam, learning rates from {0.001, 0.005, 0.01, 0.05, 0.1, 0.5, 1} are tested with the best-tuned results shown. Other settings are given in Appendix J. |
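The paper's SCR-GO algorithm is not reproduced in this summary. As background for readers, the sketch below illustrates the generic stochastic cubic-regularization (SCR) template that algorithms of this family build on: at each outer step, a cubic-regularized quadratic model of the objective is approximately minimized. All function names, the toy quadratic objective, and the hyperparameter choices here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def cubic_reg_step(grad, hess, rho, iters=300, lr=0.1):
    """Approximately solve the cubic-regularized subproblem
        min_s  g.s + 0.5 s'Hs + (rho/3)||s||^3
    by plain gradient descent (a simple stand-in for an exact solver).
    NOTE: illustrative sketch only, not the paper's SCR-GO algorithm."""
    s = np.zeros_like(grad)
    for _ in range(iters):
        ns = np.linalg.norm(s)
        # Gradient of the cubic model: g + Hs + rho*||s||*s
        model_grad = grad + hess @ s + rho * ns * s
        s -= lr * model_grad
    return s

# Toy deterministic objective standing in for E[f]: 0.5 g'Ag - b'g,
# whose gradient and Hessian are available in closed form.
A = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
gamma = np.zeros(2)
for _ in range(20):
    g = A @ gamma - b   # gradient at current gamma
    H = A               # (constant) Hessian
    gamma = gamma + cubic_reg_step(g, H, rho=1.0)

# gamma approaches the minimizer A^{-1} b of the toy objective.
print(gamma)
```

In the expectation-based setting of the paper, the exact gradient and Hessian above would be replaced by stochastic estimates (the GO Hessian for the Hessian term); the cubic term keeps the step well-defined even when the Hessian estimate is indefinite.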