SHPOS: A Theoretical Guaranteed Accelerated Particle Optimization Sampling Method
Authors: Zhijian Li, Chao Zhang, Hui Qian, Xin Du, Lingwei Peng
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on both synthetic and real data validate our theory and demonstrate the superiority of SHPOS over the state-of-the-art. We evaluate our method on a list of tasks, including both synthetic and real datasets. |
| Researcher Affiliation | Collaboration | Zhijian Li (1), Chao Zhang (2,3), Hui Qian (2,3), Xin Du (1) and Lingwei Peng (2); (1) Information Science and Electronic Engineering, Zhejiang University; (2) College of Computer Science and Technology, Zhejiang University; (3) Alibaba-Zhejiang University Joint Research Institute of Frontier Technologies; {lizhijian, zczju, qianhui, duxin, penglingwei}@zju.edu.cn |
| Pseudocode | Yes | Algorithm 1 Stochastic Hamiltonian Particle Optimization Sampling |
| Open Source Code | No | The paper does not provide a statement about releasing code or a link to a code repository for the described methodology. |
| Open Datasets | Yes | Four publicly available benchmark datasets from LIBSVM (https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/), namely a3a, w8a, a8a, and ijcnn1, are used for evaluation, together with 6 datasets from UCI (http://archive.ics.uci.edu/ml/datasets.php) and LIBSVM. A loading sketch follows the table below. |
| Dataset Splits | No | The paper mentions using datasets for evaluation but does not specify training, validation, or test dataset splits (e.g., percentages, sample counts, or references to predefined splits). |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers. |
| Experiment Setup | Yes | We use 1000 particles initialized by drawing from a Gaussian distribution with mean [-4, 2]^T and variance 0.25^2. ... We report negative log-likelihood versus the number of data passes with 50 particles on datasets a3a and w8a ... we use a Gamma(1, 0.1) prior for the inverse covariance and adopt a one-hidden-layer neural network with 50 hidden units. An initialization sketch follows the table below. |
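
The evaluation datasets are standard LIBSVM-format files, so they can be loaded with scikit-learn's `load_svmlight_file`. The sketch below is a minimal illustration, assuming the files (e.g. `a3a`, `w8a`) have already been downloaded from the LIBSVM page cited above into a local `data/` directory; the directory layout and helper name are illustrative, not taken from the paper.

```python
# Minimal sketch: loading the LIBSVM benchmark datasets cited in the paper.
# Assumes files such as "a3a" and "w8a" were downloaded from
# https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/ into ./data/.
from sklearn.datasets import load_svmlight_file


def load_libsvm_dataset(path):
    """Load a LIBSVM-format file into a dense feature matrix and label vector."""
    X, y = load_svmlight_file(path)  # X is returned as a sparse matrix
    return X.toarray(), y


X_a3a, y_a3a = load_libsvm_dataset("data/a3a")
print(X_a3a.shape, y_a3a.shape)
```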
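
The quoted synthetic setup fixes only the initialization: 1000 particles drawn from a Gaussian with mean [-4, 2]^T and variance 0.25^2. The sketch below reproduces that initialization under stated assumptions; the dimensionality (2), RNG seed, and variable names are not specified in the paper and are chosen here for illustration. It does not implement the SHPOS update itself.

```python
# Sketch of the quoted synthetic-experiment initialization:
# 1000 particles ~ N([-4, 2]^T, 0.25^2 I), i.e. standard deviation 0.25
# per coordinate. Seed and array shapes are assumptions.
import numpy as np

rng = np.random.default_rng(0)
num_particles, dim = 1000, 2
mean = np.array([-4.0, 2.0])
std = 0.25  # variance 0.25^2

# Each row is one particle position; SHPOS (Algorithm 1 in the paper) would
# then evolve these positions, together with auxiliary momenta, using
# stochastic-gradient Hamiltonian updates.
particles = mean + std * rng.standard_normal((num_particles, dim))
print(particles.mean(axis=0), particles.std(axis=0))
```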