Variational Inference for Sequential Data with Future Likelihood Estimates
Authors: Geon-Hyeong Kim, Youngsoo Jang, Hongseok Yang, Kee-Eung Kim
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In Section 6, we report experimental results of our estimator with synthetic and polyphonic music datasets, to show its effectiveness compared to the state-of-the-art algorithms. |
| Researcher Affiliation | Academia | ¹School of Computing, KAIST, Daejeon, Republic of Korea; ²Graduate School of AI, KAIST, Daejeon, Republic of Korea. |
| Pseudocode | No | The paper describes its algorithm and methods through text and mathematical equations, but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the described methodology. |
| Open Datasets | Yes | We further conducted experiments on real-world datasets using four polyphonic music datasets (Boulanger-Lewandowski et al., 2012). |
| Dataset Splits | No | The paper mentions using validation performance for model selection but does not provide specific details on the dataset splits (e.g., percentages or exact counts for train/validation/test). |
| Hardware Specification | No | The paper does not provide specific details on the hardware used for running its experiments, such as GPU/CPU models or memory specifications. |
| Software Dependencies | No | The paper mentions the use of PyTorch for implementation but does not specify its version number or other software dependencies with specific versions. |
| Experiment Setup | Yes | We used four and eight particles for each algorithm. As for the learning rate, we report the best results among the choices in {3×10⁻⁴, 1×10⁻⁴, 3×10⁻⁵, 1×10⁻⁵}. (An illustrative sketch of this sweep follows the table.) |
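
The Experiment Setup row quotes the paper's hyperparameter choices: four or eight particles per algorithm and a learning-rate grid of {3×10⁻⁴, 1×10⁻⁴, 3×10⁻⁵, 1×10⁻⁵}, with the best result reported. Since the authors released no code, the following is a minimal, hypothetical sketch of how such a sweep might be organized in PyTorch; `build_model` and `train_one_config` are placeholder callables standing in for the unreleased training code, not the authors' API.

```python
import itertools
import torch

# Hyperparameter grid quoted from the paper's experiment setup:
# four or eight particles per algorithm, learning rates in {3e-4, 1e-4, 3e-5, 1e-5}.
PARTICLE_COUNTS = [4, 8]
LEARNING_RATES = [3e-4, 1e-4, 3e-5, 1e-5]


def run_sweep(build_model, train_one_config):
    """Run every (particles, learning rate) configuration and keep the best
    configuration by validation score.

    `build_model` and `train_one_config` are assumed callables (hypothetical,
    not from the paper): the first constructs a model for a given particle
    count, the second trains it and returns a validation-set bound.
    """
    best_score, best_config = float("-inf"), None
    for num_particles, lr in itertools.product(PARTICLE_COUNTS, LEARNING_RATES):
        model = build_model(num_particles=num_particles)
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        val_score = train_one_config(model, optimizer)
        if val_score > best_score:
            best_score, best_config = val_score, (num_particles, lr)
    return best_config, best_score
```

Selecting the best configuration over this grid by validation performance mirrors the model-selection protocol the paper describes, though the exact training loop and validation metric are not specified in the text.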