Sparse Variational Student-t Processes
Authors: Jian Xu, Delu Zeng
AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the two proposed approaches on various synthetic and real-world datasets from UCI and Kaggle, demonstrating their effectiveness compared to baseline methods in terms of computational complexity and accuracy, as well as their robustness to outliers. We conduct experiments on eight real-world datasets and two synthetic datasets. These experiments include verification of time complexity, validation of accuracy and uncertainty, and regression on datasets with outliers. |
| Researcher Affiliation | Academia | Jian Xu, Delu Zeng*, South China University of Technology, 381 Wushan Road, Tianhe District, Guangzhou City, Guangdong Province, China. 2713091379@qq.com, dlzeng@scut.edu.cn |
| Pseudocode | No | No pseudocode or algorithm blocks are present. |
| Open Source Code | No | No statement regarding open-source code availability is found. |
| Open Datasets | Yes | We use eight data sets to carry out the experiments, for which details can be seen in the appendix. We utilize all algorithms with a maximum iteration number of 5000 to minimize the negative ELBO. We evaluate the two proposed approaches on various synthetic and real-world datasets from UCI and Kaggle... |
| Dataset Splits | Yes | Each experiment was repeated using five-fold cross-validation, and the average value and range were reported for each repetition. (A minimal cross-validation sketch follows the table.) |
| Hardware Specification | Yes | We opt to use the PyTorch platform and conduct all experiments on a single NVIDIA A100 GPU. |
| Software Dependencies | No | We opt to use the PyTorch platform and conduct all experiments on a single NVIDIA A100 GPU. No version is specified for PyTorch or any other library. (A version-logging sketch follows the table.) |
| Experiment Setup | Yes | We utilize all algorithms with a maximum iteration number of 5000 to minimize the negative ELBO. We set the batch size to 1024. The learning rate is set to 0.01, and the data is standardized. The noise term ε_i is fixed at 0.10 in the experiments for fair comparison of the performance of each model. (A training-loop sketch follows the table.) |
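To make the evaluation protocol concrete, here is a minimal sketch of five-fold cross-validation that reports the mean and range across folds, as the paper describes. The `train_model` and `score` callables are hypothetical placeholders for the paper's own training and metric code, not the authors' API.

```python
import numpy as np
from sklearn.model_selection import KFold

def five_fold_eval(X, y, train_model, score, seed=0):
    """Run five-fold CV and report the mean and range across folds.

    `train_model(X_tr, y_tr)` and `score(model, X_te, y_te)` are
    hypothetical stand-ins for the paper's training and metric code.
    """
    kf = KFold(n_splits=5, shuffle=True, random_state=seed)
    fold_scores = []
    for train_idx, test_idx in kf.split(X):
        model = train_model(X[train_idx], y[train_idx])
        fold_scores.append(score(model, X[test_idx], y[test_idx]))
    fold_scores = np.asarray(fold_scores)
    # "Average value and range" per the paper; range is read here
    # as max minus min across folds, which is an interpretation.
    return fold_scores.mean(), fold_scores.max() - fold_scores.min()
```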
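Because no library versions are pinned, anyone reproducing the results would need to record the environment themselves; a minimal sketch of logging the relevant versions:

```python
import sys
import torch

# Record the interpreter, PyTorch, and CUDA versions alongside results,
# since the paper names the platform but pins no versions.
print("python:", sys.version.split()[0])
print("torch:", torch.__version__)
print("cuda:", torch.version.cuda)
print("gpu:", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "cpu")
```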
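The quoted hyperparameters are enough to sketch the training loop. The sketch below assumes the Adam optimizer (the paper states only the learning rate) and a hypothetical `model.negative_elbo(x, y)` method standing in for the sparse variational Student-t process objective; neither detail is confirmed by the paper.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

def train(model, X, y, iterations=5000, batch_size=1024, lr=0.01):
    """Minimize the negative ELBO with the hyperparameters quoted above."""
    # Standardize inputs and targets, as stated in the setup.
    X = (X - X.mean(0)) / X.std(0)
    y = (y - y.mean()) / y.std()
    loader = DataLoader(TensorDataset(X, y), batch_size=batch_size, shuffle=True)
    # Adam is an assumption; the paper specifies only lr = 0.01.
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    step = 0
    while step < iterations:
        for xb, yb in loader:
            opt.zero_grad()
            # `negative_elbo` is a hypothetical interface; the fixed noise
            # term ε_i = 0.10 would live inside the model's likelihood.
            loss = model.negative_elbo(xb, yb)
            loss.backward()
            opt.step()
            step += 1
            if step >= iterations:
                break
    return model
```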