Extrapolative Continuous-time Bayesian Neural Network for Fast Training-free Test-time Adaptation
Authors: Hengguan Huang, Xiangming Gu, Hao Wang, Chang Xiao, Hongfu Liu, Ye Wang
NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate ECBNN on both synthetic and real-world streaming data. Specifically, we conduct preliminary experiments on low-dimensional synthetic toy data (Growing Circle), and then study the generalization of ECBNN in handling a high-dimensional synthetic video dataset based on handwritten digits (Streaming Rotating MNIST-USPS). We finally evaluate our model on OuluVS2, a realistic multi-view lip reading dataset. |
| Researcher Affiliation | Academia | Hengguan Huang¹, Xiangming Gu¹, Hao Wang², Chang Xiao¹, Hongfu Liu¹, Ye Wang¹* (¹National University of Singapore, ²Rutgers University) |
| Pseudocode | No | The paper describes its methods through mathematical equations and textual explanations, but it does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper only states "Our code will soon be available online: https://github.com/guxm2021/ECBNN", i.e., the code had not yet been released at the time of publication. |
| Open Datasets | Yes | We conduct a preliminary study on a toy dataset, Growing Circle, for a binary classification task; Growing Circle extends the Circle dataset in [30] to incorporate temporal changes. (A hedged, illustrative generation sketch follows the table.) |
| Dataset Splits | Yes | The train, valid and test sets include 5250, 750 and 1800 sequences, respectively. (These sizes are reused in the sketch after the table.) |
| Hardware Specification | No | The provided paper does not include specific details about the hardware used for running experiments (e.g., GPU/CPU models, memory, or cloud instance types) in its main text. |
| Software Dependencies | No | The paper mentions 'PyTorch' in the context of automatic differentiation but does not specify its version or any other software dependencies with version numbers. |
| Experiment Setup | Yes | Following [30], λd is chosen from {0.2, 0.5, 1.0, 2.0, 5.0} (see implementation details in Appendix G). (An illustrative selection sketch is given after this table.) |
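
The paper describes Growing Circle only at a high level: a binary-classification extension of the Circle dataset in [30] that incorporates temporal changes. The generative details below are therefore purely assumed for illustration. This hedged Python sketch builds a toy stream of 2D points labeled by a circular decision boundary whose radius grows over time; the radius schedule, sampling range, and labeling rule are guesses, not the authors' construction.

```python
import numpy as np

def make_growing_circle(num_steps=20, points_per_step=100, seed=0):
    """Illustrative (assumed) generator: at each time step, sample 2D points
    and label them by whether they fall inside a circle whose radius grows
    over time. NOT the paper's construction, only a plausible sketch."""
    rng = np.random.default_rng(seed)
    xs, ys, ts = [], [], []
    for t in range(num_steps):
        radius = 1.0 + 0.1 * t                      # assumed linear growth over time
        points = rng.uniform(-3.0, 3.0, size=(points_per_step, 2))
        labels = (np.linalg.norm(points, axis=1) < radius).astype(np.int64)
        xs.append(points)
        ys.append(labels)
        ts.append(np.full(points_per_step, t))
    return np.concatenate(xs), np.concatenate(ys), np.concatenate(ts)

# Example usage: build the stream and inspect its shape and class balance.
X, y, t = make_growing_circle()
print(X.shape, y.mean(), t.max())
```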
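
To make the "Dataset Splits" and "Experiment Setup" rows concrete, here is a minimal, hedged Python sketch of how the reported split sizes (5250/750/1800 sequences) and the λd grid {0.2, 0.5, 1.0, 2.0, 5.0} might be wired into a validation-based selection loop. `train_fn` and `eval_fn` are hypothetical placeholders for the authors' training and evaluation routines, which the paper does not expose, and selecting λd by validation accuracy is an assumption; the paper only states that λd is chosen following [30].

```python
import random

TRAIN_SIZE, VALID_SIZE, TEST_SIZE = 5250, 750, 1800   # sizes quoted in the paper
LAMBDA_D_GRID = [0.2, 0.5, 1.0, 2.0, 5.0]             # grid quoted in the paper

def split_sequences(sequences, seed=0):
    """Shuffle and split a list of sequences into train/valid/test of the reported sizes."""
    rng = random.Random(seed)
    sequences = list(sequences)
    rng.shuffle(sequences)
    train = sequences[:TRAIN_SIZE]
    valid = sequences[TRAIN_SIZE:TRAIN_SIZE + VALID_SIZE]
    test = sequences[TRAIN_SIZE + VALID_SIZE:TRAIN_SIZE + VALID_SIZE + TEST_SIZE]
    return train, valid, test

def select_lambda_d(train, valid, train_fn, eval_fn):
    """Pick lambda_d by validation score; train_fn/eval_fn are hypothetical placeholders."""
    best_lambda, best_score = None, float("-inf")
    for lam in LAMBDA_D_GRID:
        model = train_fn(train, lambda_d=lam)       # assumed to return a trained model
        score = eval_fn(model, valid)               # assumed to return a scalar score
        if score > best_score:
            best_lambda, best_score = lam, score
    return best_lambda
```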