Sequential Density Ratio Estimation for Simultaneous Optimization of Speed and Accuracy
Authors: Akinori F Ebihara, Taiki Miyagawa, Kazuyuki Sakurai, Hitoshi Imaoka
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In tests on one original and two public video databases, Nosaic MNIST, UCF101, and SiW, the SPRT-TANDEM achieves statistically significantly better classification accuracy than other baseline classifiers, with a smaller number of data samples. (Section 5, Experiments and Results) |
| Researcher Affiliation | Collaboration | NEC Corporation; RIKEN Center for Advanced Intelligence Project (AIP) |
| Pseudocode | No | The paper describes the proposed algorithm in text and mathematical formulas but does not include any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code and Nosaic MNIST are publicly available at https://github.com/TaikiMiyagawa/SPRT-TANDEM. |
| Open Datasets | Yes | Evaluated public databases are NMNIST, UCF, and SiW. ... The code and Nosaic MNIST are publicly available at https://github.com/TaikiMiyagawa/SPRT-TANDEM. ... UCF101 action recognition database (Soomro et al., 2012) and Spoofing in the Wild (SiW) database (Liu et al., 2018). |
| Dataset Splits | Yes | Training, validation, and test datasets are split and fixed throughout the experiment. ... The training, validation, and test datasets contain 50,000, 10,000, and 10,000 videos with frames of size 28×28×1 (grayscale). |
| Hardware Specification | Yes | All the experiments are conducted with custom Python scripts running on NVIDIA GeForce RTX 2080 Ti, GTX 1080 Ti, or GTX 1080 graphics cards. |
| Software Dependencies | Yes | We use TensorFlow 2.0.0 (Abadi et al., 2015) as a machine learning framework except when running baseline algorithms that are implemented with PyTorch (Paszke et al., 2019). |
| Experiment Setup | Yes | Hyperparameters of all the models are optimized with Optuna unless otherwise noted so that no models are disadvantaged by choice of hyperparameters. See Appendix H for the search spaces and fixed final parameters. |