Deep Recurrent Optimal Stopping
Authors: Niranjan Damera Venkata, Chiranjib Bhattacharyya
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 5 (Experiments): "We compare our OSPG algorithm using deep neural networks (DNN-OSPG) and recurrent neural networks (RNN-OSPG) against the following model-free discrete-time optimal stopping approaches." |
| Researcher Affiliation | Collaboration | Niranjan Damera Venkata, Digital and Transformation Organization, HP Inc., Chennai, India (niranjan.damera.venkata@hp.com); Chiranjib Bhattacharyya, Dept. of CSA and RBCCPS, Indian Institute of Science, Bangalore, India (chiru@iisc.ac.in) |
| Pseudocode | Yes | Algorithm 1: Pseudocode for mini-batch computation of our temporal OSPG loss (an illustrative stopping-loss sketch follows the table). |
| Open Source Code | No | The paper does not provide any explicit statements or links indicating that the source code for the methodology described is publicly available. |
| Open Datasets | Yes | We select 17 multi-class time-series classification datasets from the UCR time-series repository [11] (see Appendix D for details of the datasets and selection process). |
| Dataset Splits | Yes | We train models on ten random 50% train-test splits, holding out 20% of the training data as a validation set (a split-protocol sketch follows the table). |
| Hardware Specification | Yes | All experiments were performed on a shared server configured with 2 Intel Xeon Silver 12-core, 2.10 GHz CPUs with 256 GB RAM and equipped with 6 NVIDIA 2080Ti GPUs. However, experiments were run on a single GPU at a time, and no computation was distributed across GPUs. |
| Software Dependencies | No | The paper mentions software such as Keras and the Adam optimizer but does not give their version numbers or the Python version, which are needed for reproducible software dependencies (a version-capture and single-GPU sketch follows the table). |
| Experiment Setup | Yes | Table 2 shows general hyper-parameter settings used for all experiments. |
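The paper's Algorithm 1 is not reproduced in this report, but a common way to compute a mini-batch temporal stopping objective is the soft stopping-time decomposition sketched below. This is an illustrative assumption, not the authors' OSPG loss: the function name `soft_stopping_loss`, the tensor shapes, and the forced stop at the final step are all choices made for the sketch.

```python
import tensorflow as tf

def soft_stopping_loss(stop_probs, rewards):
    """Negative expected reward under a soft stopping policy (illustrative).

    stop_probs: (batch, T) per-step stopping probabilities in [0, 1],
                e.g. sigmoid outputs of an RNN head.
    rewards:    (batch, T) reward obtained if the process stops at step t.
    """
    # Force stopping at the last step so the stopping-time distribution
    # sums to one over t (an assumption of this sketch).
    p = tf.concat([stop_probs[:, :-1], tf.ones_like(stop_probs[:, -1:])], axis=1)
    # survive[:, t] = probability of not having stopped before step t.
    survive = tf.math.cumprod(1.0 - p, axis=1, exclusive=True)
    q = p * survive  # distribution over stopping times
    expected_reward = tf.reduce_sum(q * rewards, axis=1)
    return -tf.reduce_mean(expected_reward)
```

Because `q` is differentiable in the network outputs, this loss can be minimized directly with a gradient-based optimizer such as Adam.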
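The split protocol quoted above (ten random 50% train-test splits, with 20% of the training data held out for validation) could be reproduced along the lines below. The function name `make_splits`, the stratification, and the seeding scheme are assumptions of this sketch; the paper does not state them. The UCR datasets themselves can be fetched with, e.g., sktime's `load_UCR_UEA_dataset`.

```python
from sklearn.model_selection import train_test_split

def make_splits(X, y, n_splits=10, seed0=0):
    """Yield ten random 50/50 train-test splits, each with 20% of the
    training half held out as a validation set (illustrative sketch)."""
    for i in range(n_splits):
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.5, stratify=y, random_state=seed0 + i)
        X_tr, X_val, y_tr, y_val = train_test_split(
            X_tr, y_tr, test_size=0.2, stratify=y_tr, random_state=seed0 + i)
        yield (X_tr, y_tr), (X_val, y_val), (X_te, y_te)
```

Fixing the random seeds per split, as above, is one way to make the ten splits themselves reproducible across reruns.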
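Since the paper uses Keras, runs on a single GPU at a time, and omits version numbers, a minimal TensorFlow sketch covering both points is given below. The memory-growth setting is an assumption of the sketch, not something the paper specifies.

```python
import sys
import tensorflow as tf

# Pin the process to a single GPU, matching the quoted hardware setup.
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    tf.config.set_visible_devices(gpus[0], "GPU")
    tf.config.experimental.set_memory_growth(gpus[0], True)

# Record the versions the paper leaves unspecified.
print("python    :", sys.version.split()[0])
print("tensorflow:", tf.__version__)
print("keras     :", getattr(tf.keras, "__version__", "bundled with TF"))
```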