When to Intervene: Learning Optimal Intervention Policies for Critical Events

Authors: Niranjan Damera Venkata, Chiranjib Bhattacharyya

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type: Experimental. "Finally, we demonstrate RNN-based OTI policies with experiments and show that they outperform popular intervention methods." "Experiments are performed on two real-world datasets."
Researcher Affiliation: Collaboration. Niranjan Damera Venkata, Digital and Transformation Organization, HP Inc., Chennai, India (niranjan.damera.venkata@hp.com); Chiranjib Bhattacharyya, Dept. of CSA and RBCCPS, Indian Institute of Science, Bangalore, India (chiru@iisc.ac.in).
Pseudocode: No. The paper includes an architecture diagram (Figure 1) but does not contain any pseudocode or algorithm blocks.
Open Source Code: No. The code is proprietary.
Open Datasets: Yes. "Turbofan Engine Failure Data [26, 32]: This dataset from NASA provides train and test data..." "Azure Predictive Maintenance Guide Data [25]: This is a dataset from a guide provided by Microsoft."
Dataset Splits: Yes. "For both datasets, we train on 70% of randomly selected co-variate time-series sequences and hold out 30% of the sequences for testing. From the training set a further 30% of the sequences are set aside as validation data to tune model parameters and policy thresholds. This process is repeated to produce 10 random train-validation-test splits."
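The quoted split protocol can be sketched as follows. This is a minimal stdlib-only illustration of the described 70/30 train-test split with a further 30% validation hold-out, repeated 10 times; the paper does not specify the seeding or the library used, so `make_splits` and its signature are assumptions.

```python
import random

def make_splits(n_sequences, n_repeats=10, seed=0):
    """Sketch of the paper's split protocol: 70% train / 30% test,
    then 30% of the training sequences set aside as validation,
    repeated n_repeats times with fresh random shuffles."""
    rng = random.Random(seed)
    splits = []
    for _ in range(n_repeats):
        idx = list(range(n_sequences))
        rng.shuffle(idx)
        n_test = round(0.3 * n_sequences)          # 30% held out for testing
        test, train = idx[:n_test], idx[n_test:]
        n_val = round(0.3 * len(train))            # 30% of train for validation
        val, train = train[:n_val], train[n_val:]
        splits.append((train, val, test))
    return splits
```

For example, with 100 sequences each repeat yields 49 training, 21 validation, and 30 test sequences.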
Hardware Specification: Yes. "All experiments were run on a Tensorbook laptop with 32 GB RAM and a single NVIDIA GeForce GTX 1070 Max-Q GPU."
Software Dependencies: No. The paper does not provide specific version numbers for any software dependencies used in the experiments (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup: Yes. "All RNNs share the same encoder architecture, which is an LSTM with a 128-step look-back and a hidden-state dimension of 16 units. WBI and TTE thresholds are tuned (individually, for 125 different settings of Cα ∈ {8, 10, ..., 256}) on each validation set using an empirical intervention policy risk."
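The threshold tuning quoted above amounts to a per-cost grid search. The sketch below shows that shape; `empirical_risk(c_alpha, tau)` is a hypothetical callable standing in for the paper's empirical intervention-policy risk, which is not reproduced in this table.

```python
def tune_thresholds(cost_settings, thresholds, empirical_risk):
    """Grid-search sketch of the quoted tuning procedure: for each
    cost setting C_alpha, evaluate the empirical intervention-policy
    risk of every candidate threshold on the validation set and keep
    the minimizer. empirical_risk is an assumed stand-in callable."""
    return {
        c: min(thresholds, key=lambda tau: empirical_risk(c, tau))
        for c in cost_settings
    }
```

Note that `range(8, 257, 2)` yields exactly the 125 cost settings {8, 10, ..., 256} mentioned in the paper.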