Classification with Costly Features Using Deep Reinforcement Learning
Authors: Jaromír Janisch, Tomáš Pevný, Viliam Lisý
AAAI 2019, pp. 3959–3966 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | On a set of eight problems, we demonstrate that by replacing the linear approximation with neural networks the approach becomes comparable to the state-of-the-art algorithms developed specifically for this problem. We evaluate and compare the method on several two- and multi-class public datasets with number of features ranging from 8 to 784. |
| Researcher Affiliation | Academia | Jaromír Janisch, Tomáš Pevný, Viliam Lisý, Artificial Intelligence Center, Department of Computer Science, Faculty of Electrical Engineering, Czech Technical University in Prague {jaromir.janisch, tomas.pevny, viliam.lisy}@fel.cvut.cz |
| Pseudocode | Yes | Algorithm 1 (Training) and Algorithm 2 (Environment simulation); a hedged sketch of the environment step is given after this table. |
| Open Source Code | Yes | The source code is available at github.com/jaromiru/cwcf. |
| Open Datasets | Yes | We use several public datasets (Lichman 2013; Krizhevsky and Hinton 2009), which are summarized in Table 1. |
| Dataset Splits | Yes | We normalize the datasets with their mean and standard deviation and split them into training, validation and testing sets. Table 1 (Used datasets) lists the #trn/#val/#tst counts for each dataset, e.g., mnist: 50k/10k/10k; a hedged split sketch is given after this table. |
| Hardware Specification | No | The GPU used for this research was donated by the NVIDIA Corporation. Computational resources were provided by the CESNET LM2015042 and the CERIT Scientific Cloud LM2015085, provided under the program Projects of Large Research, Development, and Innovations Infrastructures. (No specific GPU model, CPU model, or detailed cloud instance specs mentioned). |
| Software Dependencies | No | The paper mentions 'Adam (Kingma and Ba 2015)' as an optimizer but does not specify version numbers for any software components, libraries, or programming languages used. |
| Experiment Setup | Yes | We keep other hyperparameters the same across all experiments, and their exact values are reported in Tables 2, 3 in the Appendix. Table 2: Global parameters (e.g., discount factor γ = 1.0, initial learning rate LR-start = 5×10⁻⁴). Table 3: Dataset parameters (hidden layer size, epoch length). |
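For concreteness, the following is a minimal sketch of what the per-sample environment of Algorithm 2 amounts to, assuming a reward of −λ·c(f) for acquiring feature f and a terminal reward of 0 for a correct prediction and −1 otherwise (the accuracy/cost trade-off the paper optimizes). The class and parameter names (`CostlyFeaturesEnv`, `lambda_tradeoff`) are illustrative assumptions, not the authors' implementation; see github.com/jaromiru/cwcf for the actual source.

```python
import numpy as np

class CostlyFeaturesEnv:
    """Sketch of a per-sample costly-feature environment (assumed semantics).

    State  = feature values acquired so far plus a binary mask of acquired features.
    Action = acquire feature f (pay lambda * cost[f]) or classify with label y_hat.
    """

    def __init__(self, x, y, costs, lambda_tradeoff=0.001):
        self.x, self.y = x, y              # one sample and its true label
        self.costs = costs                 # per-feature acquisition costs
        self.lam = lambda_tradeoff         # cost/accuracy trade-off lambda
        self.mask = np.zeros_like(x)       # 1 where a feature has been bought

    def state(self):
        # unacquired features are zeroed out; the mask distinguishes them from true zeros
        return np.concatenate([self.x * self.mask, self.mask])

    def step_acquire(self, f):
        """Buy feature f; the reward is the (negative) scaled acquisition cost."""
        self.mask[f] = 1.0
        return self.state(), -self.lam * self.costs[f], False   # not terminal

    def step_classify(self, y_hat):
        """Terminate with a prediction; reward 0 if correct, -1 otherwise."""
        reward = 0.0 if y_hat == self.y else -1.0
        return self.state(), reward, True                        # terminal
```

In the training loop of Algorithm 1, such transitions would be stored in a replay buffer and used to fit Q-values with the Adam optimizer, γ = 1.0 and the initial learning rate of 5×10⁻⁴ reported in Table 2.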
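The mnist counts quoted above (50k training, 10k validation, 10k test) and the mean/standard-deviation normalization can be reproduced roughly as follows. The helper name, the shuffling seed and the choice to compute the statistics on the training portion are assumptions; the paper only states that the datasets are normalized and split.

```python
import numpy as np

def normalize_and_split(X, y, n_val=10_000, n_tst=10_000, seed=0):
    """Shuffle, carve out validation/test sets, and z-score with training statistics."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    X, y = X[idx], y[idx]

    X_trn, y_trn = X[: -(n_val + n_tst)], y[: -(n_val + n_tst)]
    X_val, y_val = X[-(n_val + n_tst): -n_tst], y[-(n_val + n_tst): -n_tst]
    X_tst, y_tst = X[-n_tst:], y[-n_tst:]

    mean, std = X_trn.mean(axis=0), X_trn.std(axis=0) + 1e-8   # avoid division by zero
    scale = lambda A: (A - mean) / std
    return (scale(X_trn), y_trn), (scale(X_val), y_val), (scale(X_tst), y_tst)
```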