Surveys without Questions: A Reinforcement Learning Approach
Authors: Atanu R Sinha, Deepali Jain, Nikhil Sheoran, Sopan Khosla, Reshmi Sasidharan
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper reports an empirical evaluation, noting that on validation against actual survey data, the proxy ratings yield reasonable performance, with the dataset split randomly into training (75%) and testing (25%) sets. |
| Researcher Affiliation | Industry | 1 Adobe Research, India; 2 Adobe, India. Contact: atr@adobe.com, jaindeepali@google.com, {sheoran, skhosla, rsasidha}@adobe.com |
| Pseudocode | No | The paper describes the methods using mathematical equations and text, and includes a diagram (Figure 1), but does not present structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any statement about releasing source code or provide a link to a code repository. |
| Open Datasets | No | The paper states, "Clickstream data from the website of a consumer electronics company are used," indicating proprietary data without providing any public access information, links, or citations for the dataset. |
| Dataset Splits | Yes | The dataset is split randomly into training (75%) and testing (25%) sets (a minimal split sketch appears after this table). |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU, GPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions using an LSTM model and RNN, but does not provide specific software names with version numbers for dependencies (e.g., 'TensorFlow' or 'PyTorch' with versions). |
| Experiment Setup | No | The paper states that the state-representation dimension of 150 was chosen via limited hyper-parameter tuning over (50, 100, 150, 200), and refers to a learning rate (alpha), but gives no concrete values for these or for other hyperparameters such as batch size, number of epochs, or optimizer settings (an assumed LSTM state-encoder sketch appears after this table). |
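
The 75%/25% random split noted in the Dataset Splits row is the only experimental-protocol detail the paper makes explicit. Below is a minimal sketch of such a split; the use of scikit-learn, the seed, and all variable and function names are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of the 75%/25% random split reported in the paper.
# scikit-learn, the seed, and the names below are assumptions for illustration.
from sklearn.model_selection import train_test_split

def split_sessions(session_features, session_labels, seed=0):
    """Randomly hold out 25% of user sessions for testing."""
    return train_test_split(
        session_features,
        session_labels,
        test_size=0.25,   # 25% test / 75% train, as reported in the paper
        random_state=seed,
    )
```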
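
The only modeling details the paper discloses are an LSTM-based (RNN) state representation with a fixed dimension of 150, selected from (50, 100, 150, 200). The sketch below shows one way such an encoder could look; PyTorch, the input dimension, the single-layer structure, and all names are assumptions, since the paper specifies neither the software stack nor the full architecture.

```python
# Hedged sketch of an LSTM state encoder producing the 150-dimensional state
# representation mentioned in the paper. PyTorch, the input dimension, and the
# single-layer structure are assumptions for illustration only.
import torch
import torch.nn as nn

class StateEncoder(nn.Module):
    def __init__(self, input_dim=32, state_dim=150):
        super().__init__()
        # state_dim=150 was reportedly chosen from (50, 100, 150, 200).
        self.lstm = nn.LSTM(input_dim, state_dim, batch_first=True)

    def forward(self, clickstream):
        # clickstream: (batch, sequence_length, input_dim) event features
        _, (hidden, _) = self.lstm(clickstream)
        return hidden[-1]  # (batch, state_dim) session state representation

# Example usage (shapes only): encoding 8 sessions of 20 clickstream events each
# yields an (8, 150) state tensor.
states = StateEncoder()(torch.randn(8, 20, 32))
```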