Reinforced Approximate Exploratory Data Analysis

Authors: Shaddy Garg, Subrata Mitra, Tong Yu, Yash Gadhia, Arjun Kashettiwar

AAAI 2023

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Evaluations with 3 real datasets show that our technique can preserve the original insight generation flow while improving the interaction latency, compared to baseline methods. ... We show the effectiveness of our technique by extensively evaluating our solution on 3 real-world datasets and against several baselines." |
| Researcher Affiliation | Collaboration | "Shaddy Garg¹, Subrata Mitra¹*, Tong Yu¹, Yash Gadhia², Arjun Kashettiwar²; ¹ Adobe Research, ² Indian Institute of Technology, Bombay" |
| Pseudocode | Yes | "Algorithm 1: APPROXEDA's Training Algorithm" |
| Open Source Code | No | The paper makes no explicit statement about releasing its source code, nor does it provide a link to a code repository. |
| Open Datasets | Yes | "Dataset and EDA sequences ... Flight (Transtats 2019) (also used by (Bar El et al. 2020)), Housing (Lianjia.com 2018) and Income (Bureau 2014)" |
| Dataset Splits | No | The paper mentions setting aside '1k sequences generated by the simulator using full data as held-out set for evaluations,' but it does not specify explicit training, validation, or test splits for the raw datasets used in the overall model training or simulation. |
| Hardware Specification | Yes | "All experiments used a 32-core Intel(R) Xeon(R) CPU E5-2686 with 4 Tesla V100-SXM2 GPU(s)." |
| Software Dependencies | No | The paper mentions using specific models like the 'A2C methodology' and 'BTM model' but does not provide version numbers for any software dependencies (e.g., Python, TensorFlow, PyTorch). |
| Experiment Setup | Yes | "Parameters. We use K = 4 intents from BTM as it maximizes overall UCI score. ... We use values of β and γ as 1 to provide equal weightage to rewards after scaling them." (See the reward-weighting sketch below the table.) |
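
The β = γ = 1 setting quoted in the Experiment Setup row weights two scaled reward terms equally before they are summed. Below is a minimal Python sketch of that kind of weighted combination; the component names (`insight_reward`, `latency_reward`), the min-max scaling, and the value ranges are illustrative assumptions and are not taken from the paper.

```python
# Hypothetical sketch of combining two reward components with weights
# beta and gamma, as in "values of beta and gamma as 1 to provide equal
# weightage to rewards after scaling them". Component names, scaling
# method, and ranges are assumptions for illustration only.


def min_max_scale(value, lo, hi):
    """Scale a raw reward into [0, 1] given its assumed observed range."""
    if hi == lo:
        return 0.0
    return (value - lo) / (hi - lo)


def combined_reward(insight_reward, latency_reward,
                    insight_range=(0.0, 10.0), latency_range=(0.0, 5.0),
                    beta=1.0, gamma=1.0):
    """Weighted sum of two scaled reward terms; beta = gamma = 1
    reproduces the equal weighting reported in the paper."""
    r1 = min_max_scale(insight_reward, *insight_range)
    r2 = min_max_scale(latency_reward, *latency_range)
    return beta * r1 + gamma * r2


if __name__ == "__main__":
    # Example step: insights are well preserved, latency savings are small.
    print(combined_reward(insight_reward=8.0, latency_reward=1.0))  # 0.8 + 0.2 = 1.0
```

With β = γ = 1, neither term dominates after scaling, which matches the paper's stated intent of giving equal weightage to both rewards.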