Learning Transferable Graph Exploration

Authors: Hanjun Dai, Yujia Li, Chenglong Wang, Rishabh Singh, Po-Sen Huang, Pushmeet Kohli

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experimental results demonstrate that our approach is extremely effective for exploration of spatial maps; and when applied on the challenging problems of coverage-guided software-testing of domain-specific programs and real-world mobile applications, it outperforms methods that have been hand-engineered by human experts."
Researcher Affiliation | Collaboration | Hanjun Dai (Georgia Institute of Technology; Google Brain, hadai@google.com), Yujia Li (DeepMind, yujiali@google.com), Chenglong Wang (University of Washington, clwang@cs.washington.edu), Rishabh Singh (Google Brain, rising@google.com), Po-Sen Huang (DeepMind, posenhuang@google.com), Pushmeet Kohli (DeepMind, pushmeet@google.com)
Pseudocode | No | No explicit pseudocode or algorithm block was found in the paper.
Open Source Code | No | The paper does not contain an explicit statement about providing open-source code for the methodology described, nor a link to a code repository for their work.
Open Datasets | Yes | "We test our algorithms on two datasets of programs written in two domain-specific languages (DSLs), Robust Fill [17] and Karel [18]. For Karel, we use the published benchmark dataset [footnote 4] with the train/val/test splits; while for Robust Fill, the training data was generated using a program synthesizer that is described in [17]." Footnote 4: https://msr-redmond.github.io/karel-dataset/
Dataset Splits | Yes | Table 1 (DSL program dataset information): Robust Fill — 1M train / 1,000 valid / 1,000 test, coverage metric: RegEx; Karel — 212,524 train / 490 valid / 467 test, coverage metric: Branches.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU/GPU models, memory, or specific cloud instances) used for running the experiments.
Software Dependencies | No | The paper mentions various software components and tools such as the Z3 SMT solver, AFL, Neuzz, RNNs, MLPs, GNNs, and GGNN, but does not provide specific version numbers for any of these dependencies.
Experiment Setup | Yes | "We train on random mazes of size 6 x 6, and test on 100 held-out mazes from the same distribution. The starting location is chosen randomly. We allow the agent to traverse for T = 36 steps, and report the average fraction of the maze grid locations covered on the 100 held-out mazes. To train this agent, we adopt the advantage actor critic algorithm [16], in the synchronized distributed setting. We use 32 distributed actors to collect on-policy trajectories in parallel, and aggregate them into a single machine to perform parameter update." For the exploration and testing problem on mobile apps, a fixed interaction budget of T = 15 is used.
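The experiment setup above describes synchronized advantage actor-critic (A2C) training with 32 parallel actors, each collecting a T = 36 step on-policy trajectory before a single centralized parameter update. The following is a minimal sketch of that collection-and-aggregation pattern, not the authors' code: the environment, rewards, critic values, and the discount factor GAMMA are all toy assumptions for illustration.

```python
import random

NUM_ACTORS = 32   # distributed actors collecting on-policy trajectories
T = 36            # per-episode step budget on the 6 x 6 mazes
GAMMA = 0.99      # discount factor (assumption; not stated in the paper)

def rollout(rng, t_max=T):
    """Toy stand-in for one actor's rollout: random rewards and critic values."""
    rewards = [rng.random() for _ in range(t_max)]
    values = [rng.random() for _ in range(t_max)]
    return rewards, values

def discounted_returns(rewards, gamma=GAMMA):
    """G_t = r_t + gamma * G_{t+1}, computed backwards over the trajectory."""
    returns = [0.0] * len(rewards)
    g = 0.0
    for t in reversed(range(len(rewards))):
        g = rewards[t] + gamma * g
        returns[t] = g
    return returns

# Synchronized collection: wait for all 32 trajectories, then do one
# centralized update (here we only assemble the advantage estimates).
rng = random.Random(0)
batch_advantages = []
for _ in range(NUM_ACTORS):
    rewards, values = rollout(rng)
    returns = discounted_returns(rewards)
    advantages = [g - v for g, v in zip(returns, values)]  # A_t = G_t - V(s_t)
    batch_advantages.append(advantages)

# batch_advantages (32 trajectories x 36 steps) would drive the actor-critic
# gradient step on the single learner machine.
print(len(batch_advantages), len(batch_advantages[0]))  # → 32 36
```

The synchronized variant differs from asynchronous A3C in that all actors' trajectories are gathered before each update, so the policy used for collection is identical across the batch.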