End-to-End Goal-Driven Web Navigation
Authors: Rodrigo Nogueira, Kyunghyun Cho
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We extensively evaluate different variants of neural net based artificial agents on WikiNav and observe that the proposed goal-driven web navigation well reflects the advances in models, making it a suitable benchmark for evaluating future progress. |
| Researcher Affiliation | Academia | Rodrigo Nogueira Tandon School of Engineering New York University rodrigonogueira@nyu.edu Kyunghyun Cho Courant Institute of Mathematical Sciences New York University kyunghyun.cho@nyu.edu |
| Pseudocode | No | The paper describes the model and training process in text and with a graphical illustration (Fig. 1) but does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code and datasets are publicly available at github.com/nyu-dl/WebNav. |
| Open Datasets | Yes | We release a software tool, called WebNav, that automatically transforms a website into this goal-driven web navigation task, and as an example, we make WikiNav, a dataset constructed from the English Wikipedia. The source code and datasets are publicly available at github.com/nyu-dl/WebNav. |
| Dataset Splits | Yes | We divide those pairs into 113k training, 10k validation, and 10k test examples while carefully ensuring that no article appears in more than one partition. Table 1 reports dataset statistics of WikiNav-4-*, WikiNav-8-*, WikiNav-16-*, and WikiNav-Jeopardy, with example counts given in the Train, Valid, and Test columns. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU or CPU models, or memory specifications. |
| Software Dependencies | No | The paper mentions models like LSTM and refers to previous work for word embeddings, but it does not specify any software names with version numbers, such as programming languages, libraries, or frameworks used for implementation or experiments. |
| Experiment Setup | No | The paper describes architectural choices (e.g., LSTM units, attention-based query representation) and training methods (e.g., supervised learning, SGD), but it does not provide specific numerical values for hyperparameters such as learning rate, batch size, or number of epochs. |
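The dataset-splits criterion above checks whether no article appears in more than one partition. A minimal sketch of how such an article-disjoint split could be produced is shown below; the function name and split logic are illustrative assumptions, not the authors' actual preprocessing code.

```python
import random
from collections import defaultdict

def split_by_article(pairs, n_valid=10_000, n_test=10_000, seed=0):
    """Partition (query, article) pairs into train/valid/test so that
    no article appears in more than one partition.

    Illustrative sketch only: group pairs by article, shuffle the
    articles, then assign whole groups to test, valid, and train.
    """
    by_article = defaultdict(list)
    for query, article in pairs:
        by_article[article].append((query, article))

    articles = list(by_article)
    random.Random(seed).shuffle(articles)

    train, valid, test = [], [], []
    for article in articles:
        group = by_article[article]
        if len(test) < n_test:        # fill test split first
            test.extend(group)
        elif len(valid) < n_valid:    # then validation
            valid.extend(group)
        else:                         # remainder goes to training
            train.extend(group)
    return train, valid, test
```

Because entire article groups are assigned to a single partition, the train/valid/test article sets are disjoint by construction, which is the property the reproducibility check verifies.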