Human Memory Search as Initial-Visit Emitting Random Walk

Authors: Kwang-Sung Jun, Xiaojin Zhu, Timothy T. Rogers, Zhuoran Yang, Ming Yuan

NeurIPS 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In Section 3, we apply INVITE to both toy data and real-world fluency data. On toy data our experiments empirically confirm the consistency result. On actual human responses from verbal fluency INVITE outperforms off-the-shelf baselines.
Researcher Affiliation | Academia | Kwang-Sung Jun, Xiaojin Zhu, Timothy Rogers (Wisconsin Institute for Discovery, Department of Computer Sciences, and Department of Psychology, University of Wisconsin-Madison; kjun@discovery.wisc.edu, jerryzhu@cs.wisc.edu, ttrogers@wisc.edu); Zhuoran Yang (Department of Mathematical Sciences, Tsinghua University; yzr11@mails.tsinghua.edu.cn); Ming Yuan (Department of Statistics, University of Wisconsin-Madison; myuan@stat.wisc.edu)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement or link indicating the availability of open-source code for the described methodology.
Open Datasets | No | The data used to assess human memory search consists of two verbal fluency datasets from the Wisconsin Longitudinal Survey (WLS). The paper identifies the WLS as the source but does not provide concrete access information (link, DOI, or a specific citation for access) for the preprocessed dataset used in the experiments.
Dataset Splits | Yes | We randomly subsample 10% of the lists as the test set, and use the rest as the training set. We perform 5-fold CV on the training set for each estimator to find the best smoothing parameter C_β, C_RW, C_FE ∈ {10^1, 10^0.5, 10^0, 10^−0.5, 10^−1, 10^−1.5, 10^−2}, respectively, where the validation measure is the prefix log-likelihood for INVITE and the standard random walk likelihood for RW.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU/CPU models, memory, or specific cloud instances) used to run the experiments.
Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., programming languages, libraries, or specific solvers).
Experiment Setup | Yes | We perform 5-fold CV on the training set for each estimator to find the best smoothing parameter C_β, C_RW, C_FE ∈ {10^1, 10^0.5, 10^0, 10^−0.5, 10^−1, 10^−1.5, 10^−2}, respectively... Let η_t = γ_0(1 + γ_0·a·t)^(−c). We use a = C_β/m and c = 3/4 following [3] and pick γ_0 by running the algorithm on a small subsample of the train set. We run ASGD for a fixed number of epochs and take the final β_t as the solution.
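
The "prefix log likelihood for INVITE" used as the validation measure in the Dataset Splits row can be made concrete: in the paper's construction, the probability of the next first-visit item given the observed prefix is the probability that the underlying walk, with the already-visited items kept as transient states and all other items made absorbing, is absorbed at that item. Below is a minimal NumPy sketch of that computation; the function name and index bookkeeping are ours, not from the paper.

```python
import numpy as np

def invite_log_likelihood(P, lst):
    """Prefix log-likelihood of a censored fluency list under INVITE.

    P   : n x n row-stochastic transition matrix of the underlying walk.
    lst : list of distinct item indices; lst[0] is the walk's start state.
    """
    ll = 0.0
    n = P.shape[0]
    for k in range(1, len(lst)):
        visited = lst[:k]                          # transient states
        absorbing = [j for j in range(n) if j not in visited]
        Q = P[np.ix_(visited, visited)]            # transient -> transient
        R = P[np.ix_(visited, absorbing)]          # transient -> absorbing
        B = np.linalg.solve(np.eye(k) - Q, R)      # absorption probs, (I - Q)^-1 R
        ll += np.log(B[k - 1, absorbing.index(lst[k])])  # walk continues from lst[k-1]
    return ll
```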
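
The train/test and cross-validation protocol quoted in the Dataset Splits row is likewise easy to sketch. The snippet below assumes hypothetical caller-supplied `fit(train_lists, C)` and `score(model, val_lists)` callables (the score being the prefix log-likelihood for INVITE and the standard random-walk likelihood for RW); the parameter grid is the one quoted above.

```python
import numpy as np
from sklearn.model_selection import KFold, train_test_split

def select_smoothing(lists, fit, score,
                     grid=10.0 ** np.arange(1.0, -2.5, -0.5), seed=0):
    """Hold out a random 10% of lists as the test set, then pick the
    smoothing parameter C by 5-fold CV on the remaining training lists."""
    train, test = train_test_split(lists, test_size=0.10, random_state=seed)
    folds = KFold(n_splits=5, shuffle=True, random_state=seed)

    def cv_score(C):
        return np.mean([score(fit([train[i] for i in tr], C),
                              [train[i] for i in va])
                        for tr, va in folds.split(train)])

    best_C = max(grid, key=cv_score)   # highest mean validation likelihood
    return best_C, train, test
```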
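
Finally, the quoted ASGD schedule η_t = γ_0(1 + γ_0·a·t)^(−c) with a = C_β/m and c = 3/4 can be written out as follows. Here `grad(beta, lst)` is a hypothetical per-list gradient of the negative log-likelihood (the setup text does not spell it out), and γ_0 would be tuned on a small subsample of the training set, as the setup describes.

```python
import numpy as np

def asgd(train_lists, grad, beta0, gamma0, C_beta, n_epochs=10, c=0.75, seed=0):
    """Averaged SGD with step size eta_t = gamma0 * (1 + gamma0*a*t)**(-c)."""
    rng = np.random.default_rng(seed)
    m = len(train_lists)
    a = C_beta / m                       # decay constant a = C_beta / m
    beta, beta_bar, t = beta0.copy(), beta0.copy(), 0
    for _ in range(n_epochs):            # fixed number of epochs
        for i in rng.permutation(m):
            eta = gamma0 * (1.0 + gamma0 * a * t) ** (-c)
            beta -= eta * grad(beta, train_lists[i])
            beta_bar += (beta - beta_bar) / (t + 1)   # Polyak-Ruppert average
            t += 1
    return beta                          # the setup takes the final beta_t
```

Returning the final iterate rather than the running average mirrors the quoted setup; `beta_bar` is kept only to show the averaging step that gives ASGD its name.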