Teaching Multiple Concepts to a Forgetful Learner

Authors: Anette Hunziker, Yuxin Chen, Oisin Mac Aodha, Manuel Gomez Rodriguez, Andreas Krause, Pietro Perona, Yisong Yue, Adish Singla

Venue: NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We perform extensive evaluations using simulations along with real user studies in two concrete applications: (i) an educational app for online vocabulary teaching; and (ii) an app for teaching novices how to recognize animal species from images. Our results demonstrate the effectiveness of our algorithm compared to popular heuristic approaches."
Researcher Affiliation | Academia | University of Zurich (anette.hunziker@gmail.com); University of Chicago (chenyuxin@uchicago.edu); University of Edinburgh (oisin.macaodha@ed.ac.uk); ETH Zurich (krausea@ethz.ch); Caltech ({perona, yyue}@caltech.edu); MPI-SWS ({manuelgr, adishs}@mpi-sws.org)
Pseudocode | Yes | Algorithm 1: "Adaptive Teaching Algorithm". A hedged sketch of a generic adaptive teaching loop is given after the table.
Open Source Code | No | The paper links to the two deployed apps ([1] https://www.teaching-biodiversity.cc, [2] https://www.teaching-german.cc) but does not state that the source code for the described methodology is available at these links or elsewhere, nor does it link to a code repository.
Open Datasets | No | "For the German vocabulary teaching app, we collected 100 English-German word pairs... For the biodiversity teaching app, we collected images of 50 animal species." The paper describes datasets collected by the authors but provides no concrete access information (e.g., link, DOI, or repository) for them. Although it cites eBird and iNaturalist as motivating applications, it does not state that the collected data comes from these sources, nor does it provide access to the specific data collected.
Dataset Splits | No | The paper does not provide training/validation/test dataset splits. The experiments are simulations and user studies, which do not typically involve such splits for model training.
Hardware Specification | No | The paper does not describe any specific hardware (e.g., GPU/CPU models, memory, or cloud instances) used to run the simulations or power the applications.
Software Dependencies | No | The paper does not provide version numbers for any software components, libraries, or programming languages used in the experiments.
Experiment Setup | Yes | "Specifically, for easy concepts the parameters are θ1 = (a1 = 10, b1 = 5, c1 = 0), and for difficult concepts the parameters are θ2 = (a2 = 3, b2 = 1.5, c2 = 0)... For the Biodiversity dataset, we set the parameters of each concept based on their difficulty level. Namely, we set θ1 = (10, 5, 0) for common (i.e., easy) species and θ2 = (3, 1.5, 0) for rare (i.e., difficult) species... For the German dataset... we chose a more robust set of parameters for each of the concepts given by θ = (6, 2, 0). We run our candidate algorithms with n = 15, T = 40..." These reported values are used in the simulation sketch after the table.
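
The parameter tuples reported in the Experiment Setup row are enough to wire up a toy reproduction of the simulated learner. The following is a minimal Python sketch, assuming an HLR-style exponential forgetting model in which a concept's recall probability decays with the time since it was last taught and its half-life grows with the weighted counts of correct (a) and incorrect (b) recalls plus a bias (c). The paper's exact memory model, time units, and scheduling policy may differ; the round-robin schedule here is only a placeholder, not the paper's teaching algorithm.

```python
import random

# Sketch of a forgetful learner for one concept, assuming an HLR-style forgetting curve.
# theta = (a, b, c) are the per-concept parameters reported in the experiment setup;
# the paper's exact functional form may differ from the base-2 choice used here.
class ForgetfulConcept:
    def __init__(self, a, b, c):
        self.a, self.b, self.c = a, b, c
        self.n_correct = 0          # number of successful recalls so far
        self.n_wrong = 0            # number of failed recalls so far
        self.last_taught = None     # time of the most recent teaching interaction

    def recall_prob(self, t):
        """Probability the learner recalls this concept at time t."""
        if self.last_taught is None:
            return 0.0              # never taught yet
        half_life = 2.0 ** (self.a * self.n_correct + self.b * self.n_wrong + self.c)
        return 2.0 ** (-(t - self.last_taught) / half_life)

    def teach(self, t, rng):
        """Show the flashcard at time t; sample and record the learner's response."""
        recalled = rng.random() < self.recall_prob(t)
        if recalled:
            self.n_correct += 1
        else:
            self.n_wrong += 1       # a first exposure counts as a failed recall in this sketch
        self.last_taught = t
        return recalled

# Reported setup: theta1 = (10, 5, 0) for easy/common concepts,
# theta2 = (3, 1.5, 0) for difficult/rare ones, n = 15 concepts, T = 40 rounds.
THETA_EASY, THETA_HARD = (10, 5, 0), (3, 1.5, 0)
n, T = 15, 40
rng = random.Random(0)
concepts = [ForgetfulConcept(*(THETA_EASY if i % 2 == 0 else THETA_HARD)) for i in range(n)]

# Placeholder round-robin schedule, just to exercise the model end to end.
for t in range(T):
    concepts[t % n].teach(t, rng)

print("mean recall probability at time T:",
      sum(c.recall_prob(T) for c in concepts) / n)
```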
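For the Pseudocode entry, the table only names Algorithm 1 ("Adaptive Teaching Algorithm"). Below is a hedged, generic sketch of a greedy adaptive teaching loop of the kind such an algorithm typically instantiates: at each round it picks the concept whose review yields the largest predicted gain under a plug-in memory model (such as the ForgetfulConcept sketch shown with the experiment setup), then observes the learner's actual response and adapts. The marginal-gain criterion, the look-ahead horizon, and the optimistic "assume the review succeeds" update are assumptions of this sketch, not the paper's exact objective or guarantees.

```python
from typing import Callable, List, Tuple

History = List[Tuple[int, str, bool]]   # (time, concept, recalled) interaction log

def greedy_adaptive_teacher(
    concepts: List[str],
    predicted_recall: Callable[[History, str, int], float],  # plug-in memory model (assumed)
    run_interaction: Callable[[str, int], bool],             # shows the flashcard, returns recall outcome
    T: int,
    horizon: int = 5,
) -> History:
    """Generic greedy adaptive scheduler; a sketch, not the paper's exact Algorithm 1."""
    history: History = []
    for t in range(T):
        def gain(x: str) -> float:
            # Improvement in predicted recall of x at a fixed look-ahead horizon if
            # x is taught now; optimistically assumes the review would succeed.
            before = predicted_recall(history, x, t + horizon)
            after = predicted_recall(history + [(t, x, True)], x, t + horizon)
            return after - before
        chosen = max(concepts, key=gain)       # adaptively pick the most useful concept
        recalled = run_interaction(chosen, t)  # observe the learner's real response
        history.append((t, chosen, recalled))  # future choices adapt to this observation
    return history
```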