Teaching Active Human Learners

Authors: Zizhe Wang, Hailong Sun | Pages: 5850-5857

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results with both simulated learners and real crowdsourcing workers demonstrate that our teaching algorithm has better teaching performance compared to existing methods. The section presents the experimental evaluation of our approach.
Researcher Affiliation | Academia | 1 SKLSDE Lab, School of Computer Science and Engineering, Beihang University, Beijing, China 100191; 2 School of Software, Beihang University, Beijing, China 100191; 3 Beijing Advanced Innovation Center for Big Data and Brain Computing, Beihang University, Beijing, China 100191; wangzz@act.buaa.edu.cn, sunhl@buaa.edu.cn
Pseudocode | Yes | Algorithm 1 ALTA. Input: X, H, P0, ϵ. Output: A. 1: A = ∅ 2: while F(A) < E[err | ∅] − P0(h*)·ϵ do 3: x = argmax_{x ∈ X} F(A ∪ {x}) 4: A = A ∪ {x} 5: end while
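The greedy loop in Algorithm 1 can be sketched in Python. This is a minimal reading, not the authors' implementation: `F`, `err_empty`, and `p0_hstar` are stand-ins for the paper's objective F(A), E[err | ∅], and P0(h*), and the pool-exhaustion guard is an addition not present in the pseudocode.

```python
def alta(X, F, err_empty, p0_hstar, eps):
    """Greedy teaching-set selection in the shape of Algorithm 1 (ALTA).

    F        -- assumed set-function objective F(A) (higher is better)
    err_empty-- stand-in for E[err | empty set], error before teaching
    p0_hstar -- stand-in for P0(h*), prior mass on the target hypothesis
    eps      -- tolerance epsilon from the stopping condition
    """
    A = set()
    target = err_empty - p0_hstar * eps   # threshold from line 2
    while F(A) < target:
        candidates = [x for x in X if x not in A]
        if not candidates:                # guard: pool exhausted
            break
        # line 3: pick the example whose addition maximizes F
        best = max(candidates, key=lambda x: F(A | {x}))
        A.add(best)                       # line 4
    return A

# Toy run: each of the first three picks "reduces error" by 0.1,
# so the loop stops once F(A) >= 0.5 - 1.0 * 0.25 = 0.25.
toy_F = lambda A: 0.1 * min(len(A), 3)
A = alta(range(5), toy_F, err_empty=0.5, p0_hstar=1.0, eps=0.25)
```

Greedy selection is the natural fit here because F(A) is used as a monotone set objective; the stopping condition ends teaching once the expected error-reduction target is met.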
Open Source Code | Yes | The source code and data are publicly available at https://github.com/Brickkkkkk/ALTA AAAI21.
Open Datasets | Yes | We conducted experiments on both simulated learners and real human learners with four datasets, Butterfly, Chinese Character, Woodpecker, and Breast Cancer, which are widely used in existing work (Singla et al. 2014; Aodha et al. 2018). The last one is the breast cancer dataset (Dua and Graff 2017), with 569 samples of 30 dimensions each. Since this dataset provides only feature vectors and no images, we conducted only simulated experiments on it. URL: http://archive.ics.uci.edu/ml, accessed 2020-06-20.
Dataset Splits | No | For each dataset, we sampled 80% of the examples in each category as the teaching example set X and used the remaining examples as a test set. The paper specifies a training/test split but does not explicitly mention a separate validation split or its details.
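The per-category 80/20 split described above can be sketched as follows. The helper name, the seeded shuffle, and the rounding rule are illustrative assumptions; the paper does not specify how ties or seeds were handled.

```python
import random
from collections import defaultdict

def split_per_category(examples, labels, teach_frac=0.8, seed=0):
    """Sample teach_frac of each category as the teaching set X and
    keep the remainder as the test set (sketch of the paper's split)."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for ex, y in zip(examples, labels):
        by_label[y].append((ex, y))
    teach, test = [], []
    for items in by_label.values():
        rng.shuffle(items)                     # random sample per category
        k = int(round(teach_frac * len(items)))
        teach += items[:k]
        test += items[k:]
    return teach, test
```

Splitting inside each category keeps the class proportions identical in both sets, which a plain random split over the whole pool would not guarantee.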
Hardware Specification | No | The paper does not provide any specific hardware details such as GPU/CPU models, memory amounts, or types of computing resources used for running the experiments.
Software Dependencies | No | The paper does not provide specific software dependency details, such as library names with version numbers (e.g., Python, PyTorch, TensorFlow versions) needed to replicate the experiments.
Experiment Setup | Yes | The parameters of our learner model were set as α = 0.5, β = 0.001, γ = 1, η = 3. We ran the algorithm under different teaching set sizes |A| and plotted the changing trend of the expected error. We also studied the effectiveness of B(x) and C(x) by conducting an ablation experiment.
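The |A|-versus-expected-error sweep can be sketched as a budgeted greedy run. The greedy selector and the scoring function `F` are assumptions for illustration; the excerpt does not spell out how the curve was produced.

```python
def error_curve(X, F, budgets):
    """Record F(A) at increasing teaching-set sizes |A|, reusing the
    partial set between budgets (greedy selections are nested)."""
    A, curve = set(), []
    for b in sorted(budgets):
        while len(A) < b:
            pool = [x for x in X if x not in A]
            if not pool:
                break
            # greedily grow A one example at a time up to the budget
            A.add(max(pool, key=lambda x: F(A | {x})))
        curve.append((b, F(A)))
    return curve
```

Plotting the resulting (|A|, score) pairs gives the expected-error trend; sorting the budgets lets each larger teaching set extend the previous one instead of being rebuilt from scratch.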