Iterative Classroom Teaching

Authors: Teresa Yeo, Parameswaran Kamalaruban, Adish Singla, Arpit Merchant, Thibault Asselborn, Louis Faucon, Pierre Dillenbourg, Volkan Cevher (pp. 5684-5692)

AAAI 2019

Reproducibility Variable | Result | Evidence from the Paper
Research Type | Experimental | The paper states: "Our experiments validate our theoretical results and suggest that appropriately partitioning the classroom into homogenous groups provides a balance between these two objectives." It contains experimental sections (Teaching Linear Models with Synthetic Data; Teaching How to Classify Butterflies and Moths; Teaching How to Write), and Figures 3 and 4 present experimental results.
Researcher Affiliation | Academia | Teresa Yeo (LIONS, EPFL, teresa.yeo@epfl.ch); Parameswaran Kamalaruban (LIONS, EPFL, kamalaruban.parameswaran@epfl.ch); Adish Singla (MPI-SWS, adishs@mpi-sws.org); Arpit Merchant (MPI-SWS, arpitdm@mpi-sws.org); Thibault Asselborn (CHILI Lab, EPFL, thibault.asselborn@epfl.ch); Louis Faucon (CHILI Lab, EPFL, louis.faucon@epfl.ch); Pierre Dillenbourg (CHILI Lab, EPFL, pierre.dillenbourg@epfl.ch); Volkan Cevher (LIONS, EPFL, volkan.cevher@epfl.ch)
Pseudocode | Yes | The paper presents Algorithm 1 (CT: Classroom teaching algorithm).
Open Source Code | No | The paper references an extended version (Yeo et al. 2018), an arXiv preprint, but there is no explicit statement or link confirming the release of open-source code for the described methodology.
Open Datasets | No | The paper mentions a collection of 160 insect images, labeling data collected by Singla et al. (2014), and handwriting data collected from 1,014 children, but it provides no concrete access information (e.g., links, DOIs, or repository names) showing that these datasets are publicly available.
Dataset Splits | No | The paper specifies accuracy parameters for convergence (ϵ = 0.1, ϵ = 0.2) but gives no details on training/validation/test splits (e.g., percentages, sample counts, or citations to predefined splits).
Hardware Specification | No | The paper does not describe the hardware (e.g., GPU models, CPU types, or memory) used to run its experiments.
Software Dependencies | No | The paper describes an LSTM model and its architecture but does not name any software with version numbers (e.g., Python, TensorFlow, or PyTorch versions) used for the implementation.
Experiment Setup | Yes | The paper evaluates four algorithms: (i) classroom teaching (CT), where the teacher gives one example to the entire class at each iteration; (ii) CT with optimal partitioning (CTwP-Opt), where the class is partitioned as described in the paper; (iii) CT with random partitioning (CTwP-Rand), where the class is randomly assigned to groups; and (iv) individual teaching (IT), where the teacher gives a tailored example to each student. An algorithm is said to converge when (1/N) Σ_i ‖w_i^t − w*‖₂² ≤ ϵ. The number of learners is N = 300 with accuracy parameter ϵ = 0.1. For the average error and robustness of CT, the paper first considers the noise-free classroom setting with d = 25, learning rates in [0.05, 0.25], and D_X = 2. In another experiment, the classroom size is N = 60, all students use a constant learning rate η = 0.05, and the accuracy parameter is ϵ = 0.2. The LSTM model has 3 layers and 300 hidden units, and outputs a 20-component bivariate Gaussian mixture plus a Bernoulli variable indicating the end of the letter.
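To make the setup concrete, the following is a minimal, hypothetical sketch (not the authors' code) of the CT loop under the stated convergence criterion: every learner receives the same example each iteration and takes a gradient step on the squared loss, and convergence is declared when (1/N) Σ_i ‖w_i^t − w*‖₂² ≤ ϵ. The dimensions, variable names (`w_star`, `etas`), and the choice of random unit-norm teaching examples are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

d, N, eps = 5, 30, 0.1                      # illustrative sizes, not the paper's
w_star = rng.normal(size=d)                 # target linear model w*
W = rng.normal(size=(N, d))                 # row i: learner i's current model
etas = rng.uniform(0.05, 0.25, size=N)      # per-learner learning rates

def converged(W, w_star, eps):
    """Convergence test: mean squared distance to w* is at most eps."""
    return np.mean(np.sum((W - w_star) ** 2, axis=1)) <= eps

t = 0
while not converged(W, w_star, eps) and t < 10_000:
    x = rng.normal(size=d)
    x /= np.linalg.norm(x)                  # unit-norm teaching example
    y = w_star @ x                          # noise-free label
    preds = W @ x                           # every learner sees the same example
    W -= etas[:, None] * (preds - y)[:, None] * x[None, :]  # SGD on squared loss
    t += 1
```

Because the setting is noise-free and realizable, every learner's error along the taught direction shrinks by a factor (1 − η) per step, so the classroom converges without any partitioning; the partitioned variants (CTwP-Opt, CTwP-Rand) differ only in teaching each group its own example.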