Iterative Machine Teaching
Authors: Weiyang Liu, Bo Dai, Ahmad Humayun, Charlene Tay, Chen Yu, Linda B. Smith, James M. Rehg, Le Song
ICML 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To corroborate our theoretical findings, we also conduct extensive experiments on both synthetic data and real image data. In both cases, the experimental results verify our theoretical findings and the effectiveness of our proposed iterative teaching algorithms. |
| Researcher Affiliation | Academia | 1 Georgia Institute of Technology, 2 Indiana University. |
| Pseudocode | Yes | Algorithm 1 The omniscient teacher |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code for the described methodology. |
| Open Datasets | Yes | We also validate our theoretical findings with extensive experiments on different data distribution and real image datasets. We further evaluate our teacher models on MNIST dataset. We extend our teacher models from binary classification to multi-class classification. The teacher models are used to teach the final fully connected (FC) layers in convolutional neural network on CIFAR-10. Using our teaching model, we analyze cropped object instances obtained from ego-centric video of an infant playing with toys (Yurovsky et al., 2013). |
| Dataset Splits | No | The paper mentions training and testing but does not explicitly specify training/validation/test dataset splits or cross-validation details for reproduction. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | No | Parameters and setup. Detailed experimental setup is given in Appendix B. We mostly evaluate the practical pool-based teaching (without rescaling). For fairness, learning rates for all methods are the same. |
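The pseudocode the report points to, Algorithm 1 (the omniscient teacher), greedily picks, at each iteration, the pool example whose SGD step moves the learner's weights closest to the target model. Below is a minimal, hedged sketch of that pool-based selection rule for a least-squares learner; the synthetic pool, dimensions, learning rate, and loss are illustrative assumptions, not the paper's exact setup.

```python
import numpy as np

rng = np.random.default_rng(0)

d, n_pool, eta, T = 5, 200, 0.05, 300
w_star = rng.normal(size=d)        # target model known to the omniscient teacher
X = rng.normal(size=(n_pool, d))   # candidate teaching pool (synthetic, illustrative)
y = X @ w_star                     # labels produced by the target model

w = np.zeros(d)                    # learner's initial weights
for t in range(T):
    # Per-example gradient of the squared loss 1/2 (w.x - y)^2: (w.x - y) * x
    grads = (X @ w - y)[:, None] * X
    # Greedy criterion: minimize ||w - eta*g - w_star||^2 - ||w - w_star||^2
    #   = eta^2 ||g||^2 - 2 eta <w - w_star, g>
    scores = eta**2 * np.sum(grads**2, axis=1) - 2 * eta * (grads @ (w - w_star))
    g = grads[np.argmin(scores)]   # teacher shows the best example
    w = w - eta * g                # learner takes one SGD step on it

print(np.linalg.norm(w - w_star))  # distance to the target model after teaching
```

With a greedy teacher the distance to `w_star` shrinks far faster than under random example order, which is the exponential-teachability behavior the paper's theory describes; swapping in a random `argmin` replacement reproduces the plain-SGD baseline for comparison.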