Nonparametric Iterative Machine Teaching
Authors: Chen Zhang, Xiaofeng Cao, Weiyang Liu, Ivor Tsang, James Kwok
ICML 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we verify the correctness of our theoretical findings with extensive experiments in nonparametric scenarios. We test our RFT and GFT on both synthetic and real-world data, on which we find these two algorithms present satisfactory capability to tackle nonparametric teaching tasks. |
| Researcher Affiliation | Academia | 1School of Artificial Intelligence, Jilin University, China 2Max Planck Institute for Intelligent Systems, Tübingen, Germany 3University of Cambridge, United Kingdom 4Centre for Frontier AI Research and Institute of High Performance Computing, A*STAR, Singapore 5Hong Kong University of Science and Technology. |
| Pseudocode | Yes | Algorithm 1 Random / Greedy Functional Teaching |
| Open Source Code | Yes | Our source code is available at https://github.com/chen2hang/NonparametricTeaching. |
| Open Datasets | Yes | Consider a digit (MNIST (LeCun, 1998)) teaching instance, one can imagine a digit figure as a surface in 3D space... EMNIST from (Cohen et al., 2017)... we pick two facial figures from the ORL database (http://www.cam-orl.co.uk) |
| Dataset Splits | No | The paper does not explicitly provide training/test/validation dataset splits needed to reproduce the experiment. While it mentions using MNIST (both training and testing sets), it does not specify the split percentages or counts. |
| Hardware Specification | Yes | Our implementation is based on Intel(R) Core(TM) i7-8750H and NVIDIA GTX 1050 Ti with Max-Q Design. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., specific library versions or solver versions). |
| Experiment Setup | Yes | For this regression problem, we assume the loss function of the learner is square loss L = (y - f(x))^2... The learning rate η_t is fixed as 0.01. ... We set kernel as the popular and general RBF K(x, x') = exp(-||x - x'||^2 / (2σ^2)). |
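The quoted setup (square loss, fixed learning rate, RBF kernel) together with the "Random / Greedy Functional Teaching" pseudocode can be sketched as a minimal teaching loop. This is an illustrative reconstruction under stated assumptions, not the authors' released code: the learner's hypothesis is kept as a kernel expansion, the teacher picks one example per round (greedily by residual magnitude, or at random), and the learner takes a functional gradient step on the square loss. All function and parameter names here (`rbf`, `teach`, `greedy`, etc.) are hypothetical.

```python
import numpy as np

def rbf(x, xp, sigma=1.0):
    # RBF kernel K(x, x') = exp(-||x - x'||^2 / (2 sigma^2)), as in the quoted setup.
    return np.exp(-np.sum((x - xp) ** 2) / (2 * sigma ** 2))

def teach(X, y, eta=0.01, sigma=1.0, rounds=200, greedy=True, seed=0):
    """Illustrative random/greedy functional-teaching loop (not the authors' code).

    The learner's hypothesis is a kernel expansion f(x) = sum_i a_i K(c_i, x).
    Each round the teacher selects one example and the learner descends the
    functional gradient of the square loss L = (y - f(x))^2 at that example.
    """
    rng = np.random.default_rng(seed)
    coefs, centers = [], []

    def f(x):
        return sum(a * rbf(c, x, sigma) for a, c in zip(coefs, centers))

    for _ in range(rounds):
        residuals = np.array([f(xi) - yi for xi, yi in zip(X, y)])
        # Greedy teacher: pick the example with the largest residual magnitude
        # (for square loss the functional gradient norm scales with |f(x)-y|).
        idx = int(np.argmax(np.abs(residuals))) if greedy else int(rng.integers(len(X)))
        # Functional gradient of (y - f(x))^2 at x_s is 2 (f(x_s) - y_s) K(x_s, .);
        # step against it with the fixed learning rate eta.
        coefs.append(-2.0 * eta * residuals[idx])
        centers.append(X[idx])
    return f
```

A quick usage check on a toy 1-D regression (fitting sin on a small grid) shows the greedy variant driving the worst-case residual down over rounds; the step size here is enlarged from the paper's 0.01 purely to shorten the demo.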