Nonparametric Teaching for Multiple Learners

Authors: Chen Zhang, Xiaofeng Cao, Weiyang Liu, Ivor Tsang, James Kwok

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Lastly, we conduct extensive experiments to validate the practicality and efficiency of MINT."
Researcher Affiliation | Academia | (1) School of Artificial Intelligence, Jilin University, China; (2) Max Planck Institute for Intelligent Systems, Germany; (3) University of Cambridge, UK; (4) CFAR and IHPC, Agency for Science, Technology and Research (A*STAR), Singapore; (5) Hong Kong University of Science and Technology, Hong Kong, China
Pseudocode | Yes | The pseudocode for RFT and GFT in both the vanilla and communicated MINT is given as Algorithm 1, "Greedy (Random) Functional Teaching for the Communicated (Vanilla) MINT".
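The greedy functional teaching loop referenced by Algorithm 1 can be sketched for a single learner as follows. This is a minimal illustrative toy, not the paper's implementation: the function names (`greedy_functional_teach`, `evaluate`), the teaching pool, the step size, and the sine target are all assumptions, and the learner is modeled as a plain kernel expansion updated by functional gradient steps.

```python
import math

def rbf(x, xp):
    # RBF kernel K(x, x') = exp(-(x - x')^2), matching the paper's kernel in 1D
    return math.exp(-(x - xp) ** 2)

def evaluate(model, x):
    # Learner's current estimate f(x) = sum_k a_k * K(c_k, x) (kernel expansion)
    return sum(a * rbf(c, x) for a, c in model)

def greedy_functional_teach(f_star, pool, eta=0.5, eps=1e-3, T=500):
    """Hypothetical sketch of a greedy functional teaching loop.

    Teacher: show the pool example where the learner disagrees most
    with the target f*.  Learner: take a functional gradient step
    f <- f - eta * (f(x) - f*(x)) * K(x, .).
    """
    model = []  # learner starts from f^0 = 0
    for _ in range(T):
        errors = [abs(evaluate(model, x) - f_star(x)) for x in pool]
        worst = max(range(len(pool)), key=lambda i: errors[i])
        if errors[worst] < eps:
            break  # learner is eps-close to the target on the pool
        x = pool[worst]
        model.append((-eta * (evaluate(model, x) - f_star(x)), x))
    return model

# Usage: teach a smooth 1D target on a small example pool
pool = [-2.0 + 0.2 * i for i in range(21)]
target = math.sin
learner = greedy_functional_teach(target, pool)
```

The random (RFT) variant would simply replace the `max` selection with a uniform draw from the pool; greedy selection (GFT) typically needs fewer teaching examples to reach the same discrepancy.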
Open Source Code | Yes | "Our source code is available at https://github.com/chen2hang/MINT_NonparametricTeaching."
Open Datasets | No | The paper uses common image types ('grayscale figure', 'lion image with three channels in RGB format') and synthetic data ('1D Gaussian', 'bivariate mixture Gaussian'), but it does not provide concrete access information (a specific link, DOI, repository name, or formal citation with authors and year) for a publicly available training dataset, beyond linking to specific image files or describing the synthetic data generation.
Dataset Splits | No | The paper does not explicitly describe training, validation, or test splits; it describes the teaching process but not how the data is partitioned for evaluation.
Hardware Specification | Yes | "Our implementation relies on the Intel(R) Core(TM) i7-8750H processor and utilizes NVIDIA graphics cards, specifically the GTX 1050 Ti with Max-Q Design and RTX6000."
Software Dependencies | No | The paper does not specify version numbers for any software dependencies or libraries used in the implementation.
Experiment Setup | Yes | "For all experiments, we align with [73] to set the RBF kernel $K(x, x') = \exp(-\|x - x'\|^2)$ and to take the empirical (average) $L_2$ norm defined in the vector-valued Hilbert space to measure the difference between $f \in \mathcal{H}^d$ and $f^* \in \mathcal{H}^d$: $\mathcal{M}(f, f^*) = \|f - f^*\|_{\mathcal{H}^d} = \sqrt{\frac{1}{d} \sum_{j=1}^{d} \big(f_j(x_{i,j}) - f^*_j(x_{i,j})\big)^2}$. Input: target $f^* \in \mathcal{H}^d$, initial $f^0 \in \mathcal{H}^d$, small constants $\epsilon, \epsilon_0 > 0$, and maximal iteration numbers $T, T_0$."
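The kernel and discrepancy measure in the setup above can be sketched directly. This is an illustrative stand-alone snippet, not code from the paper's repository; the function names and the example values are made up, and the discrepancy follows the empirical average-$L_2$ form (square root of the mean squared per-output difference).

```python
import math

def rbf_kernel(x, xp):
    # K(x, x') = exp(-||x - x'||^2), the paper's RBF kernel choice
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, xp))
    return math.exp(-sq_dist)

def discrepancy(f_vals, f_star_vals):
    """Empirical (average) L2 discrepancy between two d-output functions
    evaluated at matched inputs: sqrt((1/d) * sum_j (f_j - f*_j)^2)."""
    d = len(f_vals)
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(f_vals, f_star_vals)) / d)

# Identical inputs give K = 1; distance shrinks the kernel value toward 0
print(rbf_kernel([0.0, 0.0], [0.0, 0.0]))           # -> 1.0
print(discrepancy([1.0, 2.0, 3.0], [1.0, 2.0, 3.0]))  # -> 0.0
```

The teaching loop would terminate once this discrepancy drops below the small constant $\epsilon$ or the iteration budget $T$ is exhausted.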