Knowledge Transfer with Jacobian Matching

Authors: Suraj Srinivas, Francois Fleuret

ICML 2018

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We then show experimentally on standard image datasets that Jacobian-based penalties improve distillation, robustness to noisy inputs, and transfer learning." |
| Researcher Affiliation | Academia | Idiap Research Institute & EPFL, Switzerland. |
| Pseudocode | No | Not found. The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | Not found. The paper does not provide any concrete access to source code. |
| Open Datasets | Yes | "on the CIFAR100 dataset (Krizhevsky & Hinton, 2009)" ... "Imagenet (Russakovsky et al., 2015)" ... "MIT Scenes dataset (Quattoni & Torralba, 2009)". |
| Dataset Splits | No | Not found. The paper refers to train, validation, and test sets, but gives no explicit splits (percentages, counts, or citations to standard splits). For CIFAR100, for instance, it mentions "the full CIFAR100 dataset" and varying the "number of data points per class", but not the standard train/test split. |
| Hardware Specification | No | Not found. The paper does not specify the hardware used to run its experiments. |
| Software Dependencies | No | Not found. The paper does not list supporting software with version numbers. |
| Experiment Setup | No | Not found. The paper describes architectures (e.g., VGG, ResNet) and the regularization strengths varied in the experiments, but gives no concrete hyperparameters such as learning rate, batch size, number of epochs, or optimizer settings. |
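
For context on the technique being assessed, the sketch below illustrates what a Jacobian-matching distillation penalty can look like in PyTorch. It is not the authors' implementation (the table notes that no source code is provided): the `teacher` and `student` models, the `temperature` and `lam` hyperparameters, and the choice to match only the Jacobian of the teacher's top-scoring logit are assumptions made for illustration.

```python
# Minimal sketch of a Jacobian-matching distillation penalty (not the authors'
# code). `teacher`, `student`, `temperature`, and `lam` are illustrative
# assumptions; only the Jacobian of the teacher's top-scoring logit is matched
# to keep the computation cheap.
import torch
import torch.nn.functional as F

def jacobian_matching_loss(student, teacher, x, temperature=4.0, lam=1.0):
    # Inputs must carry gradients so input-Jacobians can be taken.
    x = x.clone().requires_grad_(True)

    s_logits = student(x)
    t_logits = teacher(x)  # teacher runs with autograd enabled for its Jacobian

    # Standard soft-target distillation term (Hinton et al., 2015);
    # teacher targets are detached so no gradient flows into the teacher.
    distill = F.kl_div(
        F.log_softmax(s_logits / temperature, dim=1),
        F.softmax(t_logits.detach() / temperature, dim=1),
        reduction="batchmean",
    ) * temperature ** 2

    # Jacobians of the top-class logit with respect to the input, for both nets.
    top = t_logits.argmax(dim=1, keepdim=True)
    s_top = s_logits.gather(1, top).sum()
    t_top = t_logits.gather(1, top).sum()
    s_jac = torch.autograd.grad(s_top, x, create_graph=True)[0]
    t_jac = torch.autograd.grad(t_top, x)[0]  # no create_graph: treated as a constant target

    # Squared-error penalty between the two input-Jacobians.
    jac_penalty = (s_jac - t_jac).pow(2).flatten(1).sum(dim=1).mean()
    return distill + lam * jac_penalty
```

The paper's exact formulation (for example, how the Jacobians are normalized or approximated) may differ; the sketch only shows the general shape of a distillation loss augmented with a Jacobian penalty.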