HyDRA: Hypergradient Data Relevance Analysis for Interpreting Deep Neural Networks

Authors: Yuanyuan Chen, Boyang Li, Han Yu, Pengcheng Wu, Chunyan Miao

AAAI 2021, pp. 7081-7089 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirically, we verify that the approximation indeed results in small error and is closer to the whole-of-trajectory Hessian-aware measurements than IF. In addition, we quantitatively demonstrate that HYDRA outperforms influence functions in accurately estimating data contribution and detecting noisy data labels. The source code is available at https://github.com/cyyever/aaai_hydra." (from the Experimental Evaluation section)
Researcher Affiliation | Collaboration | (1) School of Computer Science and Engineering, Nanyang Technological University; (2) Alibaba-NTU Singapore Joint Research Institute
Pseudocode | Yes | "Algorithm 1: Hypergradient Computation" (a hedged sketch of this computation follows the table)
Open Source Code | Yes | "The source code is available at https://github.com/cyyever/aaai_hydra."
Open Datasets | Yes | "We use three image recognition datasets in our experiments: MNIST (LeCun et al. 1998), Fashion-MNIST (Xiao, Rasul, and Vollgraf 2017), and CIFAR-10 (Krizhevsky 2009)." (a loading sketch follows the table)
Dataset Splits | No | "The training dataset contains $N$ data points and is denoted as $D_{\mathrm{train}} = \{x_i, y_i\}_{i=1}^{N}$ ... the test dataset with $M$ data points is defined as $D_{\mathrm{test}} = \{x_i^{\mathrm{test}}, y_i^{\mathrm{test}}\}_{i=1}^{M}$ with $D_{\mathrm{train}} \cap D_{\mathrm{test}} = \emptyset$." (The paper defines training and test sets but gives no split percentages or counts, and never describes a validation split.)
Hardware Specification | Yes | "tracking 20000 data points on a DenseNet-40 network for 1 epoch using Hessian-vector products took about 49 hours on a server with 2 Nvidia 2080Ti GPUs, an AMD Ryzen 7 3800X 8-Core CPU, and 32 GB RAM." (see the Hessian-vector-product check after the table)
Software Dependencies | No | The paper mentions network architectures (LeNet-5, DenseNet-40) and general optimization methods, but it does not name any software libraries (e.g., PyTorch, TensorFlow) or their versions, which would be needed to reproduce the software environment.
Experiment Setup | No | The paper mentions training for "200 epochs" and refers to the "same hyperparameters as before" without ever defining those values in the main text. It also states "Details of the datasets and networks are in the supplemental material," so the hyperparameter settings live outside the main paper.
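
The Pseudocode row points at Algorithm 1 (Hypergradient Computation). To make the idea concrete, here is a minimal, hedged sketch of hypergradient tracking through plain SGD: it maintains a vector M approximating d(theta)/d(eps_i), the sensitivity of the parameters to the weight eps_i of one training point, updates it with a Hessian-vector product at every step, and finally dots it with the test-loss gradient. Everything here is an assumption for illustration, including the toy linear model, the data, the helper name flat_grad, and the use of PyTorch (the paper names no framework); it is not the authors' implementation, which lives in their repository.

```python
import torch
import torch.nn as nn

def flat_grad(loss, params, create_graph=False):
    # Gradient of `loss` w.r.t. all parameters, flattened into one vector.
    grads = torch.autograd.grad(loss, params, create_graph=create_graph)
    return torch.cat([g.reshape(-1) for g in grads])

torch.manual_seed(0)
model = nn.Linear(5, 1)      # toy stand-in for LeNet-5 / DenseNet-40
params = list(model.parameters())
loss_fn = nn.MSELoss()

X, y = torch.randn(20, 5), torch.randn(20, 1)
x_test, y_test = torch.randn(1, 5), torch.randn(1, 1)

i = 3                        # index of the single tracked training point
M = torch.zeros(sum(p.numel() for p in params))   # M ~= d(theta)/d(eps_i)
lr, batch = 0.1, 4

for epoch in range(5):
    for s in range(0, len(X), batch):
        loss = loss_fn(model(X[s:s + batch]), y[s:s + batch])
        g = flat_grad(loss, params, create_graph=True)

        # Hessian-vector product H_t @ M via double backprop; HyDRA's fast
        # approximation drops this term at the cost of a small error.
        M = M - lr * flat_grad(g @ M, params)

        if s <= i < s + batch:   # tracked point appears in this mini-batch
            g_i = flat_grad(loss_fn(model(X[i:i + 1]), y[i:i + 1]), params)
            M = M - lr * g_i / batch

        with torch.no_grad():    # ordinary SGD step on the flattened gradient
            g_flat, off = g.detach(), 0
            for p in params:
                p -= lr * g_flat[off:off + p.numel()].view_as(p)
                off += p.numel()

# Hypergradient of the test loss w.r.t. eps_i: inner product with M.
g_test = flat_grad(loss_fn(model(x_test), y_test), params)
print("estimated relevance of point", i, "=", (g_test @ M).item())
```

The Hessian-vector-product line is exactly what the Research Type row's quote refers to: the paper's accelerated variant omits it and reports that this "results in small error" relative to the whole-of-trajectory Hessian-aware computation.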
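For the Open Datasets row: all three benchmarks are available through torchvision, which makes the data side easy to reproduce even though the splits row above flags missing details. torchvision is our assumption; as the Software Dependencies row notes, the paper names no libraries.

```python
from torchvision import datasets, transforms

tf = transforms.ToTensor()
root = "data"  # hypothetical download directory
mnist = datasets.MNIST(root, train=True, download=True, transform=tf)
fmnist = datasets.FashionMNIST(root, train=True, download=True, transform=tf)
cifar10 = datasets.CIFAR10(root, train=True, download=True, transform=tf)
# Standard training-set sizes: 60000, 60000, 50000.
print(len(mnist), len(fmnist), len(cifar10))
```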
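The Hardware row's 49-hour figure is driven by Hessian-vector products: each tracked point needs one HVP per SGD step, and each HVP is an extra backward pass through a double-backprop graph, so cost scales with tracked points times training steps. Below is a tiny self-contained check of the primitive using torch.autograd.functional.hvp on a quadratic whose Hessian is known exactly; PyTorch is again our assumption.

```python
import torch
from torch.autograd.functional import hvp

# The quadratic loss 0.5 * w^T A w (A symmetric) has Hessian exactly A,
# so hvp(loss, w, v) must return A @ v without forming A explicitly.
A = torch.tensor([[3.0, 1.0],
                  [1.0, 2.0]])

def loss(w):
    return 0.5 * w @ A @ w

w = torch.ones(2)
v = torch.tensor([1.0, 0.0])
out, hv = hvp(loss, w, v)
print(hv)   # tensor([3., 1.]) == A @ v
```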