Estimating informativeness of samples with Smooth Unique Information

Authors: Hrayr Harutyunyan, Alessandro Achille, Giovanni Paolini, Orchid Majumder, Avinash Ravichandran, Rahul Bhotika, Stefano Soatto

ICLR 2021

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | In this section, we test the validity of linearized network approximation in terms of estimating the effects of removing an example and show several applications of the proposed information measures. Additional results and details are provided in the supplementary Sec. A. |
| Researcher Affiliation | Collaboration | 1 Amazon Web Services, 2 USC Information Sciences Institute; hrayrhar@usc.edu, {aachille, paoling, orchid}@amazon.com, {ravinash, bhotikar, soattos}@amazon.com |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The implementation of the proposed method and the code for reproducing the experiments is available at https://github.com/awslabs/aws-cv-unique-information. |
| Open Datasets | Yes | Kaggle. Dogs vs. Cats, 2013. URL https://www.kaggle.com/c/dogs-vs-cats/overview. |
| Dataset Splits | Yes | In all datasets used the validation set also has 1000 samples. |
| Hardware Specification | No | The paper does not provide specific hardware details such as exact GPU/CPU models or processor types used for running its experiments. |
| Software Dependencies | No | The paper does not specify software dependencies with version numbers (e.g., Python, libraries, or frameworks with their respective versions) required to replicate the experiment. |
| Experiment Setup | Yes | For the MLP network on MNIST 4 vs 9 we set t = 2000 and η = 0.001; for the ResNet-18 on cats vs dogs classification t = 1000 and η = 0.001; and for the ResNet-18 on the iCassava dataset t = 5000 and η = 0.0003. |
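The hyperparameter settings quoted in the Experiment Setup row can be collected into a small configuration table. The sketch below is illustrative only: the experiment names, dict structure, and helper function are assumptions, and only the iteration counts t and learning rates η come from the paper.

```python
# Hypothetical summary of the per-experiment hyperparameters quoted above.
# Keys and structure are illustrative; the t (training iterations) and
# eta (learning rate) values are the ones reported in the paper.
EXPERIMENT_CONFIGS = {
    "mlp_mnist_4_vs_9":      {"model": "MLP",       "t": 2000, "eta": 0.001},
    "resnet18_cats_vs_dogs": {"model": "ResNet-18", "t": 1000, "eta": 0.001},
    "resnet18_icassava":     {"model": "ResNet-18", "t": 5000, "eta": 0.0003},
}

def get_config(experiment: str) -> dict:
    """Look up the reported settings for a named experiment (hypothetical helper)."""
    return EXPERIMENT_CONFIGS[experiment]

if __name__ == "__main__":
    cfg = get_config("resnet18_icassava")
    print(cfg["model"], cfg["t"], cfg["eta"])  # ResNet-18 5000 0.0003
```

Keeping the settings in one place like this makes it easy to cross-check a reimplementation against the values the paper reports.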