A Framework to Learn with Interpretation

Authors: Jayneel Parekh, Pavlo Mozharovskyi, Florence d'Alché-Buc

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We validate our approach against several state-of-the-art methods on multiple datasets and show its efficacy on both kinds of tasks."
Researcher Affiliation | Academia | "LTCI, Télécom Paris, Institut Polytechnique de Paris, France"
Pseudocode | Yes | "Algorithm 1 Learning algorithm for FLINT"
Open Source Code | Yes | "Implementation of our method is available on Github."
Open Datasets | Yes | "We consider 4 datasets for experiments: MNIST [34], Fashion MNIST [50], CIFAR-10 [30], and a subset of the Quick Draw dataset [20]."
Dataset Splits | No | The paper mentions using test data but does not explicitly provide percentages or counts for the training, validation, and test splits in the main text; some details are deferred to the supplementary material.
Hardware Specification | No | The paper does not specify the hardware used to run the experiments (e.g., GPU/CPU models, memory, or computing environment).
Software Dependencies | No | The paper mentions deep neural network architectures (e.g., LeNet, ResNet) but does not provide version numbers for any software dependencies or libraries used in the implementation.
Experiment Setup | Yes | "We set the number of attributes J = 25 for MNIST, Fashion MNIST, J = 24 for Quick Draw and J = 36 for CIFAR. Further details about the Quick Draw subset, precise architecture, ablation studies about choice of hyperparameters (hidden layers, size of attribute dictionary, loss scheduling) and optimization details are available in supplementary (Sec. S.2)."
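For context on the Pseudocode and Experiment Setup rows above: FLINT jointly trains a predictor and an interpreter that shares the predictor's hidden features and maps them to a dictionary of J attributes. The sketch below is a minimal, hypothetical reading of that setup in PyTorch, not the authors' released implementation (see their Github link for that). The module names (`Predictor`, `Interpreter`), the tapped feature layer, the single-channel input, and the loss weights `beta`/`gamma` are assumptions; the paper's entropy-based conciseness terms and input-fidelity decoder are simplified here to an L1 sparsity penalty.

```python
# Minimal sketch of a FLINT-style joint training step (cf. Algorithm 1).
# Assumptions: PyTorch; a small conv net standing in for the paper's
# LeNet/ResNet backbones; grayscale 28x28 inputs (MNIST-like); loss
# weights are illustrative, not the authors' settings.
import torch
import torch.nn as nn
import torch.nn.functional as F

J = 25  # attribute dictionary size (paper: 25 for MNIST/Fashion MNIST,
        # 24 for Quick Draw, 36 for CIFAR-10)

class Predictor(nn.Module):
    """Classifier f; exposes the intermediate features the interpreter taps."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.head = nn.Linear(64 * 16, num_classes)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return self.head(h), h

class Interpreter(nn.Module):
    """Interpreter g: hidden features -> J attribute activations -> class scores."""
    def __init__(self, feat_dim, num_classes=10):
        super().__init__()
        self.attributes = nn.Sequential(nn.Linear(feat_dim, J), nn.ReLU())
        self.classify = nn.Linear(J, num_classes)

    def forward(self, h):
        phi = self.attributes(h)  # attribute vector Phi(x) in R^J
        return self.classify(phi), phi

f, g = Predictor(), Interpreter(feat_dim=64 * 16)
opt = torch.optim.Adam(list(f.parameters()) + list(g.parameters()), lr=1e-3)

def train_step(x, y, beta=1.0, gamma=0.1):  # beta/gamma: hypothetical weights
    logits_f, h = f(x)
    logits_g, phi = g(h)
    loss_pred = F.cross_entropy(logits_f, y)            # predictor accuracy
    loss_of = F.kl_div(                                 # output fidelity:
        F.log_softmax(logits_g, dim=1),                 # interpreter mimics
        F.softmax(logits_f.detach(), dim=1),            # the predictor
        reduction="batchmean",
    )
    loss_sparse = phi.abs().mean()  # stand-in for the conciseness terms
    loss = loss_pred + beta * loss_of + gamma * loss_sparse
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

A training loop would call `train_step` on minibatches from any of the four datasets, setting `J` to the per-dataset value quoted in the Experiment Setup row.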