Can Information Flows Suggest Targets for Interventions in Neural Circuits?

Authors: Praveen Venkatesh, Sanghamitra Dutta, Neil Mehta, Pulkit Grover

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We empirically examine whether observational measures of information flow can suggest interventions. We do so by performing experiments on artificial neural networks in the context of fairness in machine learning, where the goal is to induce fairness in the system through interventions."
Researcher Affiliation | Collaboration | Praveen Venkatesh (1*), Sanghamitra Dutta (2*), Neil Mehta (3), and Pulkit Grover (4). Affiliations: 1: Allen Institute and University of Washington, Seattle; 2: JP Morgan Chase AI Research; 3, 4: Department of Electrical and Computer Engineering; 4: Neuroscience Institute, Carnegie Mellon University.
Pseudocode | No | The paper describes its methods in detail and provides mathematical definitions, but it does not include any explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | "Code to generate these results is available online at https://github.com/praveenv253/ann-info-flow."
Open Datasets | Yes | "We also performed the same analyses on the Adult dataset from the UCI machine learning repository [58] and the MNIST dataset [59]."
Dataset Splits | No | The paper states: 'The data was split, with 50% used for training the neural network, and 50% for estimating information flows on the trained network.' It mentions nested cross-validation for SVM hyperparameters during information-flow estimation, but it does not give explicit train/validation/test splits for the main neural-network training in the main text (see the split sketch after this table).
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments in the main text.
Software Dependencies | No | The paper cites libraries such as scikit-learn, PyTorch, and SciPy in its bibliography, implying their use, but it does not state version numbers for these or other software dependencies in the main text.
Experiment Setup | Yes | "For simplicity, the neural network was taken to have just one hidden layer with three neurons, with leaky ReLU activations. The output layer is a one-hot encoding of the binary Ŷ and cross-entropy loss was used for training. ... all analyses are repeated across 100 neural networks trained on the same data but with different random weight initializations." (See the model sketch after this table.)
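
For concreteness, below is a minimal sketch of the 50/50 split quoted in the Dataset Splits row: half the data trains the network, half estimates information flows on the trained network. The placeholder arrays, variable names, and the use of scikit-learn's train_test_split are illustrative assumptions, not the authors' code; their implementation is at https://github.com/praveenv253/ann-info-flow.

```python
# Sketch of the 50/50 split described in the paper (assumed implementation).
import numpy as np
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))     # placeholder features
y = rng.integers(0, 2, size=1000)  # placeholder binary labels

# 50% for training the neural network, 50% for estimating
# information flows on the trained network.
X_train, X_flow, y_train, y_flow = train_test_split(
    X, y, test_size=0.5, random_state=0
)
```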
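The Experiment Setup row pins down the architecture enough for a sketch: one hidden layer of three neurons with leaky ReLU activations, a two-unit (one-hot) output for the binary label, cross-entropy loss, and 100 repetitions with different random weight initializations. The input width, the use of nn.Sequential, and the seeding scheme below are assumptions for illustration.

```python
# PyTorch sketch of the described setup (input width and seeding assumed).
import torch
import torch.nn as nn

def make_model(n_inputs: int = 5) -> nn.Sequential:
    return nn.Sequential(
        nn.Linear(n_inputs, 3),  # single hidden layer, three neurons
        nn.LeakyReLU(),          # leaky ReLU activations
        nn.Linear(3, 2),         # one-hot encoding of the binary label
    )

criterion = nn.CrossEntropyLoss()  # cross-entropy loss, as in the paper

# All analyses are repeated across 100 networks trained on the same
# data but with different random weight initializations:
for seed in range(100):
    torch.manual_seed(seed)
    model = make_model()
    # ... train `model` on the training half of the data ...
```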