Measuring Dependence with Matrix-based Entropy Functional

Authors: Shujian Yu, Francesco Alesiani, Xi Yu, Robert Jenssen, Jose Principe

AAAI 2021, pp. 10781-10789

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We also show the impact of our measures in four different machine learning problems, namely the gene regulatory network inference, the robust machine learning under covariate shift and non-Gaussian noises, the subspace outlier detection, and the understanding of the learning dynamics of convolutional neural networks, to demonstrate their utilities, advantages, as well as implications to those problems."
Researcher Affiliation | Collaboration | Shujian Yu (1), Francesco Alesiani (1), Xi Yu (2), Robert Jenssen (3), Jose Principe (2); 1: NEC Laboratories Europe, 2: University of Florida, 3: UiT The Arctic University of Norway
Pseudocode | No | The paper describes its proposed measures and their properties through mathematical formulations and text, but it does not include any explicit pseudocode or algorithm blocks. (A NumPy sketch of the underlying matrix-based entropy appears after the table.)
Open Source Code | Yes | "Code of our measures and supplementary material of this work are available at: https://bit.ly/AAAI-dependence."
Open Datasets | Yes | "We resorted to the DREAM4 challenge (Marbach et al. 2012) data set for reconstructing GRN. [...] the source data is the Fashion-MNIST dataset (Xiao, Rasul, and Vollgraf 2017). [...] We select the widely used bike sharing data set (Fanaee-T and Gama 2014) in UCI repository. [...] We test on 5 publicly available data sets from the Outlier Detection Data Sets (ODDS) library (Rayana 2016)."
Dataset Splits | Yes | "We use the first three seasons' samples as source data and the fourth season's samples as target data." (A pandas sketch of this split appears after the table.)
Hardware Specification | No | The paper describes the software setup, models, and training parameters, but does not specify hardware details such as CPU/GPU models, memory, or the computing environment used for the experiments.
Software Dependencies | No | The paper mentions optimizers such as Adam and SGD, but does not provide version numbers for any software libraries, frameworks, or programming languages used in the experiments.
Experiment Setup | Yes | "The neural network architecture is set as: there are 2 convolutional layers (with, respectively, 16 and 32 filters of size 5×5) and 1 fully connected layer. We add batch normalization and a max-pooling layer after each convolutional layer. We choose ReLU activation, batch size 128 and the Adam optimizer (Kingma and Ba 2014). [...] The model of choice is a multi-layered perceptron (MLP) with three hidden layers of size 100, 100 and 10 respectively. We use batch-size of 32 and the Adam optimizer." (PyTorch sketches of both models appear after the table.)
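
Because the paper presents its measures only through mathematical formulations, the following is a minimal NumPy sketch of the matrix-based Rényi α-order entropy framework (Sánchez Giraldo, Rao, and Príncipe) that the proposed dependence measures build on. The RBF kernel, the kernel width `sigma`, and `alpha=1.01` are illustrative assumptions rather than the authors' settings, and the mutual-information combination shown is the standard one, not the paper's normalized measures.

```python
import numpy as np

def gram_matrix(X, sigma=1.0):
    """Trace-normalized RBF Gram matrix A (tr(A) = 1) for samples in the rows of X."""
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    return K / np.trace(K)

def matrix_renyi_entropy(A, alpha=1.01):
    """S_alpha(A) = (1 / (1 - alpha)) * log2(tr(A^alpha)), computed from eigenvalues."""
    lam = np.clip(np.linalg.eigvalsh(A), 0.0, None)  # guard against tiny negative eigenvalues
    return np.log2(np.sum(lam ** alpha)) / (1.0 - alpha)

def matrix_mutual_information(X, Y, sigma=1.0, alpha=1.01):
    """I_alpha(X; Y) = S(A) + S(B) - S(A∘B / tr(A∘B)), with '∘' the Hadamard product."""
    A, B = gram_matrix(X, sigma), gram_matrix(Y, sigma)
    AB = A * B
    joint = matrix_renyi_entropy(AB / np.trace(AB), alpha)
    return matrix_renyi_entropy(A, alpha) + matrix_renyi_entropy(B, alpha) - joint
```

Calling `matrix_mutual_information(X, Y)` on two (n, d) sample arrays returns a scalar score that grows with the dependence between X and Y; the eigendecomposition makes the cost cubic in the number of samples.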
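For the covariate-shift experiment on the bike sharing data, the quoted season-wise split is straightforward to reproduce. A minimal pandas sketch, assuming the UCI `hour.csv` file and its documented `season` column (coded 1 through 4):

```python
import pandas as pd

# 'hour.csv' is the hourly file of the UCI Bike Sharing data set;
# its 'season' column is coded 1-4 per the UCI documentation.
df = pd.read_csv("hour.csv")
source = df[df["season"].isin([1, 2, 3])]  # first three seasons -> source data
target = df[df["season"] == 4]             # fourth season -> target data
```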
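The convolutional network from the learning-dynamics experiment can be sketched as below. PyTorch is an assumption (the paper does not name the framework), as are the 28×28 grayscale Fashion-MNIST input, the convolution padding, and the conv → batch norm → ReLU → max-pool ordering; the filter counts, 5×5 kernels, batch size 128, and Adam follow the quoted setup.

```python
import torch
import torch.nn as nn

class SmallCNN(nn.Module):
    """Two conv layers (16 and 32 filters of size 5x5), each followed by
    batch normalization, ReLU, and max pooling, then one fully connected layer."""
    def __init__(self, num_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, padding=2),
            nn.BatchNorm2d(16), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5, padding=2),
            nn.BatchNorm2d(32), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 7 * 7, num_classes)  # 28x28 input -> 7x7 maps

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = SmallCNN()
optimizer = torch.optim.Adam(model.parameters())  # batch size 128 per the paper
```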
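Similarly, a sketch of the MLP used for the bike sharing experiment; only the hidden-layer widths (100, 100, 10), batch size 32, and Adam come from the paper, while the ReLU activations, the input width, and the single regression output are assumptions.

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Three hidden layers of size 100, 100, and 10, then one output unit."""
    def __init__(self, in_dim, out_dim=1):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 100), nn.ReLU(),
            nn.Linear(100, 100), nn.ReLU(),
            nn.Linear(100, 10), nn.ReLU(),
            nn.Linear(10, out_dim),
        )

    def forward(self, x):
        return self.net(x)

model = MLP(in_dim=12)  # input width is a placeholder for the feature count
optimizer = torch.optim.Adam(model.parameters())  # batch size 32 per the paper
```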