Robust and Fast Measure of Information via Low-Rank Representation

Authors: Yuxin Dong, Tieliang Gong, Shujian Yu, Hong Chen, Chen Li

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct large-scale experiments to evaluate the effectiveness of this new information measure, demonstrating superior results compared to matrix-based Rényi's entropy in terms of both performance and computational efficiency. In this section, we evaluate the proposed low-rank Rényi's entropy and the approximation algorithms under large-scale experiments.
Researcher Affiliation | Academia | 1 Xi'an Jiaotong University, China; 2 Vrije Universiteit Amsterdam; 3 Huazhong Agricultural University, China. dongyuxin@stu.xjtu.edu.cn, adidasgtl@gmail.com, yusj9011@gmail.com, chenh@mail.hzau.edu.cn, cli@xjtu.edu.cn
Pseudocode | Yes | Algorithm 1: Approximation via Random Projection; Algorithm 2: Approximation via Lanczos Iteration
Open Source Code | No | The paper includes a link to a GitHub repository (https://github.com/Gamepiaynmo/LRMI) in a footnote (1) attached to a discussion of uniqueness in the appendix. However, it does not explicitly state that this repository provides the source code for the main methodology described in the paper.
Open Datasets | Yes | We test the performance of matrix-based Rényi's IB (MRIB) (Yu, Yu, and Principe 2021) and our low-rank variant (LRIB) with variational approximation-based objectives using VGG16 as the backbone and CIFAR10 as the classification task. We evaluate the performance of matrix-based Rényi's mutual information (MRMI) and our low-rank variant (LRMI) with these methods on 8 widely-used classification datasets as shown in Table 3.
Dataset Splits | Yes | All models are trained for 300 epochs with 100 batch size and 0.1 initial learning rate which is divided by 10 every 100 epochs. Following the settings of (Yu et al. 2019), we select α ∈ {0.6, 1.01, 2}, k ∈ {100, 200, 400} via cross-validation, s = k + 50, and use the Gaussian kernel of width σ = 1 for matrix-based entropy measures.
Hardware Specification | Yes | Our experiments are conducted on an Intel i7-10700 (2.90 GHz) CPU and an RTX 2080Ti GPU with 64 GB of RAM.
Software Dependencies | No | The algorithms are implemented in C++ with the Eigen library and in Python with the PyTorch library. While specific software is mentioned, no version numbers are provided for Eigen, PyTorch, C++, or Python.
Experiment Setup | Yes | All models are trained for 300 epochs with 100 batch size and 0.1 initial learning rate which is divided by 10 every 100 epochs. Following the settings in (Yu, Yu, and Principe 2021), we select α = 1.01, β = 0.01, k = 10 and s = 20.
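The approximation algorithms the table refers to reduce the cost of matrix-based Rényi's α-entropy, S_α(A) = log₂(tr(A^α))/(1−α), by computing only the top-k eigenvalues of the normalized Gram matrix instead of a full eigendecomposition. A minimal sketch of that idea, assuming a Gaussian kernel of width σ and using SciPy's Lanczos-based `eigsh` as a stand-in for the paper's Algorithm 2 (function names here are illustrative, not taken from the paper's code):

```python
import numpy as np
from scipy.sparse.linalg import eigsh

def gram_matrix(X, sigma=1.0):
    # Gaussian-kernel Gram matrix, normalized so that trace(A) = 1.
    sq = np.sum(X ** 2, axis=1)
    d2 = sq[:, None] + sq[None, :] - 2.0 * (X @ X.T)
    K = np.exp(-d2 / (2.0 * sigma ** 2))
    return K / np.trace(K)

def low_rank_renyi_entropy(A, alpha=1.01, k=10):
    # Approximate S_alpha(A) = log2(sum_i lambda_i^alpha) / (1 - alpha)
    # using only the k largest eigenvalues of A, obtained via
    # Lanczos iteration (scipy's eigsh).
    eigvals = eigsh(A, k=k, return_eigenvectors=False)
    eigvals = np.clip(eigvals, 0.0, None)  # guard against tiny negatives
    return np.log2(np.sum(eigvals ** alpha)) / (1.0 - alpha)

# Illustrative usage with the hyperparameters quoted above (sigma = 1, k = 10).
X = np.random.RandomState(0).randn(100, 5)
A = gram_matrix(X, sigma=1.0)
H = low_rank_renyi_entropy(A, alpha=1.01, k=10)
```

Since trace(A) = 1 and α > 1, the truncated eigenvalue sum satisfies Σλᵢ^α ≤ 1, so the estimate is non-negative; the Lanczos solve costs O(n²k) rather than the O(n³) of a full eigendecomposition.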