Scalable Infomin Learning

Authors: Yanzhi Chen, Weihao Sun, Yingzhen Li, Adrian Weller

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on algorithmic fairness, disentangled representation learning and domain adaptation verify that our method can effectively remove unwanted information with limited time budget.
Researcher Affiliation | Academia | Yanzhi Chen (1), Weihao Sun (2), Yingzhen Li (3), Adrian Weller (1,4); (1) University of Cambridge, (2) Rutgers University, (3) Imperial College London, (4) Alan Turing Institute
Pseudocode | Yes | Algorithm 1: Adversarial Infomin Learning; Algorithm 2: Slice Infomin Learning
Open Source Code | Yes | Code is available at github.com/cyz-ai/infomin.
Open Datasets | Yes | US Census Demographic data; UCI Adult data; dSprites, a 2D shape dataset [65]; CMU-PIE, a colored face image dataset [66]; MNIST → MNIST-M; CIFAR10 → STL10.
Dataset Splits | No | The paper does not explicitly provide details about a validation dataset split (e.g., specific percentages or counts for a validation set), only mentioning training and testing data for its experiments.
Hardware Specification | Yes | All experiments are done with a single NVIDIA GeForce Tesla T4 GPU.
Software Dependencies | No | The paper mentions deep learning frameworks such as TensorFlow [35] and PyTorch [36] but does not provide specific version numbers for these or any other software dependencies.
Experiment Setup | Yes | Throughout our experiments, we use 200 slices; we find that this setting is robust across different tasks (an ablation study on the number of slices is given in Appendix B). The order K of the polynomial used in (6) is set to K = 3 and is fixed across different tasks. (A minimal sketch of these settings follows the table.)
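The slice settings reported in the last row (200 random slices, a degree-3 polynomial feature map) can be illustrated with a short PyTorch sketch of a sliced dependence penalty. This is a hedged illustration under assumptions, not the authors' implementation; see Algorithm 2 in the paper and github.com/cyz-ai/infomin for the actual estimator. The names sliced_infomin_penalty, n_slices, and poly_order are invented for this sketch.

```python
import torch

def sliced_infomin_penalty(z, t, n_slices=200, poly_order=3):
    """Rough proxy for the dependence between a representation z and an
    attribute t, measured along random 1-D slices.

    z: (batch, d_z) learned representation
    t: (batch, d_t) variable whose information should be removed
    """
    # Draw random unit-norm slicing directions for both variables.
    theta = torch.randn(z.shape[1], n_slices, device=z.device)
    theta = theta / theta.norm(dim=0, keepdim=True)
    phi = torch.randn(t.shape[1], n_slices, device=t.device)
    phi = phi / phi.norm(dim=0, keepdim=True)

    zs = z @ theta  # (batch, n_slices) sliced representations
    ts = t @ phi    # (batch, n_slices) sliced attributes

    # Degree-K polynomial expansion of each z-slice, then per-column
    # standardisation so correlations are comparable across slices.
    feats = torch.cat([zs ** k for k in range(1, poly_order + 1)], dim=1)
    feats = (feats - feats.mean(0)) / (feats.std(0) + 1e-8)
    ts_n = (ts - ts.mean(0)) / (ts.std(0) + 1e-8)

    # Sample correlation between every polynomial feature of the z-slices
    # and every t-slice; the strongest absolute correlation is the penalty.
    corr = (feats.T @ ts_n) / z.shape[0]
    return corr.abs().max()
```

In an infomin training loop, a penalty of this kind would be added to the task objective, e.g. loss = task_loss + lam * sliced_infomin_penalty(z, t), so the encoder keeps z useful for the task while driving its dependence on t toward zero.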