FEAMOE: Fair, Explainable and Adaptive Mixture of Experts

Authors: Shubham Sharma, Jette Henderson, Joydeep Ghosh

IJCAI 2023

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Experiments on multiple datasets show that our framework as applied to a mixture of linear experts is able to perform comparably to neural networks in terms of accuracy while producing fairer models. |
| Researcher Affiliation | Collaboration | Shubham Sharma (1), Jette Henderson (2) and Joydeep Ghosh (1); (1) The University of Texas at Austin, (2) TecnoTree; {shubham_sharma, jghosh}@utexas.edu, jette.henderson@gmail.com |
| Pseudocode | Yes | Algorithm 1: Learning FEAMOE (a hedged sketch of such a learner appears after this table). |
| Open Source Code | Yes | We show experimentally (in the supplementary material) that FEAMOE can work comparably or better to adapt for drift. Supplementary material: https://drive.google.com/file/d/1l2qz50Flvj4VAEvrRrH4Gdy3QAmCRnY/view?usp=sharing |
| Open Datasets | Yes | UCI Adult [Kohavi, 1996] and COMPAS [ProPublica, 2016]; the large HMDA (Home Mortgage Disclosure Act) dataset [Bureau, 2020]. |
| Dataset Splits | No | No specific dataset splits (e.g., percentages for training, validation, and test sets) are provided in the main text; an assumed split for reproduction is sketched below. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory amounts, or cloud computing instance types) are provided for the experimental setup. |
| Software Dependencies | No | The paper mentions scikit-learn but does not provide a specific version number; no other software dependencies with version numbers are listed. |
| Experiment Setup | Yes | A two-layer multilayer perceptron with 30 hidden units per layer was trained for the UCI Adult dataset, and a five-layer multilayer perceptron with 50 hidden units per layer is the baseline neural network for the HMDA dataset. Experts are added every 4000 data points for UCI Adult, and the hyperparameters associated with the fairness constraints are incremented in steps of 0.02 per expert (see the baseline configuration sketched below). |
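Because the paper reports no train/validation/test split, anyone attempting reproduction must choose one. The snippet below is a minimal sketch, assuming Python with scikit-learn (the only dependency the paper names): it loads UCI Adult from OpenML and applies a 70/30 stratified split. The split ratio, OpenML source, and random seed are assumptions, not values from the paper.

```python
from sklearn.datasets import fetch_openml
from sklearn.model_selection import train_test_split

# UCI Adult via OpenML. The paper states no split, so the 70/30 stratified
# split below is purely an assumption made to enable a reproduction attempt.
adult = fetch_openml("adult", version=2, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(
    adult.data, adult.target, test_size=0.3,
    stratify=adult.target, random_state=0,
)
```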
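The baseline networks are described only by depth and width (two layers of 30 units for Adult, five layers of 50 units for HMDA). A minimal sketch of how they might be instantiated in scikit-learn follows; the solver, learning rate, and iteration budget are assumptions, since the paper does not state them.

```python
from sklearn.neural_network import MLPClassifier

# Architectures follow the paper's description; max_iter and all other
# hyperparameters are assumptions (the paper leaves them unspecified).
adult_mlp = MLPClassifier(hidden_layer_sizes=(30, 30), max_iter=500)            # 2 layers x 30 units
hmda_mlp = MLPClassifier(hidden_layer_sizes=(50, 50, 50, 50, 50), max_iter=500)  # 5 layers x 50 units
```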
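Algorithm 1 itself is not reproduced in this report, but the setup details in the table (linear experts, a new expert every 4000 points, fairness weights stepped by 0.02 per expert) suggest the following hedged sketch of an online fair mixture of linear experts. The class name `FeamoeSketch`, the SGD update rule, and the running demographic-parity proxy are all illustrative assumptions, not the authors' method.

```python
import numpy as np

class FeamoeSketch:
    """Hypothetical FEAMOE-style mixture of linear (logistic) experts.

    NOT the authors' implementation: the online update and the fairness
    proxy (a running demographic-parity gap) are assumptions. Mixture:
        p(x) = sum_k g_k(x) * sigmoid(w_k . x + b_k),  g = softmax(U x).
    """

    def __init__(self, n_features, lr=0.05):
        self.d, self.lr = n_features, lr
        self.W, self.b, self.U = [], [], []  # expert weights/biases, gate rows
        self.lams = []                       # per-expert fairness weights
        self.dp_gap = 0.0                    # running demographic-parity gap

    def add_expert(self, lam):
        """Append a fresh linear expert with fairness weight lam."""
        self.W.append(np.zeros(self.d))
        self.b.append(0.0)
        self.U.append(np.zeros(self.d))
        self.lams.append(lam)

    def predict_proba(self, x):
        u = np.array([row @ x for row in self.U])
        g = np.exp(u - u.max())
        g /= g.sum()                                       # softmax gate
        s = 1.0 / (1.0 + np.exp(-(np.array(self.W) @ x + np.array(self.b))))
        return g, s, float(g @ s)                          # gate, experts, mixture

    def partial_fit(self, x, y, a):
        """One SGD step on (x, y); a in {0, 1} is the protected attribute."""
        g, s, p = self.predict_proba(x)
        # crude running estimate of the parity gap E[p | a=1] - E[p | a=0]
        self.dp_gap = 0.99 * self.dp_gap + 0.01 * (p if a == 1 else -p)
        dldp = (p - y) / max(p * (1 - p), 1e-6)            # d(log loss)/dp
        for k in range(len(self.W)):
            # stochastic gradient of lam_k * dp_gap^2 through this example
            fair = 2.0 * self.lams[k] * self.dp_gap * (1.0 if a == 1 else -1.0)
            dz = (dldp + fair) * g[k] * s[k] * (1.0 - s[k])  # expert logit grad
            self.W[k] -= self.lr * dz * x
            self.b[k] -= self.lr * dz
            du = (dldp + fair) * g[k] * (s[k] - p)           # gate logit grad
            self.U[k] -= self.lr * du * x

# Usage with the paper's reported schedule for UCI Adult; `stream` is a
# hypothetical iterator yielding (features, label, protected attribute).
moe = FeamoeSketch(n_features=X_train.shape[1])
lam = 0.0
for i, (x, y, a) in enumerate(stream):
    if i % 4000 == 0:       # paper: a new expert every 4000 data points
        lam += 0.02         # paper: fairness weight stepped by 0.02 per expert
        moe.add_expert(lam)
    moe.partial_fit(x, y, a)
```

The per-example parity-gap estimate is the weakest assumption here; a batch estimate over a sliding window would track the demographic-parity constraint more faithfully at the cost of extra bookkeeping.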