Mitigating Gender Bias in Face Recognition using the von Mises-Fisher Mixture Model
Authors: Jean-Rémy Conti, Nathan Noiry, Stephan Clémençon, Vincent Despiegel, Stéphane Gentric
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | extensive numerical experiments on a variety of datasets show that a careful selection significantly reduces gender bias. and 4. Numerical Experiments |
| Researcher Affiliation | Collaboration | ¹LTCI, Télécom Paris, Institut Polytechnique de Paris; ²Idemia. |
| Pseudocode | No | The paper does not contain any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | We plan to release the code used to conduct our experiments. |
| Open Datasets | Yes | It has been trained on the MS1M-RetinaFace dataset (also called MS1MV3), introduced by (Deng et al., 2019b) in the ICCV 2019 Lightweight Face Recognition Challenge. and We choose IJB-C (Maze et al., 2018)... and All the models are evaluated on the LFW dataset (Huang et al., 2008). |
| Dataset Splits | No | The paper mentions training the Ethical Module on the training set used to train the pre-trained models and performing a grid-search for hyperparameter selection on IJB-C, but it does not provide explicit training, validation, or test dataset splits (percentages or counts) needed for reproduction. |
| Hardware Specification | Yes | Using one single GPU (NVIDIA RTX 3090), the computation of the embeddings takes 4 hours and each training takes 8 hours. |
| Software Dependencies | Yes | high precision using a Python library for arbitrary-precision floating-point arithmetic such as mpmath (Johansson et al., 2021; Kim, 2021). |
| Experiment Setup | Yes | The MLP within our Ethical Module has an input layer of 512 units... a shallow MLP of size (512, 1024, 512) with a ReLU activation after the first layer... For each experiment, we train the Ethical Module during 50 epochs with the Adam optimizer (Kingma & Ba, 2014). The batch size is set to 1024 and the learning rate to 0.01. |
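The Software Dependencies row quotes the paper's use of arbitrary-precision arithmetic (mpmath) when evaluating the von Mises-Fisher normalizing constant, which involves a modified Bessel function of large order. As a minimal illustration of why this is numerically delicate, the sketch below evaluates the Bessel series in log-space with only the standard library; this is an assumption-laden stand-in for the paper's mpmath-based computation, not its actual code.

```python
import math

def log_bessel_i(nu: float, x: float, terms: int = 60) -> float:
    """Log of the modified Bessel function I_nu(x) via its power series,
    summed in log-space (log-sum-exp) so large orders nu do not overflow."""
    log_terms = [
        (2 * m + nu) * math.log(x / 2.0)
        - math.lgamma(m + 1)
        - math.lgamma(m + nu + 1)
        for m in range(terms)
    ]
    t_max = max(log_terms)
    return t_max + math.log(sum(math.exp(t - t_max) for t in log_terms))

def log_vmf_normalizer(d: int, kappa: float) -> float:
    """Log normalizing constant of a von Mises-Fisher distribution on the
    unit sphere in R^d:  C_d(kappa) = kappa^(d/2-1) / ((2 pi)^(d/2) I_{d/2-1}(kappa)).
    For d = 512 (the embedding dimension above), I_{255}(kappa) under- or
    overflows in naive double precision, hence the paper's use of mpmath.
    """
    nu = d / 2.0 - 1.0
    return (nu * math.log(kappa)
            - (d / 2.0) * math.log(2.0 * math.pi)
            - log_bessel_i(nu, kappa))
```

For d = 3 this reduces to the closed form kappa / (4 pi sinh kappa), which gives a quick sanity check of the series.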
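The Experiment Setup row pins down the Ethical Module's architecture: a shallow MLP of size (512, 1024, 512) with a ReLU after the first layer, mapping a 512-d pre-trained face embedding to a new 512-d embedding. A minimal NumPy sketch of that forward pass is below; the weight initialization is a hypothetical choice of ours, and the training loop (50 epochs, Adam, learning rate 0.01, batch size 1024, as quoted above) is not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shallow MLP of size (512, 1024, 512), per the paper's description.
# He/Xavier-style scaling is an assumption, not taken from the paper.
W1 = rng.standard_normal((512, 1024)) * np.sqrt(2.0 / 512)
b1 = np.zeros(1024)
W2 = rng.standard_normal((1024, 512)) * np.sqrt(1.0 / 1024)
b2 = np.zeros(512)

def ethical_module(x: np.ndarray) -> np.ndarray:
    """Map 512-d pre-trained embeddings to new 512-d embeddings.
    ReLU is applied after the first layer only, as stated in the paper."""
    h = np.maximum(x @ W1 + b1, 0.0)
    return h @ W2 + b2

# One batch of the size used in the paper's training (1024 embeddings).
batch = rng.standard_normal((1024, 512))
out = ethical_module(batch)
```

In the paper this module is trained on top of frozen pre-trained embeddings, so only these two layers would receive gradient updates.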