Disguise without Disruption: Utility-Preserving Face De-identification

Authors: Zikui Cai, Zhongpai Gao, Benjamin Planche, Meng Zheng, Terrence Chen, M. Salman Asif, Ziyan Wu

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We extensively evaluate our method using multiple datasets, demonstrating a higher de-identification rate and superior consistency compared to prior approaches in various downstream tasks."
Researcher Affiliation | Collaboration | 1 University of California, Riverside, CA; 2 United Imaging Intelligence, Burlington, MA; 3 Rensselaer Polytechnic Institute, Troy, NY
Pseudocode | No | The paper describes the proposed architecture and process using figures and equations, but it does not include any pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not contain an explicit statement about releasing code or a link to a code repository for the described methodology.
Open Datasets | Yes | "We train our models on VGGFace2 dataset (Cao et al. 2018)..." Evaluation datasets include LFW (Huang et al. 2008), CelebA-HQ (Karras et al. 2017), and WFLW (Wu et al. 2018).
Dataset Splits | Yes | "Taking facial landmark detection as an example on the WFLW dataset (Wu et al. 2018) (98 landmarks per image), we split data into training/testing sets (7,500/2,500)."
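The 7,500/2,500 training/testing split quoted above can be sketched as follows. This is an illustrative stand-in: the file names are hypothetical, and WFLW's official split is defined by its annotation files rather than by random shuffling, so the random split here only reproduces the sizes, not the official partition.

```python
import random

# Hypothetical file names; WFLW provides 10,000 annotated face images,
# which the paper splits 7,500 / 2,500 for training / testing.
images = [f"wflw_{i:05d}.jpg" for i in range(10_000)]

rng = random.Random(0)  # fixed seed so the split is reproducible
rng.shuffle(images)

train, test = images[:7_500], images[7_500:]
assert len(train) == 7_500 and len(test) == 2_500
```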
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions models and optimizers (e.g., the HRNetv2-W18 model, the Adam optimizer) but does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions).
Experiment Setup | Yes | "We use an HRNetv2-W18 model... trained for 60 epochs with Adam optimizer (Kingma and Ba 2014) (β1 = 0, β2 = 0.999), learning rate 10^-4, and batch size 64."
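The optimizer settings quoted above (Adam with β1 = 0, β2 = 0.999, learning rate 10^-4) can be illustrated with a minimal, pure-Python sketch of one Adam update step. The gradient value and scalar parameter are illustrative only; the paper's actual training loop, model, and loss are not specified here.

```python
def adam_step(param, grad, m, v, t, lr=1e-4, beta1=0.0, beta2=0.999, eps=1e-8):
    """One Adam update with the hyperparameters quoted from the paper."""
    m = beta1 * m + (1 - beta1) * grad       # first-moment (momentum) estimate
    v = beta2 * v + (1 - beta2) * grad ** 2  # second-moment estimate
    m_hat = m / (1 - beta1 ** t)             # bias correction
    v_hat = v / (1 - beta2 ** t)
    param = param - lr * m_hat / (v_hat ** 0.5 + eps)
    return param, m, v

# Illustrative scalar parameter and gradient.
p, m, v = 1.0, 0.0, 0.0
p, m, v = adam_step(p, grad=0.5, m=m, v=v, t=1)
```

With β1 = 0, the first-moment estimate reduces to the raw gradient, so each step behaves like RMSProp-style scaling of the current gradient at the quoted learning rate of 10^-4.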