Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
QCS: Feature Refining from Quadruplet Cross Similarity for Facial Expression Recognition
Authors: Chengpeng Wang, Li Chen, Lili Wang, Zhaofan Li, Xuebin Lv
AAAI 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments show that our proposed method achieves state-of-the-art performance on several FER datasets. |
| Researcher Affiliation | Collaboration | 1Wisesoft Inc., Chengdu, China 2School of Computer Science Sichuan University, Chengdu, China 3Chinese PLA General Hospital, Beijing, China 4Medical School of Chinese PLA, Beijing, China |
| Pseudocode | No | The paper describes the methodology through prose and mathematical equations (e.g., Eq. 1-10) and figures, but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code https://github.com/birdwcp/QCS |
| Open Datasets | Yes | Datasets RAF-DB (Li, Deng, and Du 2017) is a real-world affective face database... FERPlus (Fer 2016) is extended from FER2013... AffectNet (Mollahosseini et al. 2017) is a large-scale FER dataset... |
| Dataset Splits | Yes | In the experiment, 15,331 images with 7 basic expressions (i.e., surprise, fear, disgust, happiness, sadness, anger, neutral) are chosen, of which 12,271 are used for training and 3,068 for testing. FERPlus (Fer 2016) is extended from FER2013 (consisting of 28,709 training images and 3,589 test images)... We utilized AffectNet-7/8 with 7/8 expressions, comprising 283,901/287,651 training and 3,500/4,000 validation images. |
| Hardware Specification | Yes | It is trained end-to-end on a single Nvidia RTX 3090 GPU via PyTorch. |
| Software Dependencies | No | The paper mentions 'Pytorch' as the framework used, but does not specify any version numbers for Pytorch or other key software dependencies like Python or CUDA. |
| Experiment Setup | Yes | All face images are resized to 224×224 pixels for training and testing. We employ an Adam optimizer with Sharpness-Aware Minimization (Foret et al. 2020) to train the model (200 epochs for RAF-DB, 150 epochs for FERPlus, and 80 epochs for AffectNet). For the DCS, the batch size is set to 48, the learning rate is initialized as 9e-6, and an exponential decay learning-rate schedule with a gamma of 0.98 is employed. For the QCS, the batch size and the initial learning rate are reduced to around half. |
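The exponential learning-rate decay quoted above (initial lr 9e-6, gamma 0.98 per epoch for the DCS variant) can be sketched as a minimal pure-Python schedule. The function name and constants below are illustrative assumptions for clarity, not the authors' code.

```python
# Sketch of the exponential lr schedule described in the Experiment Setup row:
# lr at epoch e = init_lr * gamma ** e  (init_lr = 9e-6, gamma = 0.98 for DCS).
# Names are hypothetical; the paper's implementation is not reproduced here.

DCS_INIT_LR = 9e-6
GAMMA = 0.98

def lr_at_epoch(epoch: int, init_lr: float = DCS_INIT_LR, gamma: float = GAMMA) -> float:
    """Learning rate after `epoch` decay steps of exponential decay."""
    return init_lr * gamma ** epoch

# The paper halves the initial learning rate (and batch size) for the QCS variant.
QCS_INIT_LR = DCS_INIT_LR / 2

if __name__ == "__main__":
    for e in (0, 50, 199):  # RAF-DB is trained for 200 epochs
        print(f"epoch {e:3d}: DCS lr = {lr_at_epoch(e):.3e}")
```

With gamma = 0.98, the learning rate falls to roughly 36% of its initial value after 50 epochs and below 2% after 200 epochs, which matches the slow-decay behavior an exponential schedule with a near-1 gamma is typically chosen for.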