Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Diversity-Authenticity Co-constrained Stylization for Federated Domain Generalization in Person Re-identification
Authors: Fengxiang Yang, Zhun Zhong, Zhiming Luo, Yifan He, Shaozi Li, Nicu Sebe
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 4 Experiments: 4.1 Experiment Setup; 4.2 Comparison with State of the Art; 4.3 Ablation Study; 4.4 Further Experiments; 4.5 Visualization |
| Researcher Affiliation | Collaboration | Fengxiang Yang (1,2), Zhun Zhong (3), Zhiming Luo (1*), Yifan He (4), Shaozi Li (1), Nicu Sebe (2). (1) Department of Artificial Intelligence, Xiamen University, China; (2) Department of Information Engineering and Computer Science, University of Trento, Italy; (3) School of Computer Science, University of Nottingham, UK; (4) Reconova Technologies Co., Ltd., China |
| Pseudocode | Yes | Algorithm 1: The Process of Our Local Training. Inputs: training data from the i-th domain X_i and labels Y_i; local iteration number iter; STM ϕ_i. Outputs: feature extractor for the i-th domain θ_Gi. Algorithm 2: The Process of Our Fed DG re-ID Method. Inputs: N decentralized domains with their corresponding training data X_i and labels Y_i (1 ≤ i ≤ N); local iteration number iter; total number of epochs E. Outputs: generalized feature extractor θ_S. |
| Open Source Code | Yes | Project: https://github.com/FlyingRoastDuck/DACS_official.git |
| Open Datasets | Yes | "The details of all experiments, including the used datasets, evaluation protocols, and implementation details, are demonstrated in the supplementary." The paper mentions the Market-1501, CUHK02, CUHK03, and MSMT17 datasets. |
| Dataset Splits | No | "The details of all experiments, including the used datasets, evaluation protocols, and implementation details, are demonstrated in the supplementary." The main text does not explicitly state the training/validation/test splits (e.g., percentages or exact counts). |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments. |
| Software Dependencies | No | The paper mentions using ResNet-50 as the backbone and ViT as model architectures but does not specify any software libraries or dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | "The details of all experiments, including the used datasets, evaluation protocols, and implementation details, are demonstrated in the supplementary." Also: "λ_div and λ_au are balancing factors" and "For SNR, we adopt its recommended hyperparameters and deploy SNR modules after each ResNet layer to ensure the best results are achieved." |
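The pseudocode row above outlines a standard federated structure: Algorithm 1 performs local training per domain, and Algorithm 2 coordinates N decentralized domains over E epochs to produce a generalized feature extractor θ_S. The sketch below illustrates only that control flow with a toy scalar "model" and FedAvg-style weight averaging; the update rule, the STM module, and all names here are illustrative assumptions, not the paper's actual implementation.

```python
def local_training(global_state, data, iters):
    """Hypothetical stand-in for Algorithm 1: start from the global
    weights, run `iters` local passes over this domain's data, and
    return the updated local weights (a toy scalar model here)."""
    state = dict(global_state)  # copy so domains do not share state
    for _ in range(iters):
        for x in data:
            # Toy gradient-style step pulling w toward the sample.
            state["w"] += 0.1 * (x - state["w"])
    return state


def federated_training(domains, epochs, iters):
    """Hypothetical stand-in for Algorithm 2: each of the N
    decentralized domains trains locally, then the server averages
    the local weights into the shared model (FedAvg-style)."""
    global_state = {"w": 0.0}
    for _ in range(epochs):
        local_states = [
            local_training(global_state, d, iters) for d in domains
        ]
        global_state["w"] = sum(s["w"] for s in local_states) / len(local_states)
    return global_state
```

With two single-sample domains `[[1.0], [3.0]]`, one epoch, and one local iteration, each domain moves its copy 10% toward its sample (0.1 and 0.3) and the server averages them to 0.2.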