Interpret Your Decision: Logical Reasoning Regularization for Generalization in Visual Classification

Authors: Zhaorui Tan, Xi Yang, Qiufeng Wang, Anh Nguyen, Kaizhu Huang

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Theoretical analysis and experiments demonstrate that L-Reg enhances generalization across various scenarios, including multi-domain generalization and generalized category discovery. In complex real-world scenarios where images span unknown classes and unseen domains, L-Reg consistently improves generalization, highlighting its practical efficacy.
Researcher Affiliation | Academia | Zhaorui Tan (1,2), Xi Yang (1), Qiufeng Wang (1), Anh Nguyen (2), Kaizhu Huang (3); 1: Xi'an Jiaotong-Liverpool University, 2: University of Liverpool, 3: Duke Kunshan University
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Our code is available at https://github.com/zhaorui-tan/L-Reg_NeurIPS24.
Open Datasets | Yes | We test L-Reg with GMDG [50] on 5 real-world benchmark datasets: PACS [32], VLCS [18], OfficeHome [55], TerraIncognita [7], and DomainNet [42].
Dataset Splits | Yes | We operate on the DomainBed suite [21] and leverage standard leave-one-out cross-validation as the evaluation protocol. We test L-Reg with GMDG [50] on 5 real-world benchmark datasets: PACS [32], VLCS [18], OfficeHome [55], TerraIncognita [7], and DomainNet [42]. Following MIRO [25] and GMDG [50], the RegNetY-16GF backbone with SWAG pre-training [47] is used.
Hardware Specification | Yes | All experiments can be conducted on one NVIDIA GeForce RTX 3090 GPU.
Software Dependencies | No | The paper does not specify version numbers for Python, PyTorch, or other libraries. It only mentions the 'RegNetY-16GF backbone' and 'DINO (ViT-B/16)' pre-trained models, but no software dependencies with versions.
Experiment Setup | Yes | We adhere to the parameters proposed by GMDG, particularly focusing on its recommended loss terms. Furthermore, we provide a detailed listing of the hyper-parameters pertaining to L-Reg, along with the tuned lr_mult, as outlined in Table 9, to facilitate the reproducibility of our results.
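
To make the evaluation protocol cited in the Dataset Splits and Experiment Setup rows concrete, the following is a minimal Python sketch of DomainBed-style leave-one-out cross-validation over domains, with a hyper-parameter dictionary in the spirit of the paper's Table 9. It is not the authors' code: the function run_gmdg_with_lreg and all hyper-parameter values (lr, lr_mult, the L-Reg weight) are hypothetical placeholders; consult the released repository for the actual settings.

```python
# Minimal sketch of leave-one-out domain cross-validation (DomainBed protocol).
# All names and values below are hypothetical placeholders, not the paper's settings.

DOMAINS = ["art_painting", "cartoon", "photo", "sketch"]  # e.g., the four PACS domains

HPARAMS = {
    "lr": 5e-5,          # hypothetical base learning rate
    "lr_mult": 10.0,     # hypothetical multiplier, tuned per benchmark (cf. Table 9)
    "lreg_weight": 0.1,  # hypothetical weight on the L-Reg term
    "batch_size": 32,
}


def run_gmdg_with_lreg(train_domains, test_domain, hparams):
    """Stub standing in for training GMDG + L-Reg on `train_domains`
    and evaluating accuracy on the held-out `test_domain`."""
    return 0.0  # placeholder accuracy


def leave_one_out(domains, hparams):
    """Hold out each domain once for testing and train on the rest;
    the benchmark score is the mean accuracy over held-out domains."""
    per_domain = {}
    for held_out in domains:
        train_domains = [d for d in domains if d != held_out]
        per_domain[held_out] = run_gmdg_with_lreg(train_domains, held_out, hparams)
    avg = sum(per_domain.values()) / len(per_domain)
    return per_domain, avg


if __name__ == "__main__":
    scores, avg = leave_one_out(DOMAINS, HPARAMS)
    print(scores, avg)
```

Under this protocol, each benchmark's reported number is the average of the held-out-domain accuracies, which is how the leave-one-out results in the paper's tables should be read.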