Reconstruct and Match: Out-of-Distribution Robustness via Topological Homogeneity

Authors: Chaoqi Chen, Luyao Tang, Hui Huang

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on standard benchmarks show that REMA outperforms state-of-the-art methods in OOD generalization and test-time adaptation settings.
Researcher Affiliation | Academia | Chaoqi Chen (1), Luyao Tang (2), Hui Huang (1). (1) College of Computer Science and Software Engineering, Shenzhen University; (2) School of Informatics, Xiamen University
Pseudocode | No | The paper describes the proposed method in text and mathematical equations in Section 3, but does not include a clearly labeled pseudocode or algorithm block.
Open Source Code | No | "After internal review and patent approval, we will release the code."
Open Datasets | Yes | "For OOD generalization, we leverage the three most widely used benchmark datasets: PACS [39]..., Office-Home [73]..., and VLCS.... Regarding test-time adaptation, we follow the common benchmarks [75, 30, 79] that utilize CIFAR-10/100 [35] and ImageNet [14] as the ID (training) data. CIFAR-10/100-C [26] and ImageNet-C [26] are used as OOD (test) data, comprising different corruptions applied to the original datasets."
Dataset Splits | No | "Following common practice, the model selection is based on a training-domain validation set."
Hardware Specification | No | "We have provided these details in the supplementary."
Software Dependencies | No | The paper mentions using ResNet-50 and ResNet-18 backbones, stochastic gradient descent with momentum 0.9, and the Adam optimizer, but does not provide version numbers for any software libraries or dependencies.
Experiment Setup | Yes | "The model is trained using stochastic gradient descent with momentum 0.9 and weight decay 10^-4. The training batch size is set to 128. The learning rate is 10^-4. ... λ in Eq. (3) is set to 0.01 in all experiments. α and β in Eq. (7) are set to 10 and 0.1, respectively."
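For concreteness, the hyperparameters quoted above can be gathered into a single configuration sketch. This is plain Python; the key names are illustrative, not taken from the paper's code, which has not been released:

```python
# Hyperparameters as reported in the paper's experiment setup.
# Key names are hypothetical labels for the reported values;
# "lambda_eq3", "alpha_eq7", and "beta_eq7" refer to λ in Eq. (3)
# and α, β in Eq. (7) of the paper.
config = {
    "optimizer": "SGD",       # stochastic gradient descent
    "momentum": 0.9,
    "weight_decay": 1e-4,
    "batch_size": 128,
    "learning_rate": 1e-4,
    "lambda_eq3": 0.01,       # same value in all experiments
    "alpha_eq7": 10,
    "beta_eq7": 0.1,
}
```

A configuration like this is what a reproduction attempt would need to recover; note that the paper leaves other details (e.g. training epochs, data augmentation) to the supplementary material.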