Bridging OOD Detection and Generalization: A Graph-Theoretic View
Authors: Han Wang, Yixuan (Sharon) Li
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results showcase competitive performance in comparison to existing methods, thereby validating our theoretical underpinnings. |
| Researcher Affiliation | Academia | Han Wang, Department of Electrical and Computer Engineering, University of Illinois Urbana-Champaign, hanw14@illinois.edu; Yixuan Li, Department of Computer Sciences, University of Wisconsin-Madison, sharonli@cs.wisc.edu |
| Pseudocode | No | The paper describes algorithms and methods through text and mathematical equations but does not present a clearly labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | Code is publicly available at https://github.com/deeplearning-wisc/graph-spectral-ood. |
| Open Datasets | Yes | Following the setup of [5], we employ CIFAR-10 [14] as P_in and CIFAR-10-C [15] with Gaussian additive noise as the P_out^covariate. For P_out^semantic, we leverage SVHN [16], LSUN [17], Places365 [18], Textures [19]. |
| Dataset Splits | Yes | For splitting training/validation, we use 30% for validation and the remaining for training. |
| Hardware Specification | Yes | We conduct all the experiments in Pytorch, using NVIDIA GeForce RTX 2080Ti. |
| Software Dependencies | No | The paper mentions 'Pytorch' but does not specify a version for it or for any other software dependency, as required for a reproducible description. |
| Experiment Setup | Yes | We use stochastic gradient descent with Nesterov momentum [22], with weight decay 0.0005 and momentum 0.09. We train the network with the loss function in Eq. 6 for 1000 epochs. The learning rate is 0.03 and the batch size is 512. We fine-tune for 20 epochs with a learning rate of 0.005 and batch size of 512. |
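The quoted split and training hyperparameters can be collected into a short sketch (pure Python, no framework dependency). The function name, the seed, and the use of CIFAR-10's 50,000-image training set size are illustrative assumptions, not details from the paper:

```python
import random

def train_val_split(indices, val_fraction=0.3, seed=0):
    """Shuffle indices and hold out val_fraction for validation (paper: 30%)."""
    rng = random.Random(seed)
    shuffled = indices[:]
    rng.shuffle(shuffled)
    n_val = int(len(shuffled) * val_fraction)
    return shuffled[n_val:], shuffled[:n_val]  # (train, val)

# Hyperparameters as quoted in the paper's experiment setup.
CONFIG = {
    "optimizer": "SGD with Nesterov momentum",
    "weight_decay": 5e-4,
    "momentum": 0.09,  # value as quoted in the paper
    "epochs": 1000,
    "lr": 0.03,
    "batch_size": 512,
    "finetune_epochs": 20,
    "finetune_lr": 0.005,
}

train_idx, val_idx = train_val_split(list(range(50000)))  # CIFAR-10 train size
print(len(train_idx), len(val_idx))  # 35000 15000
```

With a 50,000-image training set, the 30% holdout yields 15,000 validation and 35,000 training samples.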