A Graph-Theoretic Framework for Understanding Open-World Semi-Supervised Learning
Authors: Yiyou Sun, Zhenmei Shi, Yixuan Li
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, SORL can match or outperform several strong baselines on common benchmark datasets, which is appealing for practical usage while enjoying theoretical guarantees. |
| Researcher Affiliation | Academia | Yiyou Sun, Zhenmei Shi, Yixuan Li Department of Computer Sciences University of Wisconsin, Madison {sunyiyou,zhmeishi,sharonli}@cs.wisc.edu |
| Pseudocode | No | The paper describes the Spectral Open-world Representation Learning (SORL) algorithm but does not present it in a pseudocode block or an explicitly labeled algorithm section. (A hedged sketch of the spectral contrastive loss template that SORL builds on follows the table.) |
| Open Source Code | Yes | Our code is available at https://github.com/deeplearning-wisc/sorl. |
| Open Datasets | Yes | Following the seminal work ORCA [7], classes are divided into 50% known and 50% novel classes. We then use 50% of samples from the known classes as the labeled dataset, and the rest as the unlabeled set. [...] on standard benchmark image classification datasets CIFAR-10/100 [35]. |
| Dataset Splits | Yes | Following the seminal work ORCA [7], classes are divided into 50% known and 50% novel classes. We then use 50% of samples from the known classes as the labeled dataset, and the rest as the unlabeled set. (A split-construction sketch follows the table.) |
| Hardware Specification | Yes | We run all experiments with Python 3.7 and PyTorch 1.7.1, using NVIDIA GeForce RTX 2080Ti and A6000 GPUs. |
| Software Dependencies | Yes | We run all experiments with Python 3.7 and PyTorch 1.7.1, using NVIDIA GeForce RTX 2080Ti and A6000 GPUs. |
| Experiment Setup | Yes | For CIFAR-10, we set η_l = 0.25, η_u = 1 with training epoch 100, and we evaluate using features extracted from the layer preceding the projection. For CIFAR-100, we set η_l = 0.0225, η_u = 3 with 400 training epochs and assess based on the projection layer's features. We use SGD with momentum 0.9 as the optimizer with cosine annealing (lr=0.05), weight decay 5e-4, and batch size 512. (An optimizer-setup sketch follows the table.) |
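
Since the paper itself offers no pseudocode, the nearest self-contained reference point is the spectral contrastive loss of HaoChen et al. (2021) that SORL generalizes. The sketch below is that generic two-view template, not the authors' exact SORL objective, which additionally splits positive pairs into labeled (same-class) and unlabeled (augmentation) terms weighted by η_l and η_u; the function name and diagonal-masking choice here are illustrative.

```python
import torch

def spectral_contrastive_loss(z1: torch.Tensor, z2: torch.Tensor) -> torch.Tensor:
    """Spectral contrastive loss template (HaoChen et al., 2021) -- a sketch,
    not the paper's exact SORL objective.

    z1, z2: [batch, dim] embeddings of two augmented views; row i of z1 and
    row i of z2 form a positive pair, and cross-row pairs act as negatives.
    """
    batch = z1.size(0)
    # Positive term: pull the two views of each sample together.
    pos = -2.0 * (z1 * z2).sum(dim=1).mean()
    # Negative term: squared similarities between different samples push them apart.
    sim = z1 @ z2.T
    off_diag = sim[~torch.eye(batch, dtype=torch.bool, device=sim.device)]
    neg = off_diag.pow(2).mean()
    return pos + neg
```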
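
The 50/50 protocol quoted in the Open Datasets and Dataset Splits rows is mechanical enough to sketch. A minimal version, assuming the torchvision CIFAR-10 loader, treating the first half of the class indices as "known", and using an illustrative seed:

```python
import numpy as np
from torchvision.datasets import CIFAR10

rng = np.random.default_rng(0)  # illustrative seed
train_set = CIFAR10(root="./data", train=True, download=True)
targets = np.asarray(train_set.targets)

num_classes = 10
known_classes = list(range(num_classes // 2))  # 50% of classes treated as known
known_idx = np.flatnonzero(np.isin(targets, known_classes))

# 50% of the known-class samples form the labeled set ...
rng.shuffle(known_idx)
labeled_idx = known_idx[: len(known_idx) // 2]

# ... and everything else (remaining known samples plus all novel-class
# samples) forms the unlabeled set.
unlabeled_idx = np.setdiff1d(np.arange(len(targets)), labeled_idx)
```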
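
The Experiment Setup row maps directly onto a standard PyTorch optimizer configuration. In the sketch below, the model, data, and loss are placeholders; only lr=0.05, momentum 0.9, weight decay 5e-4, batch size 512, and the epoch counts come from the quoted setup.

```python
import torch
from torch.optim import SGD
from torch.optim.lr_scheduler import CosineAnnealingLR
from torch.utils.data import DataLoader, TensorDataset

EPOCHS = 100                        # 100 for CIFAR-10; 400 for CIFAR-100
model = torch.nn.Linear(512, 128)   # placeholder for backbone + projection head
optimizer = SGD(model.parameters(), lr=0.05, momentum=0.9, weight_decay=5e-4)
scheduler = CosineAnnealingLR(optimizer, T_max=EPOCHS)
loader = DataLoader(TensorDataset(torch.randn(2048, 512)), batch_size=512, shuffle=True)

for epoch in range(EPOCHS):
    for (x,) in loader:
        optimizer.zero_grad()
        loss = model(x).pow(2).mean()  # stand-in; the real objective is the SORL loss
        loss.backward()
        optimizer.step()
    scheduler.step()  # cosine annealing stepped once per epoch
```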