Semi-Supervised Learning via Compact Latent Space Clustering
Authors: Konstantinos Kamnitsas, Daniel Castro, Loic Le Folgoc, Ian Walker, Ryutaro Tanno, Daniel Rueckert, Ben Glocker, Antonio Criminisi, Aditya Nori
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our approach on three benchmarks and compare to state-of-the-art with promising results. |
| Researcher Affiliation | Collaboration | 1Microsoft Research Cambridge, United Kingdom 2Imperial College London, United Kingdom 3University College London, United Kingdom. |
| Pseudocode | Yes | Algorithm 1 Training for SSL with CCLP |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code or provide a link to a code repository. |
| Open Datasets | Yes | We consider three benchmarks widely used in studies on SSL: MNIST, SVHN and CIFAR-10. |
| Dataset Splits | Yes | We evaluate on the test-dataset of each benchmark, except for the ablation study where we separated a validation set... For this, we separate a validation set of 10000 images from the training set of each benchmark. |
| Hardware Specification | No | The paper mentions a 'TensorFlow GPU implementation' but does not specify any particular GPU model, CPU, or other hardware details used for the experiments. |
| Software Dependencies | No | The paper mentions 'TensorFlow GPU implementation (Abadi et al., 2016)' but does not provide a specific version number for TensorFlow or any other software dependencies. |
| Experiment Setup | Yes | In all experiments we used the same meta-parameters for CCLP: In each SGD iteration we sample a batch (X_L, y_L) from D_L of size N_L = 100... and a batch without labels X_U from D_U of size N_U = 100. We use the dot product as similarity metric (Eq. (2)), S = 3 maximum steps of the Markov chains (Eq. (9)). L_CCLP was weighted equally with the supervised loss, with w = 1 throughout training... Exception are the experiments with \|D_L\| = 4000 on CIFAR, where a lower w = 0.1 was used... |
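The setup row above can be made concrete with a minimal NumPy sketch of the reported hyperparameters: a joint labeled/unlabeled batch, dot-product similarities turned into a Markov transition matrix, S = 3 propagation steps, and the propagation loss added to the supervised loss with weight w. This is an illustrative reconstruction under our own assumptions (function names, uniform initialization for unlabeled points, per-step cross-entropy), not the authors' released code, and it omits the compactness terms of the full CCLP objective.

```python
import numpy as np


def cclp_loss_sketch(z_l, y_l, z_u, num_classes, steps=3):
    """Hedged sketch of CCLP-style label propagation (our assumptions):
    build a row-stochastic transition matrix from dot-product similarities
    over the joint labeled+unlabeled batch, run `steps` Markov-chain steps,
    and penalize disagreement with the true labels on the labeled part."""
    z = np.concatenate([z_l, z_u], axis=0)            # joint embedding batch
    sim = z @ z.T                                      # dot-product similarity (Eq. (2))
    a = np.exp(sim - sim.max(axis=1, keepdims=True))   # row-wise softmax ...
    t = a / a.sum(axis=1, keepdims=True)               # ... gives transition matrix T
    # One-hot class mass for labeled points, uniform for unlabeled (assumption).
    phi = np.full((len(z), num_classes), 1.0 / num_classes)
    phi[: len(z_l)] = np.eye(num_classes)[y_l]
    loss, p = 0.0, phi
    for _ in range(steps):                             # S = 3 maximum steps (Eq. (9))
        p = t @ p                                      # one propagation step
        # cross-entropy against the known labels after each step
        loss += -np.mean(
            np.log(p[: len(z_l)][np.arange(len(y_l)), y_l] + 1e-8)
        )
    return loss / steps


def total_loss(supervised_loss, cclp_loss, w=1.0):
    """Supervised loss plus CCLP term; the paper uses w = 1 throughout
    training (w = 0.1 for |D_L| = 4000 on CIFAR)."""
    return supervised_loss + w * cclp_loss
```

In the paper's setting both batches have size 100 (N_L = N_U = 100); here any small batch works, since the transition matrix is built from whatever embeddings are sampled in the current SGD iteration.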