Interpolation Consistency Training for Semi-supervised Learning
Authors: Vikas Verma, Alex Lamb, Juho Kannala, Yoshua Bengio, David Lopez-Paz
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show that ICT achieves state-of-the-art performance when applied to standard neural network architectures on the CIFAR-10 and SVHN benchmark datasets. |
| Researcher Affiliation | Collaboration | Aalto University, Finland; Montreal Institute for Learning Algorithms (MILA); Facebook Artificial Intelligence Research (FAIR) |
| Pseudocode | Yes | Algorithm 1 The Interpolation Consistency Training (ICT) Algorithm (a training-step sketch follows the table) |
| Open Source Code | Yes | Code available at https://github.com/vikasverma1077/ICT |
| Open Datasets | Yes | We follow the common practice in semi-supervised learning literature [...] and conduct experiments using the CIFAR-10 and SVHN datasets |
| Dataset Splits | Yes | The CIFAR-10 dataset consists of 60000 color images each of size 32×32, split between 50K training and 10K test images. [...] We select the best hyperparameter using a validation set of 5000 and 1000 labeled samples for CIFAR-10 and SVHN respectively. (data-split sketch below the table) |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU, CPU models, or memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using the SGD optimizer with Nesterov momentum and an MSE loss, but does not name any frameworks, libraries, or programming languages with version numbers. |
| Experiment Setup | Yes | We used the SGD with Nesterov momentum optimizer for all of our experiments. For the experiments in Table 1 and Table 2, we run the experiments for 400 epochs. [...] The initial learning rate was set to 0.1, [...] The momentum parameter was set to 0.9. We used an L2 regularization coefficient 0.0001 and a batch-size of 100 in our experiments. (optimizer configuration sketch below the table) |
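
The Pseudocode row refers to the paper's Algorithm 1. As a minimal sketch of one ICT update, assuming PyTorch (the paper does not name its framework): two unlabeled batches are interpolated with a mixup coefficient drawn from Beta(α, α), and the MSE loss quoted in the Software Dependencies row penalizes the gap between the student's prediction on the mixture and the same mixture of mean-teacher predictions. The identifiers (`model`, `ema_model`, `w`) and the EMA decay of 0.999 are illustrative assumptions, not the authors' code.

```python
import numpy as np
import torch
import torch.nn.functional as F

def ict_step(model, ema_model, x_l, y_l, x_u1, x_u2, alpha, w, optimizer):
    """One ICT update: supervised cross-entropy on the labeled batch plus an
    MSE consistency term between the student's prediction on mixed unlabeled
    inputs and the matching mixture of mean-teacher predictions."""
    lam = float(np.random.beta(alpha, alpha))       # mixup coefficient
    with torch.no_grad():
        t1 = torch.softmax(ema_model(x_u1), dim=1)  # teacher targets, batch 1
        t2 = torch.softmax(ema_model(x_u2), dim=1)  # teacher targets, batch 2
    x_mix = lam * x_u1 + (1.0 - lam) * x_u2         # interpolated inputs
    t_mix = lam * t1 + (1.0 - lam) * t2             # interpolated targets
    sup = F.cross_entropy(model(x_l), y_l)          # supervised term
    cons = F.mse_loss(torch.softmax(model(x_mix), dim=1), t_mix)
    loss = sup + w * cons                           # w is ramped up during training
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    with torch.no_grad():  # mean-teacher EMA update (0.999 decay is an assumption)
        for p_t, p_s in zip(ema_model.parameters(), model.parameters()):
            p_t.mul_(0.999).add_(p_s, alpha=0.001)
    return loss.item()
```
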
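For the Dataset Splits row, the quoted 50K/10K CIFAR-10 split and the 5000-sample validation set could be reproduced along the following lines, assuming torchvision for data loading (the paper does not specify its data stack); the seed and data directory are illustrative.

```python
import torch
from torchvision import datasets, transforms

transform = transforms.ToTensor()
# CIFAR-10 ships as 50K training and 10K test 32x32 color images
train_set = datasets.CIFAR10('./data', train=True, download=True, transform=transform)
test_set = datasets.CIFAR10('./data', train=False, download=True, transform=transform)

# Hold out 5000 labeled images for hyperparameter selection (the paper
# uses 1000 for SVHN); the seed is an assumption, not from the paper.
g = torch.Generator().manual_seed(0)
val_set, rest = torch.utils.data.random_split(train_set, [5000, 45000], generator=g)
# `rest` would then be divided into a small labeled pool and a large unlabeled pool.
```
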
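The Experiment Setup row pins down the optimizer hyperparameters; a configuration sketch assuming PyTorch follows. The quote elides the learning-rate schedule, so only the initial rate is set here, and the placeholder network stands in for whatever architecture is trained.

```python
import torch

model = torch.nn.Linear(3 * 32 * 32, 10)  # placeholder for the paper's network

# Values quoted in the Experiment Setup row above
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.1,             # initial learning rate (schedule elided in the quote)
    momentum=0.9,
    nesterov=True,      # "SGD with Nesterov momentum"
    weight_decay=1e-4,  # L2 regularization coefficient
)
batch_size = 100
epochs = 400            # for the Table 1 / Table 2 runs
```
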