Matrix Completion for Graph-Based Deep Semi-Supervised Learning
Authors: Fariborz Taherkhani, Hadi Kazemi, Nasser M. Nasrabadi (pp. 5058-5065)
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments Datasets and Pre-Processing. We conducted our experiments on the widely used MNIST (LeCun et al. 1998), SVHN (Netzer et al. 2011), small NORB (LeCun, Huang, and Bottou 2004) and CIFAR-10 (Krizhevsky and Hinton 2009) datasets. |
| Researcher Affiliation | Academia | Fariborz Taherkhani, Hadi Kazemi, Nasser M. Nasrabadi Lane Department of Computer Science and Electrical Engineering West Virginia University fariborztaherkhani@gmail.com, hakazemi@mix.wvu.edu, nasser.nasrabadi@mail.wvu.edu |
| Pseudocode | No | The paper describes algorithms but does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper states 'We implemented our framework in TensorFlow' but does not provide any links or explicit statements about open-source code availability. |
| Open Datasets | Yes | We conducted our experiments on the widely used MNIST (LeCun et al. 1998), SVHN (Netzer et al. 2011), small NORB (LeCun, Huang, and Bottou 2004) and CIFAR-10 (Krizhevsky and Hinton 2009) datasets. |
| Dataset Splits | Yes | For each of these datasets, we split the training set to two different sets of labeled and unlabeled samples. We ensure that all the classes are balanced such that each class should have the same number of labeled samples. We ran our model for 10 times with different random splits of the labeled and unlabeled data for each dataset, and we report the mean and standard deviation of the error rate. ... We select 100 samples in the training set as labeled and remaining of it as unlabeled. ... For this dataset, we choose randomly 1000 samples as labeled and rest of it, is used as unlabeled. ... we select 4,000 samples in the training set as labeled and rest as unlabeled. ... we select 1,000 samples in the training set as labeled and remaining is considered as unlabeled. ... We used 10-fold cross validation in each experiment to tune hyper-parameters in our model. |
| Hardware Specification | Yes | We implemented our framework in TensorFlow and performed our experiments on two GeForce GTX TITAN X 12GB GPUs. |
| Software Dependencies | No | The paper states 'We implemented our framework in TensorFlow... We use Adam optimizer (Kingma and Ba 2014)... We use SoftImpute algorithm implemented in fancyimpute package in Python...' but does not provide specific version numbers for these software components. |
| Experiment Setup | Yes | Our CNN architecture has been shown in Fig. 1; it is composed of three convolutional layers, three max pooling layers and one fully connected layer. There are 64 filters in the first convolutional layer and 128 filters in the second and third convolutional layers, respectively. The filter sizes in this architecture are 3x3 and the convolution stride is set to 1 pixel. ... The max-pooling layers are placed after each convolutional layer, respectively; the max-pooling is carried out on a 2x2 pixel window with a stride of 2. ... The parameters of the network are initialized by sampling randomly from N(0, 0.001) except for the bias parameters which are initialized as zero. ... We use Adam optimizer (Kingma and Ba 2014) with the default hyper-parameters values (ϵ = 10^-3, β1 = 0.9, β2 = 0.999) in our experiments. The batch size in all experiments is fixed to 128, and we set γ to 0.1 experimentally... |
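The Dataset Splits row describes drawing a class-balanced labeled subset (e.g. 100 labeled MNIST samples over 10 classes) and treating the rest as unlabeled. A minimal sketch of that split, assuming equal per-class counts; the function name and signature are hypothetical, not from the paper:

```python
import random
from collections import defaultdict

def balanced_split(labels, n_labeled, seed=0):
    """Split indices into a class-balanced labeled set and an unlabeled set.

    Hypothetical helper illustrating the per-class balancing the paper
    describes; names and defaults are assumptions, not the authors' code.
    """
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, y in enumerate(labels):
        by_class[y].append(idx)
    per_class = n_labeled // len(by_class)  # same count for every class
    labeled = []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        labeled.extend(idxs[:per_class])
    unlabeled = sorted(set(range(len(labels))) - set(labeled))
    return labeled, unlabeled

# e.g. 100 labeled samples over 10 classes -> 10 per class
labels = [i % 10 for i in range(1000)]
lab, unlab = balanced_split(labels, n_labeled=100)
```

Repeating this with 10 different seeds and reporting mean/std of the error rate would mirror the paper's protocol of 10 random splits per dataset.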
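The Software Dependencies row mentions the SoftImpute algorithm from the fancyimpute package. As a rough illustration of what that algorithm does (iterative soft-thresholded SVD for low-rank matrix completion), here is a simplified NumPy sketch; it is a stand-in for the library, not the authors' implementation, and `lam`/`n_iters` are assumed knobs:

```python
import numpy as np

def soft_impute(X, mask, lam=0.01, n_iters=200):
    """Minimal SoftImpute sketch: repeatedly fill the missing entries with
    the current low-rank estimate, then soft-threshold the singular values.
    Simplified stand-in for fancyimpute's SoftImpute, for illustration only.
    """
    Z = np.where(mask, X, 0.0)  # start with zeros in the missing slots
    for _ in range(n_iters):
        U, s, Vt = np.linalg.svd(np.where(mask, X, Z), full_matrices=False)
        s = np.maximum(s - lam, 0.0)  # soft-threshold singular values
        Z = (U * s) @ Vt
    return Z

# A rank-1 matrix with one hidden entry is recovered almost exactly:
A = np.outer([1.0, 2.0, 3.0], [1.0, 1.0, 2.0])  # true A[2, 2] == 6.0
mask = np.ones_like(A, dtype=bool)
mask[2, 2] = False  # hide one entry
Z = soft_impute(A, mask)
```

The shrinkage parameter `lam` trades reconstruction fidelity for a lower-rank solution; the library version anneals it over a regularization path rather than fixing it as done here.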
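The Experiment Setup row fully determines the feature-map shapes of the described stack (three 3x3 stride-1 convolutions with 64/128/128 filters, each followed by 2x2 stride-2 max pooling). A short shape walk-through, assuming 'same' convolution padding and a 32x32x3 input (e.g. CIFAR-10/SVHN); the padding choice is an assumption, as the paper does not state it:

```python
def conv_out(size, stride=1, pad="same", k=3):
    """Spatial output size of a conv layer; 'same' padding keeps the size."""
    if pad == "same":
        return (size + stride - 1) // stride
    return (size - k) // stride + 1

def pool_out(size, k=2, stride=2):
    """Spatial output size of a max-pooling layer (no padding)."""
    return (size - k) // stride + 1

# conv3x3(64) -> pool -> conv3x3(128) -> pool -> conv3x3(128) -> pool
size, channels = 32, 3
for filters in (64, 128, 128):
    size = conv_out(size)   # stride-1 'same' conv keeps the spatial size
    size = pool_out(size)   # 2x2 stride-2 pooling halves it
    channels = filters
# final feature map: size x size x channels, fed to the fully connected layer
```

Under these assumptions the fully connected layer would see a 4x4x128 feature map; with 'valid' padding or a different input resolution the numbers change accordingly.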