Modeling Hebb Learning Rule for Unsupervised Learning
Authors: Jia Liu, Maoguo Gong, Qiguang Miao
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on single-layer and deep networks demonstrate the effectiveness of NLM in unsupervised feature learning. |
| Researcher Affiliation | Academia | Key Laboratory of Intelligent Perception and Image Understanding, Xidian University, Xi'an, China; School of Computer Science and Technology, Xidian University, Xi'an, China |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any specific links or statements about releasing open-source code. |
| Open Datasets | Yes | The widely used MNIST digit dataset is used here and we randomly select 10000 images from the training set in order to speed up the experiments. The experiments are implemented on the MNIST digit dataset and image patches. The image patches with the size of 12×12 are randomly extracted from two remote sensing images respectively. This dataset consists of 50000 training and 10000 test color images with the size of 32×32. (See the patch-sampling sketch after this table.) |
| Dataset Splits | No | The paper describes training and test sets but does not explicitly specify a validation set or the split percentages/counts needed to reproduce all experiments. |
| Hardware Specification | No | The paper does not explicitly describe the hardware specifications (e.g., CPU, GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | Yes | In NLM, λ = 1 and the learning rate in the stochastic gradient descent algorithm is set to 0.01. For the models, we set the number of hidden units to 196. We set the number of hidden units to 300, 400, and 500 for these learning models. The hierarchical architecture has 5 layers with 3 convolution layers and 2 max-pooling layers, i.e., 3-100-100-150-150-200. The size of filters in the convolution layers is 5×5 and the pooling size is 2×2. (Both settings are sketched in code after this table.) |
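
The dataset preparation quoted above involves two steps that are easy to make concrete: randomly selecting 10000 MNIST training images and randomly extracting 12×12 patches from an image. The NumPy sketch below illustrates both under stated assumptions; the array names (`mnist_train`, `remote_sensing_img`), the placeholder data, and the patch count are hypothetical stand-ins, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical arrays standing in for the datasets named in the report:
# `mnist_train` would hold the 60000 MNIST training images (28x28), and
# `remote_sensing_img` one of the two remote sensing images (H x W).
mnist_train = rng.random((60000, 28, 28))       # placeholder data
remote_sensing_img = rng.random((512, 512))     # placeholder data

# Randomly select 10000 MNIST training images, as the paper reportedly
# does to speed up its experiments.
subset_idx = rng.choice(len(mnist_train), size=10000, replace=False)
mnist_subset = mnist_train[subset_idx]

def sample_patches(img, patch_size=12, n_patches=10000, rng=rng):
    """Randomly extract patch_size x patch_size patches from a 2-D image."""
    h, w = img.shape
    ys = rng.integers(0, h - patch_size + 1, size=n_patches)
    xs = rng.integers(0, w - patch_size + 1, size=n_patches)
    return np.stack([img[y:y + patch_size, x:x + patch_size]
                     for y, x in zip(ys, xs)])

patches = sample_patches(remote_sensing_img)    # 12x12 patches
print(mnist_subset.shape, patches.shape)
```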
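For the single-layer experiments, the table reports only three hyper-parameters: 196 hidden units (300/400/500 in some runs), SGD with learning rate 0.01, and λ = 1. The PyTorch sketch below wires those numbers into a generic single-hidden-layer feature learner; since this report does not specify the NLM objective or the term that λ weights, a plain reconstruction loss stands in as an explicitly labeled placeholder, not the authors' method.

```python
import torch
import torch.nn as nn

# A minimal single-layer sketch matching the reported hyper-parameters:
# 196 hidden units and SGD with learning rate 0.01. The NLM objective
# itself (where λ = 1 weights a term not detailed in this report) is not
# reproduced; MSE reconstruction is a placeholder objective only.
encoder = nn.Linear(28 * 28, 196)   # MNIST images flattened to 784-d
decoder = nn.Linear(196, 28 * 28)
model = nn.Sequential(encoder, nn.Sigmoid(), decoder)

optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # reported lr
criterion = nn.MSELoss()

x = torch.rand(64, 28 * 28)         # a placeholder mini-batch
loss = criterion(model(x), x)       # placeholder loss, not NLM's
loss.backward()
optimizer.step()
```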
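The deep setting is stated precisely enough to instantiate the layer shapes: a 5-layer hierarchy of 3 convolution layers (5×5 filters) and 2 max-pooling layers (2×2), with channel progression 3-100-100-150-150-200. The sketch below is one reading of that string, assuming each pooling layer follows a convolution and keeps its channel count; the module name and the training loop are assumptions, and the NLM training objective is again omitted.

```python
import torch
import torch.nn as nn

# One plausible reading of "3-100-100-150-150-200": conv(3->100),
# pool (100), conv(100->150), pool (150), conv(150->200), i.e.
# 3 convolution layers with 5x5 filters and 2 max-pooling layers (2x2).
class HierarchicalNet(nn.Module):   # hypothetical name
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 100, kernel_size=5),    # 3 -> 100
            nn.MaxPool2d(2),                     # 2x2 pooling
            nn.Conv2d(100, 150, kernel_size=5),  # 100 -> 150
            nn.MaxPool2d(2),                     # 2x2 pooling
            nn.Conv2d(150, 200, kernel_size=5),  # 150 -> 200
        )

    def forward(self, x):
        return self.features(x)

model = HierarchicalNet()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)  # reported lr
x = torch.randn(8, 3, 32, 32)   # 32x32 color images, as in the dataset row
print(model(x).shape)           # -> torch.Size([8, 200, 1, 1])
```

On 32×32 inputs this progression reduces spatial size 32 → 28 → 14 → 10 → 5 → 1, which is consistent with the stated filter and pooling sizes.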