Attentional Neural Network: Feature Selection Using Cognitive Feedback
Authors: Qian Wang, Jiaxing Zhang, Sen Song, Zheng Zhang
NeurIPS 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We obtain classification accuracy better than or competitive with state-of-the-art results on the MNIST variation dataset, and successfully disentangle overlaid digits with high success rates. |
| Researcher Affiliation | Collaboration | Qian Wang, Department of Biomedical Engineering, Tsinghua University, Beijing, China 100084 (qianwang.thu@gmail.com); Jiaxing Zhang, Microsoft Research Asia, 5 Danning Road, Haidian District, Beijing, China 100080 (jiaxz@microsoft.com); Sen Song, Department of Biomedical Engineering, Tsinghua University, Beijing, China 100084 (sen.song@gmail.com); Zheng Zhang, Department of Computer Science, NYU Shanghai, 1555 Century Ave, Pudong, Shanghai, China 200122 (zz@nyu.edu) |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | Yes | The training and testing code can be found in https://github.com/qianwangthu/feedback-nips2014-wq.git |
| Open Datasets | Yes | We used the MNIST variation dataset and MNIST-2 to evaluate the effectiveness of our framework. |
| Dataset Splits | Yes | We used the standard training/testing split (12K/50K) of the MNIST variation set. Another hyper-parameter is the threshold ϵ. We assume that there is a global minimum, and used binary search on a small validation set. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, memory, or cloud instance types) used for running its experiments. |
| Software Dependencies | No | The paper describes models and techniques used (e.g., RBM, backpropagation, 3-layer perceptron) but does not list specific software dependencies with version numbers (e.g., Python 3.x, TensorFlow 2.x). |
| Experiment Setup | Yes | The parameters to be learned include the feature weights W and the feedback weights U. ...training W and U uses half and full noise intensity, respectively. We found it important to use sparsity constraint when learning W to produce local features. Another hyper-parameter is the threshold ϵ. ...a 3-layer perceptron with 256 hidden nodes, trained on clean MNIST data... We ran one-iteration denoising... We trained two parameter sets separately... accuracy is obtained with 5 iterations |
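
For readers skimming the Experiment Setup row, the sketch below shows one plausible way the quoted pieces fit together: feature weights W, feedback weights U, a gating threshold ϵ, a stand-in for the 3-layer perceptron classifier, and the 5-iteration feedback loop. This is not the authors' code; the shapes, initialization, gating rule, and placeholder classifier are assumptions made purely for illustration. The actual training and testing code is in the repository linked under the Open Source Code row.

```python
# Minimal sketch (assumed, not the authors' implementation) of an iterative
# feedback/gating loop consistent with the Experiment Setup quotes above.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
n_pixels, n_features, n_classes = 784, 500, 10            # assumed sizes
W = rng.normal(scale=0.01, size=(n_features, n_pixels))    # feature weights (paper: learned with a sparsity constraint)
U = rng.normal(scale=0.01, size=(n_pixels, n_features))    # feedback weights
eps = 0.5                                                  # threshold ϵ (paper: tuned by binary search on a validation set)

def classify(h):
    """Placeholder for the paper's 3-layer perceptron with 256 hidden nodes."""
    return np.zeros(n_classes)  # hypothetical stand-in; returns dummy scores

def attend_and_classify(x, n_iters=5):
    """Run the assumed gate -> features -> feedback loop, then classify."""
    gate = np.ones_like(x)                          # start with no attention
    for _ in range(n_iters):                        # paper reports accuracy after 5 iterations
        h = sigmoid(W @ (gate * x))                 # features of the gated input
        gate = (sigmoid(U @ h) > eps).astype(float) # feedback produces a new binary gate
    return classify(sigmoid(W @ (gate * x)))
```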