Push-pull Feedback Implements Hierarchical Information Retrieval Efficiently

Authors: Xiao Liu, Xiaolong Zou, Zilong Ji, Gengshuo Tian, Yuanyuan Mi, Tiejun Huang, K. Y. Michael Wong, Si Wu

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Using synthetic and real data, we demonstrate that the push-pull feedback implements hierarchical information retrieval efficiently. We test our model in the processing of real images."
Researcher Affiliation | Academia | (1) School of Electronics Engineering & Computer Science, IDG/McGovern Institute for Brain Research, Peking-Tsinghua Center for Life Sciences, Academy for Advanced Interdisciplinary Studies, Peking University, Beijing, China. (2) State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, China. (3) Department of Mathematics, Beijing Normal University, China. (4) Center for Neurointelligence, Chongqing University, China. (5) Department of Physics, Hong Kong University of Science and Technology, China.
Pseudocode | No | The paper does not contain a pseudocode block or a clearly labeled algorithm section.
Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | Yes | "The dataset we use consists of Pβ = 2 types of animals, cats and dogs... A total of 1800 images... are chosen from ImageNet." The paper notes that the neural representations generated by a DNN (after being trained with ImageNet) capture the categorical relationships between objects, indicating that the memory patterns are hierarchically organized: "We therefore pre-process images by filtering them through VGG, a type of DNN [24]." Citation [24]: K. Simonyan and A. Zisserman, "Very deep convolutional networks for large-scale image recognition," arXiv preprint arXiv:1409.1556 (2014). A sketch of this pre-processing step is given after the table.
Dataset Splits | No | The paper mentions that "A total of 1800 images... are chosen from ImageNet", but it does not specify any train/validation/test splits for the dataset, nor does it refer to predefined splits.
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU models, CPU types, memory) used to run the experiments.
Software Dependencies | No | The paper mentions using VGG for pre-processing images, but it does not list any software dependencies with version numbers (e.g., programming languages, libraries, frameworks) that would allow for replication.
Experiment Setup | Yes | The parameters used are: N = 2000, P_γ = 25, P_β = 4, P_α = 2, b_1 = 0.2, b_2 = 0.1, τ = 5, a_ext^1 = 1, a_r^1 = 1, a_r^2 = 2, a_ext^2 = 0.1, a_+ = 1, a_- = 10, λ_1 = 0.1, λ_2 = 0.1 (Figure 3 caption). For Figure 4: N = 4096, a_r^1 = 1, a_r^2 = 2, a_ext^1 = 6, a_ext^2 = 1, a_+ = 2, a_- = 1.5, a_21 = 1; other parameters are the same as in Fig. 3 (Figure 4 caption). These values are collected into a configuration sketch after the table.
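The pre-processing described under "Open Datasets" (filtering ImageNet cat/dog images through VGG to obtain hierarchically organized memory patterns) could look roughly like the sketch below. This is a minimal illustration only: the paper does not name a framework, a VGG variant, or the layer whose activations are used, so the torchvision VGG16 model, the choice of the last hidden fully connected layer (4096 units, matching the N = 4096 quoted from the Figure 4 caption), and the helper name image_to_pattern are all assumptions made here.

```python
# Hypothetical sketch of the VGG pre-processing step: each cat/dog image is
# passed through a pretrained VGG network and the resulting activation vector
# is used as a memory pattern. Layer choice and framework are assumptions.
import torch
from torchvision import models, transforms
from PIL import Image

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],   # standard ImageNet statistics
                         std=[0.229, 0.224, 0.225]),
])

vgg = models.vgg16(pretrained=True).eval()

# Keep everything except the final 1000-way classification layer, so the output
# is a 4096-dimensional feature vector rather than class logits.
feature_extractor = torch.nn.Sequential(
    vgg.features,
    vgg.avgpool,
    torch.nn.Flatten(),
    *list(vgg.classifier.children())[:-1],
)

def image_to_pattern(path):
    """Return a VGG feature vector for one image, to be stored as a memory pattern."""
    img = Image.open(path).convert("RGB")
    with torch.no_grad():
        return feature_extractor(preprocess(img).unsqueeze(0)).squeeze(0)
```

Reading the penultimate classifier layer as the source of the 4096-dimensional patterns is one plausible interpretation; the paper itself only states that images are "filtered through VGG".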
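For convenience, the figure-caption parameters quoted under "Experiment Setup" can be collected into plain configuration dictionaries, as sketched below. Only the values come from the captions; the Python naming (e.g. a_r_1 for a_r^1, a_plus/a_minus for a_+/a_-, where a_- is how the garbled "a" entry in the caption is read here) and the reuse of the Fig. 3 values as Fig. 4 defaults (per "other parameters are the same as in Fig. 3") are choices made for this sketch, not code from the authors.

```python
# Simulation parameters transcribed from the Figure 3 and Figure 4 captions.
# Naming convention (an assumption): a_r_l = a_r^l, a_ext_l = a_ext^l,
# a_plus / a_minus = a_+ / a_-.

fig3_params = dict(
    N=2000, P_gamma=25, P_beta=4, P_alpha=2,
    b1=0.2, b2=0.1, tau=5,
    a_ext_1=1, a_r_1=1, a_r_2=2, a_ext_2=0.1,
    a_plus=1, a_minus=10,
    lambda_1=0.1, lambda_2=0.1,
)

# Figure 4 overrides; the caption states the remaining parameters match Fig. 3.
fig4_params = dict(
    fig3_params,
    N=4096,
    a_r_1=1, a_r_2=2,
    a_ext_1=6, a_ext_2=1,
    a_plus=2, a_minus=1.5,
    a_21=1,
)
```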