GUST: Combinatorial Generalization by Unsupervised Grouping with Neuronal Coherence
Authors: Hao Zheng, Hui Lin, Rong Zhao
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate and analyze the model on synthetic datasets. |
| Researcher Affiliation | Collaboration | Hao Zheng, Hui Lin, Rong Zhao: Center for Brain-Inspired Computing Research, Optical Memory National Engineering Research Center, Tsinghua University; China Electronics Technology HIK Group Co.; Joint Research Center for Brain-Inspired Computing, IDG/McGovern Institute for Brain Research at Tsinghua University; Department of Precision Instrument, Tsinghua University, Beijing 100084, China. |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks labeled as such. |
| Open Source Code | Yes | The code is publicly available at: https://github.com/monstersecond/gust. |
| Open Datasets | No | Synthetic images composed of multiple shapes [45] are generated for evaluation rather than drawn from a published benchmark dataset. |
| Dataset Splits | No | The paper mentions training on 54,000 images and evaluating AMI/Syn Score averaged over all testing images during training, but it does not specify explicit train/validation/test splits (by percentage or count) or refer to predefined benchmark splits. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types, or memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | Yes | τ2 = 10. GUST is simulated for three times longer than during training... Salt-and-pepper noise is added to the input... |
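The salt-and-pepper corruption mentioned in the experiment-setup row can be sketched as follows. This is a minimal illustration, not the authors' code: the noise probability `prob` and the binary image range are assumptions, since the quoted snippet does not state the exact noise level.

```python
import numpy as np

def add_salt_and_pepper(images, prob=0.1, seed=None):
    """Corrupt a batch of images with salt-and-pepper noise.

    A random fraction `prob` of pixels is flipped: half to 0.0
    (pepper) and half to 1.0 (salt). `prob` is an assumed value;
    the paper's exact noise level is not given in the quote.
    """
    rng = np.random.default_rng(seed)
    noisy = images.copy()
    mask = rng.random(images.shape)
    noisy[mask < prob / 2] = 0.0                      # pepper pixels
    noisy[(mask >= prob / 2) & (mask < prob)] = 1.0   # salt pixels
    return noisy

# Example: corrupt a batch of blank 32x32 synthetic images
imgs = np.zeros((4, 32, 32))
noisy = add_salt_and_pepper(imgs, prob=0.1, seed=0)
```

On an all-zero batch only the salt half of the noise is visible, so roughly 5% of pixels end up set to 1.0.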