On Numerosity of Deep Neural Networks
Authors: Xi Zhang, Xiaolin Wu
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this paper, we prove the above claim to be unfortunately incorrect. The statistical analysis to support the claim is flawed in that the sample set used to identify number-aware neurons is too small compared to the huge number of neurons in the object recognition network. By this flawed analysis one could mistakenly identify number-sensing neurons in any randomly initialized deep neural network that has not been trained at all. With the above critique we ask the question: what if a deep convolutional neural network is carefully trained for numerosity? Our findings are mixed. Even after being trained with number-depicting images, the deep learning approach still has difficulty acquiring the abstract concept of numbers, a cognitive task that preschoolers perform with ease. On the other hand, we do find some encouraging evidence suggesting that deep neural networks are more robust to distribution shift for small numbers than for large numbers. |
| Researcher Affiliation | Academia | Xi Zhang, Shanghai Jiao Tong University, zhangxi_19930818@sjtu.edu.cn; Xiaolin Wu, McMaster University, xwu@ece.mcmaster.ca |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | Nasr et al. [22] trained a deep convolutional neural network to classify objects in natural images using the ILSVRC2012 ImageNet dataset [25]. |
| Dataset Splits | No | The paper describes training and test sets but does not explicitly mention a separate validation dataset split. |
| Hardware Specification | No | The paper does not provide specific hardware details used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers. |
| Experiment Setup | No | The paper does not contain specific experimental setup details such as concrete hyperparameter values, training configurations, or system-level settings, beyond mentioning the network architecture is the same as in [22] and details about image generation. |
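The paper's central critique, that a small stimulus sample screened against a very large layer will flag "number-sensing" neurons even in an untrained network, can be illustrated with a minimal simulation. This sketch is not the paper's actual analysis (which follows the selection procedure of Nasr et al. [22]); the neuron count, sample size, and correlation threshold are illustrative assumptions. It draws purely random responses, so no neuron truly encodes number, yet a naive per-neuron selection still passes many of them:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 50_000   # assumed size of a late CNN layer (illustrative)
n_stimuli = 30       # a small stimulus set, per the paper's critique
numerosity = rng.integers(1, 6, size=n_stimuli).astype(float)  # labels 1..5

# Purely random "responses": no neuron genuinely encodes numerosity.
responses = rng.normal(size=(n_stimuli, n_neurons))

# Naive selection: correlate each neuron with the numerosity label
# and keep those whose |r| clears a threshold (vectorized Pearson r).
X = responses - responses.mean(axis=0)
y = numerosity - numerosity.mean()
r = (X.T @ y) / (np.linalg.norm(X, axis=0) * np.linalg.norm(y))
selected = np.abs(r) > 0.5

# Hundreds of spurious "number neurons" appear by chance alone,
# because the per-neuron false-positive rate is multiplied by the
# enormous number of neurons being screened.
print(int(selected.sum()))
```

With only 30 stimuli, the null distribution of Pearson r is wide (standard deviation roughly 1/sqrt(n-1) ≈ 0.19), so a sizable fraction of 50,000 independent random neurons exceeds |r| > 0.5 by chance, mirroring the paper's point that the same screening applied to an untrained network yields "number-aware" units.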