Unsupervised 3D Object Learning through Neuron Activity aware Plasticity

Authors: Beomseok Kang, Biswadeep Chakraborty, Saibal Mukhopadhyay

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical results show that NeAW Hebbian learning outperforms other variants of Hebbian learning and achieves higher accuracy than fully supervised models when training data is limited. The experimental results, evaluated on ModelNet10 and ModelNet40 (Wu et al., 2015), show that the proposed NeAW Hebbian learning outperforms prior Hebbian rules for efficient unsupervised 3D deep learning tasks.
Researcher Affiliation | Academia | Beomseok Kang, Biswadeep Chakraborty & Saibal Mukhopadhyay, School of Electrical and Computer Engineering, Georgia Institute of Technology, Atlanta, GA 30332, USA ({beomseok, biswadeep, smukhopadhyay6}@gatech.edu)
Pseudocode | No | The paper provides mathematical equations for the learning rules (e.g., Equations 1, 2, 3, 4) but does not include any figure, block, or section explicitly labeled 'Pseudocode' or 'Algorithm'.
Open Source Code | No | The paper does not explicitly state that source code for the methodology is released, nor does it include a link to a code repository.
Open Datasets | Yes | The proposed model is evaluated on the ModelNet10 and ModelNet40 datasets, which include 10-class and 40-class 3D CAD objects for 3D deep learning (Wu et al., 2015).
Dataset Splits | No | The paper mentions a 'training epoch' and 'test datasets' but does not explicitly define or refer to a 'validation' split for hyperparameter tuning or early stopping during training.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications.
Software Dependencies | No | The paper does not list any specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x, or specific libraries with versions).
Experiment Setup | Yes | For all the unsupervised learning rules, we set training epoch 50, learning rate 1e-2, and train batch 4; however, the learning rate is proportionally increased when the amount of training data is limited. For example, the learning rate is 1e-1 for 10% training data and 4e-2 for 25% training data. For the classifier, we use training epoch 100, learning rate 1e-3, and train batch 32 for both ModelNet10 and ModelNet40.
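The reported learning rates (1e-2 at 100% data, 4e-2 at 25%, 1e-1 at 10%) are consistent with scaling the base rate inversely with the training-data fraction. A minimal sketch of that setup, assuming this inverse-proportional rule; the function and dictionary names are illustrative, not from the paper:

```python
# Hedged sketch of the reported experiment setup. The helper name,
# config structure, and the exact inverse-proportional scaling rule
# are assumptions inferred from the reported values, not paper code.

def unsupervised_lr(train_fraction: float, base_lr: float = 1e-2) -> float:
    """Learning rate scaled inversely with the fraction of training data.

    Matches the reported values: 1e-2 at 100%, 4e-2 at 25%, 1e-1 at 10%.
    """
    return base_lr / train_fraction

# Reported settings for the unsupervised Hebbian learning rules.
UNSUPERVISED = {"epochs": 50, "batch_size": 4, "base_lr": 1e-2}

# Reported settings for the downstream classifier (ModelNet10/40).
CLASSIFIER = {"epochs": 100, "batch_size": 32, "lr": 1e-3}
```

For instance, `unsupervised_lr(0.25)` recovers the reported 4e-2 for the 25%-data setting.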