Self-Supervised Few-Shot Learning on Point Clouds

Authors: Charu Sharma, Manohar Kaul

Venue: NeurIPS 2020

Reproducibility assessment: each variable below is listed with its result, followed by the LLM's supporting response.
Research Type: Experimental. "We present a comprehensive empirical evaluation of our method on both downstream classification and segmentation tasks and show that supervised methods pre-trained with our self-supervised learning method significantly improve the accuracy of state-of-the-art methods."
Researcher Affiliation: Academia. "Charu Sharma and Manohar Kaul, Department of Computer Science & Engineering, Indian Institute of Technology Hyderabad, India. charusharma1991@gmail.com, mkaul@iith.ac.in"
Pseudocode: No. The paper describes the network architecture and methods but contains no structured pseudocode or algorithm blocks.
Open Source Code: No. The paper refers to "our code" in footnote 3 but provides neither a repository link nor an explicit statement that the source code is publicly available.
Open Datasets: Yes. "For self-supervised and FSL experiments, we pick two real-world datasets (ModelNet40 [15] and Sydney) for 3D shape classification and for our segmentation related experiments, we conduct part segmentation on ShapeNet [24] and semantic segmentation on Stanford Large-Scale 3D Indoor Spaces (S3DIS) [25]."
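For readers trying to reproduce the data pipeline: ModelNet40 is most often consumed in point-cloud work as HDF5 shards of pre-sampled points. The loader below is a minimal sketch assuming the widely used modelnet40_ply_hdf5_2048 packaging; the "data"/"label" key names and 2048-point shape come from that release, not from the paper.

```python
import h5py
import numpy as np

def load_modelnet40_h5(path):
    """Load one HDF5 shard of the common point-cloud ModelNet40 release.

    Assumes the modelnet40_ply_hdf5_2048 packaging; other distributions
    may use different key names or point counts.
    """
    with h5py.File(path, "r") as f:
        points = f["data"][:]                              # (num_shapes, 2048, 3) xyz
        labels = f["label"][:].squeeze().astype(np.int64)  # (num_shapes,)
    return points, labels
```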
Dataset Splits: No. The paper describes training with a support set S and testing with a query set Q, but does not specify a validation set or the exact split sizes needed for reproduction.
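For context, support/query sets in few-shot evaluation are typically drawn episodically. The sketch below shows a generic N-way K-shot episode sampler; the function name, default values, and uniform sampling are illustrative assumptions, not the paper's protocol.

```python
import numpy as np

def sample_episode(labels, n_way=5, k_shot=10, n_query=20, rng=None):
    """Draw one N-way K-shot episode: disjoint support and query index sets.

    `labels` is a 1-D integer class-label array over the whole dataset.
    All defaults here are illustrative, not taken from the paper.
    """
    rng = rng or np.random.default_rng()
    classes = rng.choice(np.unique(labels), size=n_way, replace=False)
    support, query = [], []
    for c in classes:
        idx = rng.permutation(np.flatnonzero(labels == c))
        support.extend(idx[:k_shot])                 # K labelled shots per class
        query.extend(idx[k_shot:k_shot + n_query])   # held-out query samples
    return np.asarray(support), np.asarray(query)
```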
Hardware Specification: No. The paper provides no hardware details (e.g., GPU/CPU models, processor types, or memory amounts) for its experiments.
Software Dependencies: No. The paper mentions software such as PointNet, DGCNN, and a linear SVM, but gives no version numbers for any software dependency or framework.
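The linear SVM the paper mentions is the standard protocol for probing frozen self-supervised features. Below is a minimal scikit-learn sketch of that protocol; the regularisation constant and feature standardisation are assumptions, since the paper specifies neither hyperparameters nor library versions.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def linear_svm_accuracy(train_feats, train_labels, test_feats, test_labels, C=1.0):
    """Fit a linear SVM on frozen features and return test accuracy.

    C=1.0 and input standardisation are assumptions not stated in the paper.
    """
    clf = make_pipeline(StandardScaler(), LinearSVC(C=C))
    clf.fit(train_feats, train_labels)
    return clf.score(test_feats, test_labels)
```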
Experiment Setup: No. The paper reports network layer sizes (e.g., MLP layers of 32, 64, and 128 dimensions) and discusses the choice of the expansion constant ε (e.g., ε = 2.2), but lacks training hyperparameters such as learning rate, batch size, optimizer, and number of epochs, which are needed for complete reproduction.
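To make the quoted layer sizes concrete, here is a PyTorch sketch of a shared per-point MLP with the stated 32/64/128 widths. Only the widths come from the paper; the 1x1-convolution formulation, batch normalisation, ReLU activations, and global max-pooling are conventional assumptions that the paper does not confirm.

```python
import torch.nn as nn

class PointMLP(nn.Module):
    """Shared per-point MLP with the 32/64/128 widths quoted in the paper.

    Activation, normalisation, and pooling are assumptions; the paper
    does not specify them.
    """
    def __init__(self, in_dim=3, widths=(32, 64, 128)):
        super().__init__()
        layers, d = [], in_dim
        for w in widths:
            layers += [nn.Conv1d(d, w, kernel_size=1),  # shared MLP as a 1x1 conv
                       nn.BatchNorm1d(w),
                       nn.ReLU(inplace=True)]
            d = w
        self.net = nn.Sequential(*layers)

    def forward(self, x):                 # x: (batch, 3, num_points)
        feat = self.net(x)                # (batch, 128, num_points)
        return feat.max(dim=-1).values    # global max-pool -> (batch, 128)
```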