Explore In-Context Learning for 3D Point Cloud Understanding
Authors: Zhongbin Fang, Xiangtai Li, Xia Li, Joachim M. Buhmann, Chen Change Loy, Mengyuan Liu
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments to validate the versatility and adaptability of our proposed methods in handling a wide range of tasks. |
| Researcher Affiliation | Academia | 1Sun Yat-sen University 2S-Lab, Nanyang Technological University 3Department of Computer Science, ETH Zurich 4Key Laboratory of Machine Perception, Shenzhen Graduate School, Peking University |
| Pseudocode | No | The paper describes its methods through text and diagrams, but does not include any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | https://github.com/fanglaosi/Point-In-Context |
| Open Datasets | Yes | Firstly, we obtain samples from publicly available datasets, such as ShapeNet [7], ShapeNetPart [42] |
| Dataset Splits | No | The paper describes training and testing, but does not specify a distinct validation split or its size/percentage, which limits exact reproduction of the evaluation protocol. |
| Hardware Specification | Yes | Test speed ... tested on one NVIDIA RTX 3080 Ti GPU. |
| Software Dependencies | No | The paper mentions the use of an 'AdamW optimizer' and a 'standard transformer', but does not specify versions of programming languages, libraries, or frameworks used for implementation, such as PyTorch or TensorFlow versions. |
| Experiment Setup | Yes | We sample 1024 points of each point cloud and divide it into N = 64 point patches, each with M = 32 neighborhood points. We set the mask ratio as 0.7. For PIC-Sep, we merge the feature of input and target at the third block. We randomly select a prompt pair that performs the same task with the query point cloud from the training set. We use an AdamW optimizer [23] and cosine learning rate decay, with the initial learning rate as 0.001 and a weight decay of 0.05. All models are trained for 300 epochs. |
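The experiment-setup row above lists concrete hyperparameters. A minimal sketch of how they might be collected into a training configuration is shown below; the dictionary keys and the `cosine_lr` helper are illustrative assumptions, not taken from the authors' released code, and the schedule assumes plain cosine decay to zero with no warmup (the paper does not say either way).

```python
import math

# Hyperparameters quoted from the paper's experiment setup.
# Key names are hypothetical, chosen here for illustration only.
CONFIG = {
    "num_points": 1024,    # points sampled per point cloud
    "num_patches": 64,     # N point patches
    "patch_size": 32,      # M neighborhood points per patch
    "mask_ratio": 0.7,
    "optimizer": "AdamW",
    "base_lr": 1e-3,       # initial learning rate
    "weight_decay": 0.05,
    "epochs": 300,
}

def cosine_lr(epoch: int,
              base_lr: float = CONFIG["base_lr"],
              total_epochs: int = CONFIG["epochs"]) -> float:
    """Cosine learning-rate decay from base_lr down to 0 over training.

    Assumes no warmup phase; the paper only states 'cosine learning
    rate decay' without further detail.
    """
    return 0.5 * base_lr * (1.0 + math.cos(math.pi * epoch / total_epochs))
```

For example, `cosine_lr(0)` returns the initial rate 0.001, `cosine_lr(150)` returns the halfway value 0.0005, and `cosine_lr(300)` returns 0, matching the stated 300-epoch schedule.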