Attention-Based Transformation from Latent Features to Point Clouds

Authors: Kaiyi Zhang, Ximing Yang, Yuan Wu, Cheng Jin

AAAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The paper states that "Considerable experiments on different datasets show that our methods achieve state-of-the-art results" and includes sections such as Experiments, Datasets and Implementation Details, and Reconstruction and Generation.
Researcher Affiliation | Academia | School of Computer Science, Fudan University, Shanghai, China; Peng Cheng Laboratory, Shenzhen, China ({zhangky20, xmyang19, wuyuan, jc}@fudan.edu.cn).
Pseudocode | No | No pseudocode or clearly labeled algorithm block was found in the paper.
Open Source Code | No | The paper does not provide a statement or link indicating that source code is publicly available.
Open Datasets | Yes | "We evaluate AXform on three representative categories in the ShapeNet (Chang et al. 2015) dataset: airplane, car, and chair." and "We also evaluate AXformNet on the PCN (Yuan et al. 2018) dataset for point cloud completion."
Dataset Splits | Yes | "We follow the train/val/test split in ShapeNet official documents and use 2048 points for each shape during both training and testing." (A sampling sketch follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions that "Adam is used as the optimizer" but names no software with version numbers, such as programming languages, libraries, or frameworks.
Experiment Setup | Yes | "All the experiments are performed for 200 epochs with a batch size of 32. Adam is used as the optimizer and the initial learning rate is 1e-4." and "We set branch number K = 16 and train our method for 100 epochs with a batch size of 128. α increases from 0.01 to 1 within the first 25 epochs. Adam is used as the optimizer and the initial learning rate is 1e-3." (A configuration sketch follows the table.)
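
The data protocol quoted in the Dataset Splits row (official ShapeNet train/val/test split, 2048 points per shape for both training and testing) could be mimicked roughly as below. This is a minimal sketch: the split-file layout and the helper names `sample_points` and `load_split` are illustrative assumptions, not taken from the paper.

```python
import json
import numpy as np

N_POINTS = 2048  # per the paper: 2048 points per shape in training and testing

def sample_points(points: np.ndarray, n: int = N_POINTS) -> np.ndarray:
    """Sample exactly n points from a (P, 3) point cloud.

    Samples without replacement when the shape has enough points,
    with replacement otherwise.
    """
    replace = points.shape[0] < n
    idx = np.random.choice(points.shape[0], n, replace=replace)
    return points[idx]

def load_split(split_file: str) -> dict:
    # Hypothetical split file mirroring the official ShapeNet documents,
    # e.g. {"train": [...model IDs...], "val": [...], "test": [...]}
    with open(split_file) as f:
        return json.load(f)
```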
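Read together, the quoted Experiment Setup settings translate into a small training configuration. The PyTorch sketch below wires up Adam with the stated hyperparameters and a warm-up schedule for α; note that the linear interpolation, and the use of α as a loss weight, are assumptions, since the excerpt gives only the range (0.01 to 1) and the warm-up length (25 epochs).

```python
import torch

# Reconstruction/generation setting quoted above
# (the other reported setting: 200 epochs, batch size 32, lr 1e-4):
EPOCHS = 100
BATCH_SIZE = 128
LR = 1e-3
K = 16  # branch number from the paper

def alpha_schedule(epoch: int, warmup: int = 25,
                   start: float = 0.01, end: float = 1.0) -> float:
    """Increase alpha from 0.01 to 1 over the first 25 epochs, then hold.

    Linear interpolation is an assumption; the paper states only the
    endpoints and the warm-up length.
    """
    if epoch >= warmup:
        return end
    return start + (end - start) * epoch / warmup

model = torch.nn.Linear(128, 3 * K)  # stand-in module, not the actual AXform network
optimizer = torch.optim.Adam(model.parameters(), lr=LR)

for epoch in range(EPOCHS):
    alpha = alpha_schedule(epoch)
    # ... compute the alpha-weighted loss on each batch and step the optimizer ...
```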