PAPR: Proximity Attention Point Rendering

Authors: Yanshu Zhang, Shichong Peng, Alireza Moazeni, Ke Li

NeurIPS 2023

Reproducibility Variable | Result | LLM Response

Research Type | Experimental | Our experiments on both synthetic and real-world datasets demonstrate that PAPR outperforms prior point-based methods in terms of image quality when using a parsimonious set of points. Table 1 shows the average image quality metric scores. PAPR consistently outperforms the baselines across all metrics in both synthetic and real-world settings, without relying on specific initialization.

Researcher Affiliation | Academia | Yanshu Zhang, Shichong Peng, Alireza Moazeni, Ke Li. APEX Lab, School of Computing Science, Simon Fraser University. {yanshu_zhang,shichong_peng,seyed_alireza_moazenipourasil,keli}@sfu.ca

Pseudocode | Yes | Algorithm 1: Conditional IMLE Training Procedure

Open Source Code | Yes | More results and code are available on our project website.

Open Datasets | Yes | For the synthetic setting, we choose the NeRF Synthetic dataset [22], while for the real-world setting, we use the Tanks & Temples [13] subset, following the same data pre-processing steps as in [48].

Dataset Splits | Yes | For the synthetic setting, we choose the NeRF Synthetic dataset [22], while for the real-world setting, we use the Tanks & Temples [13] subset, following the same data pre-processing steps as in [48].

Hardware Specification | Yes | We train our model using Adam optimizer [12] on a single NVIDIA A100 GPU.

Software Dependencies | No | The paper mentions software components such as the Adam optimizer, U-Net architecture, and LPIPS metric, but does not provide version numbers for any software dependencies.

Experiment Setup | Yes | We set the weight λ to 0.01 for all experiments. During training, we jointly optimize all model parameters, including pi, ui, τi, θK, θV, θQ and θR. We train our model using Adam optimizer [12] on a single NVIDIA A100 GPU. We set the parameter K = 20 for selecting the top nearest points, and the point feature vector dimension h = 64. All influence scores are initialized to zero. Starting from iteration 10,000, we prune points with τi < 0 every 500 iterations.
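As a concrete illustration, the pruning schedule quoted in the Experiment Setup row (λ = 0.01, K = 20, h = 64, influence scores initialized to zero, pruning points with τi < 0 every 500 iterations starting from iteration 10,000) can be sketched in plain Python. The point representation below is a hypothetical placeholder for illustration only, not the authors' implementation.

```python
# Hyperparameters reported in the paper.
LAMBDA = 0.01         # loss weight λ
K_NEAREST = 20        # top-K nearest points selected per ray
FEATURE_DIM = 64      # point feature vector dimension h
PRUNE_START = 10_000  # iteration at which pruning begins
PRUNE_EVERY = 500     # pruning interval in iterations

def should_prune(iteration: int) -> bool:
    """Prune every PRUNE_EVERY iterations once PRUNE_START is reached."""
    return iteration >= PRUNE_START and iteration % PRUNE_EVERY == 0

def prune_points(points: list[dict]) -> list[dict]:
    """Keep only points whose influence score tau is non-negative."""
    return [p for p in points if p["tau"] >= 0]

# Influence scores start at zero ("All influence scores are initialized
# to zero"); suppose training later pushed two of them below zero.
points = [{"tau": 0.0}, {"tau": -0.3}, {"tau": 0.2}, {"tau": -0.1}]
if should_prune(10_500):
    points = prune_points(points)
print(len(points))  # → 2
```

This captures only the schedule logic; in the actual method the pruned quantities are learned tensors (positions pi, features ui, scores τi) optimized jointly with Adam.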