Neural Image-based Avatars: Generalizable Radiance Fields for Human Avatar Modeling

Authors: YoungJoong Kwon, Dahun Kim, Duygu Ceylan, Henry Fuchs

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "To demonstrate the efficacy of our NIA method, we experiment on the ZJU-MoCap (Peng et al., 2021b) and MonoCap (Habermann et al., 2020; 2021) datasets. First, experiments show that our method outperforms the state-of-the-art Neural Human Performer (Kwon et al., 2021) and GP-NeRF (Chen et al., 2022) in the novel view synthesis task. Furthermore, we study the more challenging cross-dataset generalization by evaluating the zero-shot performance on the MonoCap (Habermann et al., 2020; 2021) datasets, where we clearly outperform the previous methods. Finally, we evaluate on the pose animation task, where our NIA, tested on unseen subjects, achieves better pose generalization than the per-subject optimized animatable NeRF methods. The ablation studies demonstrate that the proposed modules of our NIA collectively contribute to the high-quality rendering for arbitrary human subjects."
Researcher Affiliation | Collaboration | YoungJoong Kwon (University of North Carolina at Chapel Hill), Dahun Kim (Google Research, Brain Team), Duygu Ceylan (Adobe Research), Henry Fuchs (University of North Carolina at Chapel Hill).
Pseudocode | No | The paper describes its methods using mathematical equations and textual descriptions, but it does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide an explicit statement about releasing its source code, nor does it include a link to a code repository.
Open Datasets | Yes | "We use ZJU-MoCap (Peng et al., 2021b) for both tasks and ablation studies. Then we study our cross-dataset generalization ability by training on ZJU-MoCap and testing on the MonoCap datasets without any finetuning."
Dataset Splits | Yes | "We follow the same training and testing protocols as in Kwon et al. (2021)."
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., programming languages, libraries, frameworks, or operating systems).
Experiment Setup | No | The paper mentions the loss function and the number of points sampled for volume rendering ("uniformly sample a set of 64 points"), but it does not provide specific training hyperparameters such as learning rate, batch size, number of epochs, or optimizer settings.
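
The only concrete rendering detail surfaced above is the 64-point sampling for volume rendering. As context for what that setting means in practice, below is a minimal NumPy sketch of uniform ray sampling and the standard NeRF-style volume-rendering composite; the function names, the stratified jitter, and the compositing details are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def sample_points_uniform(ray_o, ray_d, near, far, n_samples=64):
    # Uniform bins between the near/far bounds; n_samples=64 matches the
    # paper's "uniformly sample a set of 64 points". The per-bin stratified
    # jitter is a common NeRF-style choice and an assumption here.
    bin_width = (far - near) / n_samples
    t = near + bin_width * (np.arange(n_samples) + np.random.rand(n_samples))
    points = ray_o[None, :] + t[:, None] * ray_d[None, :]  # (n_samples, 3)
    return points, t

def composite(rgb, sigma, t):
    # Standard volume-rendering quadrature (alpha compositing) used by
    # NeRF-family methods; not code from the paper.
    delta = np.diff(t, append=t[-1] + 1e10)    # distances between samples
    alpha = 1.0 - np.exp(-sigma * delta)       # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1] + 1e-10]))
    weights = alpha * trans                    # contribution of each sample
    return (weights[:, None] * rgb).sum(axis=0)  # final pixel color, (3,)

# Example: one ray with stand-in radiance-field outputs.
ray_o = np.zeros(3)
ray_d = np.array([0.0, 0.0, 1.0])
pts, t = sample_points_uniform(ray_o, ray_d, near=0.5, far=2.5)
rgb = np.random.rand(64, 3)    # placeholder network color predictions
sigma = np.random.rand(64)     # placeholder network density predictions
pixel = composite(rgb, sigma, t)
```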