Neural Polynomial Gabor Fields for Macro Motion Analysis

Authors: Chen Geng, Hong-Xing Yu, Sida Peng, Xiaowei Zhou, Jiajun Wu

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In our experiments, we collect diverse 2D and 3D dynamic scenes and show that Phase-PGF enables dynamic scene analysis and editing tasks including motion loop detection, motion factorization, motion smoothing, and motion magnification. Project page: https://chen-geng.com/phasepgf
Researcher Affiliation | Academia | Chen Geng¹, Hong-Xing Yu¹, Sida Peng², Xiaowei Zhou², Jiajun Wu¹ (¹Stanford University, ²Zhejiang University); {gengchen,koven,jiajunwu}@cs.stanford.edu, {pengsida,xwzhou}@zju.edu.cn
Pseudocode | No | The paper describes the architecture and training scheme textually and with figures, but it does not include any explicitly labeled pseudocode blocks or algorithms.
Open Source Code | No | The abstract links to a "Project page" URL, but this appears to be a high-level project overview rather than a source-code repository, and the paper does not explicitly state that code is released.
Open Datasets | No | The paper mentions using "Synthetic 2D videos" and collecting "Real 2D videos" and "3D Dynamic Scene" examples, including some from Mai & Liu (2022). However, it does not provide concrete access information (e.g., direct links, DOIs, or specific repositories) for these datasets, nor does it identify them as well-known public benchmarks with established access.
Dataset Splits | No | The paper mentions "multi-stage training" but does not provide specific details on how the data was split into training, validation, and test sets (e.g., percentages, sample counts, or references to predefined standard splits).
Hardware Specification | Yes | The models are trained on an NVIDIA A5000 GPU.
Software Dependencies | No | The paper mentions architectures such as U-Net and PatchGAN, and refers to prior work for certain components (e.g., the polynomial neural field described in Yang et al. (2022)), but it does not list specific software libraries or frameworks with version numbers (e.g., PyTorch 1.x, Python 3.x).
Experiment Setup | No | The paper describes a "Multi-stage Training" process with specific loss functions (L1, L2, Ladv) and gives the duration of the training stages ("The first stage of training takes around ten hours to converge. The second stage of adversarial training takes around three days to fully converge."). However, it does not provide concrete hyperparameter values such as learning rates, batch sizes, number of epochs, or optimizer settings.
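To illustrate the kind of detail the Experiment Setup assessment finds missing, the sketch below shows one plausible way a two-stage objective combining L1, L2, and adversarial terms could be wired up. The loss forms, the stage split, and the weight `w_adv` are assumptions for illustration only; the paper does not specify them.

```python
import numpy as np

def l1_loss(pred, target):
    # mean absolute error
    return np.abs(pred - target).mean()

def l2_loss(pred, target):
    # mean squared error
    return ((pred - target) ** 2).mean()

def total_loss(pred, target, d_fake=None, stage=1, w_adv=0.01):
    """Hypothetical two-stage objective.

    Stage 1: reconstruction only (L1 + L2).
    Stage 2: adds a non-saturating adversarial term computed from the
    discriminator's scores on generated samples (d_fake in (0, 1]).
    The weighting w_adv is an illustrative assumption, not from the paper.
    """
    rec = l1_loss(pred, target) + l2_loss(pred, target)
    if stage == 1:
        return rec
    adv = -np.log(d_fake + 1e-8).mean()
    return rec + w_adv * adv
```

In a setup like this, reproducing the reported training times would additionally require the unreported learning rates, batch sizes, and optimizer choices.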