HyperFields: Towards Zero-Shot Generation of NeRFs from Text
Authors: Sudarshan Babu, Richard Liu, Avery Zhou, Michael Maire, Greg Shakhnarovich, Rana Hanocka
ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate HyperFields by demonstrating its generalization capabilities, out-of-distribution convergence, amortization benefits, and ablation experiments. |
| Researcher Affiliation | Collaboration | 1Toyota Technological Institute at Chicago 2University of Chicago. |
| Pseudocode | Yes | The training routine is outlined in Algorithm F, in which at each iteration, we sample n prompts and a camera viewpoint for each of these text prompts (lines 2 to 4). |
| Open Source Code | No | We will release open-source code of our project in a future revision of the paper. |
| Open Datasets | No | The paper lists prompts used to train the model (Appendix D) and mentions using 'teacher NeRFs' generated by other models (DreamFusion, ProlificDreamer), but it does not provide access information (link, DOI, citation) for a publicly available dataset of these generated NeRFs or any other data used for training. |
| Dataset Splits | No | The paper describes training on a subset of prompts and holding out combinations for zero-shot prediction or fine-tuning, implying a train/test split, but it does not define a separate validation split with specific percentages or counts for hyperparameter tuning or model selection. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU models, CPU types, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions using 'Stable Diffusion' and 'BERT' models and an 'open-source re-implementation (Tang, 2022) of DreamFusion' but does not provide specific version numbers for software dependencies or libraries (e.g., Python 3.x, PyTorch 1.x). |
| Experiment Setup | Yes | We use Adam with a learning rate of 1e-4, with an epoch defined by 100 gradient descent steps. |
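
The Pseudocode and Experiment Setup rows together describe the training routine: each iteration samples n prompts and one camera viewpoint per prompt, and optimization uses Adam with a learning rate of 1e-4, with an epoch defined as 100 gradient descent steps. The sketch below illustrates that loop structure only; since the paper's code is not released, `DummyHyperField`, `sample_camera_pose`, and `sds_loss` are hypothetical stand-ins, not the authors' implementation.

```python
import random
import torch
import torch.nn as nn

# --- Hypothetical stand-ins (not from the paper) ---------------------------
class DummyHyperField(nn.Module):
    """Placeholder for the text-conditioned hypernetwork + NeRF renderer."""
    def __init__(self):
        super().__init__()
        self.layer = nn.Linear(2, 3)  # camera pose -> fake RGB value

    def forward(self, prompt, camera):
        return self.layer(camera)

def sample_camera_pose():
    """Sample a random (azimuth, elevation) viewpoint."""
    return torch.rand(2) * torch.tensor([360.0, 90.0])

def sds_loss(rendering, prompt):
    """Stand-in for a score-distillation-style loss against a diffusion prior."""
    return rendering.pow(2).mean()
# ---------------------------------------------------------------------------

def train(model, prompts, n_prompts=4, epochs=10, steps_per_epoch=100):
    # Adam with learning rate 1e-4; an epoch is 100 gradient descent steps,
    # as stated in the Experiment Setup row.
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    for _ in range(epochs):
        for _ in range(steps_per_epoch):
            optimizer.zero_grad()
            # Sample n prompts and one camera viewpoint per prompt,
            # per the training routine quoted in the Pseudocode row.
            batch = random.sample(prompts, min(n_prompts, len(prompts)))
            loss = torch.zeros(())
            for prompt in batch:
                camera = sample_camera_pose()
                rendering = model(prompt, camera)
                loss = loss + sds_loss(rendering, prompt)
            loss.backward()
            optimizer.step()

if __name__ == "__main__":
    train(DummyHyperField(), ["a chair made of glass", "a wooden table"])
```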