Phase Transitions, Distance Functions, and Implicit Neural Representations
Authors: Yaron Lipman
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We present experiments for d ∈ {2, 3} with the phase transition loss (PHASE) and compare to relevant baselines: IGR (Gropp et al., 2020), DGP (Williams et al., 2019), SIREN (Sitzmann et al., 2020), FFN (Tancik et al., 2020), NSP (Williams et al., 2020). For evaluation we use the same metrics as in (Williams et al., 2020): one-sided and double-sided Chamfer distances (d⃗_C(Y1, Y2), d_C(Y1, Y2)) and Hausdorff distances (d⃗_H(Y1, Y2), d_H(Y1, Y2)); see the supplementary for exact definitions. |
| Researcher Affiliation | Collaboration | ¹Facebook AI Research, ²Weizmann Institute of Science. Correspondence to: Yaron Lipman <ylipman@fb.com, yaron.lipman@weizmann.ac.il>. |
| Pseudocode | No | No pseudocode or algorithm blocks were found. |
| Open Source Code | No | No explicit statement or link for open-sourcing the code for the described methodology was found. |
| Open Datasets | Yes | We evaluated our loss on the dataset in (Williams et al., 2019), and compared to relevant baselines. |
| Dataset Splits | No | The paper mentions training iterations and batch sizes but does not specify training, validation, or test dataset splits or how the data was partitioned for reproducibility. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running experiments. |
| Software Dependencies | No | The paper does not provide specific software dependency versions (e.g., library or solver names with version numbers) required to replicate the experiment. |
| Experiment Setup | Yes | In all the experiments we used ε = 0.01, and did a parameter search over λ, µ ∈ [0.2, 20]... u : R^d → R is a multilayer perceptron (MLP) with 8 layers of 512 neurons each... We used either ReLU or Softplus activation with β = 100, and the geometric initialization of (Atzmon & Lipman, 2020a)... trained for 100k iterations and batch size of 15k, and Fourier features k = 6. |
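The architecture quoted in the Experiment Setup row (an MLP u : R^d → R with 8 layers of 512 neurons, Softplus with β = 100, and a geometric initialization) can be sketched as follows. This is a minimal NumPy sketch, not the authors' code: the specific initialization constants (last-layer weights centered at √π/√width, bias −1 so the network initially approximates the signed distance of a unit sphere, hidden weights drawn as N(0, 2/d_out)) are an assumption based on the cited (Atzmon & Lipman, 2020a) recipe, which the paper references but does not restate.

```python
import numpy as np

def softplus(z, beta=100.0):
    # Softplus with beta = 100 as in the setup; approaches ReLU as beta grows.
    # np.logaddexp keeps the computation numerically stable for large beta*z.
    return np.logaddexp(0.0, beta * z) / beta

def init_mlp(d_in, width=512, depth=8, rng=None):
    """Geometric-style initialization (assumed constants, see lead-in)."""
    rng = np.random.default_rng(0) if rng is None else rng
    dims = [d_in] + [width] * (depth - 1) + [1]
    params = []
    for i, (m, n) in enumerate(zip(dims[:-1], dims[1:])):
        if i == depth - 1:
            # Last layer: weights concentrated at sqrt(pi)/sqrt(fan_in),
            # bias -1, so u initially resembles |x| - 1 (unit-sphere SDF).
            W = rng.normal(np.sqrt(np.pi) / np.sqrt(m), 1e-5, size=(m, n))
            b = np.full(n, -1.0)
        else:
            # Hidden layers: zero-mean Gaussian with variance 2 / d_out.
            W = rng.normal(0.0, np.sqrt(2.0 / n), size=(m, n))
            b = np.zeros(n)
        params.append((W, b))
    return params

def mlp_forward(params, x, beta=100.0):
    """Evaluate u at a batch of points x with shape (batch, d_in)."""
    h = x
    for W, b in params[:-1]:
        h = softplus(h @ W + b, beta)
    W, b = params[-1]
    return h @ W + b  # shape (batch, 1)
```

For the d = 3 experiments this network would be trained for 100k iterations with batches of 15k points; the Fourier-feature encoding (k = 6) would be applied to x before the first layer.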
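The evaluation metrics named in the Research Type row (one-sided and double-sided Chamfer and Hausdorff distances between point sets) can be sketched as below. The paper defers exact definitions to its supplementary, so the conventions here (mean over nearest-neighbor distances for Chamfer, averaging the two one-sided terms for the double-sided versions) are a common convention, not necessarily the paper's exact formulas.

```python
import numpy as np

def _pairwise_dists(X, Y):
    # (|X|, |Y|) matrix of Euclidean distances between two point sets
    return np.linalg.norm(X[:, None, :] - Y[None, :, :], axis=-1)

def one_sided_chamfer(X, Y):
    # Mean distance from each point of X to its nearest neighbor in Y.
    return _pairwise_dists(X, Y).min(axis=1).mean()

def chamfer(X, Y):
    # Double-sided Chamfer: average of the two one-sided terms (assumed convention).
    return 0.5 * (one_sided_chamfer(X, Y) + one_sided_chamfer(Y, X))

def one_sided_hausdorff(X, Y):
    # Worst-case distance from a point of X to the set Y.
    return _pairwise_dists(X, Y).min(axis=1).max()

def hausdorff(X, Y):
    # Double-sided (symmetric) Hausdorff distance.
    return max(one_sided_hausdorff(X, Y), one_sided_hausdorff(Y, X))
```

In practice these are evaluated on points sampled from the reconstructed and ground-truth surfaces; the O(|X|·|Y|) distance matrix above would be replaced by a k-d tree query for large samples.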