Controlling Neural Level Sets
Authors: Matan Atzmon, Niv Haim, Lior Yariv, Ofri Israelov, Haggai Maron, Yaron Lipman
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We have tested our method on three different learning tasks: improving generalization to unseen data, training networks robust to adversarial attacks, and curve and surface reconstruction from point clouds. |
| Researcher Affiliation | Academia | Matan Atzmon, Niv Haim, Lior Yariv, Ofri Israelov, Haggai Maron, Yaron Lipman, Weizmann Institute of Science, Rehovot, Israel |
| Pseudocode | No | The paper describes methods using mathematical equations and textual explanations, but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide a direct link to a source code repository or an explicit statement about the release of its own source code for the methodology described. |
| Open Datasets | Yes | Experiments were done on three datasets: MNIST [18], Fashion-MNIST [31] and CIFAR10 [16]. For surface reconstruction, we trained on 10 human raw scans from the FAUST dataset [5]. |
| Dataset Splits | No | The paper mentions training and testing on datasets but does not explicitly specify a validation dataset split or percentages for such a split. It states 'We randomly sampled a fraction of the original training examples and evaluated on the original test set.' |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper refers to PyTorch in its bibliography [25], but does not explicitly list software dependencies with specific version numbers (e.g., 'PyTorch 1.x' or 'Python 3.x'). |
| Experiment Setup | Yes | We performed 10-20 iterations of Equation 4 for each p_i, i ∈ [n]. We used our method with the loss in Equation 12 to train robust models on the MNIST [18] and CIFAR10 [16] datasets. The parameter ε_j is fixed as ε_train in Table 1, λ_j is set to 1 and 11 for MNIST and CIFAR10 (resp.), and d = ρ as explained in Section 3.2. |
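The setup row quotes the paper's per-point projection iterations (Equation 4) and robust-training loss (Equation 12) without reproducing either. For orientation only, here is a minimal PyTorch sketch, assuming Equation 4 is the generalized Newton step p ← p − F(p; θ) ∇ₓF(p; θ) / ‖∇ₓF(p; θ)‖² that projects sample points p_i onto the network's zero level set; the function name, default iteration count, and eps guard are hypothetical, not taken from the paper.

```python
import torch

def project_to_level_set(f, points, n_iters=15, eps=1e-12):
    """Hypothetical sketch: project points onto {x : f(x) = 0} with
    generalized Newton iterations (the role Equation 4 plays in the paper).

    f      : callable mapping an (n, d) tensor to an (n,) tensor F(x; theta)
    points : (n, d) tensor of initial samples p_i, i in [n]
    """
    p = points
    for _ in range(n_iters):  # the paper reports 10-20 iterations per point
        p = p.detach().requires_grad_(True)
        vals = f(p)                                    # F(p_i; theta), shape (n,)
        grads = torch.autograd.grad(vals.sum(), p)[0]  # per-point spatial gradients
        sq_norm = (grads ** 2).sum(dim=1, keepdim=True)
        # Newton step for a scalar field: p <- p - F(p) * grad(F) / ||grad(F)||^2
        p = p - vals.unsqueeze(1) * grads / (sq_norm + eps)
    return p.detach()

# Toy usage on a known implicit curve (the unit circle), not a trained network:
f = lambda x: (x ** 2).sum(dim=1) - 1.0
on_curve = project_to_level_set(f, torch.randn(128, 2))
```

This sketch covers only the projection loop; everything downstream in the paper (the sample network that makes the projected points differentiable in θ, and the Equation 12 loss with ε_j, λ_j, and d = ρ) is omitted.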