Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Revealing the 3D Cosmic Web through Gravitationally Constrained Neural Fields
Authors: Brandon Zhao, Aviad Levis, Liam Connor, Pratul Srinivasan, Katherine Bouman
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We showcase our method on simulations, including realistic simulated measurements of dark matter distributions that mimic data from upcoming telescope surveys. Our results show that our method can not only outperform previous methods, but importantly is also able to recover potentially surprising dark matter structures. (Cited section headings: 4 Experiments; 4.1 Cosmic Shear Simulations; 4.2 Large-Scale Structure Recovery with Kinematic Weak Lensing; 4.3 Large-Scale Structure Recovery with Traditional Weak Lensing; 4.4 Reconstruction of Non-Gaussian Structures) |
| Researcher Affiliation | Collaboration | (1) Department of Computing and Mathematical Sciences, California Institute of Technology; (2) Department of Computer Science, University of Toronto; (3) David A. Dunlap Department of Astronomy & Astrophysics, University of Toronto; (4) Center for Astrophysics, Harvard & Smithsonian; (5) Google DeepMind; (6) Departments of Astronomy and Electrical Engineering, California Institute of Technology |
| Pseudocode | No | The paper describes the methodology in text and illustrates the pipeline with Figure 2, but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | All code is implemented in JAX and will be made publicly available. |
| Open Datasets | Yes | To simulate ground truth dark matter fields we used the low resolution Particle-Mesh N-body solver Jax PM (Initiative (2021)). ... The MNIST experiment (Fig. 5) was done with 4 equally spaced (in redshift) lensplanes. |
| Dataset Splits | No | For the kinematic weak lensing experiment in Sec. 4.2, we simulate shape noise in line with current estimates for the instrument capabilities of the Roman Space Telescope (Xu et al. (2023)), with a galaxy number density of n_gal = 4 arcmin^-1 and a shape noise level of σ_shape = 0.035. For the traditional weak lensing survey in Sec. 4.3, we assume no estimation has been done on the intrinsic shape of a denser field of galaxies, corresponding to a shape noise level of σ_shape = 0.25 and a galaxy number density of n_gal = 30 arcmin^-1. Finally, we assume a realistically distributed galaxy sample as in Leonard et al. (2014) of 360,000 galaxies for kinematic weak lensing and 2,700,000 galaxies in the traditional weak lensing experiment; more details are in the appendix. The paper describes the generation and size of simulated data, but does not provide explicit training, validation, or test splits for this data, nor for the MNIST dataset used. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory specifications) are mentioned in the paper regarding the experimental setup. |
| Software Dependencies | No | All code is implemented in JAX... We optimize the network weights by minimizing the loss in Eqn. 7 via gradient descent using the Adam optimizer (Kingma (2014))... we used the low resolution Particle-Mesh N-body solver Jax PM (Initiative (2021)). The paper mentions software tools like JAX, Adam optimizer, and Jax PM, but does not provide specific version numbers for any of them. |
| Experiment Setup | Yes | In our experiments, we use a deep ensemble of 100 fully connected MLPs each with 4 layers, where each layer is 256 units wide with the ReLU activation function. The network output is passed through the sigmoid activation function followed by a final single node linear layer. In our experiments we use L = 2 for angular coordinates and L = 5 for the radial coordinate. We optimize the network weights by minimizing the loss in Eqn. 7 via gradient descent using the Adam optimizer (Kingma (2014)), with exponential learning rate decay from 1e-4 to 5e-6 over 100K iterations. |
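The shape-noise figures quoted under Dataset Splits imply a per-pixel noise level once galaxy shapes are averaged within a pixel. A minimal NumPy sketch of that standard weak-lensing conversion, σ_pix = σ_shape / √(n_gal · A_pix), follows; the formula, the per-square-arcmin reading of the density, and the pixel size are conventional assumptions, not details stated in the quoted text:

```python
import numpy as np

def pixel_noise_sigma(sigma_shape, n_gal, pixel_arcmin):
    """Std. dev. of averaged shear noise in a square pixel.

    sigma_shape  : per-galaxy shape noise (e.g. 0.035 or 0.25)
    n_gal        : galaxy number density per square arcmin
    pixel_arcmin : pixel side length in arcmin
    """
    galaxies_per_pixel = n_gal * pixel_arcmin ** 2
    return sigma_shape / np.sqrt(galaxies_per_pixel)

def noisy_shear_map(shear, sigma_shape, n_gal, pixel_arcmin, seed=None):
    """Add Gaussian shape noise to a clean shear map (2D array)."""
    rng = np.random.default_rng(seed)
    sigma_pix = pixel_noise_sigma(sigma_shape, n_gal, pixel_arcmin)
    return shear + rng.normal(0.0, sigma_pix, size=shear.shape)
```

For the traditional-lensing numbers quoted above (σ_shape = 0.25, 30 galaxies per square arcmin), a 1-arcmin pixel averages 30 galaxies, reducing the effective noise to 0.25/√30 ≈ 0.046.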
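The Experiment Setup row above specifies enough to sketch one ensemble member end to end: sinusoidal positional encodings (L = 2 for the two angular coordinates, L = 5 for the radial one), four 256-wide ReLU layers, a sigmoid, a single-node linear output, and exponential learning-rate decay from 1e-4 to 5e-6 over 100K iterations. The NumPy sketch below mirrors that description; the function names, initialization, and encoding convention are illustrative assumptions, and the paper's actual implementation is in JAX:

```python
import numpy as np

def positional_encoding(x, L):
    """Map coordinates to [x, sin(2^k πx), cos(2^k πx)] for k = 0..L-1."""
    feats = [x]
    for k in range(L):
        feats.append(np.sin(2.0 ** k * np.pi * x))
        feats.append(np.cos(2.0 ** k * np.pi * x))
    return np.concatenate(feats, axis=-1)

def init_mlp(in_dim, width=256, depth=4, seed=0):
    """Random weights for `depth` hidden layers plus a final 1-unit layer."""
    rng = np.random.default_rng(seed)
    dims = [in_dim] + [width] * depth + [1]
    return [(rng.normal(0.0, np.sqrt(2.0 / d_in), (d_in, d_out)), np.zeros(d_out))
            for d_in, d_out in zip(dims[:-1], dims[1:])]

def forward(params, coords, L_ang=2, L_rad=5):
    """ReLU MLP on encoded (θ1, θ2, r); sigmoid before the final linear node."""
    ang, rad = coords[..., :2], coords[..., 2:]
    h = np.concatenate([positional_encoding(ang, L_ang),
                        positional_encoding(rad, L_rad)], axis=-1)
    for W, b in params[:-1]:
        h = np.maximum(h @ W + b, 0.0)      # ReLU hidden layers
    h = 1.0 / (1.0 + np.exp(-h))            # sigmoid on the network output
    W, b = params[-1]
    return h @ W + b                        # final single-node linear layer

def lr_schedule(step, lr0=1e-4, lr1=5e-6, total=100_000):
    """Exponential decay from lr0 to lr1 over `total` iterations."""
    return lr0 * (lr1 / lr0) ** (step / total)
```

With these encodings the input width is 2·(1 + 2·2) + (1 + 2·5) = 21, so `init_mlp(21)` builds one member; the paper's deep ensemble would repeat this 100 times with different seeds.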