Estimating Cosmological Parameters from the Dark Matter Distribution

Authors: Siamak Ravanbakhsh, Junier Oliva, Sebastian Fromenteau, Layne Price, Shirley Ho, Jeff Schneider, Barnabas Poczos

ICML 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | This paper presents the application of deep 3D convolutional networks to volumetric representation of dark-matter simulations as well as the results obtained using a recently proposed distribution regression framework, showing that machine learning techniques are comparable to, and can sometimes outperform, maximum-likelihood point estimates using cosmological models. In all experiments, we use 90% of the data for training and the remaining 10% for testing.
Researcher Affiliation | Academia | School of Computer Science, Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA 15213, USA; McWilliams Center for Cosmology, Department of Physics, Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA 15213, USA
Pseudocode | No | No pseudocode or algorithm blocks were found.
Open Source Code | No | The paper does not provide an explicit statement or link indicating that its code or methodology is open-sourced.
Open Datasets | No | We rely on direct dark matter simulations produced using different cosmological parameters and random seeds. For the first study we generate 500 cubic simulations with a size of 512 h⁻¹ Mpc with 512³ dark matter particles... The simulations in the second dataset are based on a particle-particle-particle-mesh (P3M) algorithm from Trac et al. (2015). The paper describes generating custom data but does not provide access information for it.
Dataset Splits | Yes | In all experiments, we use 90% of the data for training and the remaining 10% for testing. The free parameters δ, the bandwidth, and λ, the regularizer, were chosen by validation on a held-out portion of the training set. (A minimal split sketch is given after this table.)
Hardware Specification | Yes | Each simulation on average requires 6 CPU hours on 2GHz processors and the final raw snapshot is about 1GB in size.
Software Dependencies | No | The paper mentions software such as the COLA code, CAMB code, and HALOFIT, but does not provide specific version numbers for these or other software dependencies.
Experiment Setup | Yes | We use leaky rectified linear units (ReLU)... We used the leak parameter c = .01 in f(X) = max(0, X) + c·min(0, X). We used average pooling in our model... Batch normalization (Ioffe & Szegedy, 2015) is necessary... Regularization is enforced by drop-out at fully connected layers, where 50% of units are ignored during each activation... For training with backpropagation, we use Adam (Kingma & Ba, 2014) with a learning rate of .0005 and first and second moment exponential decay rates of .9 and .999, respectively. (A hedged training-setup sketch follows this table.)
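The Dataset Splits row reports a 90/10 train/test split with a further held-out portion of the training set used to choose the bandwidth δ and regularizer λ. A minimal sketch of such a split is shown below; the array names, random seed, and the 10% validation fraction are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

# Illustrative 90/10 train/test split with an additional held-out
# validation portion of the training set, as described under "Dataset Splits".
rng = np.random.default_rng(seed=0)
n_sims = 500                          # first dataset: 500 cubic simulations
indices = rng.permutation(n_sims)

n_test = int(0.10 * n_sims)           # 10% of the data held out for testing
test_idx = indices[:n_test]
train_idx = indices[n_test:]

# Hold out part of the training set to choose the bandwidth (delta) and
# regularizer (lambda); the validation fraction here is an assumption.
n_val = int(0.10 * len(train_idx))
val_idx = train_idx[:n_val]
train_idx = train_idx[n_val:]
```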
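The Experiment Setup row lists the reported hyperparameters: leaky ReLU with leak c = 0.01, average pooling, batch normalization, 50% dropout at the fully connected layers, and Adam with learning rate 0.0005 and moment decay rates 0.9 and 0.999. The sketch below wires these together for a 3D convolutional regressor; the framework (PyTorch), layer count, channel sizes, and the final global pooling are assumptions made for illustration and are not stated in the paper.

```python
import torch
import torch.nn as nn

# Hedged sketch of a 3D CNN using the reported activation, pooling,
# normalization, dropout, and optimizer settings. Architecture details
# (channels, depth, global pooling) are illustrative assumptions.
class Conv3DRegressor(nn.Module):
    def __init__(self, n_outputs=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),
            nn.BatchNorm3d(16),
            nn.LeakyReLU(0.01),            # leak parameter c = 0.01
            nn.AvgPool3d(2),               # average pooling
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.BatchNorm3d(32),
            nn.LeakyReLU(0.01),
            nn.AvgPool3d(2),
            nn.AdaptiveAvgPool3d(1),       # global pooling so the head is size-agnostic (assumption)
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32, 128),
            nn.LeakyReLU(0.01),
            nn.Dropout(p=0.5),             # 50% of units ignored during each activation
            nn.Linear(128, n_outputs),     # e.g. regress two cosmological parameters
        )

    def forward(self, x):                  # x: (batch, 1, D, H, W) density volume
        return self.head(self.features(x))

model = Conv3DRegressor()
# Adam with the reported learning rate and first/second moment decay rates.
optimizer = torch.optim.Adam(model.parameters(), lr=5e-4, betas=(0.9, 0.999))
```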