ContextNet: Deep learning for Star Galaxy Classification

Authors: Noble Kennamer, David Kirkby, Alexander Ihler, Francisco Javier Sanchez-Lopez

ICML 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We train and test our model on simulations of a large upcoming ground-based survey, the Large Synoptic Survey Telescope (LSST), and compare to the state-of-the-art approach, showing improved overall performance as well as better performance for a specific class of objects that are important for the LSST.
Researcher Affiliation | Academia | Department of Computer Science, University of California, Irvine; Department of Physics and Astronomy, University of California, Irvine. Correspondence to: Noble Kennamer <nkenname@uci.edu>.
Pseudocode | No | The paper describes the architecture of the neural networks but does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Our code and information for data access can be found at https://github.com/NobleKennamer/ContextNet.
Open Datasets | Yes | We present our results on simulations of LSST observations using the GalSim image simulation package (Rowe et al., 2015), which was designed and developed by a large group of domain scientists.
Dataset Splits | No | Our training set consists of 5000 exposures, each containing 1000 sources. The test set consists of 1000 exposures, each containing anywhere from 1200 to 2200 objects. The paper specifies training and test sets but does not mention a validation set or a specific validation split.
Hardware Specification | No | The paper describes the model architecture and training data but does not specify the hardware used for computation (e.g., CPU or GPU models, memory).
Software Dependencies | No | The paper describes neural network layers (Conv, Dense, ELU, Sigmoid) but does not provide version numbers for the software dependencies or libraries used (e.g., TensorFlow, PyTorch, or Keras).
Experiment Setup | No | The local network takes a cutout of dimension (28, 28), with layers Conv(filters=64, kernel=(3, 3)), ELU, Conv(filters=128, kernel=(3, 3)), ELU, Flatten, Dense(20), ELU. The global network takes the concatenation of the local features from the 1000 objects in an exposure, with layers Dense(1000), ELU, Dense(1000), ELU, Dense(1000), ELU. The prediction network takes the local features for a single object together with the global features, with layers Dense(100), ELU, Dense(100), ELU, Dense(1), Sigmoid. The final output is the probability that the object is a galaxy, and the model is trained with binary cross entropy. The paper describes the model architecture and loss function but lacks specific hyperparameters such as learning rate, batch size, or optimizer details.
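The three-network architecture described in that row can be sketched in code. The following is a minimal PyTorch illustration (the paper does not state which framework was used), assuming single-channel cutouts, unpadded "valid" convolutions, one exposure per forward pass, and a configurable number of objects per exposure (the paper fixes this at 1000); none of those choices are confirmed by the source.

```python
import torch
import torch.nn as nn

class ContextNet(nn.Module):
    """Sketch of the local / global / prediction networks described in the paper.

    Assumptions not stated in the source: 1-channel (28, 28) cutouts,
    unpadded 3x3 convolutions, and a parameterized n_objects (paper uses 1000).
    """

    def __init__(self, n_objects=1000):
        super().__init__()
        self.n_objects = n_objects
        # Local network: one (28, 28) cutout -> 20-dim local feature.
        # With unpadded 3x3 convs: 28 -> 26 -> 24, so Flatten gives 128*24*24.
        self.local = nn.Sequential(
            nn.Conv2d(1, 64, kernel_size=3), nn.ELU(),
            nn.Conv2d(64, 128, kernel_size=3), nn.ELU(),
            nn.Flatten(),
            nn.Linear(128 * 24 * 24, 20), nn.ELU(),
        )
        # Global network: concatenated local features of all objects -> 1000-dim.
        self.global_net = nn.Sequential(
            nn.Linear(20 * n_objects, 1000), nn.ELU(),
            nn.Linear(1000, 1000), nn.ELU(),
            nn.Linear(1000, 1000), nn.ELU(),
        )
        # Prediction network: [local_i, global] -> P(object i is a galaxy).
        self.predict = nn.Sequential(
            nn.Linear(20 + 1000, 100), nn.ELU(),
            nn.Linear(100, 100), nn.ELU(),
            nn.Linear(100, 1), nn.Sigmoid(),
        )

    def forward(self, cutouts):
        # cutouts: (n_objects, 1, 28, 28), all cutouts from one exposure.
        local = self.local(cutouts)                 # (n_objects, 20)
        g = self.global_net(local.reshape(1, -1))   # (1, 1000)
        g = g.expand(local.shape[0], -1)            # shared global feature
        return self.predict(torch.cat([local, g], dim=1))  # (n_objects, 1)
```

Per the paper's description, the sigmoid output would be trained against binary star/galaxy labels with binary cross entropy, e.g. `nn.BCELoss()`; optimizer and learning rate are not specified in the source.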