Deep Molecular Programming: A Natural Implementation of Binary-Weight ReLU Neural Networks
Authors: Marko Vasic, Cameron Chalk, Sarfraz Khurshid, David Soloveichik
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We illustrate such translation on the paradigmatic IRIS and MNIST datasets. Toward intended applications of chemical computation, we further use our method to generate a chemical reaction network that can discriminate between different virus types based on gene expression levels. In Section 4, we give simulation results on our chemical classifiers for IRIS, MNIST, and viral infection classification, and verify that their outputs match the neural networks they implement. |
| Researcher Affiliation | Academia | The University of Texas at Austin, USA. |
| Pseudocode | Yes | Algorithm 1 NNCompile(±1-weight neural network: nn) and Algorithm 2 reduce(CRN: crn). A hedged CRN sketch illustrating the compiled ReLU computation appears below the table. |
| Open Source Code | No | The paper mentions 'CRNSimulator' and provides a link to it (http://users.ece.utexas.edu/~soloveichik/crnsimulator.html), but this is a third-party tool used by the authors, not the source code for the methodology described in the paper itself. |
| Open Datasets | Yes | IRIS (Anderson, 1936; Fisher, 1936), MNIST (LeCun et al., 1998), and Virus Infection. For the virus infection classifier, we used data from NCBI GSE73072 (GSE73072). The dataset contains microarray data capturing gene expression profiles of humans, with the goal of studying four viral infections: H1N1, H3N2, RSV, and HRV (labels). The URL for GSE73072 is https://www.ncbi.nlm.nih.gov/geo/query/acc.cgi?acc=GSE73072. |
| Dataset Splits | Yes | We split the original MNIST training set consisting of 60,000 images into 50,000 for the training set, and 10,000 for the validation set. Finally, we have a total of 698 examples, split into 558 for training, 34 for validation, and 104 for testing. A split sketch appears below the table. |
| Hardware Specification | No | The paper discusses computational efficiency for 'specialized deep learning hardware' in general, but does not provide specific hardware details (e.g., CPU/GPU models) used for running its own experiments. |
| Software Dependencies | No | The paper mentions 'CRNSimulator package (CRNSimulator)' and 'the published implementation of Binary Connect networks (Courbariaux et al., 2015)', but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | We use the square hinge loss (as in Binary Connect) with ADAM optimizer. We train on MNIST dataset for 250 epochs, measuring the validation accuracy at each epoch, and returning the model that achieves the best validation accuracy during training. We train on IRIS dataset for 10,000 epochs, and return the best performing epoch. We train on the Virus Infection dataset for 200 epochs, and return the model that achieved the best validation set accuracy. We use an exponentially decaying learning rate. A training-recipe sketch appears below the table. |
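
The Pseudocode row references Algorithm 1 NNCompile, which translates a ±1-weight ReLU network into a chemical reaction network. As a rough illustration of the core idea rather than the paper's exact construction, the sketch below simulates a two-reaction CRN that computes ReLU(x) under the dual-rail encoding x = [Xp] − [Xm]; the species names, rate constants, and offset encoding are assumptions made for the demonstration.

```python
# A minimal sketch (not the paper's exact reaction set): a two-reaction CRN
# that computes ReLU(x) under the dual-rail encoding x = [Xp] - [Xm].
# Species names and rate constants are illustrative assumptions; since the
# construction is rate-independent, the constants do not affect the
# steady-state answer.
from scipy.integrate import solve_ivp

K1, K2 = 1.0, 1.0  # assumed mass-action rate constants

def crn_odes(t, s):
    """Mass-action ODEs for:  Xp -> Y   and   Xm + Y -> (waste)."""
    xp, xm, y = s
    return [
        -K1 * xp,               # Xp is converted into the output Y
        -K2 * xm * y,           # Xm annihilates Y one-for-one
        K1 * xp - K2 * xm * y,  # net production of the output species Y
    ]

def relu_via_crn(x, t_end=200.0):
    """Encode x dual-rail, integrate to near-steady-state, read off [Y]."""
    # Dual-rail encodings are not unique; the +0.5 offset on both rails
    # exercises the annihilation reaction and shows that only the
    # difference [Xp] - [Xm] determines the output.
    xp, xm = max(x, 0.0) + 0.5, max(-x, 0.0) + 0.5
    sol = solve_ivp(crn_odes, (0.0, t_end), [xp, xm, 0.0],
                    rtol=1e-9, atol=1e-12)
    return sol.y[2, -1]

for x in (-1.5, 0.5, 2.0):
    print(f"x = {x:+.1f} -> [Y] ~= {relu_via_crn(x):.4f}"
          f" (ReLU = {max(x, 0.0):.4f})")
```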
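
The Dataset Splits row quotes a 50,000/10,000 train/validation split of MNIST's 60,000 training images and a 558/34/104 split of the virus-infection data. Below is a minimal sketch of how such splits could be reproduced, assuming scikit-learn and a fixed seed; the paper specifies neither its splitting code nor a random seed.

```python
# A minimal sketch of the quoted splits. The use of scikit-learn and the
# seed value are assumptions; the paper does not state either.
from sklearn.model_selection import train_test_split

def split_mnist(images, labels, seed=0):
    """Carve 50,000 train / 10,000 validation examples out of MNIST's
    60,000-image training set (the 10,000-image test set is untouched)."""
    x_train, x_val, y_train, y_val = train_test_split(
        images, labels, test_size=10_000, random_state=seed)
    return (x_train, y_train), (x_val, y_val)

def split_virus(examples, labels, seed=0):
    """Hold out the quoted 104-example test and 34-example validation
    sets; the remainder becomes the training set."""
    x_rest, x_test, y_rest, y_test = train_test_split(
        examples, labels, test_size=104, random_state=seed)
    x_train, x_val, y_train, y_val = train_test_split(
        x_rest, y_rest, test_size=34, random_state=seed)
    return (x_train, y_train), (x_val, y_val), (x_test, y_test)
```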
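
The Experiment Setup row describes training with square hinge loss, the Adam optimizer, an exponentially decaying learning rate, and selection of the best-validation-accuracy epoch. A hedged Keras sketch of that recipe follows; the hidden-layer width, initial learning rate, and decay schedule are illustrative assumptions, and BinaryConnect-style weight binarization (which would wrap each Dense layer) is omitted for brevity.

```python
# A hedged sketch of the reported training recipe: square hinge loss, Adam,
# exponential LR decay, and keeping the best-validation-accuracy epoch.
# Layer sizes and learning-rate values are assumptions, not from the paper.
import tensorflow as tf

def train_sketch(x_train, y_train, x_val, y_val, epochs=250):
    # y_* are expected as {-1, +1} one-vs-rest targets, as square hinge assumes.
    lr = tf.keras.optimizers.schedules.ExponentialDecay(
        initial_learning_rate=1e-3, decay_steps=1_000, decay_rate=0.96)
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(100, activation="relu"),  # assumed hidden width
        tf.keras.layers.Dense(y_train.shape[1]),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
        loss=tf.keras.losses.SquaredHinge(),
        metrics=[tf.keras.metrics.CategoricalAccuracy()],
    )
    # "Return the model that achieves the best validation accuracy":
    best = tf.keras.callbacks.ModelCheckpoint(
        "best.weights.h5", monitor="val_categorical_accuracy",
        save_best_only=True, save_weights_only=True)
    model.fit(x_train, y_train, validation_data=(x_val, y_val),
              epochs=epochs, callbacks=[best])
    model.load_weights("best.weights.h5")
    return model
```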