Learning Continuous Semantic Representations of Symbolic Expressions
Authors: Miltiadis Allamanis, Pankajan Chanthirasegaran, Pushmeet Kohli, Charles Sutton
ICLR 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform an exhaustive evaluation on the task of checking equivalence on a highly diverse class of symbolic algebraic and boolean expression types, showing that our model significantly outperforms existing architectures. |
| Researcher Affiliation | Collaboration | Miltiadis Allamanis (1), Pankajan Chanthirasegaran (1), Pushmeet Kohli (2), Charles Sutton (1, 3). (1) School of Informatics, University of Edinburgh, Edinburgh, UK; (2) Microsoft Research, Redmond, WA, USA; (3) The Alan Turing Institute, London, UK |
| Pseudocode | No | The paper provides architectural diagrams and mathematical formulations for EQNET components in Figure 1, but no structured pseudocode block labeled "Algorithm" or "Pseudocode". |
| Open Source Code | Yes | Code and data are available at groups.inf.ed.ac.uk/cup/semvec. |
| Open Datasets | Yes | We provide the datasets online. |
| Dataset Splits | Yes | Then, to create SEENEQCLASS, we take the remaining 80% of the equivalence classes, and randomly split the expressions in each class into training, validation, and SEENEQCLASS test sets in the proportions 60%/15%/25%. (See the split sketch after this table.) |
| Hardware Specification | No | The paper does not provide specific hardware details such as CPU/GPU models or cloud computing instances used for experiments. |
| Software Dependencies | No | The paper mentions "Spearmint (Snoek et al., 2012) Bayesian optimization package" but does not specify a version number for it or for any other software libraries or frameworks used. |
| Experiment Setup | Yes | The optimized hyperparameters are detailed in Table 4. All hyperparameters were optimized using the Spearmint (Snoek et al., 2012) Bayesian optimization package. The same range of values was used for all common model hyperparameters. (Table 4 includes details such as learning rate 10^-2.1, minibatch size 900, representation size D = 64, dropout rate 0.11, etc.; see the configuration sketch after this table.) |
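
The split quoted in the Dataset Splits row can be made concrete with a short sketch. This is a minimal illustration under stated assumptions, not the authors' released code: the function name `split_equivalence_classes` is hypothetical, and it assumes expressions arrive grouped by equivalence class, with the held-out 20% of classes forming the UNSEENEQCLASS test set described in the paper.

```python
import random

def split_equivalence_classes(eq_classes, seed=0):
    """Sketch of the paper's split: hold out 20% of equivalence classes,
    then split the expressions of each remaining class 60/15/25 into
    train / validation / SEENEQCLASS test."""
    rng = random.Random(seed)
    classes = list(eq_classes.items())
    rng.shuffle(classes)

    n_unseen = int(0.2 * len(classes))
    unseen_test = dict(classes[:n_unseen])  # UNSEENEQCLASS test set

    train, valid, seen_test = {}, {}, {}
    for name, exprs in classes[n_unseen:]:
        exprs = list(exprs)
        rng.shuffle(exprs)
        a = int(0.60 * len(exprs))
        b = int(0.75 * len(exprs))  # 60% train + 15% validation
        train[name] = exprs[:a]
        valid[name] = exprs[a:b]
        seen_test[name] = exprs[b:]  # remaining 25%
    return train, valid, seen_test, unseen_test
```

Splitting at the class level for the held-out 20% matters: it tests whether a model generalizes to equivalence classes never seen in training, not just to new expressions from known classes.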
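Similarly, the Table 4 values quoted in the Experiment Setup row translate to a configuration along these lines. The key names are invented for this sketch, and the paper's Table 4 lists more hyperparameters than the four reproduced above.

```python
# Hypothetical config dict assembled from the Table 4 values quoted above;
# key names are our own, and Table 4 contains further hyperparameters.
EQNET_HYPERPARAMS = {
    "learning_rate": 10 ** -2.1,  # reported as 10^-2.1 (~0.0079)
    "minibatch_size": 900,
    "representation_size": 64,    # D = 64
    "dropout_rate": 0.11,
}
```

Per the paper, these values were selected with the Spearmint Bayesian optimization package, using the same search range for all common model hyperparameters.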