Saul: Towards Declarative Learning Based Programming

Authors: Parisa Kordjamshidi, Dan Roth, Hao Wu

IJCAI 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Table 1 shows the experimental results of running the learning and inference models described in Section 4.1 on the CoNLL-04 data for the Entity-mention-Relation task (see Section 3).
Researcher Affiliation | Academia | Parisa Kordjamshidi, Dan Roth, Hao Wu, University of Illinois at Urbana-Champaign, {kordjam,danr,haowu4}@illinois.edu
Pseudocode | No | The paper includes Scala code snippets, but no structured pseudocode or algorithm blocks are provided (an illustrative sketch of such a snippet appears after this table).
Open Source Code | Yes | Saul (footnote 3): http://cogcomp.cs.illinois.edu/page/software_view/Saul
Open Datasets | Yes | Table 1 shows the experimental results... on the CoNLL-04 data for the Entity-mention-Relation task (see Section 3).
Dataset Splits | Yes | 5-fold cross-validation on the CoNLL-04 dataset (a sketch of such a split appears after this table).
Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions that Saul is "written in Scala", but it does not provide version numbers for Scala or for any other software dependencies, libraries, or solvers.
Experiment Setup | No | Footnote 2 states: "Setting specific algorithms parameters can be done by the programmer, or automatically by Saul; these details are omitted." Specific experimental setup details, such as hyperparameters or training parameters, are therefore not provided.
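
As noted in the Pseudocode row, the paper presents its models as Scala code snippets in Saul's embedded DSL rather than as pseudocode. The sketch below, in plain Scala, only illustrates the flavor of such a declarative entity-relation setup; all names (Token, Relation, worksForImpliesPerson, etc.) are hypothetical and do not reproduce the actual Saul API.

```scala
// Illustrative sketch only: plain Scala mimicking the declarative style the
// paper describes for the Entity-mention-Relation task. Names are hypothetical
// and are NOT taken from the Saul library.
case class Token(word: String, posTag: String, entityLabel: String)
case class Relation(arg1: Token, arg2: Token, label: String)

object EntityRelationSketch {
  // "Properties" in the declarative style: feature functions over tokens.
  val wordForm: Token => String = _.word
  val pos: Token => String = _.posTag

  // A global constraint in the spirit of Section 3 of the paper:
  // a WorksFor relation requires its first argument to be a person.
  def worksForImpliesPerson(r: Relation): Boolean =
    r.label != "WorksFor" || r.arg1.entityLabel == "Per"

  def main(args: Array[String]): Unit = {
    val hao  = Token("Hao", "NNP", "Per")
    val uiuc = Token("UIUC", "NNP", "Org")
    val rel  = Relation(hao, uiuc, "WorksFor")
    println(s"Constraint satisfied: ${worksForImpliesPerson(rel)}")
  }
}
```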
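
The Dataset Splits row reports 5-fold cross-validation on CoNLL-04. A minimal, Saul-independent sketch of generating such a split is shown below; the round-robin fold assignment is an assumption, since the paper does not describe how its folds were drawn.

```scala
// Minimal 5-fold cross-validation split, independent of Saul. The round-robin
// assignment of examples to folds is an assumption; the paper does not say
// how its folds were constructed.
object FiveFoldSplitSketch {
  def folds[A](data: Seq[A], k: Int = 5): Seq[(Seq[A], Seq[A])] = {
    // Assign example i to fold i % k, then pair each fold (test) with the rest (train).
    val byFold = data.zipWithIndex.groupBy { case (_, i) => i % k }
    (0 until k).map { f =>
      val test  = byFold.getOrElse(f, Seq.empty).map(_._1)
      val train = data.indices.filter(_ % k != f).map(data)
      (train, test)
    }
  }

  def main(args: Array[String]): Unit = {
    val examples = (1 to 12).map(i => s"sentence-$i")
    folds(examples).zipWithIndex.foreach { case ((train, test), f) =>
      println(s"fold $f: train=${train.size}, test=${test.size}")
    }
  }
}
```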