Learning of Structurally Unambiguous Probabilistic Grammars

Authors: Dolav Nitay, Dana Fisman, Michal Ziv-Ukelson (pp. 9170-9178)

AAAI 2021

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | As a proof-of-concept, in Section 6 we exemplify our algorithm by applying it to a small data-set of genomic data. |
| Researcher Affiliation | Academia | Dolav Nitay*, Dana Fisman, Michal Ziv-Ukelson, Ben Gurion University; dolavn@post.bgu.ac.il, dana@cs.bgu.ac.il, michaluz@bgu.ac.il |
| Pseudocode | Yes | Algorithm 1 Learn CMTA(T, C, H, B) ... Algorithm 5 Extract CMTA |
| Open Source Code | No | The paper does not provide any explicit statement about open-sourcing the code for the described methodology, nor a link to a code repository. |
| Open Datasets | No | The paper mentions using 'a small data-set of genomic data' and an 'MDR dataset' for demonstration, but provides no concrete access information (e.g., a link, DOI, or a specific citation with author and year) that would make them publicly available. |
| Dataset Splits | No | The paper applies the algorithm to genomic data and gene-cluster grammars as a demonstration, but specifies no training, validation, or test dataset splits. |
| Hardware Specification | No | The paper gives no specific details about the hardware (e.g., CPU or GPU models, memory) used for its experiments or demonstrations. |
| Software Dependencies | No | The paper specifies no software dependencies with version numbers (e.g., Python, PyTorch, specific libraries or solvers) used for its implementation or experiments. |
| Experiment Setup | No | The paper shows a PCFG learned from genomic data as a demonstration, but gives no specific details of the experimental setup, such as hyperparameters, optimizer settings, training configurations, or other system-level settings for the learning algorithm. |
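The "Dataset Splits" and "Experiment Setup" rows note that the paper reports neither a split protocol nor seeds and hyperparameters. A minimal Python sketch of the kind of deterministic, seeded split a reproduction would need to document (the `split_dataset` helper and the `seq...` placeholder data are hypothetical illustrations, not taken from the paper or its MDR dataset):

```python
import random

def split_dataset(sequences, train_frac=0.8, val_frac=0.1, seed=0):
    """Deterministically partition a dataset into train/validation/test.

    A fixed seed and recorded fractions are the minimum needed for a
    reproducible split report; both are hypothetical defaults here.
    """
    rng = random.Random(seed)
    shuffled = sequences[:]
    rng.shuffle(shuffled)
    n = len(shuffled)
    n_train = int(n * train_frac)
    n_val = int(n * val_frac)
    return (shuffled[:n_train],
            shuffled[n_train:n_train + n_val],
            shuffled[n_train + n_val:])

# Placeholder stand-in for a corpus of genomic sequences.
data = [f"seq{i}" for i in range(100)]
train, val, test = split_dataset(data, seed=42)
```

Reporting the seed, the fractions, and the resulting split sizes (here 80/10/10) is what the checklist entry above finds missing from the paper.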