Learning Broadcast Protocols
Authors: Dana Fisman, Noa Izsak, Swen Jacobs
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We provide a learning algorithm that can infer a correct BP from a sample that is consistent with a fine BP, and a minimal equivalent BP if the sample is sufficiently complete. On the negative side, we show that (a) characteristic sets of exponential size are unavoidable, (b) the consistency problem for fine BPs is NP-hard, and (c) fine BPs are not polynomially predictable. |
| Researcher Affiliation | Academia | ¹Ben-Gurion University; ²CISPA Helmholtz Center for Information Security |
| Pseudocode | No | The paper describes algorithms and procedures in prose (e.g., 'The inference algorithm I we devise constructs...', 'The CS generation algorithm G builds...'), but it does not present them in structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement about releasing source code or a link to a code repository for the described methodology. |
| Open Datasets | No | The paper is theoretical and does not conduct experiments with actual datasets. It discusses 'samples' in a conceptual sense for learning algorithms, not as specific publicly available datasets used in empirical studies. |
| Dataset Splits | No | The paper is theoretical and does not conduct empirical experiments, thus it does not mention training, validation, or test dataset splits. |
| Hardware Specification | No | The paper is theoretical and does not describe any specific hardware used to run experiments. |
| Software Dependencies | No | The paper is theoretical and does not specify any software dependencies with version numbers for experimental reproducibility. It mentions an 'SMT solver' conceptually but without version details. |
| Experiment Setup | No | The paper is theoretical and does not conduct empirical experiments, thus it does not describe an experimental setup with hyperparameters or system-level training settings. |
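The consistency problem mentioned above (deciding whether any fine BP of a given size is consistent with a sample) is shown to be NP-hard in the paper. As an illustrative sketch only, the following snippet shows the shape of such a consistency check for the simpler setting of DFAs rather than broadcast protocols: given positive and negative sample words, it brute-forces whether any DFA with at most `k` states fits the sample. The function name and the DFA stand-in are my own illustration, not the paper's algorithm, and the enumeration is exponential in `k`, mirroring why naive approaches to such consistency questions do not scale.

```python
from itertools import product

def consistent_dfa_exists(pos, neg, alphabet, k):
    """Illustrative brute force: does some DFA with at most k states
    (state 0 initial) accept every word in `pos` and reject every word
    in `neg`?  Search space is exponential in k and |alphabet|."""
    states = range(k)
    keys = [(q, a) for q in states for a in alphabet]
    # Enumerate every transition function delta: states x alphabet -> states.
    for trans in product(states, repeat=len(keys)):
        delta = dict(zip(keys, trans))
        # Enumerate every choice of accepting-state set.
        for acc in product([False, True], repeat=k):
            def run(word):
                q = 0
                for a in word:
                    q = delta[(q, a)]
                return acc[q]
            if all(run(w) for w in pos) and not any(run(w) for w in neg):
                return True
    return False
```

For example, no 1-state DFA over {a} can accept "a" while rejecting the empty word, but a 2-state DFA can, so the check returns False for `k=1` and True for `k=2` on that sample.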