ROCES: Robust Class Expression Synthesis in Description Logics via Iterative Sampling

Authors: N'Dah Jean Kouagou, Stefan Heindorf, Caglar Demir, Axel-Cyrille Ngonga Ngomo

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results suggest that, post-training, ROCES outperforms existing synthesis-based approaches on out-of-distribution learning problems while remaining highly competitive overall.
Researcher Affiliation | Academia | Data Science Research Group, Paderborn University, Germany
Pseudocode | Yes | Algorithm 1: Learning Step (a hypothetical sketch of such a step is given after the table)
Open Source Code | Yes | 'We provide the source code, pretrained models, and open-access datasets for reproducible research.' https://github.com/dice-group/ROCES
Open Datasets | Yes | Same statement and repository as for Open Source Code: https://github.com/dice-group/ROCES
Dataset Splits | No | The paper describes the test set composition ('the test set consists of 100 learning problems on each benchmark dataset') and mentions training, but it does not specify explicit training/validation/test splits (e.g., percentages or counts) for model training or hyperparameter tuning. Its reference to [Kouagou et al., 2023a] for complete statistics concerns the test set, not internal training/validation splits.
Hardware Specification | Yes | 'We trained the learner fΘ using Algorithm 1 (that is, ROCES) on a virtual machine equipped with 64 AMD EPYC 9334 32-Core Processors @3.91GHz, and an NVIDIA A100 80GB GPU. Post training, we used a server with 16 Intel Xeon E5-2695 CPUs @2.30GHz and 128GB RAM to conduct experiments on CEL.'
Software Dependencies | No | The paper mentions specific models such as the ConEx embedding model and the Set Transformer, but it does not provide version numbers for any software dependencies, e.g., the Python, PyTorch, or TensorFlow versions used.
Experiment Setup | No | The paper states, 'We report hyper-parameter settings in the supplemental material due to space constraints.' The main text therefore omits specific hyperparameter values such as the learning rate, batch size, embedding dimension, and number of inducing points.
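
For readers who want a concrete picture of what an iterative-sampling learning step can look like, the sketch below is a minimal, hypothetical illustration in PyTorch. It is not the paper's Algorithm 1: the SetSynthesizer module, the random subset sizes, and the token-sequence loss are all illustrative assumptions, whereas the actual ROCES learner uses a Set Transformer over ConEx embeddings and its own sampling schedule.

```python
# Hypothetical sketch only: NOT the paper's Algorithm 1.
import random
import torch
import torch.nn as nn


class SetSynthesizer(nn.Module):
    """Toy stand-in for a set-based synthesizer (e.g., a Set Transformer).

    It pools a set of positive and a set of negative example embeddings
    and predicts a fixed-length sequence of expression tokens. The
    architecture and all dimensions are illustrative assumptions.
    """

    def __init__(self, emb_dim=32, hidden=64, vocab_size=100, max_len=8):
        super().__init__()
        self.encode = nn.Linear(emb_dim, hidden)
        self.decode = nn.Linear(hidden, vocab_size * max_len)
        self.vocab_size, self.max_len = vocab_size, max_len

    def forward(self, pos, neg):
        # Mean-pool each example set, then combine the two summaries.
        summary = torch.relu(self.encode(pos.mean(0)) + self.encode(neg.mean(0)))
        return self.decode(summary).view(self.max_len, self.vocab_size)


def learning_step(model, optimizer, problem, num_samples=4, k_max=16):
    """One iterative-sampling step (hypothetical): resample example
    subsets of varying sizes from the same learning problem, accumulate
    the token-level loss, and apply a single parameter update."""
    pos_embs, neg_embs, target_tokens = problem
    loss_fn = nn.CrossEntropyLoss()
    optimizer.zero_grad()
    total = 0.0
    for _ in range(num_samples):
        # Varying subset sizes are meant to make the synthesizer robust
        # to the cardinality of the input example sets.
        k = random.randint(1, min(k_max, len(pos_embs), len(neg_embs)))
        pos = pos_embs[torch.randperm(len(pos_embs))[:k]]
        neg = neg_embs[torch.randperm(len(neg_embs))[:k]]
        logits = model(pos, neg)              # (max_len, vocab_size)
        total = total + loss_fn(logits, target_tokens)
    total.backward()
    optimizer.step()
    return float(total) / num_samples


# Usage with random stand-in data (no real knowledge base required):
model = SetSynthesizer()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
problem = (torch.randn(50, 32),            # positive example embeddings
           torch.randn(40, 32),            # negative example embeddings
           torch.randint(0, 100, (8,)))    # target expression tokens
print(learning_step(model, opt, problem))
```

The resampling loop reflects the intent suggested by the paper's title: by repeatedly drawing example subsets of different sizes from the same learning problem, the synthesizer is exposed to many input-set cardinalities during training, which is what "robust" synthesis via iterative sampling plausibly refers to here.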