Learning from Ontology Streams with Semantic Concept Drift
Authors: Jiaoyan Chen, Freddy Lecue, Jeff Z. Pan, Huajun Chen
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experiments show accurate prediction with data from Dublin and Beijing. |
| Researcher Affiliation | Collaboration | Jiaoyan Chen, Zhejiang University, China (jiaoyanchen@zju.edu.cn); Freddy Lécué, INRIA, France and Accenture Labs, Ireland (freddy.lecue@inria.fr); Jeff Z. Pan, University of Aberdeen, United Kingdom (jeff.z.pan@abdn.ac.uk); Huajun Chen, Zhejiang University, China (huajunsir@zju.edu.cn) |
| Pseudocode | Yes | Algorithm 1: SignificantDrift(O, S_0^n, ε, σmin) and Algorithm 2: PredictionModel(O, S_0^n, ε, σmin, κ, N) |
| Open Source Code | No | The paper does not provide any explicit statement about making its source code publicly available, nor does it provide a link to a code repository. |
| Open Datasets | No | The paper describes the 'Beijing Air Quality (BAQ) Context' and 'Dublin Bus Delay (DBD) Context' datasets, providing details on their characteristics and data collection, but it does not provide concrete access information (e.g., a specific link, DOI, repository name, or formal citation to a publicly available dataset with author and year) for these datasets. |
| Dataset Splits | No | The paper mentions 'Validation: Accuracy is measured by comparing predictions with real-world observations in cities.' It also states that a model is trained using samples of the form {(e_i, g_i) \| i ∈ {1, ..., N}}. However, it does not specify explicit training, validation, or test dataset split percentages or methodologies (e.g., cross-validation folds, specific sample counts for each split). |
| Hardware Specification | Yes | The system is tested on: 16 Intel(R) Xeon(R) CPU E5-2680, 2.80GHz cores, 32GB RAM. |
| Software Dependencies | No | The paper mentions various software components and concepts like 'OWL (Web Ontology Language)', 'Description Logics (DL) EL++', 'Stochastic Gradient Descent method', 'ARIMA', 'Hoeffding Adaptive Tree (HAT)', and 'Leveraging Bagging (LB)'. However, it does not provide specific version numbers for any of these software dependencies, which are necessary for full reproducibility. |
| Experiment Setup | Yes | Algorithm 2 details its input parameters: lower limit ε ∈ (0, 1], minimum drift significance σmin, proportion κ of snapshots with concept drift used for modelling, and number of snapshots N. The experimental results section also specifies N = 1,500 and three settings of (ε, σmin, κ): a consistent model with (.9, .9, .1), a mixed model with (.5, .5, .5), and an inconsistent model with (.1, .1, .9). |
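As a minimal sketch, the (ε, σmin, κ) settings reported in the Experiment Setup row can be recorded in a small configuration table. The parameter names `epsilon`, `sigma_min`, and `kappa` are ASCII renderings of the paper's symbols, and the `describe` helper is hypothetical, not part of the paper's code:

```python
# Hypothetical encoding of the experiment configurations reported in the paper.
# The three (epsilon, sigma_min, kappa) settings correspond to the consistent,
# mixed, and inconsistent models; N = 1,500 snapshots in all runs.

N_SNAPSHOTS = 1500

MODEL_CONFIGS = {
    "consistent":   {"epsilon": 0.9, "sigma_min": 0.9, "kappa": 0.1},
    "mixed":        {"epsilon": 0.5, "sigma_min": 0.5, "kappa": 0.5},
    "inconsistent": {"epsilon": 0.1, "sigma_min": 0.1, "kappa": 0.9},
}

def describe(name: str) -> str:
    """Return a one-line summary of a named configuration."""
    cfg = MODEL_CONFIGS[name]
    return (f"{name}: eps={cfg['epsilon']}, sigma_min={cfg['sigma_min']}, "
            f"kappa={cfg['kappa']}, N={N_SNAPSHOTS}")

print(describe("mixed"))
```

This kind of explicit configuration listing is one way a reproduction attempt could keep the paper's three model variants straight.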