Extended Property Paths: Writing More SPARQL Queries in a Succinct Way
Authors: Valeria Fionda, Giuseppe Pirrò, Mariano Consens
AAAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We compare the two evaluation strategies on real data to highlight their pros and cons. We have implemented both a custom query processor for EPPs and the NREPP-to-SPARQL translation. Dataset and query set: we used a crawl of the FOAF social network (~500MB). Experiment 1: Running time. Fig. 6 shows the running times. |
| Researcher Affiliation | Academia | 1 DeMaCS, University of Calabria, Italy; 2 WeST, University of Koblenz-Landau, Germany; 3 MIE, University of Toronto, Canada |
| Pseudocode | Yes | Figure 3: EPPs evaluation algorithm. (Followed by detailed pseudocode blocks for EVALUATE, CLOSURE, BASE, and EVALTEST functions). |
| Open Source Code | Yes | We have implemented both a custom query processor for EPPs and the NREPP-to-SPARQL translation. Available at http://extendedpps.wordpress.com |
| Open Datasets | Yes | Dataset and query set: we used a crawl of the FOAF social network (~500MB) obtained from the BTC2012 corpus (http://km.aifb.kit.edu/projects/btc-2012) by traversing foaf:knows predicates from the URI of T. Berners-Lee (TBL) up to distance 4 (see the query sketch after the table). |
| Dataset Splits | No | The paper mentions using a dataset ('FOAF social network') but does not provide specific details about training, validation, or test splits (e.g., percentages, sample counts, or cross-validation setup). |
| Hardware Specification | Yes | The experiments have been performed on an Intel i5 machine with 8GBs RAM. |
| Software Dependencies | No | The paper mentions 'Jena ARQ' and its own custom implementations but does not specify any software names with version numbers for reproducibility. |
| Experiment Setup | No | The paper describes the dataset and query sets used and reports timings averaged over 5 runs (see the timing sketch after the table), but it does not provide specific hyperparameter values or detailed system-level training configurations (e.g., learning rates, optimizer settings). |
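
The crawl described in the Open Datasets row can be approximated with a standard SPARQL 1.1 property path. The sketch below is a minimal illustration, not the authors' harvesting code: it assumes rdflib is installed, that a hypothetical local N-Triples dump named `btc2012_foaf_subset.nt` is available, and that T. Berners-Lee's commonly used FOAF URI is the starting point.

```python
# Minimal sketch (assumptions: rdflib is installed, "btc2012_foaf_subset.nt"
# is a hypothetical local dump, and the TBL URI below is an assumption).
from rdflib import Graph

TBL = "https://www.w3.org/People/Berners-Lee/card#i"  # assumed starting URI

g = Graph()
g.parse("btc2012_foaf_subset.nt", format="nt")  # hypothetical file name

# foaf:knows followed by up to three optional hops selects people at
# distance 1 to 4 from the starting URI, mirroring the paper's crawl bound.
query = """
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
SELECT DISTINCT ?person
WHERE {
  <%s> foaf:knows/foaf:knows?/foaf:knows?/foaf:knows? ?person .
}
""" % TBL

for row in g.query(query):
    print(row.person)
```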
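
The Experiment Setup row notes that running times are reported as the average of 5 runs. The helper below is a small sketch of that timing protocol, assuming `run_query` is a hypothetical stand-in callable (e.g., wrapping the custom EPP processor or a Jena ARQ execution of the translated query); it is not code from the paper.

```python
import time
from statistics import mean

def average_runtime(run_query, n_runs=5):
    """Return the mean wall-clock time in seconds over n_runs executions."""
    timings = []
    for _ in range(n_runs):
        start = time.perf_counter()
        run_query()  # hypothetical callable executing one query end-to-end
        timings.append(time.perf_counter() - start)
    return mean(timings)
```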