Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Thou Shalt ASQFor and Shalt Receive the Semantic Answer

Authors: Muhammad Rizwan Saeed, Charalampos Chelmis, Viktor K. Prasanna

IJCAI 2016 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We have already performed a limited user study to measure the usability of our ASQFor interface, shown in Figure 1. Our study included 10 users with varying levels of Semantic Web expertise (average expertise level 1.9/5 with SD 1.3, 5 being expert). All users were able to issue the first three queries using the interface and get the intended results without any clarifications. For Q4, 9/10 users were able to get all results; one user misinterpreted the query, selected fewer attributes, and got a different result. All users were able to complete the survey in under 6 minutes. In terms of ease of use, the users gave the query interface an astounding 4.6/5 rating (SD 0.48, 5 being extremely easy to use).
Researcher Affiliation | Academia | University of Southern California, Los Angeles, California EMAIL
Pseudocode | No | The paper describes the steps of its process but does not include any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any statement or link for the open-source code of the described methodology.
Open Datasets | No | The paper mentions using '1990 US census data stored in RDF' and describes its size ('68 attributes for 2,458,285 individuals in total'), but does not provide concrete access information such as a link, DOI, or formal citation for this specific RDF version of the dataset.
Dataset Splits | No | The paper describes a user study but does not specify training, validation, or test dataset splits. The user study focused on usability rather than model training/evaluation splits.
Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the ASQFor system or conducting the experiments.
Software Dependencies | No | The paper mentions technologies like SPARQL, RDF, and OWL, but does not list any specific software dependencies with version numbers (e.g., programming languages, libraries, or frameworks with their versions).
Experiment Setup | No | The paper describes the setup for a user demonstration and study, but it does not provide specific experimental setup details such as hyperparameter values, training configurations, or system-level settings for a computational model.
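For readers who want to tally the binary variables above programmatically, the sketch below aggregates them into a simple fraction-satisfied score. This is purely illustrative: the equal-weight scoring rule and the `reproducibility_score` function are assumptions for this example, not the scoring methodology described in [1].

```python
# Hypothetical sketch: aggregate the binary reproducibility variables from the
# table above into a fraction-satisfied score. Equal weighting is an assumption
# for illustration, not the methodology of [1].
results = {
    "Pseudocode": False,
    "Open Source Code": False,
    "Open Datasets": False,
    "Dataset Splits": False,
    "Hardware Specification": False,
    "Software Dependencies": False,
    "Experiment Setup": False,
}

def reproducibility_score(variables: dict) -> float:
    """Return the fraction of binary reproducibility variables satisfied."""
    return sum(variables.values()) / len(variables)

print(f"{reproducibility_score(results):.2f}")  # 0.00 for this paper
```

For this paper every binary variable is "No", so the illustrative score is 0.00; a paper that released code and data under the same rule would score higher.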