AskWorld: Budget-Sensitive Query Evaluation for Knowledge-on-Demand

Authors: Mehdi Samadi, Partha Talukdar, Manuela Veloso, Tom Mitchell

IJCAI 2015

Reproducibility Variable — Result — Evidence

Research Type — Experimental
  Evidence: "Through extensive experiments on real-world datasets, we demonstrate AskWorld's capability in selecting most informative resources to query within test-time constraints, resulting in improved performance compared to competitive baselines."

Researcher Affiliation — Academia
  Evidence: Mehdi Samadi (Carnegie Mellon University, msamadi@cs.cmu.edu); Partha Talukdar (Indian Institute of Science, ppt@serc.iisc.in); Manuela Veloso (Carnegie Mellon University, veloso@cs.cmu.edu); Tom Mitchell (Carnegie Mellon University, tom.mitchell@cs.cmu.edu)

Pseudocode — Yes
  Evidence: "Algorithm 1: AskWorld: Query Evaluation for Knowledge-on-Demand"

Open Source Code — No
  Evidence: The paper does not provide any links to source code or statements about its public release.

Open Datasets — Yes
  Evidence: "For the experiments in this section, we use 25 categories randomly chosen from all the categories that are in common between Freebase [Bollacker et al., 2008] and NELL [Mitchell et al., 2015] knowledge bases."

Dataset Splits — No
  Evidence: The paper states: "For each predicate, 200 random instances are provided as seed examples to train AskWorld, and these are partitioned into two sets: classifier-training and policy-training." and "50 instances are also randomly chosen as the test data for each predicate." It does not explicitly mention a validation split or cross-validation.

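The per-predicate split described above can be sketched as follows. This is a minimal illustration, not the paper's code; the 50/50 partition of the 200 seed instances between classifier-training and policy-training is an assumption, since the paper does not state the exact proportions.

```python
import random

def split_seed_instances(instances, n_test=50, n_seeds=200, seed=0):
    """Partition one predicate's instances as described in the paper:
    200 random seed examples split into classifier-training and
    policy-training sets, plus 50 held-out test instances.
    NOTE: the even split of the seeds is an assumed detail."""
    rng = random.Random(seed)
    pool = list(instances)
    rng.shuffle(pool)
    test = pool[:n_test]
    seeds = pool[n_test:n_test + n_seeds]
    half = len(seeds) // 2  # assumed 50/50 partition
    return {
        "classifier_train": seeds[:half],
        "policy_train": seeds[half:],
        "test": test,
    }

splits = split_seed_instances(range(250))
print(len(splits["classifier_train"]),
      len(splits["policy_train"]),
      len(splits["test"]))
# 100 100 50
```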
Hardware Specification — No
  Evidence: The paper does not specify the hardware used for its experiments (e.g., CPU/GPU models or memory).

Software Dependencies — No
  Evidence: The paper mentions using Support Vector Machines (SVMs) but does not name specific libraries, frameworks, or programming-language versions.

Experiment Setup — Yes
  Evidence: "For other parameters, we use a learning rate of 0.1, a depth of 2 for each decision tree (a depth higher than 2 makes Greedy Miser inapplicable for small budget values), the squared loss function, and a total of 300 regression trees in the final additive classifier. The result for AskWorld (V*) is obtained by abstracting the MDP using δ = 5, ordering queries by their information gain, and selecting the top k% of features with non-zero information gain. In our experiments, we choose k = 50%, which results in approximately 15M states in the MDP."