An Empirical Study of Knowledge Tradeoffs in Case-Based Reasoning

Authors: Devi Ganesan, Sutanu Chakraborti

IJCAI 2018

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | The proposed measure is empirically evaluated on synthetic as well as real-world datasets. From a practical standpoint, footprint size reduction provides a unified way of estimating the impact of a given piece of knowledge in any knowledge container, and can also suggest ways of characterizing the nature of domains ranging from ill-defined to well-defined ones. |
| Researcher Affiliation | Academia | Devi Ganesan and Sutanu Chakraborti, Indian Institute of Technology Madras, Chennai, India. {gdevi, sutanuc}@cse.iitm.ac.in |
| Pseudocode | No | The paper describes algorithms and functions (e.g., the 'solves function' and 'footprint algorithm') but does not present them in a structured pseudocode or algorithm block. |
| Open Source Code | No | The paper does not provide any links to open-source code for the described methodology, nor does it state that code will be released. |
| Open Datasets | Yes | "We generated a synthetic case base (Table 1)... Next, we discuss empirical results on three real world datasets taken from UCI machine learning repository [Dheeru and Karra Taniskidou, 2017] namely Iris, Auto-MPG and Boston Housing and two textual datasets based on 20 Newsgroups [Lang, 1999]." |
| Dataset Splits | Yes | "In all the experiments on synthetic case base, the results are averaged from 5 fold train-test splits, and the relation between footprint size reduction and knowledge transfers is tested for statistical significance." |
| Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU/CPU models, cloud instances, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions techniques like TFIDF and LSA but does not list any specific software libraries or their version numbers used for the experiments. |
| Experiment Setup | No | The paper mentions 'acceptable prediction error' values (e.g., 10%) but does not provide specific hyperparameters (such as learning rate, batch size, epochs, or optimizer settings) or other detailed system-level training configurations needed to reproduce the experiments. |
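The Dataset Splits row states that results on the synthetic case base are averaged over 5-fold train-test splits. A minimal sketch of that evaluation protocol is below; the function name, the seed, and the placeholder per-fold metric are illustrative assumptions, not the authors' code, and the paper's actual metric (footprint size reduction) would replace the placeholder.

```python
# Minimal sketch of a 5-fold train-test evaluation loop, averaging a
# per-fold score as described in the Dataset Splits row. The metric here
# is a placeholder; it is NOT the paper's footprint size reduction.
import random

def five_fold_splits(n_cases, n_folds=5, seed=0):
    """Yield (train_indices, test_indices) pairs for n_folds folds."""
    idx = list(range(n_cases))
    random.Random(seed).shuffle(idx)  # fixed seed for repeatability
    fold_size = n_cases // n_folds
    for k in range(n_folds):
        test = idx[k * fold_size:(k + 1) * fold_size]
        test_set = set(test)
        train = [i for i in idx if i not in test_set]
        yield train, test

# Average a per-fold metric over all folds.
scores = []
for train, test in five_fold_splits(100):
    scores.append(len(train) / 100)  # placeholder metric per fold
mean_score = sum(scores) / len(scores)
```

Per-fold scores averaged this way would then feed a statistical significance test, as the quoted sentence indicates.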