Analogy-preserving functions: A way to extend Boolean samples

Authors: Miguel Couceiro, Nicolas Hug, Henri Prade, Gilles Richard

IJCAI 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "This noteworthy theoretical result is complemented with an empirical investigation of approximate AP functions, which suggests that they remain suitable for training set extension." "In Section 5, we define and empirically investigate approximate AP functions. We then show their suitability for training set extension in real-world problems." "Table 3 reports the values of ω and β for the Standard and the Klein modelings of analogy (respectively ωS, ωK, βS and βK) over three datasets from the UCI repository, namely the three Monk's problems [Lichman, 2013]. Results are averaged over 100 experiments, where the sample set S is each time randomly sampled with a size that is 30% that of the universe of possible instances."
Researcher Affiliation | Collaboration | Miguel Couceiro (1), Nicolas Hug (2), Henri Prade (2,3) and Gilles Richard (2,4): 1. LORIA, University of Lorraine, Vandœuvre-lès-Nancy, France; 2. IRIT, University of Toulouse, France; 3. QCIS, University of Technology, Sydney, Australia; 4. BITE, London, UK
Pseudocode | Yes | Here is an algorithmic description of this process: 1. First, add every x ∈ S to ES(f). Then, for every a, b, c ∈ S such that f(a) : f(b) :: f(c) : y is solvable and such that there is x ∈ B^m \ S with a : b :: c : x, add x to ES(f) and save y as a candidate for xf. Technically, x ∈ ES(f) \ S. 2. Then, for every x ∈ ES(f) \ S, run a majority-vote procedure: set xf to the most common candidate among all solutions y (in case of a tie, randomly pick one of the values). For elements of S, xf is simply set to f(x).
Open Source Code | No | The paper does not provide any specific link or statement regarding the availability of open-source code for the described methodology.
Open Datasets | Yes | "Table 3 reports the values of ω and β for the Standard and the Klein modelings of analogy (respectively ωS, ωK, βS and βK) over three datasets from the UCI repository, namely the three Monk's problems [Lichman, 2013]."
Dataset Splits | No | "Results are averaged over 100 experiments, where the sample set S is each time randomly sampled with a size that is 30% that of the universe of possible instances." This describes the sample set used for the analogical extension, not a typical train/validation/test split for model training; the paper does not specify a separate validation set.
Hardware Specification | No | The paper does not specify any details about the hardware used for the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies or their version numbers.
Experiment Setup | No | The paper states that "Results are averaged over 100 experiments, where the sample set S is each time randomly sampled with a size that is 30% that of the universe of possible instances" and describes how ε-close functions were generated, but it does not provide specific hyperparameters, optimizer settings, or detailed training configurations typically found in experimental setups.
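The two Boolean modelings of analogy named in the table (Standard and Klein) can be made concrete. The truth tables below follow the usual definitions from the analogical-proportion literature; they are an assumption of this sketch, not text quoted from the paper:

```python
# Klein model: a : b :: c : d holds iff a XOR b == c XOR d (8 valid patterns).
def klein(a, b, c, d):
    return (a ^ b) == (c ^ d)

# Standard model: the six valid patterns where "a differs from b as c differs
# from d" (the Klein patterns minus 0:1::1:0 and 1:0::0:1).
def standard(a, b, c, d):
    return (a, b) == (c, d) or (a, c) == (b, d)

def solve(analogy, a, b, c):
    """Return the unique x with a : b :: c : x, or None if unsolvable."""
    solutions = [x for x in (0, 1) if analogy(a, b, c, x)]
    return solutions[0] if len(solutions) == 1 else None
```

For instance, the equation 0 : 1 :: 1 : x is solvable under the Klein model (x = 0) but has no solution under the Standard model, which is why solvability must be checked in the extension procedure.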
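The extension pseudocode quoted in the table can be sketched as runnable Python. This is a minimal illustration assuming the Standard Boolean model of analogy applied componentwise on B^m; the helper names (solve_bit, solve_vec, extend) are hypothetical, not identifiers from the paper:

```python
from collections import Counter
from itertools import product

def std(a, b, c, d):
    # Standard model: the six valid Boolean patterns of a : b :: c : d.
    return (a, b) == (c, d) or (a, c) == (b, d)

def solve_bit(a, b, c):
    # Unique x with a : b :: c : x, or None if the equation is unsolvable.
    xs = [x for x in (0, 1) if std(a, b, c, x)]
    return xs[0] if xs else None

def solve_vec(a, b, c):
    # Componentwise solution over Boolean vectors; None if any bit fails.
    xs = tuple(solve_bit(ai, bi, ci) for ai, bi, ci in zip(a, b, c))
    return None if None in xs else xs

def extend(sample):
    """Analogical extension of a labelled sample S (dict: Boolean tuple -> label)."""
    candidates = {}
    for a, b, c in product(sample, repeat=3):
        # The label equation f(a) : f(b) :: f(c) : y must be solvable ...
        y = solve_bit(sample[a], sample[b], sample[c])
        if y is None:
            continue
        # ... and there must be x in B^m \ S with a : b :: c : x.
        x = solve_vec(a, b, c)
        if x is not None and x not in sample:
            candidates.setdefault(x, []).append(y)
    extension = dict(sample)          # elements of S keep their label
    for x, ys in candidates.items():  # majority vote among candidate labels
        extension[x] = Counter(ys).most_common(1)[0][0]
    return extension
```

As a usage example, extending the sample {(0,0): 0, (0,1): 0, (1,0): 1} of the projection f(x1, x2) = x1 infers the correct label 1 for the unseen point (1,1), since every solvable triple votes for it.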