Learning Set Functions with Limited Complementarity

Authors: Hanrui Zhang (pp. 5749-5756)

AAAI 2019

Reproducibility Assessment (Variable / Result / LLM Response)

Research Type: Experimental.
"We conduct numerical simulations to show that our algorithm is likely to perform well in practice. ... In our experiments, we fix the cardinality of the ground set to be n = 1000 and the number of hypergraphs to be 10. In each hypergraph, we choose uniformly at random 100 disjoint sets of size chosen from {1, 2, ..., d+1} uniformly at random. For each degree of complementarity d, we sample a function f in MPH-d in the way described above, draw 1000 sample sets where each element appears with probability 0.5, and plot the empirical distribution of f(X). As can be seen from Figure 1, the degradation of concentration is remarkably smooth as d grows from 0 to 15."

Researcher Affiliation: Academia.
"Hanrui Zhang, Computer Science Department, Duke University, Durham, NC 27705, hrzhang@cs.duke.edu"

Pseudocode: Yes.
"Algorithm 1: An algorithm that PAC-learns monotone Boolean-valued functions with limited complementarity. Algorithm 2: An algorithm that PMAC-learns monotone positive functions with limited complementarity. Algorithm 3: An algorithm that PMAC-learns monotone nonnegative functions with limited complementarity."

Open Source Code: No.
The paper does not provide any explicit statement about the release of source code for the described methodology, nor does it include a link to a code repository.

Open Datasets: No.
The paper describes how functions and sample sets were generated for numerical simulations ("draw 1000 sample sets where each element appears with probability 0.5"), but it does not refer to a publicly available or open dataset with concrete access information (link, DOI, or formal citation).

Dataset Splits: No.
The paper does not provide the dataset split information (e.g., percentages or sample counts for train/validation/test splits) needed for reproduction. The numerical simulations generate sample sets for plotting distributions, not for a conventional train/validation/test split.

Hardware Specification: No.
The paper does not provide any specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments.

Software Dependencies: No.
The paper does not provide specific ancillary software details, such as library names with version numbers, needed to replicate the experiment.

Experiment Setup: Yes.
"In our experiments, we fix the cardinality of the ground set to be n = 1000 and the number of hypergraphs to be 10. In each hypergraph, we choose uniformly at random 100 disjoint sets of size chosen from {1, 2, ..., d+1} uniformly at random. For each degree of complementarity d, we sample a function f in MPH-d in the way described above, draw 1000 sample sets where each element appears with probability 0.5, and plot the empirical distribution of f(X)."
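The sampling procedure quoted in the Experiment Setup row can be sketched in a few lines of Python. The paper does not specify the hyperedge weights or the exact MPH-d evaluation used; this sketch assumes uniform [0, 1] weights and takes f(X) to be the maximum, over the 10 hypergraphs, of the total weight of hyperedges fully contained in X (the usual maximum-over-positive-hypergraphs form of MPH-d). The function name `sample_mph_function` is illustrative, not from the paper.

```python
import random

def sample_mph_function(n=1000, num_hypergraphs=10, edges_per_hg=100, d=5, rng=None):
    """Sample an MPH-d-style set function on a ground set of size n.

    Each hypergraph holds up to edges_per_hg disjoint hyperedges whose
    sizes are drawn uniformly from {1, ..., d+1}.  Edge weights are an
    assumption (uniform [0, 1]); the paper does not specify them.
    """
    rng = rng or random.Random(0)
    hypergraphs = []
    for _ in range(num_hypergraphs):
        perm = list(range(n))
        rng.shuffle(perm)                # disjoint edges = consecutive slices of a shuffle
        edges, pos = [], 0
        for _ in range(edges_per_hg):
            size = rng.randint(1, d + 1)  # size uniform in {1, ..., d+1}
            if pos + size > n:            # for large d, 100 disjoint edges may not all fit
                break
            edges.append((frozenset(perm[pos:pos + size]), rng.random()))
            pos += size
        hypergraphs.append(edges)

    def f(X):
        X = set(X)
        # Max over hypergraphs of the weight of hyperedges contained in X.
        return max(sum(w for e, w in edges if e <= X) for edges in hypergraphs)

    return f

# Draw 1000 sample sets where each element appears independently with probability 0.5.
rng = random.Random(1)
f = sample_mph_function(d=5, rng=rng)
samples = [f(i for i in range(1000) if rng.random() < 0.5) for _ in range(1000)]
```

Plotting a histogram of `samples` for each d in {0, ..., 15} would reproduce the kind of empirical distributions shown in the paper's Figure 1.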