PAC Learning Linear Thresholds from Label Proportions

Authors: Anand Brahmbhatt, Rishi Saket, Aravindan Raghuveer

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We include an experimental evaluation of our learning algorithms along with a comparison with those of [25, 26] and random LTFs, demonstrating the effectiveness of our techniques.
Researcher Affiliation | Industry | Anand Brahmbhatt, Google Research India, anandpareshb@google.com; Rishi Saket, Google Research India, rishisaket@google.com; Aravindan Raghuveer, Google Research India, araghuveer@google.com
Pseudocode | Yes | Algorithm 1 Mean Covs Estimator. ... Algorithm 2 PAC Learner for no-offset LTFs over N(0, Σ) ... Algorithm 3 PAC Learner for homogenous LTFs from unbalanced bags over N(0, I) ... Algorithm 4 PAC Learner for LTFs over N(µ, Σ). (A hedged sketch in the spirit of these moment-based learners appears after the table.)
Open Source Code | Yes | The implementations of the algorithms in this paper (Algs. 2, 3, 4) and of the random LTF algorithm are in python using numpy libraries. The code for the SDP algorithms of [25, 26] for bag sizes 2 and 3 is adapted from the publicly available codebase: https://github.com/google-research/google-research/tree/master/Algorithms_and_Hardness_for_Learning_Linear_Thresholds_from_Label_Proportions (license included in the repository). (A sketch of the random LTF baseline appears after the table.)
Open Datasets | No | The paper generates synthetic data for its experiments: 'for each dataset (i) sample a random unit vector r and let f(x) := pos(rᵀx), (ii) sample m training bags from Ex(f, N(µ, Σ), q, k)'. It does not use or provide links to pre-existing public datasets.
Dataset Splits | No | The paper describes using 'training bags' and 'test instances' but does not explicitly mention a separate validation set or a specific split for validation.
Hardware Specification | Yes | The experimental code was executed on a 16-core CPU and 128 GB RAM machine running linux in a standard python environment.
Software Dependencies | No | The implementations of the algorithms in this paper ... are in python using numpy libraries. The paper mentions 'python' and 'numpy' but does not specify version numbers for either.
Experiment Setup | Yes | For dimension d ∈ {10, 50}, and each pair (q, k) ∈ {(2, 1), (3, 1), (10, 5), (10, 8), (50, 25), (50, 35)} and m = 2000, we create 25 datasets as follows: for each dataset (i) sample a random unit vector r and let f(x) := pos(rᵀx), (ii) sample m = 2000 training bags from Ex(f, N(µ, Σ), q, k), ... (iv) sample 1000 test instances (x, f(x)), x ~ N(µ, Σ). (The sketches below illustrate this setup.)
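To make the quoted data-generation procedure concrete, the following is a minimal NumPy sketch of the bag oracle Ex(f, N(µ, Σ), q, k) from the Experiment Setup row, assuming (as the (q, k) pairs suggest) that each bag has size q with exactly k positive instances under f(x) = pos(rᵀx). The rejection-style sampling and the helper names (pos, random_unit_vector, sample_bag) are illustrative assumptions, not taken from the paper or its released code.

```python
import numpy as np

def pos(z):
    """Threshold function: 1 if z > 0 else 0 (LTF label)."""
    return (z > 0).astype(int)

def random_unit_vector(d, rng):
    """Sample a uniformly random unit vector r in R^d."""
    r = rng.standard_normal(d)
    return r / np.linalg.norm(r)

def sample_bag(r, mu, cov, q, k, rng):
    """Sample one bag of q instances with exactly k positives w.r.t. f(x) = pos(r^T x).

    Draws Gaussians until k positives and q - k negatives are collected
    (simple rejection sampling; the oracle Ex(f, D, q, k) is assumed here to be
    specified only by the distribution of its output bags).
    """
    pos_pts, neg_pts = [], []
    while len(pos_pts) < k or len(neg_pts) < q - k:
        x = rng.multivariate_normal(mu, cov)
        if r @ x > 0 and len(pos_pts) < k:
            pos_pts.append(x)
        elif r @ x <= 0 and len(neg_pts) < q - k:
            neg_pts.append(x)
    bag = np.array(pos_pts + neg_pts)
    rng.shuffle(bag)  # shuffle rows so positives are not grouped
    return bag        # label proportion of this bag is k / q by construction

# Example: one dataset in the d = 10, (q, k) = (10, 8) configuration.
rng = np.random.default_rng(0)
d, q, k, m = 10, 10, 8, 2000
mu, cov = np.zeros(d), np.eye(d)
r = random_unit_vector(d, rng)
train_bags = [sample_bag(r, mu, cov, q, k, rng) for _ in range(m)]
X_test = rng.multivariate_normal(mu, cov, size=1000)
y_test = pos(X_test @ r)
```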
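The random LTF baseline referenced in the Open Source Code row is, as its name suggests, an LTF whose normal vector is drawn at random and which ignores the training bags. A self-contained sketch of scoring such a baseline on held-out test instances (the synthetic test set here is a stand-in for the one produced by the oracle above):

```python
import numpy as np

rng = np.random.default_rng(1)
d = 10
# Stand-in test set; in the experiments it would come from the bag oracle above.
X_test = rng.standard_normal((1000, d))
r_true = rng.standard_normal(d); r_true /= np.linalg.norm(r_true)
y_test = (X_test @ r_true > 0).astype(int)

# Random LTF baseline: draw a uniformly random unit normal vector and
# classify by the sign of its inner product with each test instance.
r_rand = rng.standard_normal(d); r_rand /= np.linalg.norm(r_rand)
y_pred = (X_test @ r_rand > 0).astype(int)
accuracy = (y_pred == y_test).mean()
print(f"random-LTF test accuracy: {accuracy:.3f}")
```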
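The pseudocode itself is not reproduced in this report. As a rough illustration of why unbalanced bags over N(0, I) carry enough signal for a learner like the Pseudocode row's Algorithm 3, here is a hedged first-moment sketch: for a homogeneous LTF f(x) = pos(rᵀx) and x ~ N(0, I), E[x | f(x) = 1] is proportional to r, so the average of bag means from bags with k ≠ q/2 positives points along ±r. This is an assumption-laden simplification, not the paper's exact algorithm (the paper's Algorithms 1 and 2 also estimate covariances and handle general Σ); the sign correction and normalization below are illustrative.

```python
import numpy as np

def estimate_direction_from_bags(bags, q, k):
    """Estimate the LTF normal direction from unbalanced bags over N(0, I).

    For x ~ N(0, I) and f(x) = pos(r^T x), E[x | f(x) = 1] = c * r with c > 0,
    so the expected bag mean is ((2k - q) / q) * c * r.  Averaging the empirical
    bag means and flipping the sign when k < q / 2 therefore estimates the
    direction of r.  Simplified, assumed variant; not the paper's Algorithm 3.
    """
    bag_means = np.array([bag.mean(axis=0) for bag in bags])
    v = bag_means.mean(axis=0)
    if 2 * k < q:  # bag means point along -r when bags are mostly negative
        v = -v
    return v / np.linalg.norm(v)

# Tiny end-to-end check with synthetic unbalanced bags (d = 10, q = 10, k = 8).
rng = np.random.default_rng(2)
d, q, k, m = 10, 10, 8, 2000
r = rng.standard_normal(d); r /= np.linalg.norm(r)

bags = []
for _ in range(m):
    pos_pts, neg_pts = [], []
    while len(pos_pts) < k or len(neg_pts) < q - k:
        x = rng.standard_normal(d)
        (pos_pts if x @ r > 0 else neg_pts).append(x)
    bags.append(np.array(pos_pts[:k] + neg_pts[:q - k]))

r_hat = estimate_direction_from_bags(bags, q, k)
print("cosine similarity to true r:", float(r_hat @ r))
```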