Learning from Aggregate Responses: Instance Level versus Bag Level Loss Functions
Authors: Adel Javanmard, Lin Chen, Vahab Mirrokni, Ashwinkumar Badanidiyuru, Gang Fu
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our analysis enables us to theoretically understand the effect of different factors... Additionally, we propose a mechanism for differentially private learning... We also carry out thorough experiments to corroborate our theory and show the efficacy of the interpolating estimator. (Section 5, Numerical Experiments): In our first set of experiments, we corroborate our theory derived in Section 2.2 with simulations. |
| Researcher Affiliation | Collaboration | Google Research, University of Southern California |
| Pseudocode | Yes | Algorithm 1 Label differentially private learning from aggregate data (see the label-DP sketch after the table) |
| Open Source Code | No | The paper does not provide any concrete access information for source code, such as a specific repository link, an explicit code release statement, or a mention of code in supplementary materials. |
| Open Datasets | Yes | Boston Housing dataset. To investigate the optimal value of the regularization parameter ρ in the interpolating loss, for different bag sizes, we conduct numerical experiments on the Boston Housing dataset... (Harrison Jr & Rubinfeld, 1978). (See the loss-function sketch after the table.) |
| Dataset Splits | No | The paper uses the Boston Housing dataset and refers to 'test loss' but does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce the data partitioning into train/validation/test sets. |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions using a 'feed-forward neural network' and 'ReLU' but does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | Yes | We use a feed-forward neural network to learn the housing prices in this dataset. The network has four hidden layers, each with 64 neurons. The activation function for all hidden layers is ReLU. The output layer has one neuron which outputs the predicted housing price. (See the architecture sketch after the table.) |
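
The Pseudocode row cites Algorithm 1, "Label differentially private learning from aggregate data", but the report does not reproduce the algorithm itself. As a point of reference only, below is a minimal sketch of the standard ε-label-DP ingredient such an algorithm typically builds on: Laplace noise calibrated to the sensitivity of a bag's average label. The function name, the [0, 1] label bound, and the mechanism are our illustrative assumptions, not the paper's exact Algorithm 1.

```python
import numpy as np

def private_bag_average(labels, epsilon, rng=None):
    """Epsilon-label-DP estimate of one bag's average label.

    Illustrative only: assumes each label lies in [0, 1], so changing a
    single label shifts the bag average by at most 1/len(labels), which
    is the L1 sensitivity the Laplace noise is calibrated to.
    """
    rng = np.random.default_rng() if rng is None else rng
    sensitivity = 1.0 / len(labels)
    noise = rng.laplace(loc=0.0, scale=sensitivity / epsilon)
    return float(np.mean(labels)) + noise
```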
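
The Open Datasets row refers to tuning the regularization parameter ρ of the interpolating loss. For orientation, here is a minimal sketch of the bag-level and instance-level squared-error losses the title contrasts, plus a ρ-weighted combination between them. The function names and the exact combination form are our assumptions; the paper's interpolating estimator may be parameterized differently.

```python
import numpy as np

def bag_level_loss(preds, bag_ids, bag_labels):
    """Squared error between each bag's mean prediction and its aggregate label."""
    return float(np.mean([(preds[bag_ids == b].mean() - y_bar) ** 2
                          for b, y_bar in bag_labels.items()]))

def instance_level_loss(preds, bag_ids, bag_labels):
    """Squared error of every instance against its bag's aggregate label."""
    targets = np.array([bag_labels[b] for b in bag_ids])
    return float(np.mean((preds - targets) ** 2))

def interpolating_loss(preds, bag_ids, bag_labels, rho):
    """rho-weighted mix of the two losses (illustrative form only)."""
    return (rho * bag_level_loss(preds, bag_ids, bag_labels)
            + (1 - rho) * instance_level_loss(preds, bag_ids, bag_labels))
```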
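
The Experiment Setup row specifies the architecture completely, so it can be transcribed directly; the framework, optimizer, and training schedule are not reported, so the PyTorch rendering below is one possible realization.

```python
import torch.nn as nn

# Architecture as described in the paper: four hidden layers of 64 units
# with ReLU activations and a single linear output for the predicted price.
model = nn.Sequential(
    nn.Linear(13, 64),  # Boston Housing has 13 input features
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 64),
    nn.ReLU(),
    nn.Linear(64, 1),   # one output neuron: the predicted housing price
)
```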