Optimal Approximation-Smoothness Tradeoffs for Soft-Max Functions

Authors: Alessandro Epasto, Mohammad Mahdian, Vahab Mirrokni, Emmanouil Zampetakis

NeurIPS 2020

Reproducibility Variable | Result | LLM Response

Research Type | Experimental | "This improvement is also observed in experiments with real-world data, as shown in Figure 1. We validated these theoretical results with an empirical study reported fully in the full version of the paper. Here we briefly outline our results in Figure 1, which shows improved objective-vs-sensitivity trade-offs for the power mechanism in empirical data manipulation tests."

Researcher Affiliation | Collaboration | Alessandro Epasto (Google Research, aepasto@google.com); Mohammad Mahdian (Google Research, mahdian@google.com); Vahab Mirrokni (Google Research, mirrokni@google.com); Manolis Zampetakis (MIT, mzampet@mit.edu)

Pseudocode | No | No explicit pseudocode or algorithm blocks were found in the provided text.

Open Source Code | No | No statement regarding the release of source code for the described methodology was found.

Open Datasets | No | Although the paper mentions "data collect[ed] from the DBLP dataset", it does not provide a specific citation with author names and year, a link, or repository information for public access to this dataset.

Dataset Splits | No | The paper discusses experimental results but does not specify training, validation, or test dataset splits, or their percentages/counts.

Hardware Specification | No | No specific hardware details (such as GPU/CPU models or types) used for running the experiments were mentioned in the paper.

Software Dependencies | No | No specific software dependencies with version numbers (e.g., library or solver names) were mentioned in the paper.

Experiment Setup | No | The paper does not provide specific experimental setup details such as hyperparameter values, training configurations, or model initialization settings.
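For context on the objects being reviewed: a soft-max function maps a vector of scores to a probability distribution, and the Research Type row above refers to a "power mechanism" with a different objective-vs-sensitivity trade-off. The following is a minimal sketch, assuming the standard exponential soft-max definition and using a hypothetical polynomial-weight variant purely for illustration; it is not the paper's exact mechanism.

```python
import math

def exponential_softmax(scores, beta=1.0):
    """Classic exponential soft-max: p_i proportional to exp(beta * x_i)."""
    m = max(scores)  # subtract the max for numerical stability
    weights = [math.exp(beta * (x - m)) for x in scores]
    total = sum(weights)
    return [w / total for w in weights]

def power_softmax(scores, d=4):
    """Hypothetical polynomial-weight ("power") variant: p_i proportional
    to x_i**d, for nonnegative scores. Illustrative only; the paper's
    power mechanism may be defined differently."""
    weights = [x ** d for x in scores]
    total = sum(weights)
    return [w / total for w in weights]

if __name__ == "__main__":
    print(exponential_softmax([1.0, 2.0, 3.0]))
    print(power_softmax([1.0, 2.0, 3.0]))
```

Both functions return a valid probability vector that preserves the ordering of the input scores; the trade-off studied in the paper concerns how sharply such a mechanism concentrates mass on the maximum versus how smoothly its output varies with the input.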