Simplification and Improvement of MMS Approximation

Authors: Hannaneh Akrami, Jugal Garg, Eklavya Sharma, Setareh Taki

IJCAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this paper, we significantly simplify the analysis of this algorithm and also improve the existence guarantee to a factor of (3/4 + min(1/36, 3/(16n-4))). Furthermore, we present a tight example of this algorithm, showing that this may be the best factor one can hope for with the current techniques. (A worked numeric comparison of this factor appears below the table.)
Researcher Affiliation | Collaboration | Hannaneh Akrami (1, 2), Jugal Garg (3), Eklavya Sharma (3) and Setareh Taki (4); affiliations: (1) Max Planck Institute for Informatics, Germany; (2) Graduiertenschule Informatik, Universität des Saarlandes, Germany; (3) University of Illinois at Urbana-Champaign, USA; (4) Grubhub, USA
Pseudocode | Yes | Algorithm 1: normalize((N, M, v)), Algorithm 2: approxMMS(I, α), Algorithm 3: bagFill(I, α). (An illustrative bag-filling sketch appears below the table.)
Open Source Code | No | The paper does not provide any explicit statements or links about releasing source code for the methodology described.
Open Datasets | No | This is a theoretical paper focused on algorithm analysis and approximation factors; it does not utilize datasets in an empirical context. Therefore, no information about public dataset availability is provided.
Dataset Splits | No | This is a theoretical paper that does not involve empirical experiments with dataset splits. No specific information about training, validation, or test splits is provided.
Hardware Specification | No | This is a theoretical paper focused on algorithm analysis and mathematical proofs, not empirical experiments. Therefore, no hardware specifications for running experiments are provided.
Software Dependencies | No | This is a theoretical paper focused on algorithm analysis and mathematical proofs, not empirical experiments. Therefore, no specific software dependencies with version numbers are provided.
Experiment Setup | No | This is a theoretical paper focused on algorithm analysis and mathematical proofs, not empirical experiments. No specific experimental setup details such as hyperparameters or training configurations are provided.
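
The improvement factor quoted in the Research Type row lends itself to a quick numeric check. The Python snippet below is an editorial sketch, not code from the paper (none is released): it assumes the improved bound 3/4 + min(1/36, 3/(16n-4)) as transcribed above and compares it with the earlier 3/4 + 1/(12n) guarantee attributed to Garg and Taki, which the paper builds on. Function names are illustrative only.

    from fractions import Fraction

    def improved_bound(n: int) -> Fraction:
        """Existence guarantee quoted in the abstract: 3/4 + min(1/36, 3/(16n - 4)).
        Transcribed here as an assumption; see the paper for the authoritative form."""
        return Fraction(3, 4) + min(Fraction(1, 36), Fraction(3, 16 * n - 4))

    def garg_taki_bound(n: int) -> Fraction:
        """Baseline 3/4 + 1/(12n) guarantee attributed to Garg and Taki, for comparison."""
        return Fraction(3, 4) + Fraction(1, 12 * n)

    if __name__ == "__main__":
        for n in (5, 10, 50, 500):
            old, new = garg_taki_bound(n), improved_bound(n)
            print(f"n={n:3d}  old={float(old):.6f}  new={float(new):.6f}  gain={float(new - old):+.6f}")

For example, at n = 10 the bound moves from roughly 0.7583 to roughly 0.7692, and for large n both expressions approach 3/4 with the improved term staying strictly larger.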
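
The Pseudocode row names a bag-filling routine (Algorithm 3, bagFill), but the paper ships no code. The sketch below is a generic bag-filling procedure of the kind standard in the MMS literature, offered as a hedged illustration rather than a transcription of the authors' Algorithm 3: the data layout, the function name bag_fill, and the assumption that valuations are additive and normalized so that each agent's MMS value is 1 are all editorial choices.

    from typing import Dict, List, Set

    def bag_fill(valuations: List[Dict[int, float]], goods: Set[int], alpha: float) -> List[Set[int]]:
        """Generic bag filling: grow a bag of goods until some remaining agent
        values it at least alpha, give the bag to that agent, and repeat.
        Valuations are assumed additive and normalized (MMS value = 1 per agent)."""
        remaining_agents = set(range(len(valuations)))
        remaining_goods = set(goods)
        bundles: List[Set[int]] = [set() for _ in valuations]

        while remaining_agents and remaining_goods:
            bag: Set[int] = set()
            satisfied = None
            for g in list(remaining_goods):
                bag.add(g)
                remaining_goods.remove(g)
                # Check whether any remaining agent already values the bag at >= alpha.
                satisfied = next((i for i in remaining_agents
                                  if sum(valuations[i].get(h, 0.0) for h in bag) >= alpha), None)
                if satisfied is not None:
                    break
            if satisfied is None:
                break  # goods ran out before any agent reached the alpha threshold
            bundles[satisfied] = bag
            remaining_agents.remove(satisfied)
        return bundles

The authors' Algorithm 3 is defined on the paper's normalized, ordered instance and differs in its details; this sketch only conveys the general bag-filling idea.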