Implicit Bias of Mirror Flow on Separable Data

Authors: Scott Pesme, Radu-Alexandru Dragomir, Nicolas Flammarion

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We analyse several examples of potentials and provide numerical experiments highlighting our results."
Researcher Affiliation | Academia | Scott Pesme (EPFL); Radu-Alexandru Dragomir (Télécom Paris); Nicolas Flammarion (EPFL)
Pseudocode | No | The paper does not contain pseudocode or a clearly labeled algorithm block.
Open Source Code | No | No repository is released; the checklist states only: "The code behind the experiments is straightforward and can easily be reproduced."
Open Datasets | No | The data are synthetically generated rather than publicly released: "As shown in Figure 1 (Middle), we generate 40 points with positive labels and 40 points with negative labels."
Dataset Splits | No | The paper describes generating a toy 2D dataset but does not provide specific training/test/validation split percentages or sample counts.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. The NeurIPS checklist states: "It is clear that our experiments can easily be reproduced by any computer."
Software Dependencies | No | The paper does not provide version numbers for any software dependencies or libraries used in the experiments. The NeurIPS checklist states: "The considered potentials and loss are given. The value of the step-size is not given as it does not have any relevance."
Experiment Setup | Yes | "As shown in Figure 1 (Middle), we generate 40 points with positive labels and 40 points with negative labels. Starting from β0 = 0, we run mirror descent with the exponential loss ℓ(z) = exp(−z) and with the three following potentials: (i) ϕGD = ‖·‖², (ii) ϕMD1 = cosh-entropy, (iii) ϕMD2 = hyperbolic entropy."
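The experiment-setup row above can be sketched in code. This is a minimal reconstruction, not the authors' implementation: the cluster locations, noise scale, step size, iteration count, and the exact closed forms of the cosh-entropy and hyperbolic-entropy potentials (scale parameter `b`) are all assumptions, since the paper names the potentials but this report does not quote their formulas. Each mirror-descent step is β ← ∇ϕ*(∇ϕ(β) − η∇L(β)), which reduces to plain gradient descent when ϕ = ½‖·‖².

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linearly separable 2D data: 40 positive and 40 negative points
# (only the counts come from the paper; cluster geometry is an assumption).
n = 40
X = np.vstack([rng.normal([+2.0, +2.0], 0.5, size=(n, 2)),
               rng.normal([-2.0, -2.0], 0.5, size=(n, 2))])
y = np.concatenate([np.ones(n), -np.ones(n)])

def loss_grad(beta):
    """Gradient of the empirical exponential loss L(beta) = mean_i exp(-y_i <x_i, beta>)."""
    margins = y * (X @ beta)
    return -(y * np.exp(-margins)) @ X / len(y)

# Mirror maps: pairs (grad_phi, grad_phi_star) with grad_phi_star = (grad_phi)^{-1}.
# The closed forms below are illustrative guesses for the named potentials.
b = 1.0
potentials = {
    "GD (squared norm)":  (lambda u: u, lambda v: v),
    "cosh-entropy":       (np.sinh, np.arcsinh),            # phi(b_i) = sum cosh(b_i)
    "hyperbolic entropy": (lambda u: np.arcsinh(u / b),     # hypentropy with scale b
                           lambda v: b * np.sinh(v)),
}

def mirror_descent(grad_phi, grad_phi_star, steps=2000, lr=0.1):
    beta = np.zeros(2)  # beta_0 = 0, as in the paper
    for _ in range(steps):
        # Mirror step: beta <- grad_phi*( grad_phi(beta) - lr * grad L(beta) )
        beta = grad_phi_star(grad_phi(beta) - lr * loss_grad(beta))
    return beta

for name, (gp, gps) in potentials.items():
    beta = mirror_descent(gp, gps)
    direction = beta / np.linalg.norm(beta)
    acc = np.mean(np.sign(X @ beta) == y)
    print(f"{name}: direction = {direction}, train acc = {acc:.2f}")
```

On separable data all three runs drive the loss to zero while ‖β‖ grows; the point of the paper is that the *direction* β/‖β‖ each potential converges to differs, which is what the printed directions let you compare.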