A Wasserstein Minimax Framework for Mixed Linear Regression
Authors: Theo Diamandis, Yonina Eldar, Alireza Fallah, Farzan Farnia, Asuman Ozdaglar
ICML 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we support our theoretical results through several numerical experiments, which highlight our framework's ability to handle the federated learning setting with mixture models. |
| Researcher Affiliation | Academia | ¹Department of Electrical Engineering & Computer Science, MIT, USA; ²Faculty of Math and Computer Science, Weizmann Institute of Science, Israel. |
| Pseudocode | Yes | Algorithm 1 WMLR; Algorithm 2 F-WMLR |
| Open Source Code | Yes | We implement² Algorithms 1 and 2 in Section 3 for both the centralized and federated learning settings. (Footnote 2: https://github.com/tjdiamandis/WMLR) |
| Open Datasets | No | We set d = 128, draw x_i from N(0, I), set noise variance σ² = 1, and draw β uniformly at random from the spherical shell S_SNR = {z : ‖z‖ = SNR}. (This indicates synthetic data generation rather than use of a public dataset with access information; see the data-generation sketch after the table.) |
| Dataset Splits | No | The paper describes the synthetic data generation and the total number of samples used (e.g., n = 100,000), but does not specify how these samples are partitioned into training, validation, or test sets. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, or memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies or libraries used in the implementation. |
| Experiment Setup | Yes | We set d = 128, draw x_i from N(0, I), set noise variance σ² = 1, and draw β uniformly at random from the spherical shell S_SNR = {z : ‖z‖ = SNR}. We search over the regularization parameter λ, with step sizes α_max = 1/(2λ) and α_min = α_max/10. For each maximization, we perform several communication rounds to solve the maximization problem at each EM step via gradient ascent. We stop this inner maximization when the norm of the gradient is under the threshold ν = 0.01 or after 50 iterations. (See the gradient-ascent sketch after the table.) |
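
The synthetic setup quoted above (d = 128, x_i ~ N(0, I), σ² = 1, β uniform on the spherical shell of radius SNR) can be sketched in a few lines. This is an illustrative sketch only, not the authors' implementation (which lives in the linked WMLR repository); in particular, the symmetric two-component mixture labels s_i ∈ {−1, +1} and the seed, SNR value, and sample count are assumptions made here for illustration.

```python
import numpy as np

# Minimal sketch of the synthetic data described in the table above.
# Assumed (not stated in the quoted setup): a symmetric two-component
# mixture y_i = s_i * <beta, x_i> + eps_i with labels s_i uniform on
# {-1, +1}; the paper's actual generative model is defined in the paper.
rng = np.random.default_rng(0)                  # seed chosen arbitrarily
d, n, snr, sigma2 = 128, 100_000, 2.0, 1.0      # SNR value is illustrative

# beta uniform on the spherical shell S_SNR = {z : ||z|| = SNR}:
# draw a standard Gaussian vector and rescale it to norm SNR.
beta = rng.standard_normal(d)
beta *= snr / np.linalg.norm(beta)

X = rng.standard_normal((n, d))                 # x_i ~ N(0, I_d)
s = rng.choice([-1.0, 1.0], size=n)             # assumed mixture labels
eps = rng.normal(0.0, np.sqrt(sigma2), size=n)  # noise variance sigma^2 = 1
y = s * (X @ beta) + eps                        # mixed linear responses
```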
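Similarly, the inner-maximization stopping rule from the setup row (gradient ascent until the gradient norm falls below ν = 0.01 or 50 iterations elapse, with step sizes α_max = 1/(2λ) and α_min = α_max/10) admits a short sketch. Here `inner_maximize`, `grad_fn`, `theta0`, and the λ value are hypothetical placeholders; the actual objective being maximized is the WMLR minimax formulation from the paper.

```python
import numpy as np

def inner_maximize(grad_fn, theta0, alpha, nu=0.01, max_iters=50):
    """Gradient ascent, stopped when the gradient norm falls below nu
    (0.01 in the quoted setup) or after max_iters (50) iterations.
    grad_fn is a placeholder for the gradient of the inner objective."""
    theta = theta0.copy()
    for _ in range(max_iters):
        g = grad_fn(theta)
        if np.linalg.norm(g) < nu:  # stopping threshold from the setup
            break
        theta = theta + alpha * g   # ascent step with step size alpha
    return theta

# Step sizes as quoted: alpha_max = 1/(2*lambda), alpha_min = alpha_max/10.
lam = 0.5                           # hypothetical regularization value
alpha_max = 1.0 / (2.0 * lam)
alpha_min = alpha_max / 10.0

# Toy usage on a concave quadratic f(theta) = -||theta - t||^2 / 2,
# whose gradient is t - theta; the iterates converge to t.
t = np.ones(4)
theta_hat = inner_maximize(lambda th: t - th, np.zeros(4), alpha_max)
```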