Generative Learning for Solving Non-Convex Problem with Multi-Valued Input-Solution Mapping
Authors: Enming Liang, Minghua Chen
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Simulation results for solving non-convex problems show that our method achieves significantly better solution optimality than recent NN schemes, with comparable feasibility and speedup performance." (Abstract) and the "6 EXPERIMENTS" section. |
| Researcher Affiliation | Academia | Enming Liang and Minghua Chen, School of Data Science, City University of Hong Kong |
| Pseudocode | No | The paper does not include any clearly labeled 'Pseudocode' or 'Algorithm' blocks, nor does it present structured steps formatted like code. |
| Open Source Code | Yes | "The code is available at GL Code." |
| Open Datasets | Yes | "We use similar experimental settings as outlined in (Ardizzone et al., 2018) and (Sun et al., 2017) to gather datasets for the inverse kinematics problem and wireless power control, respectively." and "For data collection, we sampled Erdos Renyi random graphs (Erdos et al., 1960) as the instance dataset." |
| Dataset Splits | No | It specifies the number of training and testing instances ('We collected 10,000 training instances and conducted tests on an additional 1,024 instances' and 'Our dataset comprised of 10,000 training instances, and we conducted tests on an additional 1,024 instances'), but does not provide specific details on how these splits were performed (e.g., percentages, random seed, or the use of a validation set). |
| Hardware Specification | No | The paper mentions that training can be 'efficiently executed on a GPU' but does not specify any particular GPU models, CPU models, memory configurations, or other specific hardware details used for the experiments. |
| Software Dependencies | No | The paper mentions software like the 'Adam optimizer' and 'Gurobi solver', and models such as 'Transformer encoder' and 'denoising diffusion implicit model (DDIM)', but it does not provide specific version numbers for any of these software dependencies. |
| Experiment Setup | Yes | "Specifically, we sample 1,000 initial points and generate candidate solutions simultaneously in a batch. For the Diffusion and Rect Flow model, we have set the discretization steps to 10." and "We evaluate our generative framework under different parameter settings, specifically varying the number of sampled solutions m ∈ {1, 10, 100, 200, 500, 800, 1000} and discretization steps k ∈ {1, 10, 100, 200, 500, 800, 1000}." |
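The experiment-setup row above describes batched sampling: draw m initial points, run k Euler discretization steps of a learned flow/diffusion model, and keep the best candidate. The sketch below illustrates that sampling loop under stated assumptions; the `velocity_field` network and the toy objective are hypothetical placeholders, not the authors' trained model or problem instances.

```python
import numpy as np

def velocity_field(x, t):
    # Placeholder for a trained flow/diffusion velocity network
    # (assumption: a dummy drift stands in for the learned model).
    return -x * (1.0 - t)

def sample_candidates(m=1000, k=10, dim=2, seed=0):
    """Generate m candidate solutions in one batch via k Euler steps."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal((m, dim))   # m initial points from the prior
    for step in range(k):               # k discretization steps
        t = step / k
        x = x + velocity_field(x, t) / k
    return x

def objective(x):
    # Toy non-convex objective for illustration only (assumption).
    return np.sum(x**2, axis=1) + np.sin(3.0 * x[:, 0])

# Batched selection: evaluate all candidates, keep the best one.
candidates = sample_candidates(m=1000, k=10)
best = candidates[np.argmin(objective(candidates))]
```

Varying `m` and `k` as in the quoted ablation (values from 1 to 1000) trades sampling cost against solution quality.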