GSO-Net: Grid Surface Optimization via Learning Geometric Constraints
Authors: Chaoyun Wang, Jingmin Xin, Nanning Zheng, Caigui Jiang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on developable surface optimization, surface flattening, and surface denoising tasks using the designed network and datasets. The results demonstrate that our proposed method not only addresses the surface optimization problem better than traditional numerical optimization methods, especially for complex surfaces, but also boosts the optimization speed by multiple orders of magnitude. |
| Researcher Affiliation | Academia | ¹National Key Laboratory of Human-Machine Hybrid Augmented Intelligence, ²National Engineering Research Center of Visual Information and Applications, ³Institute of Artificial Intelligence and Robotics, Xi'an Jiaotong University. chaoyunwang@stu.xjtu.edu.cn, {jxin, nnzheng}@mail.xjtu.edu.cn, cgjiang@xjtu.edu.cn |
| Pseudocode | No | The paper describes the network architecture and pipeline but does not include any pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code and dataset are available at https://github.com/chaoyunwang/GSO-Net. |
| Open Datasets | Yes | We have created the first dataset for grid surface optimization and devised a learning-based grid surface optimization network specifically tailored to geometric images, addressing the surface optimization problem through a data-driven learning of geometric constraints paradigm. The code and dataset are available at https://github.com/chaoyunwang/GSO-Net. |
| Dataset Splits | No | The paper mentions generating 10,401 grid surfaces for "deep learning training and testing" but does not specify exact train/validation/test splits, percentages, or absolute counts for each subset in the main text. It states "Details of the dataset are given in the supplementary material," but this information is not directly provided in the paper itself. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for its experiments. |
| Software Dependencies | No | The paper discusses deep learning networks and modules (e.g., IMDB module) but does not list any specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | In GSO-Net, the Conv bar represents the initial convolutional layer, and the red bar represents the feature extraction module, referring to the IMDB module from the image denoising network of Hui et al. (2019); skip connections are used between the encoder-decoder structures, and the specific parameters are analyzed in the experimental section. With NC set to 16X (16, 32, 64, 128) and NB set to 4, the authors report strong learning performance with a parameter size of only 2.82M. The overall loss function is $\mathrm{Loss}_{developable} = w_{in}\,loss_{in} + w_{fair}\,loss_{fair} + w_{gc}\,loss_{gc}$, where $w_{in}$, $w_{fair}$, $w_{gc}$ are the weighting coefficients for the corresponding loss terms. By properly setting these coefficients, the trained model can balance surface developability, proximity, and smoothness. |
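The overall loss described in the Experiment Setup row is a plain weighted sum of three terms. A minimal sketch of that combination is below; the function name `developable_loss` and the weight values are illustrative assumptions, since the paper's table excerpt does not state the coefficient values used in training.

```python
def developable_loss(loss_in, loss_fair, loss_gc,
                     w_in=1.0, w_fair=0.1, w_gc=1.0):
    """Weighted sum of the three loss terms from the paper's
    overall objective:
        Loss_developable = w_in * loss_in
                         + w_fair * loss_fair
                         + w_gc * loss_gc

    The default weights here are placeholders for illustration;
    the actual coefficients are a tuning choice not given in the
    excerpt above.
    """
    return w_in * loss_in + w_fair * loss_fair + w_gc * loss_gc


# Example with made-up per-term loss values:
total = developable_loss(loss_in=0.5, loss_fair=0.2, loss_gc=0.3)
```

In a training loop, each term (proximity, fairness, geometric constraint) would be computed from the network output and summed this way before backpropagation; raising one weight trades off its objective against the other two.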