Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
ConFIG: Towards Conflict-free Training of Physics Informed Neural Networks
Authors: Qiang Liu, Mengyu Chu, Nils Thuerey
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The ConFIG method... is evaluated across a range of challenging PINN scenarios. ConFIG consistently shows superior performance and runtime compared to baseline methods. We also test the proposed method in a classic multi-task benchmark, where the ConFIG method likewise exhibits a highly promising performance. Source code is available at https://tum-pbs.github.io/ConFIG |
| Researcher Affiliation | Academia | Technical University of Munich Garching, DE 85748 EMAIL; SKL of General AI 2 Peking University Beijing, CN 100871 EMAIL |
| Pseudocode | Yes | Algorithm 1 M-ConFIG; Algorithm 2 PCGrad method; Algorithm 3 ConFIG update with Adam optimizer; Algorithm 4 MA-ConFIG |
| Open Source Code | Yes | Source code is available at https://tum-pbs.github.io/ConFIG |
| Open Datasets | Yes | We employ the widely studied CelebA dataset (Liu et al., 2015), comprising 200,000 face images annotated with 40 binary facial attributes |
| Dataset Splits | No | The accuracy metric is the MSE between the predictions and the ground truth value on the new data points sampled in the computational domain that differ from the training data points. Table 23: The number of data points and training epochs for PINNs experiments. Data points are sampled using Latin-hypercube sampling and updated in each iteration. |
| Hardware Specification | Yes | All the experiments in this study were conducted using an NVIDIA RTX A5000 GPU with 24 GB of memory. |
| Software Dependencies | No | Our experiments are based on the official test code of the FAMO method, with our ConFIG method implemented in the corresponding framework. This study uses the PyTorch implementation, which utilizes singular value decomposition (SVD) to calculate the pseudoinverse. |
| Experiment Setup | Yes | The neural networks are fully connected with 4 hidden layers and 50 channels per layer. The activation function is the tanh function, and all weights are initialized with Xavier initialization (Glorot & Bengio, 2010). All other cases follow a cosine decay strategy with initial and final learning rates of 10⁻³ and 10⁻⁴, respectively. We also add a learning rate warm-up of 100 epochs for each training. All the methods except M-ConFIG use the Adam optimizer. The hyper-parameters of the Adam optimizer are set as β₁ = 0.9, β₂ = 0.999, and ε = 10⁻⁸, respectively. The number of data points and training epochs for each case are listed in Tab. 23. |
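As a rough illustration (not the authors' code), the network and optimizer configuration quoted in the Experiment Setup row can be sketched in PyTorch as follows. The input/output dimensions and the scheduler horizon `T_max` are placeholder assumptions; the warm-up phase is omitted for brevity.

```python
import torch
import torch.nn as nn

def make_pinn(in_dim=2, out_dim=1, hidden=50, n_hidden=4):
    """Fully connected PINN backbone: 4 hidden layers of 50 channels,
    tanh activations, Xavier-initialized weights."""
    dims = [in_dim] + [hidden] * n_hidden + [out_dim]
    layers = []
    for i in range(len(dims) - 1):
        linear = nn.Linear(dims[i], dims[i + 1])
        nn.init.xavier_uniform_(linear.weight)  # Xavier initialization
        nn.init.zeros_(linear.bias)
        layers.append(linear)
        if i < len(dims) - 2:  # no activation on the output layer
            layers.append(nn.Tanh())
    return nn.Sequential(*layers)

model = make_pinn()

# Adam with the reported hyper-parameters: β₁=0.9, β₂=0.999, ε=1e-8
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3,
                             betas=(0.9, 0.999), eps=1e-8)

# Cosine decay from the initial lr of 1e-3 down to 1e-4
# (T_max is an assumed placeholder for the training length)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=1000, eta_min=1e-4)
```

A training loop would call `optimizer.step()` followed by `scheduler.step()` each epoch; the ConFIG gradient combination itself happens before the optimizer step and is not shown here.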