Augmented Experiment in Material Engineering Using Machine Learning
Authors: Aomar Osmani, Massinissa Hamidi, Salah Bouhouche (pp. 9251–9258)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive empirical evaluation shows that, using a machine learning approach guided by analytic models, it is possible to substantially reduce the number of physical experiments needed without losing approximation quality. |
| Researcher Affiliation | Academia | Aomar Osmani1, Massinissa Hamidi1, Salah Bouhouche2 1 Laboratoire LIPN-UMR CNRS 7030, Univ. Sorbonne Paris Nord 2 Industrial Technologies Research Center, CRTI-DTSI {ao, hamidi}@lipn.univ-paris13.fr |
| Pseudocode | No | The information is insufficient. The paper describes models and equations but does not include any pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code and dataset to reproduce experiments are available at https://github.com/nx-project/augmented Experiments. |
| Open Datasets | No | The information is insufficient. The paper describes the dataset collected but does not provide a specific link, DOI, or formal citation to a publicly available or open dataset. |
| Dataset Splits | No | The information is insufficient. The paper mentions 'train-test splits' and performing 'validation on the set of experiments', but does not provide specific percentages or counts for training, validation, and test datasets. |
| Hardware Specification | No | The information is insufficient. The paper mentions the 'SDT Q600 industrial instrument' used for data collection, but does not specify the hardware (e.g., GPU, CPU models, or memory) used for training the machine learning models. |
| Software Dependencies | No | The information is insufficient. The paper states that experiments are 'implemented using Tensorflow framework' but does not provide a specific version number for TensorFlow or any other software dependencies. |
| Experiment Setup | Yes | We construct neural networks by stacking 3 Fully Connected/ReLU layers with dropout probability 0.5 and two regression outputs (for weight and temperature). [...] The networks are trained for 1000 epochs on the training data and evaluated on the test set. The learning rate is set to 0.0001. [...] weights of the neural network are optimized using the Adam algorithm (Kingma and Ba 2014). |
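The experiment-setup row above pins down the architecture (3 fully connected/ReLU layers, dropout 0.5, two regression heads for weight and temperature), the optimizer (Adam, learning rate 0.0001), and the training length (1000 epochs), and the paper states the experiments use TensorFlow. The sketch below assembles those pieces in Keras; the input width (`n_features=8`) and hidden-layer size (`hidden=64`) are assumptions, since the paper does not report them.

```python
# Hedged sketch of the reported setup, NOT the authors' released code.
# Assumptions: input width (8 features) and hidden size (64 units) are
# placeholders; the paper only fixes depth, dropout, heads, lr, and epochs.
import tensorflow as tf


def build_model(n_features=8, hidden=64):
    inputs = tf.keras.Input(shape=(n_features,))
    x = inputs
    for _ in range(3):  # 3 Fully Connected / ReLU layers, dropout 0.5
        x = tf.keras.layers.Dense(hidden, activation="relu")(x)
        x = tf.keras.layers.Dropout(0.5)(x)
    # Two regression outputs: weight and temperature
    weight = tf.keras.layers.Dense(1, name="weight")(x)
    temperature = tf.keras.layers.Dense(1, name="temperature")(x)
    model = tf.keras.Model(inputs, [weight, temperature])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # lr = 0.0001
        loss="mse",
    )
    return model


# Training would then follow the reported schedule, e.g.:
# model.fit(X_train, [y_weight, y_temperature], epochs=1000)
```

With this setup, `model.fit` minimizes the summed mean-squared error of the two heads, matching the paper's description of joint weight/temperature regression.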