Property-Aware Relation Networks for Few-Shot Molecular Property Prediction
Authors: Yaqing Wang, Abulikemu Abuduweili, Quanming Yao, Dejing Dou
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on benchmark molecular property prediction datasets show that PAR consistently outperforms existing methods and can obtain property-aware molecular embeddings and model molecular relation graph properly. |
| Researcher Affiliation | Collaboration | Yaqing Wang (Baidu Research, Baidu Inc., China); Abulikemu Abuduweili (Baidu Research, Baidu Inc., China; The Robotics Institute, Carnegie Mellon University, USA); Quanming Yao (Department of EE, Tsinghua University, China); Dejing Dou (Baidu Research, Baidu Inc., China) |
| Pseudocode | Yes | The complete algorithm of PAR is shown in Algorithm 1. |
| Open Source Code | Yes | Codes are available at https://github.com/tata1661/PAR-NeurIPS21. |
| Open Datasets | Yes | We perform experiments on widely used benchmark few-shot molecular property prediction datasets (Table 1) included in Molecule Net [43]. |
| Dataset Splits | Yes | This $\mathcal{T}_\tau$ is then formulated as a 2-way K-shot classification task with a support set $\mathcal{S}_\tau = \{(\mathbf{x}_{\tau,i}, y_{\tau,i})\}_{i=1}^{2K}$ containing the 2K labeled samples and a query set $\mathcal{Q}_\tau = \{(\mathbf{x}_{\tau,j}, y_{\tau,j})\}_{j=1}^{N_q}$ containing $N_q$ unlabeled samples to be classified. (A minimal episode-construction sketch is given after the table.) |
| Hardware Specification | No | The paper states: 'Parts of experiments were carried out on Baidu Data Federation Platform.' This indicates a computing environment but does not specify any particular hardware details such as GPU models, CPU types, or memory amounts. |
| Software Dependencies | No | The paper mentions using 'RDKit [46]' for molecular graphs but does not provide a specific version number for it or any other software dependencies. |
| Experiment Setup | Yes | Appendix B: 'We train our models with Adam [48] optimizer with initial learning rate 0.001. We set the inner loop learning rate to 0.01 and outer loop learning rate to 0.001. We train the model for 100 epochs, and the batch size is 4 tasks. The number of iterations T in the adaptive relation graph learning module is set to 2.' (A minimal training-loop sketch under these settings follows the episode sketch below.) |
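The Dataset Splits row describes the episodic 2-way K-shot formulation. Below is a minimal sketch of how one such episode (a support set of 2K labeled molecules plus a query set of $N_q$ molecules) could be assembled for a single property; the function and variable names are illustrative assumptions, not the authors' code, which is available at https://github.com/tata1661/PAR-NeurIPS21.

```python
import random

def sample_episode(molecules, labels, k_shot=10, n_query=16, seed=None):
    """Build one 2-way K-shot episode for a single property (task): a support set
    with K positive and K negative molecules (2K samples in total) and a disjoint
    query set of up to n_query samples to be classified."""
    rng = random.Random(seed)
    pos = [i for i, y in enumerate(labels) if y == 1]
    neg = [i for i, y in enumerate(labels) if y == 0]
    rng.shuffle(pos)
    rng.shuffle(neg)

    support_idx = pos[:k_shot] + neg[:k_shot]          # the 2K labeled support samples
    remaining = pos[k_shot:] + neg[k_shot:]
    query_idx = rng.sample(remaining, min(n_query, len(remaining)))  # the N_q query samples

    support = [(molecules[i], labels[i]) for i in support_idx]
    query = [(molecules[i], labels[i]) for i in query_idx]
    return support, query
```

Calling `sample_episode` once per sampled property yields the per-task support/query pairs that a meta-training loop, such as the one sketched next, would consume.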
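The Experiment Setup row lists the training hyperparameters reported in Appendix B. The sketch below shows where each of them would enter a MAML-style inner/outer training loop; the linear stand-in model, the random tensors, and the `adapt_and_eval` helper are illustrative assumptions, not PAR's actual implementation.

```python
import torch
import torch.nn.functional as F

# Reported settings from Appendix B; everything else here is a simplified stand-in.
config = dict(
    outer_lr=1e-3,            # Adam learning rate for outer-loop (meta) updates
    inner_lr=1e-2,            # learning rate for inner-loop task adaptation
    epochs=100,
    tasks_per_batch=4,
    relation_graph_iters=2,   # iterations T of the adaptive relation graph module (not modeled here)
)

model = torch.nn.Linear(300, 2)                       # stand-in for the PAR network
meta_opt = torch.optim.Adam(model.parameters(), lr=config["outer_lr"])

def adapt_and_eval(support_x, support_y, query_x, query_y):
    """Inner loop: take one gradient step on the support set with the inner-loop
    learning rate, then return the query loss under the adapted weights."""
    w, b = model.weight, model.bias
    support_loss = F.cross_entropy(support_x @ w.t() + b, support_y)
    gw, gb = torch.autograd.grad(support_loss, (w, b), create_graph=True)
    w_adapt, b_adapt = w - config["inner_lr"] * gw, b - config["inner_lr"] * gb
    return F.cross_entropy(query_x @ w_adapt.t() + b_adapt, query_y)

for epoch in range(config["epochs"]):
    meta_opt.zero_grad()
    for _ in range(config["tasks_per_batch"]):        # one meta-batch = 4 tasks
        sx, sy = torch.randn(20, 300), torch.randint(0, 2, (20,))  # dummy 2-way 10-shot support
        qx, qy = torch.randn(16, 300), torch.randint(0, 2, (16,))  # dummy query set
        (adapt_and_eval(sx, sy, qx, qy) / config["tasks_per_batch"]).backward()
    meta_opt.step()                                   # outer-loop Adam update, lr 0.001
```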