Dynamic Many-Objective Molecular Optimization: Unfolding Complexity with Objective Decomposition and Progressive Optimization

Authors: Dong-Hee Shin, Young-Han Son, Deok-Joong Lee, Ji-Wung Han, Tae-Eui Kam

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We validate the superior performance of our method using the practical molecular optimization (PMO) benchmark. The source code and supplementary material are available online. ... We evaluated the performance of our proposed method using the practical molecular optimization (PMO) benchmark [Gao et al., 2022]. ... Table 1 presents the HV and R2 performance with standard deviations for each method across many-objective optimization scenarios ... Figure 4 displays the average HV improvement curves for the top 8 methods. ... As shown in Table 2, we have conducted an ablation study to investigate the impact of key techniques on the performance of our method"
Researcher Affiliation | Academia | "Dong-Hee Shin, Young-Han Son, Deok-Joong Lee, Ji-Wung Han and Tae-Eui Kam, Department of Artificial Intelligence, Korea University {dongheeshin, yhson135, deokjoong, danielhan, kamte}@korea.ac.kr"
Pseudocode | Yes | "The pseudo-code for the entire process is in the supplementary material 7.2."
Open Source Code | Yes | "The source code and supplementary material are available online." https://github.com/MolecularTeam/DyMol
Open Datasets | Yes | "We evaluated the performance of our proposed method using the practical molecular optimization (PMO) benchmark [Gao et al., 2022]."
Dataset Splits | No | The paper mentions the use of the PMO benchmark and an oracle call budget, but it does not specify explicit training, validation, or test splits (e.g., percentages or sample counts per split) in the main text.
Hardware Specification | No | The paper does not provide hardware details such as GPU models, CPU types, or memory specifications used for the experiments.
Software Dependencies | No | The paper mentions employing REINVENT as the backbone generative model but does not specify its version or any other software dependencies with version numbers.
Experiment Setup | No | The paper states that "oracle call budgets are strictly limited to 10,000 evaluations" and that "More information on competing methods, experimental settings, and hyperparameter configurations is available in the supplementary material 7.3," indicating these details are deferred to the supplementary material rather than the main text.
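For context on the HV (hypervolume) metric quoted in the Research Type row: HV measures the objective-space volume dominated by a Pareto front relative to a reference point, so larger values indicate a better front. Below is a minimal illustrative sketch for the two-objective maximization case with objectives scaled to [0, 1] and a reference point at the origin; the function name and reference point are assumptions for illustration, not the paper's actual evaluation code (the PMO benchmark ships its own tooling).

```python
def hypervolume_2d(points, ref=(0.0, 0.0)):
    """Hypervolume of a 2-objective maximization front w.r.t. a
    reference point: the area of the union of the rectangles spanned
    by each point and the reference point."""
    # Keep only points strictly better than the reference in both objectives.
    pts = [(x, y) for x, y in points if x > ref[0] and y > ref[1]]
    # Sweep in decreasing order of the first objective; each point then
    # contributes a disjoint rectangle above the best y seen so far.
    pts.sort(key=lambda p: p[0], reverse=True)
    hv, prev_y = 0.0, ref[1]
    for x, y in pts:
        if y > prev_y:  # dominated points add no area and are skipped
            hv += (x - ref[0]) * (y - prev_y)
            prev_y = y
    return hv

# Three non-dominated points: union of rectangles has area 0.42.
print(hypervolume_2d([(0.8, 0.2), (0.5, 0.6), (0.2, 0.9)]))  # 0.42
```

Many-objective scenarios like those in the paper require more involved algorithms (e.g., the WFG method), since exact hypervolume computation grows expensive with the number of objectives.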