General Optimization Framework for Recurrent Reachability Objectives
Authors: David Klaška, Antonín Kučera, Vít Musil, Vojtěch Řehák
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We design an efficient strategy synthesis algorithm for recurrent reachability objectives and demonstrate its functionality on non-trivial instances. ... Our experiments show that the algorithm can solve instances requiring non-trivial insights and produce solutions close to theoretical optima. ... The algorithm outcomes are shown in Fig. 3. For every β, we perform 100 trials (i.e., construct 100 strategies with one memory state), and report the corresponding MP + β DMP value. |
| Researcher Affiliation | Academia | David Klaška, Antonín Kučera, Vít Musil and Vojtěch Řehák, Faculty of Informatics, Masaryk University, Brno, Czech Republic. david.klaska@mail.muni.cz, {tony, musil, rehak}@fi.muni.cz |
| Pseudocode | Yes | Algorithm 1 Strategy optimization |
| Open Source Code | Yes | The code for reproducing the results is available at https://gitlab.fi.muni.cz/formela/2022-ijcai-optimization-framework. See also the latest version at https://gitlab.fi.muni.cz/formela/regstar. |
| Open Datasets | No | The paper runs its experiments on specific graph instances (e.g., Fig. 1a, Fig. 2, Fig. 4a) but does not present them as publicly available datasets with concrete access information (e.g., links, DOIs, or formal citations with author/year). |
| Dataset Splits | No | The paper refers to '100 trials' and 'two memory states' in its experiments, but it does not specify any train/validation/test dataset splits (e.g., percentages or sample counts); the experiments run on specific graph configurations rather than split datasets. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments (e.g., GPU/CPU models, memory, cloud resources). |
| Software Dependencies | No | The paper mentions using 'PyTorch Library [Paszke et al., 2019]' but does not specify a version number for PyTorch or any other software dependencies. |
| Experiment Setup | No | The paper describes the general approach of the algorithm (gradient descent, Adam optimizer, Gaussian noise, softmax and cutoff functions) but does not provide the specific numerical hyperparameters (e.g., learning rate, noise scale, or number of optimization steps) used in the experiments; a hedged sketch of such a loop appears below the table. |
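
Since the paper names the optimization ingredients (PyTorch, Adam, softmax-parameterized strategies, Gaussian noise) but not their hyperparameters, the sketch below is a minimal, hypothetical reconstruction of one trial loop. The graph dimensions, step count, learning rate, noise scale, and the `objective` placeholder are all assumptions rather than values from the paper, and the paper's cutoff functions are omitted; the real objective evaluates the mean payoff (MP) and its β-weighted deviation (DMP) on the Markov chain induced by the strategy.

```python
# Hypothetical reconstruction of one optimization trial. All constants below
# are illustrative assumptions; the paper does not report them.
import torch

N_STATES, N_ACTIONS = 8, 3   # assumed graph size (vertices / outgoing edges)
N_TRIALS = 100               # the paper reports 100 trials per beta
STEPS = 500                  # assumed number of Adam steps (not reported)
LR = 1e-2                    # assumed learning rate (not reported)
NOISE_STD = 0.1              # assumed Gaussian-noise scale (not reported)

COSTS = torch.rand(N_STATES, N_ACTIONS)  # hypothetical edge costs


def objective(strategy: torch.Tensor, beta: float) -> torch.Tensor:
    # Hypothetical differentiable stand-in for MP + beta*DMP: the paper's
    # actual objective evaluates the Markov chain the strategy induces on
    # the benchmark graphs, which this placeholder does not attempt.
    per_state = (strategy * COSTS).sum(dim=-1)
    return per_state.mean() + beta * per_state.var()


def run_trial(beta: float) -> float:
    # One memory state means a memoryless randomized strategy; softmax maps
    # unconstrained logits onto a probability distribution per vertex.
    logits = torch.randn(N_STATES, N_ACTIONS, requires_grad=True)
    opt = torch.optim.Adam([logits], lr=LR)
    for _ in range(STEPS):
        opt.zero_grad()
        # Gaussian noise on the logits, per the paper's high-level
        # description, to help escape poor local optima.
        noisy = logits + NOISE_STD * torch.randn_like(logits)
        loss = objective(torch.softmax(noisy, dim=-1), beta)
        loss.backward()  # assumed minimization of MP + beta*DMP
        opt.step()
    with torch.no_grad():
        return objective(torch.softmax(logits, dim=-1), beta).item()


beta = 0.5  # illustrative value of beta
best = min(run_trial(beta) for _ in range(N_TRIALS))
print(f"best MP + beta*DMP over {N_TRIALS} trials: {best:.4f}")
```

Running the script keeps the best of 100 independent trials for a fixed β, mirroring the reported protocol of constructing 100 one-memory-state strategies per β and reporting the corresponding MP + β·DMP value.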