Computational Approaches for Stochastic Shortest Path on Succinct MDPs
Authors: Krishnendu Chatterjee, Hongfei Fu, Amir Goharshady, Nastaran Okati
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper states: "Finally, we present experimental results to demonstrate the effectiveness of our approach on several classical examples from the AI literature." |
| Researcher Affiliation | Academia | IST Austria; Shanghai Jiao Tong University; Ferdowsi University of Mashhad |
| Pseudocode | No | The paper describes algorithmic steps in prose (e.g., the "Algorithm Upper Bound" sections) but does not provide structured pseudocode or formally labeled algorithm blocks. |
| Open Source Code | No | The paper states that the approach was implemented in Java and names the optimization libraries it uses, but it neither declares the code open source nor links to a repository. |
| Open Datasets | No | The paper uses classical AI examples (e.g., Gambler's Ruin, Robot Planning; see the sketch after this table) as problem instances, but these are problem formulations rather than publicly available datasets: no access information (links, DOIs, or dataset citations) is provided. |
| Dataset Splits | No | The paper does not specify training, validation, or test dataset splits. The examples are problem formulations used to demonstrate the method, not partitioned datasets for model training and evaluation. |
| Hardware Specification | Yes | The results were obtained on an Intel Core i5-2520M machine running Ubuntu. |
| Software Dependencies | No | The paper mentions specific software (lpsolve, Java ILP, JOptimizer) but does not provide version numbers for these dependencies, which are necessary for reproducible software setup. |
| Experiment Setup | No | The paper discusses the theoretical basis and application to example problems but does not provide specific experimental setup details such as hyperparameters, learning rates, or training configurations. |
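For context, the Gambler's Ruin instance mentioned in the Open Datasets row is a standard textbook MDP rather than a dataset. Below is a minimal, generic value-iteration sketch of that instance in Java (the paper's implementation language). This is not the authors' algorithm, which relies on optimization solvers such as lpsolve, Java ILP, and JOptimizer; it only illustrates the kind of problem instance the experiments use, and all constants (`N`, `P_WIN`, `EPS`) are illustrative assumptions.

```java
/**
 * Generic sketch of the Gambler's Ruin MDP (not the paper's method).
 * States are wealth levels 0..N; wealth 0 (ruin) and N (target) are
 * absorbing. At wealth w the agent chooses a stake b, winning it with
 * probability P_WIN. Value iteration computes the maximal probability
 * of reaching N before ruin.
 */
public class GamblersRuin {
    static final int N = 100;          // target wealth (assumed for illustration)
    static final double P_WIN = 0.4;   // per-bet win probability (assumed)
    static final double EPS = 1e-12;   // convergence tolerance (assumed)

    public static void main(String[] args) {
        // v[w] = max probability of reaching N before 0, starting at wealth w.
        double[] v = new double[N + 1];
        v[N] = 1.0; // the target state counts as a sure win; v[0] stays 0

        double delta;
        do {
            delta = 0.0;
            for (int w = 1; w < N; w++) {
                double best = 0.0;
                // Actions: bet any stake b with 1 <= b <= min(w, N - w).
                for (int b = 1; b <= Math.min(w, N - w); b++) {
                    double val = P_WIN * v[w + b] + (1 - P_WIN) * v[w - b];
                    best = Math.max(best, val);
                }
                delta = Math.max(delta, Math.abs(best - v[w]));
                v[w] = best; // in-place (Gauss-Seidel style) update
            }
        } while (delta > EPS);

        System.out.printf("Optimal win probability from wealth 50: %.6f%n", v[50]);
    }
}
```

The choice of stake at each wealth level is what makes this an MDP rather than a plain Markov chain; for a subfair game (P_WIN below 0.5), the classical result is that "bold play" (betting the maximum allowed stake) is optimal.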