Optimization of Probabilistic Argumentation with Markov Decision Models
Authors: Emmanuel Hadoux, Aurélie Beynier, Nicolas Maudet, Paul Weng, Anthony Hunter
IJCAI 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We report on the experimental evaluation of these techniques." and "We ran experiments to test the scalability of the approach proposed in previous sections." |
| Researcher Affiliation | Academia | Sorbonne Universités, UPMC Univ Paris 06 / CNRS, UMR 7606, LIP6, F-75005, Paris, France; SYSU-CMU Joint Institute of Engineering, Guangzhou, China; SYSU-CMU Shunde International Joint Research Institute, Shunde, China; Department of Computer Science, University College London, London, UK |
| Pseudocode | No | The paper describes methods and optimization schemes in textual form (e.g., in Section 5), but it does not contain any formally labeled 'Pseudocode' or 'Algorithm' blocks with structured, code-like steps. |
| Open Source Code | Yes | "We developed a library (github.com/EHadoux/aptimizer) to automatically transform an APS into a MOMDP and we applied the previously described optimizations on the problem." |
| Open Datasets | No | The paper discusses specific problem instances (Example 1, Example 7, and the Dvorak problem from [DBAI group, 2013]) but does not provide access information (a URL, DOI, or formal citation with authors and year) for any publicly available dataset used in its experiments. The DBAI group citation refers to the problem's source, not a dataset. |
| Dataset Splits | No | The paper does not provide information about training, validation, or test dataset splits, as its focus is on optimizing a policy for an argumentation problem rather than on traditional machine learning dataset partitioning. |
| Hardware Specification | No | The paper does not provide any specific details regarding the hardware used for running the experiments, such as GPU or CPU models, memory specifications, or cloud computing resources. |
| Software Dependencies | No | The paper states "we used MO-SARSOP [Ong et al., 2010], with the implementation of the APPL library [NUS, 2014]" but does not provide version numbers for these software components. |
| Experiment Setup | No | The paper describes the general approach and optimization schemes but does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs), optimizer settings, or other system-level training configurations. |