Interactive Robot Transition Repair With SMT
Authors: Jarrett Holtz, Arjun Guha, Joydeep Biswas
IJCAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate SRTR in four ways. 1) We compare SRTR to an exhaustive search; 2) We show how the number of corrections affects RSM performance and that SRTR requires only a small number of corrections to perform well; 3) Using three RSMs, we show that SRTR does not over-fit and performs well in new scenarios; and 4) We use SRTR to improve the performance of a real-world robot. |
| Researcher Affiliation | Academia | Jarrett Holtz, Arjun Guha, and Joydeep Biswas, University of Massachusetts Amherst, {jaholtz,arjun,joydeepb}@cs.umass.edu |
| Pseudocode | Yes | Figure 5: The core SRTR algorithm. |
| Open Source Code | No | The paper does not provide any explicit statement about releasing the source code for the described methodology or a link to a code repository. |
| Open Datasets | No | The paper mentions creating a 'training dataset by simulating the attacker in 40 randomly generated scenarios' but does not provide concrete access information (e.g., link, DOI, citation) for this or any other public dataset used. |
| Dataset Splits | No | The paper mentions 'training dataset' and 'test scenarios' but does not specify validation splits or other dataset splits (e.g., percentages or counts for training, validation, and test sets) needed for reproduction. |
| Hardware Specification | No | The paper mentions 'using 100 cores' in Table 1 but does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running experiments. |
| Software Dependencies | No | The paper mentions using 'Z3 [Bjørner et al., 2015]' as a MaxSMT solver, but it does not specify a version number for Z3 or any other software dependency (a hedged MaxSMT sketch with Z3 follows the table). |
| Experiment Setup | No | The paper mentions 'H ∈ ℝ+ is a hyperparameter' but does not provide specific hyperparameter values or detailed training configurations (e.g., learning rates, batch sizes, optimizer settings) in the main text. |
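
The core SRTR algorithm (Figure 5 in the paper) repairs RSM transition parameters by posing a MaxSMT problem and solving it with Z3. The sketch below is a minimal, hypothetical illustration of how such a repair query could be expressed with Z3's Python `Optimize` interface: the parameter names, original values, and correction constraints are invented for illustration and are not taken from the paper or its code.

```python
# Hypothetical sketch of an SRTR-style parameter repair posed as MaxSMT,
# using the z3-solver Python bindings. All concrete values are made up.
from z3 import Optimize, Real, sat

opt = Optimize()

# Two adjustable RSM parameters and their deltas from the original values.
kick_dist, d_kick = Real('kick_dist'), Real('d_kick')
align_angle, d_align = Real('align_angle'), Real('d_align')
opt.add(kick_dist == 100.0 + d_kick)      # hypothetical original threshold: 100.0
opt.add(align_angle == 0.3 + d_align)     # hypothetical original threshold: 0.3

# Hard constraints from user corrections: in one trace the robot should have
# kicked at distance 120, in another it should not have kicked at 200.
opt.add(kick_dist >= 120.0)
opt.add(kick_dist < 200.0)

# MaxSMT soft clauses: prefer leaving each parameter unchanged, so only the
# parameters implicated by the corrections are adjusted.
opt.add_soft(d_kick == 0, weight=1)
opt.add_soft(d_align == 0, weight=1)

if opt.check() == sat:
    m = opt.model()
    print('kick_dist   ->', m[kick_dist])    # repaired to satisfy corrections
    print('align_angle ->', m[align_angle])  # left at its original value
```

Under these assumed constraints the solver changes only `kick_dist`, while the soft clause on `d_align` keeps `align_angle` untouched; this mirrors the general idea of minimally adjusting parameters to satisfy user corrections, though the paper's actual formulation and weights are not reproduced here.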